| field    | type   | min length | max length |
|----------|--------|------------|------------|
| url      | string | 14         | 2.42k      |
| text     | string | 100        | 1.02M      |
| date     | string | 19         | 19         |
| metadata | string | 1.06k      | 1.1k       |
http://mathhelpforum.com/differential-geometry/156415-prove-disprove-question.html
# Thread: prove or disprove question

1. ## prove or disprove question

   Let S be a non-empty set of real numbers and suppose that L is the least upper bound for S. Prove that there is a sequence of points $\displaystyle x_n$ in S such that $\displaystyle x_n$ --> L as n --> $\displaystyle \infty$.

   Prove or disprove: For non-empty bounded sets S and T: lub(S∪T) = max{lub(S), lub(T)}.

2. I see that you have over fifty postings. By now you should understand that this is not a homework service. So you need to either post some of your work on a problem or explain what you do not understand about the question.

3. Okay, here is what I have so far for the first one. Let $\displaystyle x_0$ be any point of S. Since L is a least upper bound, for each n > 0 there must be an element $\displaystyle x_n$ of S in the open interval ($\displaystyle \max(x_{n-1}, L-1/n), L$). Use these $\displaystyle x_n$. For the second, I am thinking of a proof by contradiction.

4. For the first one there are two cases: i) $\displaystyle L\in S$ and ii) $\displaystyle L\notin S$. In case i) take a constant sequence of L's. In case ii) there is a sequence of distinct terms such that $\displaystyle x_n \in \left( {L - \frac{1}{n},L} \right)$.

5. Originally Posted by Plato: "For the first one there are two cases: i) $\displaystyle L\in S$ and ii) $\displaystyle L\notin S$. In case i) take a constant sequence of L's. In case ii) there is a sequence of distinct terms such that $\displaystyle x_n \in \left( {L - \frac{1}{n},L} \right)$."

   I think the question stated that we need to PROVE that there is a sequence of points $\displaystyle x_n$ such that $\displaystyle x_n\rightarrow L$ as $\displaystyle n\rightarrow\infty$. I've done some work on this thanks to other sources, but I'm sure it either has some errors or is completely wrong. For some $\displaystyle x_n\in S < L=lub(S), \exists x_{n+1}\in S = L$ such that $\displaystyle \frac{x_n+L}{2}<x_{n+1}<L$ where $\displaystyle x_n\rightarrow L$ as $\displaystyle n\rightarrow\infty$. Can anyone tell me if I'm missing anything or if I did something wrong?

6. If $\displaystyle S=[0,1]\cup \{2\}$ then $\displaystyle L=2$. The only sequence of points from $\displaystyle S$ converging to $\displaystyle 2$ is some variation of $\displaystyle \left( {\forall n} \right)\left[ {a_n = 2} \right]$. Does what you have work for this example?

7. Originally Posted by Plato: "If $\displaystyle S=[0,1]\cup \{2\}$ then $\displaystyle L=2$. The only sequence of points from $\displaystyle S$ converging to $\displaystyle 2$ is some variation of $\displaystyle \left( {\forall n} \right)\left[ {a_n = 2} \right]$. Does what you have work for this example?"

   I'm not exactly sure how that is meant to work into the first half of the question, or I'm mistaking it for being related to the first half when it's meant for the second. I came up with my current answer from this link: http://www.mathisfunforum.com/viewtopic.php?id=1645, but I'm sure there's something in it that's missing.

8. The proof in the link is wrong. Use the same counterexample I gave. The usual problem that goes with a similar proof is: If $\displaystyle L=\text{LUB}(S)~\&~L\notin S$ then there is a sequence of distinct points from $\displaystyle S$ that converges to $\displaystyle L$. But the example where $\displaystyle L\in S$ may not work except for an almost constant sequence.
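For reference, here is a compact write-up of the two facts discussed in this thread. It is only a sketch filling in the standard textbook argument, not any particular poster's proof:

```latex
% Part 1: a sequence in S converging to L = lub(S).
% For each n, the number L - 1/n is not an upper bound of S,
% so some x_n in S satisfies L - 1/n < x_n <= L; the squeeze theorem gives x_n -> L.
\[
  \forall n\in\mathbb{N}\ \exists\, x_n\in S:\quad L-\tfrac1n < x_n \le L
  \qquad\Longrightarrow\qquad x_n \to L .
\]
% (If L is in S, the constant sequence x_n = L also works; a sequence of
% *distinct* points need not exist, e.g. S = [0,1] \cup \{2\}, L = 2.)

% Part 2: lub(S \cup T) = max{lub(S), lub(T)} for non-empty bounded S, T.
% Let M = max{lub(S), lub(T)}. Every x in S \cup T satisfies x <= M, so M is
% an upper bound of S \cup T; conversely any upper bound of S \cup T bounds
% both S and T, hence is >= lub(S) and >= lub(T), i.e. >= M.
% Therefore lub(S \cup T) = M.
```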
2018-05-26 15:03:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9704942107200623, "perplexity": 183.03919074158523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867417.75/warc/CC-MAIN-20180526131802-20180526151802-00140.warc.gz"}
https://chemistry.stackexchange.com/questions/39196/hybridization-mot-and-paramagnetism
# Hybridization, MOT and Paramagnetism

In what way can hybridization or molecular orbital theory be used to explain paramagnetism? For instance, when something is hybridized to make enough bonding electrons, do all the electrons end up paired? Or are some left unpaired, explaining paramagnetism?

Paramagnetism is a form of magnetism whereby certain materials are attracted by an externally applied magnetic field. Valence bond theory (VBT) and hybridisation don't really do a good job of predicting whether a molecule is paramagnetic or diamagnetic (i.e. not attracted by an external magnetic field). This is why molecular orbital theory (MOT) is so useful: it successfully predicts whether a molecule is paramagnetic.

For a molecule to be paramagnetic, it needs to have an overall magnetic moment, meaning that it needs an unpaired electron. If all the electrons are paired, then the molecule is diamagnetic. So by seeing whether a molecule has an unpaired electron, we can predict if it is paramagnetic or not.

Now let's consider the example of $\ce{O2}$. Experimentally, $\ce{O2}$ is known to be paramagnetic. In the VBT (Lewis/hybridisation) picture of $\ce{O2}$, all the electrons are paired, so VBT predicts that $\ce{O2}$ should be diamagnetic.

Now let's examine how the electrons are arranged according to MOT. MOT, unlike VBT, involves the creation of bonding and anti-bonding MOs. MOs are basically the superposition of the wavefunctions of atomic orbitals. In $\ce{O2}$ the 2s AOs of each oxygen atom constructively and destructively overlap with each other, while their 2p AOs also constructively and destructively overlap with each other. Filling the resulting MOs with electrons, using the same method that we use for AOs, leaves 2 unpaired electrons. Therefore MOT correctly predicts that $\ce{O2}$ should be paramagnetic, unlike VBT which predicts that $\ce{O2}$ is diamagnetic.

• for an ionic compound (for instance AgCl), could hybridization or MOT be used to predict paramagnetism? from the readings I've done, all the examples are of covalent compounds – user264985 Oct 18 '15 at 0:41

• I am not that sure as I have only briefly studied MOT. I don't think you are able to apply MOT or hybridisation to ionic compounds as they exist as large crystal lattices and aren't simple molecules consisting of 2 or 3 atoms. But I think you should wait until someone else who knows more about this topic answers. – Nanoputian Oct 18 '15 at 4:00
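To make the electron count in the answer above concrete, here is the usual valence MO configuration of $\ce{O2}$, added as a sketch (the ordering shown, with the σ2pz level below the π2p pair, is the standard one for O2):

```latex
% 12 valence electrons (6 from each oxygen atom):
\[
  \sigma_{2s}^{2}\,\sigma_{2s}^{*2}\,
  \sigma_{2p_z}^{2}\,
  \pi_{2p_x}^{2}\pi_{2p_y}^{2}\,
  \pi_{2p_x}^{*1}\pi_{2p_y}^{*1}
\]
% Bond order = (8 bonding - 4 antibonding)/2 = 2; the two singly occupied
% pi* orbitals are the unpaired electrons that make O2 paramagnetic.
```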
2020-07-16 04:33:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3507412075996399, "perplexity": 871.4658973182239}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657181335.85/warc/CC-MAIN-20200716021527-20200716051527-00007.warc.gz"}
https://competitive-exam.in/questions/discuss/the-cobweb-model-will-convergent-when-the-slope
# The cobweb model will be convergent when the slope of:

- Demand curve is more than supply curve
- Supply curve is more than demand curve
- Supply curve is equal to demand curve
- None of the above
2019-12-08 08:27:53
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8006881475448608, "perplexity": 6662.743744960512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540507109.28/warc/CC-MAIN-20191208072107-20191208100107-00198.warc.gz"}
https://blogs.mathworks.com/developer/2017/05/11/the-fast-and-the-fouriers/?s_tid=feedtopost
# The Fast and the Fouriers!

UPDATE: We did it and it was a lot of fun! It looks like the tracking worked out as well so that was fun to see. One slight hiccup was that we ended up tracking more than 8000 points and ThingSpeak returns 8000 points per call. No biggie! We just needed to separate out all our calls to ThingSpeak to get all the data. The data is all there on the ThingSpeak channel; I just needed to update the visualization code. I've included the updated version below to show all the points. I also distinguished the drive from MathWorks HQ over to the start from the actual course I took.

Hi folks! Check it out! 11 MathWorkers and I are about to embark on a MathWorks adventure running the 200(ish) miles of Cape Cod. We start running at 2:30 PM Eastern Standard Time tomorrow (Friday May 12th). It should be an adventure, and we were excited to help with a great cause. I thought it would be fun to use a little MATLAB Mobile goodness to broadcast our progress in real time. Check it out and track us either through the ThingSpeak page or just on this page as the map below should be updated in real time. Hopefully I don't have any bugs, since I will be busy running and supporting the other runners and it might be tough to fix any mistakes I've made in the code. Remember, it's live folks!

For reference, here is the code I will be running from my iPhone using the MATLAB Support Package for Apple iOS Sensors to send my GPS data. Note I will be doing this from an instance of MATLAB in the cloud. Awesome! Here's the function. It just takes the instance of mobiledev:

```matlab
function trackTheFouriers(m)
disp("Tracking the Fouriers: " + string(datetime('now')))
m.PositionSensorEnabled = 1;
m.Logging = 1;
m.SampleRate = 1;

channel = 261391;
writeKey = 'XXXXXXXXXXXXXXXX'; % Not the real write key :-P

[Latitude, Longitude, posTS, Speed, ~, Altitude, Accuracy] = m.poslog;

% Only write to the channel when there are changes in latitude, longitude, or accuracy
Timestamps = datetime(m.InitialTimestamp) + seconds(posTS);
% lastDataPoint holds the most recent row previously written to the channel
changeTable = [ ...
    lastDataPoint(:, {'Latitude', 'Longitude', 'Accuracy'});
    table(Latitude, Longitude, Accuracy)
    ];

% Find changing data
hasChanged = @(value) abs(diff(value)) > 1e-5;
changeIdx = hasChanged(changeTable.Accuracy) | hasChanged(changeTable.Longitude) | hasChanged(changeTable.Latitude);

T = table(Timestamps(changeIdx), Accuracy(changeIdx), Speed(changeIdx), ...
    'VariableNames', {'Timestamps', 'Accuracy', 'Speed'});
h = height(T);
if h > 0
    disp("Writing " + h + " data points")
    thingSpeakWrite(channel, T, 'WriteKey', writeKey, ...
        'Location', [Latitude(changeIdx), Longitude(changeIdx), Altitude(changeIdx)]);
else
    disp("No data to write");
end
```

Also, here is the visualization code I used to plot our course continually during the race:

```matlab
dawnOfTime = datetime(2000,1,1);
endTime = datetime('now');

% Grab the results. Keep asking for results until we have the full dataset.
% (Each pass is expected to read the next batch of channel data into
%  intermediateResults, using the dawnOfTime..endTime window, before appending.)
T = table;
while ~isempty(intermediateResults)
    T = [intermediateResults; T]; %#ok<AGROW>
    endTime = T.Timestamps(1) - sqrt(eps); % Adjust our window to get more data
end

% Generate the map
serverURL = 'http://raster.nationalmap.gov/arcgis/services/Orthoimagery/USGS_EROS_Ortho_1Foot/ImageServer/WMSServer?';
info = wmsinfo(serverURL);
latlim = [41.4, 42.4];
lonlim = [-71.4 -69.9];
height = round(diff(latlim)*1000);
width = round(diff(lonlim)*1000);
[A, R] = wmsread(info.Layer, 'Latlim', latlim, 'Lonlim', lonlim, ...
    'ImageHeight', height, 'ImageWidth', width);

% Show the map
fig = figure;
fig.Position(3:4) = [width height];
ax = axes('Parent', fig);
geoshow(ax, A, R)
hold on
axis tight

% Show the start & end points
geoshow(ax, 42.271, -70.857, 'DisplayType', 'point', 'MarkerSize', 20, 'LineWidth', 4, 'Marker', 'v');
geoshow(ax, 42.053, -70.189, 'DisplayType', 'point', 'MarkerSize', 20, 'LineWidth', 4, 'Marker', 'h');

%T.Timestamps = T.Timestamps + hours(4); % adjust recorded timezone offset
drivingOver = T.Timestamps < datetime(2017,5,12,14,30,0);
driveData = T(drivingOver,:);
drivePath = geoshow(ax, driveData.Latitude, driveData.Longitude, 'DisplayType', 'line', 'LineWidth', 3, 'LineStyle', ':', 'Color', 'red');
raceData = T(~drivingOver,:);
geoshow(ax, raceData.Latitude(end), raceData.Longitude(end), 'DisplayType', 'point', 'MarkerSize', 20, 'LineWidth', 4, 'Marker', 'o');
racePath = geoshow(ax, raceData.Latitude, raceData.Longitude, 'DisplayType', 'line', 'LineWidth', 3);
legend([drivePath, racePath], {'Drive from TMW', 'Race Path'})
```

Cheer us on! I'll try to check back in from time to time during the race to see how things are working.

Published with MATLAB® R2017a
2020-05-27 03:52:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20864246785640717, "perplexity": 8954.766563622568}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347392057.6/warc/CC-MAIN-20200527013445-20200527043445-00225.warc.gz"}
https://www.numerade.com/questions/refer-to-exercise-3-calculate-and-interpret-the-standard-deviation-of-the-random-variable-x-show-you/
### Video Transcript

So in this problem, we have a probability distribution. Our random variable X takes the values 0, 1, 2, 3 and also 4, and the corresponding probabilities are 0.1, 0.2, 0.3, 0.3 and 0.1. And just to make sure, yes, the probabilities all add up to one, just double checking that. So we know we have to find the standard deviation, but first of all we have to find the mean of that random variable. So we're going to have to take zero times 0.1, plus one times 0.2, plus two times 0.3, plus three times 0.3, and four times 0.1. I should be able to do most of that in my head, but we'll accumulate it on the calculator: that's zero, plus 0.2, plus 0.6, plus 0.9, plus 0.4, and those add up to 2.1. So we know the mean is 2.1, and now we want to find the standard deviation.

So for the standard deviation of that random variable, we're going to have to take the difference between each of our values up here and the mean. So we're going to have zero minus 2.1, quantity squared, to find that deviation, and then we weight it by its corresponding probability. The next one is one minus 2.1, again quantity squared, weighted by its probability. The next one, two minus 2.1, quantity squared, weighted by its probability. And we're getting there: we have three minus 2.1, quantity squared, times its probability, and then the last one, four minus 2.1, quantity squared, times its corresponding probability, which is 0.1. So now I'm ready to put this into my calculator. I know that first difference is 2.1 in absolute value, so 2.1 squared times 0.1; the next difference is negative 1.1, but I'll put it into my calculator as 1.1 squared times 0.2; the next difference has absolute value 0.1, so 0.1 squared times 0.3; then 0.9 squared times 0.3; and the last one, 1.9 squared times its corresponding probability, 0.1. And so I take the square root of everything underneath the radical and find that the answer is 1.1358.

Now, that means that, on the average, if you did this over and over and over again (obviously every trial gives an integer from 0 to 4), the mean of those values would be 2.1 if you do it for a very, very long period of time, and the standard deviation of those values would theoretically come out to be this. So roughly 68% of them will be within one standard deviation, and so on, depending again on how many trials you do.

Michigan State University
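Written out, the calculation narrated in the transcript is (summarized here using the stated distribution):

```latex
\begin{align*}
  \mu &= \textstyle\sum_x x\,P(x)
       = 0(0.1)+1(0.2)+2(0.3)+3(0.3)+4(0.1) = 2.1,\\
  \sigma^2 &= \textstyle\sum_x (x-\mu)^2 P(x)
       = (2.1)^2(0.1)+(1.1)^2(0.2)+(0.1)^2(0.3)+(0.9)^2(0.3)+(1.9)^2(0.1)
       = 1.29,\\
  \sigma &= \sqrt{1.29} \approx 1.136 .
\end{align*}
```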
2021-04-15 04:45:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7589110136032104, "perplexity": 362.5483093206771}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038083007.51/warc/CC-MAIN-20210415035637-20210415065637-00387.warc.gz"}
http://rainnic.altervista.org/en/content/how-add-bookmarks-pdfs
# How to add bookmarks to PDFs

Submitted by Nicola Rainiero on 2012-06-28 (last updated on 2013-06-19)

The convenience of a good index in printed documents is essential to ensure fast tracking of the desired content, and a rapid overview. In PDF files, however, the situation is different: navigation is slow and awkward, especially when reading on an ebook reader. So bookmarks become an unavoidable choice; below I describe how to insert them with LaTeX or with a friendly GUI editor.

Reading and referring to various types of PDF, I noticed how difficult it is to search for a particular chapter or section inside them, even when they have a well-structured table of contents (TOC). Some documents also use mixed page numbering: the front matter is numbered with small Roman numerals while the body restarts with Arabic numerals, so the physical PDF page is offset by the number of front-matter pages and, every time the TOC points you to a specific page, you have to add that offset to reach the right one. There are also PDFs with clickable TOCs that partly solve the problem, but on an ebook reader activating the link requires a steady hand and a very small, precise finger, so they only help up to a point.

Bookmarks fix this problem: they are a sort of clickable TOC that can be called up at any point and any page, as shown by the following two example PDF files (one without bookmarks, one with bookmarks added using LaTeX). And if you want to add or change them? You can do this in two ways: manually using LaTeX, or easily using a Java program called JPdfBookmarks.

## Insertion using LaTeX

It is sufficient to load the pdfpages and hyperref packages and to add the bookmarks through the addtotoc option of \includepdf, as in this detailed example (the page intervals are left as "..." here):

```latex
% In \includepdf, specify the page interval and use the addtotoc option
% to declare the entries that will become bookmarks. List them in
% ascending page order; each entry has the form:
%   page number,
%   type of sectioning (i.e. chapter, section, subsection),
%   level, {title of bookmark},
%   label (I don't use it, so I always type "a"),
% and every entry is followed by a comma except the last one. Finally:
\includepdf[pages=..., addtotoc={
    5,chapter,1,{Primo capitolo},a,
    ...,
    9,section,1,{II Paragrafo},a
% then close the insertion, giving between "{" and "}" the PDF file name
% (and its path, if different from the TEX file):
}]{esempio_x_articolo.pdf}
```

This will be the final TEX file:

```latex
\documentclass{book}
\usepackage[english,italian]{babel}
\usepackage{pdfpages}
\author{Nicola Rainiero}
\title{Esempio di PDF con o senza segnalibri}
\usepackage[pagebackref]{hyperref}
\begin{document}
\includepdf[pages=..., addtotoc={
    1,chapter,1,{Copertina},a
}]{esempio_x_articolo.pdf}
\frontmatter
\includepdf[pages=..., addtotoc={
    3,chapter,1,{Indice},a
}]{esempio_x_articolo.pdf}
\mainmatter
\includepdf[pages=..., addtotoc={
    5,chapter,1,{Primo capitolo},a,
    5,section,1,{Introduzione},a,
    6,section,1,{II Paragrafo},a,
    6,section,1,{III Paragrafo},a,
    9,chapter,1,{Secondo capitolo},a,
    9,section,1,{I Paragrafo},a,
    9,section,1,{II Paragrafo},a
}]{esempio_x_articolo.pdf}
\end{document}
```

Using pdflatex you can compile a new PDF with bookmarks.

## Insertion with JPdfBookmarks editor

A more versatile and practical solution is provided by an excellent piece of software, licensed under the GPLv3, that allows you to customize the bookmarks with different colors and styles and offers many other advanced features. The two screenshots below speak for themselves: go to the desired page and right-click below the bookmarks window, choose the first or second item, then type the corresponding bookmark. When you have finished, you can save everything in the initial file or in another one. For more information and to learn the advanced features of this software, you can read the documentation on the project site.

### Nicola Rainiero

A civil geotechnical engineer with the ambition of making his own work easier with free software, for knowledge and collective sharing. I also deal with green energy, in particular shallow geothermal energy, and I have always been involved in web design and 3D modelling.
2020-10-30 17:08:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8056749701499939, "perplexity": 2621.1289712026532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107911027.72/warc/CC-MAIN-20201030153002-20201030183002-00319.warc.gz"}
https://www.gamedev.net/forums/topic/651681-float-unlimited-increasing-rotation-or-use-a-if/
# float unlimited increasing rotation or use a if

This topic is 1478 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

## Recommended Posts

Hi all, i have a question: suppose i am making a wheel rotating:

```cpp
float rot = 0.0f;
// in loop
rot += fElapsedtime;
```

it works fine like this, but the number gets unlimited bigger and bigger. Now what is better for CPU performance, using a if that limits the rotation?

```cpp
// in loop
rot += fElapsedtime;
if( rot > Pi )rot -= Pi;
```

What is the better choice and why?

Edited by the incredible smoker

##### Share on other sites

What i usually do is something like this:

```cpp
float rot = 0.0f;
rot += somevalue * ElapsedTime;
while(rot >= 360.0f){
    rot -= 360.0f;
    if(rot < 0.0f)
        rot = 0.0f;
}
```

You should not worry too much about performance for such simple things. You might lose 1/1000000000 of a second using this algorithm instead of a simpler one, so it's just not worth the trouble. This code handles all wrong/worst case scenarios, and that's what matters. You might consider wrapping that code in a function though, if you plan to use it a lot. The reason im using a while loop is, suppose the frame rate drops because of reason x and 3 seconds have elapsed since the last frame, this will still give a value between 0.0f and 360.0f. The if inside the loop is to protect against a value < 0.0 in case of a floating point precision error.

Edited by Vortez

##### Share on other sites

Could this also be achieved using float unrolledRot = fmod(absoluteRot, 360.0f)?

##### Share on other sites

I'm sorry, i have no clue what you mean. It also seems very CPU intensive, using a function like fmod? (i dont know what its for). Normally i dont use functions like sin(), cos() etc, and fmod() looks something like that, very cpu intensive. Note: i usually avoid divide too.

I'm asking to go with or without the if, and why?

greetings

##### Share on other sites

To represent a rotation I prefer to use a unit-length vector instead of an angle. You can think of it as (cos(angle),sin(angle)). Now a particular rotation only has one possible representation so the issue doesn't exist. Whenever you are going to use the angle, chances are you are going to be computing its sine and cosine anyway, so it's not like we have complicated things. If you are comfortable with complex numbers, it might be even better to represent the rotation as a complex number with modulus 1.

```cpp
Complex rot(1.0f, 0.0f);
// in loop
rot *= exp(Complex(0.0f, fElapsedtime));
```

EDIT: You probably want to renormalize the rotation every so often, like this: rot /= abs(rot);

Edited by Álvaro

##### Share on other sites

Hi there, seems very cpu intensive + i have no clue about complex, and why use exp, the rotation is linear right? (it rotates good in screen so i bet); I mean: i just have a wheel or circlesaw rotating.

```cpp
float rot = 0.0;
// in loop
{
    rot += fElapsedtime;
    // now i want to know if its better to use this :
    if( rot > Pi )rot -= Pi;
    // otherwise the float rot will be increasing unlimited, i dont know how it affects the cpu;
}
```

So my question is: What is more CPU intensive, having the if, or having the huge float numbers?

Edited by the incredible smoker

##### Share on other sites

Thanks Waterlimon and Alvaro about the precision, i did not notice that yet, to be honest i have no clue whats the underlying idea about float, i was just asking to be sure. With this information i now know i need the "if", the "if" is faster than any function like fmod i'm sure.
Maybe you all dont worry much about CPU intensivity as i do, i like many things in screen, not just 1 circlesaw, maybe you all have a i7 instead of a celeron ? By example : your better off with a "if" instead of a "min()" or "max()" in terms of CPU usage, i avoid everything to be honest. greetings and thanks again ##### Share on other sites With this information i now know i need the "if", the "if" is faster then any function like fmod i,m sure. It's probably the other way around. Modern CPUs are very complex to guess "what's faster" without context or actual profiling; but an 'if' requires a branch, and branch can involve pipeline stalls and cache misses (and if you're not targeting x86/x64 PCs, it also involves an LHS - Load-Hit-Store which is incredibly expensive). fmod uses only math to do the same thing, and as such, can easily be pipelined (thus will run fast on most architectures). Because the performance of the 'if' variant highly depends on branch predictors, its performance can't be evaluated without context (which means knowing the state of the branch predictor). ##### Share on other sites Hello Matias, thanks for the reply. I was looking for some info specific about what is called branching, it is still not clear to me : if i use only the "if", and not the brackets after, is it still branching ? and what if i only use the brackets like this, without the if : { // code here } does that also count as branching ? greetings ##### Share on other sites Branching is when the code executed next is chosen based on a condition. This includes of course if statements but also loops, since they need to decide whether to run the loop body one more time or stop looping based on the condition. Also, things being functions/macros does not make them slow, because any modern compiler will be able to inline it if the function itself is simple. Eg. with the fmod, which is a couple of arithmetic operations, it is very likely that the same machine code is produced when you write the math inline yourself or use the function. ##### Share on other sites if i use only the "if", and not the brackets after, is it still branching ? and what if i only use the brackets like this, without the if : { // code here } does that also count as branching ? greetings If you are asking this, you are in absolutely no position to be worrying about whether branching is faster than a math operation or not. You need a really thorough understanding of what is going on under the hood of your compiler if you want micro-optimisations to be anything other than a total waste of time. ##### Share on other sites Let me tell like this : i have tested all this, get the time, repeat 1000 times, then get the time again. Test showed me the simplest if was faster then functions, it was a while ago, i should test it again on my new pc maybe ? Can a i7 be faster with sin() instead of a lookuptable? , and maybe a Celeron ( which is my current game development pc with onboard graphics ) cant ? If you are asking this, you are in absolutely no position to be worrying about whether branching is faster than a math operation or not. You need a really thorough understanding of what is going on under the hood of your compiler if you want micro-optimisations to be anything other than a total waste of time. If i worry about optimalization,i must be in some position, right ? I have learned programming not on school, i also dont know how to use a debugger. 
Is that a problem ?, i thought questions are never dumb, i skip learning everything that is not needed to get result, if i need something i can Always ask it. But if you defending your own business, ofcourse you dont wanto tell the competition how to get your games optimized, i,m telling you : games are not playable with functions like sin() and cos() and sqrtf() ( i still need to get some fast sqrtf function by the way ). Note : i,m Always having 1000 bullets and explosions in screen, so maybe this does not count for your i7 pc with 2 bullet and 1 explosion ? greetings ##### Share on other sites Hi there, seems very cpu intensive + i have no clue about complex, and why use exp, the rotation is linear right ? ( it rotates good in screen so i bet ); I mean : i just have a wheel or circlesaw rotating. float rot = 0.0; // in loop { rot += fElapsedtime; // now i wanto know if its better to use this : if( rot > Pi )rot -= Pi; // otherwise the float rot will be increasing unlimited, i dont know how if affects the cpu; } So my question is : What is more CPU intensive, having the if, or having the huge float numbers ? Rather than worry about which is fastest, worry about which will give you the correct result, (or atleast a correct enough result). Trig functions on x86 are only accurate in the -PI to PI range (beyond that the results start to drift off and the error gets worse the further away from that range you get), a float also normally only has 32 bits of accuracy, making small increments to a huge floating point number will not give you the expected result, restricting the scale of your rotation value is necessary to ensure a sane behaviour, (you may not need to restrict it to the -PI to PI range, but you have to restrict it) Languages such as Java will restrict arguments passed to trig functions for you (but does so with higher than native precision argument reduction which is pretty darn slow so with Java on x86 you absolutely should restrict it to the -PI to PI range). If you are on an architecture without a FPU or with a fairly weak FPU you might benefit from ditching trig functions completely and instead use lookup tables(best to use integers for your rotations then, just remember that it will likely be slower than trig functions on a modern CPU due to cache misses (reading from RAM is very slow) or fast approximation functions (depending on what precision you need), on newer x86 you can also use SSE to implement very fast high precision trig functions (using exponents or taylor series) Edited by SimonForsman ##### Share on other sites Thanks Simon, valuable information. @ jbadams : Is Microsoft Visual Studio Professional 2005 considered a modern compiler ? thanks. Edited by the incredible smoker ##### Share on other sites You should instead be more focused on writing code that is clear (i.e. easily read and understood) and correct (does what you want) and then only worrying about optimisation if you can actually demonstrate that your program isn't fast enough, at which point you would start to optimise the parts of your program your profiler shows to be the slowest rather than making guesses or trying to micro-optimise small things like you're worrying about in this topic. 
By worrying about these low level details without actually measuring performance properly you're almost certainly simply making your code more harder to read, more complicated (and therefore more prone to bugs), and not actually gaining any performance over simply using the most obvious code and allowing your compiler to do it's work.  The very question you started this topic with is an obvious example -- it's likely that neither or your alternatives would perform better than the other once the optimising compiler has done it's job, but one version has a precision problem that will result in incorrect behaviour if not handled -- you're worrying needlessly about performance but hadn't noticed that one version of your program could be buggy. Hi, i also have comments above the code, wich is the slow readable code, ofcourse i know the importance of readable code, especially with a project this big, i dont know the line count, alot of files for sure, more then fits the screen! Edited by the incredible smoker ##### Share on other sites Can a i7 be faster with sin() instead of a lookuptable? , and maybe a Celeron ( which is my current game development pc with onboard graphics ) cant ? Lookup tables are so 1990's. Think of the cache. Processors have become lightning fast since then while ram speed has not. Also the line "i,m telling you : games are not playable with functions like sin() and cos() and sqrtf() ( i still need to get some fast sqrtf function by the way )." had me a retro-chuckling. ##### Share on other sites Ok, i will make a test, and test it on my Celeron and a i7, let see, interesting. ##### Share on other sites Can a i7 be faster with sin() instead of a lookuptable? , and maybe a Celeron ( which is my current game development pc with onboard graphics ) cant ? Lookup tables are so 1990's. Think of the cache. Processors have become lightning fast since then while ram speed has not. Also the line "i,m telling you : games are not playable with functions like sin() and cos() and sqrtf() ( i still need to get some fast sqrtf function by the way )." had me a retro-chuckling. actually, IME, lookup tables *can* be pretty fast, provided they are all kept small enough to mostly fit in the L1 or (at least) L2 cache. for example, a 256-entry table of 16-bit items: probably pretty fast. OTOH, a 16k/32k/64k entry table of 32 or 64 bit items... errm... not so fast. as for sin/cos/sqrt/... probably not worth worrying about, unless there is good reason. the performance issues with these, however, are not so much with the CPU as with how certain compilers handle the C library math functions. but, in most cases, this should not matter (yes, including in the game logic and renderer). I would not personally recommend sin or cos tables as an attempt at a "general purpose" solution, as this is unlikely to gain much (and if done naively will most likely be slower, more so if int<->float conversions and similar are involved). for special-purpose use cases, they can make sense, but generally in the same sort of contexts where one will not typically be using floats either. ##### Share on other sites Hello Matias, thanks for the reply. I was looking for some info specific about what is called branching, it is still not clear to me : if i use only the "if", and not the brackets after, is it still branching ? and what if i only use the brackets like this, without the if : { // code here } does that also count as branching ? 
greetings If you have to ask questions like this, you're not really ready to do any low-level optimizations. Also, going branchless isn't always a win. I've worked on optimization for some platforms where I actually got speed improvements by changing from heavily-optimized branchless floating-point math into the most basic, beginner-friendly if/else code possible. The previous optimizations had turned out to be very platform specific, and on some slower, simpler processors, branching wasn't relatively as bad as caching the extra instructions and performing redundant math. Of course I only even tried this because the code I modified had showed up in a profile as something I should look at. Now, there's usually some platform-specific thing you can do to speed up your math, but I always prefer to start from the simplest possible reference implementation, and that implementation should be kept around as a compile option. You can also use a reference implementation to test whatever faster math you create. ##### Share on other sites Let me tell like this : i have tested all this, get the time, repeat 1000 times, then get the time again. Test showed me the simplest if was faster then functions, it was a while ago, i should test it again on my new pc maybe ? Meaningless benchmarks will get you meaningless results. You can't just test if statements vs. function calls and then apply those results everywhere in your code; you need to test each particular if statement against it's equivalent function, as sometimes one will be better, but in other cases that won't be true. You also need to do your tests in release mode with optimization enabled, in which case the compiler may inline your function call or even leave code out entirely if it detects that it isn't needed or used. You need to test real code samples, not artificial things like functions vs. if. 1,000 items on screen isn't a big number, you should stop touting it like you have some crazy unusual performance needs. VS Express 2005 is almost 10 years old, it's probably time to update. That being said, it's still smart enough to optimize many of the situations being discussed. (Posted from mobile.) ##### Share on other sites I have this software Original complete package, so i have to use this. I dont think i can use the newest version with my keycode. My lookuptables are usually 16-bit 512 or max 1024 sometimes, i dont know if this is a issue. And i will do for every function a test, not test just 1 function and say its faster or slower, ofcourse. btw : I dont aim for i7 PCs, i like my game playable for everyone, also those without the best system, i still like old games to, if i reach to something like a Dreamcast game i will be happy enough, i bet there are enough people without a expensive game pc. + this topic costs me lots of points,  time to play screenshot showdown before reaching zero ( will i be banned then lol ? ). Anyways : Happy newyear all! ##### Share on other sites The reason you got downvoted is because you worrie too much about meaningless micro-optimization. Those kind of optimization might had their use in the 80's, maybe even 90's, to a much lesser extend, but are all but useless nowaday. Your game wont run slower because you choose to use an if or a math function, i can garranty you. I have learned programming not on school, i also dont know how to use a debugger. Using a debugger is not hard, and as i always says, it's the programmer's best friend. 
I couldn't do much without a debugger to be honest, all i would do it guess what's wrong, until i ragequit and punch my computer . Seriously tho, this is really something you should learn to use, fast. Edited by Vortez
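For readers landing on this thread, here is a minimal C++ sketch of the two wrapping strategies discussed above (the branch and the fmod call). The 2*pi wrap period and the function names are illustrative, not taken from any poster's code:

```cpp
#include <cmath>

constexpr float kTwoPi = 6.2831853f;

// Branch version: one compare and subtract. Fine as long as the angle
// advances by less than one full turn per update.
inline float wrapWithBranch(float rot, float delta)
{
    rot += delta;
    if (rot >= kTwoPi) rot -= kTwoPi;
    return rot;
}

// fmod version: handles arbitrarily large deltas (e.g. a long frame hitch)
// in a single call, at the cost of a floating-point division internally.
inline float wrapWithFmod(float rot, float delta)
{
    rot = std::fmod(rot + delta, kTwoPi);
    if (rot < 0.0f) rot += kTwoPi; // std::fmod keeps the sign of the first argument
    return rot;
}
```

Either form keeps the accumulated value small, so float precision and trig argument reduction stay accurate; which one is faster depends on the target CPU and its branch predictor, so it is worth profiling rather than guessing.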
2018-01-21 20:58:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36313167214393616, "perplexity": 1795.035739454278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890874.84/warc/CC-MAIN-20180121195145-20180121215145-00395.warc.gz"}
http://spoth24.it/mosfet-distortion-schematic.html
# Mosfet Distortion Schematic A revised guide to the theory and implementation of CMOS analog and digital IC design The fourth edition of CMOS: Circuit Design, Layout, and Simulation is an updated guide to the practical design of both analog and digital integrated circuits. Digital Design: 11: May 12, 2020: S: high side mosfet: Digital Design: 2: Jan 29, 2020: Am I using the right P-channel MOSFET for my high side switch? Power Electronics: 7: Jan 13, 2020: High side switching with a Logic Level n-channel High Voltage Mosfet: Analog & Mixed-Signal Design: 33: Sep 15, 2019. Multisim Tutorial Using Bipolar Transistor Circuit¶ Updated February 10, 2014. Looking at the specs of the current commercial Rotel, NAD, etc amplifiers in the 50-100W range, I see they all have really disappointing distortion figures, around 0. Output short circuit to gnd, to Vs, across the load 2. The Gunslinger employs a newly-designed MOSFET circuit to deliver a wide range of distortion tones from a touch of overdriven grit to full-bore high-gain saturation. Crunch is not a specific type of distortion, but a mild overdrive or distortion. The topology is the same as the hybrid tube/MOSFET line amp. This circuit have Power output is 200 Wrms in 8 ohms or 350 Wrms into 4 ohms. The HYBRID Printed Circuit Board Balanced Inputs for the HYBRID (New) It is a simple design that incorporates interesting ideas such as the Zen project of Nelson Pass, tubes low voltage operation (Erno Borbely, Glass Audio vol. The circuit construction is physically optimized to reduce distortion and retain signal integrity while minimizing reflected power from the circuit and load. This MOSFET switch, with on-resistance, is put in series with a resistor to form an R-MOSFET branch (Fig. See full list on reverb. The edited versions of the Holy Holton AVXXX series amplifier circuits are of high quality PCB designs. The input source is a battery tank of four series-connected LiFePO4 batteries. Emphasis on quantitative evaluations of performance using hand calculations and circuit simulations; intuitive approaches to design. com is an authorized distributor of Lite-On Technology, stocking a wide selection of electronic components and supporting hundreds of reference designs. The distortion is mostly second harmonic. Two BC 558 transistors Q5 and Q4 are wired as pre-amplifier and TIP 142 and TIP 147 together with TIP41 are used for driving the speaker. MOSFET, Q2, turns on, the SW pin is pulled to ground. Capacitor C8 is the input DC decoupling capacitor which blocks DC voltage if any from the input source. #2 - The original had fixed crossover frequencies at 270Hz & 1. Electra Distortion Schematic. Blog Entry Using Transistor as a Switch December 23, 2008 by rwb, under Electronics. 10 Best Deep Burgundy Hair Dye On Black Hair Reviews. 34, and −46. However for most real world nonlinear circuits, small signal results understate the distortion at medium to large waveform amplitudes. From explosive chord work to high velocity leads, the new DOD Gunslinger Mosfet Distortion has the touch sensitivity, string-separation and saturation to do all your dirty work. Electro-Harmonix Muff Fuzz (Opamp Version) Schematic. Sample Clock Bootstrap Circuits (I). to the power MOSFET amplifier is a square wave signal. The MOSFET is replaced by the capacities Cgd, Cgs and a voltage controlled switch. For the enhancement-type MOSFET, the gate to source voltage must be positive and no drain current will flow until V GS exceeds the positive threshold voltage V TN. 
the Belle Starr Overdrive seems to be an Extraction and Extrapolation of the Amp-like MOSFET Boost Circuit from the Prism. Wampler Black '65 - In my opinion, Brian Wampler makes some of the best sounding pedals bar none, and this one is no exception. The Gunslinger employs a newly-designed MOSFET circuit to deliver a wide range of distortion tones from a touch of overdriven grit to full-bore high-gain saturation. His exit functions in Class A, having as active charge the BC308 and resistance 39R. The first stage is a SHO followed by a Marshall style high pass filter made of a 470p cap and a 470K resistor in parallel. This MOSFET set is based on the January 2007 QST article, "High Sensitivity Crystal Set". Digital Design: 11: May 12, 2020: S: high side mosfet: Digital Design: 2: Jan 29, 2020: Am I using the right P-channel MOSFET for my high side switch? Power Electronics: 7: Jan 13, 2020: High side switching with a Logic Level n-channel High Voltage Mosfet: Analog & Mixed-Signal Design: 33: Sep 15, 2019. I may have it set too mellow. 1%, a damping factor greater than 200, input sensitivity of 1. A simple sub-circuit model is then presented with comparisons of the data for both y parameter and fT characteristics. In Figures 12, 13, 14, and 15, the distortion is reduced and feedback resistor R3 is isolated from the drain of the FET. Overdrive is a natural and smooth sound, while a distortion is more rough. Power amp 400W IRFP448 Circuit Amplifier circuit today,We would like to show you for the MOSFET 400 watt amplifier is amplifier on my kW shares the same circuit and basic PCB layout. 1 Ohms using 10pcs 1 Ohm resistors in parallel-I used other general purpose power diode as the protection diode for the inductor. The Gunslinger packs features like separate Low and High tone controls, a wide range of gain, tons of output and your choice of 9 or 12V operation to keep you on target. Capacitor C2 supplies extra charge during ‘switching on’ operations. The circuit is very compact and consists of two subsystems : the control stage and output stage. MOSFET's has a very high input impedance, in consequence the driver stage is a low power circuit incrementing the global amplifier efficient. The same basic tilt EQ is used in quite a few DOD overdrive and distortion pedals. c Cross Over Distortion. We will understand the operation of a MOSFET as a switch by considering a simple example circuit. Ceramic cartridges of capacitance 800-pF to 12,000-pF, with output voltages up to 900-mV can be connected, making it a very versatile circuit for a large range of cartridges currently available. Learn how BIAS Distortion lets you recreate the sounds of any electric guitar gain pedal, and more. The entire circuit can be fit in a Tic-Tac container (see cover photo). This amplifier can be used for practically any application that requires high power, low noise, distortion and excellent sound. When there is an AC input, each MOSFET is conducting for only 50% of the time. 10 Best Deep Burgundy Hair Dye On Black Hair Reviews. Using the Laplace transform, it’s easy to derive the formula for the current. The bass band used true mosfet clipping, the mid band LED clipping, and the treble band uses a single set of 1N4001 silicon clipping diodes. At its core, it is a beautiful Tube Screamer style distortion circuit with more extra features built on top of it than you can shake a stick at. The following schematic shows the initial pair of JFETs used to produce the distortion, when over-driven. External phase compensation. 
It does not add distortion on its own but has plenty of output to push your amp into saturation. It gives a nice, bluesy, soft clipping. Input sensitivity of the circuit is 3V RMS maximum, the distortion factor is 0. MOSFET as a high voltage/high side switch. The circuit consists of an N-Channel MOSFET voltage follower T1 (common Drain) and current source T2 (NPN Darlington). Systematic distortion analysis for MOSFET integrators with use of a new MOSFET model: Published in: IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing [see also Circuits and Systems II: Express Briefs, IEEE Transactions on], 41 (9). Its intermodulation and harmonic distortion products are well below 40 dB down from its maximum power output, and its tendency toward parasitic oscillations is so low that a parasitic plate choke is unnecessary. The Switching loss of MOSFET is lower than the switching loss of IGBT ,when the majority carrier device of superior switching characteristic is used. The entire circuit can be fit in a Tic-Tac container (see cover photo). - Organic non-feedback design, feedback is a loopback of signal which use in almost all solid-state amplification circuit, to bring down the gain and distortion to reason level, but the drawback is muddy up the sound stage and imaging, soundstage will be shallow and plain. The original diodes were 1N34's and the. The circuit construction is physically optimized to reduce distortion and retain signal integrity while minimizing reflected power from the circuit and load. The JFET has much less distortion operated as if it were a MOSFET but it's not a MOSFET and has characteristics which allow for even better performance. Wampler Black '65 - In my opinion, Brian Wampler makes some of the best sounding pedals bar none, and this one is no exception. Sound: Class-A 2SK1058 MOSFET Amplifier. This circuit can be used as an unity-gain buffer, line stage, or power amplifier. Class AB Power Amplifiers. 2 mA Vo 10V Vs SV = 1 VOD 15V RD R1 = 1M ohm VD = Vout Vo BS170 HH vaig VAMPL = 50mV FREQ5 HZ Figure 4. Inspired by Skreddy Top Fuel and a DIY muff project One Knob Gilmour I have been tinkering with a big muff circuit using mosfet transistors for all four gain stages along with mosfets wired as diodes in the second clipping stage. The Guardian-86 is a high-end solid state speaker protection circuit intended for use in mono amplifiers. The two following stages use hand picked jfets. It boasts full-bodied gain at any drive level, courtesy of a circuit based around MOSFET transistors, which feature a dynamic response similar to a cranked amp. Introduction to Operational Amplifiers. : no bad figure even in these days of MOSFET and ICs. Even the high current drawing MOSFET output stage is supplied by a tightly regulated supply with an instantaneous current capability of +/- 50A. This circuit is fed a power from the 5A dual power supply. Mizoguchi et al. This means it has played as loud as possible without distortion for almost 6 hours, in a 3 ohms load __ Designed by Raphaël Assénat. Crunch is not a specific type of distortion, but a mild overdrive or distortion. The Gunslinger is a Mosfet based distortion pedal meant to emulate the distortion created by a classic tube amp. This is not truly accomplished in this circuit. Hi - Adjusts the treble content; Lo - Adjusts the bass content; Drive - Controls the amount of gain. In Figures 12, 13, 14, and 15, the distortion is reduced and feedback resistor R3 is isolated from the drain of the FET. 
During the MOSFET on-time, charge that is stored. Similarly, s-space method did not produce accurate results for circuit performance (e. Integrated schematic editor and simulator The hierarchical schematic editor makes it easy to sketch a circuit. The MOSFET pulse width modulation power supply allows for a clean, distortion-free signal to your speakers. The gate input voltage V GS is taken to an appropriate positive voltage level to turn the device and therefore the lamp load either “ON”, ( V GS = +ve ) or at a zero voltage level that turns. External phase compensation. It uses a TL082 JFet chip with two 2N7000's Mosfet transistors for the clipping circuit giving it a tube amp-like response. When there is an AC input, each MOSFET is conducting for only 50% of the time. 8kHz and 8 Ohms for 7W version of the Follower with DoZ preamp. 66 AbstractPlus | Full Text: PDF(32 KB) IEEE CNF fedcg 3. 1 - IOH Test Circuit Fig. ST-BY function 6. 3 - VOH Test Circuit Fig. The 400W MOSFET-amplifier has four key stages of amplification. Your schematic could look like Fig-1. As shown, the circuit defines a power amplifier capable of delivering about 36W into an 8-ohm load. A compact MOSFET model for distortion analysis in analog circuit design Citation for published version (APA): Langevelde, van, R. In this circuit arrangement an Enhancement-mode N-channel MOSFET is being used to switch a simple lamp "ON" and "OFF" (could also be an LED). The manufacturer told me that the muscle wire needs 3V - 3. 0 July 2002 1999 Cadence Design Systems, Inc. #2 - The original had fixed crossover frequencies at 270Hz & 1. Total harmonic distortion @ 1KHz: 1W 0. Many mosfet power amp designs are made to mimic the characteristics of tube power amps especially in the area of distortion when overdriven. configuration. If the body diode of one MOSFET conducts when the opposing device is on, a short circuit arises resembling the shoot-through condition. Pro Co Rat Schematic. Distortion? Not mentioned. Harmonic Distortion • Advanced Direct Energy MOSFET Ampli˜er Design • Wide Range Linear Circuit • Transformer Stabilizer • 4 Ohm Stable (Low-Impedance Driving Capacity) • Banana Speaker Terminals • CD Keyboard Title Input • Center Loading Mechanism • 10 Custom Filing Modes • CD TEXT • 50 Track Best Selection Memory. When there is an AC input, each MOSFET is conducting for only 50% of the time. circuit is determined by the value of this current. AN4350 Circuit description and design guidelines 34 The VR reflected voltage is selected as a trade-off between efficiency (higher V R means lower switching losses on the flyback Power MOSFET) and the absolute voltage on the primary side switching node (higher VR means higher spike voltage on the Power MOSFET's drain). Obviously the circuit is based around an operational amplifier, which is a differential amplifier with two inputs: inverting and non-inverting. EDIT - Simulate the following circuit. Uncommonly powerful for a tube amplifier at 200 watts per channel, the Premier One was identified by one leading audiophile publication as the best sounding power amplifier ever made to that time. Clipping Section: Switchable between silicon clipping, as in the classic RAT, and a mosfet/germanium clipping section, which is entirely new to the RAT line. Total harmonic distortion is less than 0. The second mode is shown in Figure 3, where the line voltage is greater than half of the output voltage and the MOSFET is turned on in a zero-current switched transition. 
MOSFET switches have a wide range of applications, such as the need for sampling holding circuit (sample-and-hold circuits) or truncated circuit (chopper circuits) design, For example, MOSFET switch can be seen on analog digital converter (A / D converter) or switched capacitor filter (switch-capacitor filter). A certain fet transfer function can be written as: Id = k. Unconditionally stable on capacitive loads. The schematic of the amp is shown in Fig. Obviously the circuit is based around an operational amplifier, which is a differential amplifier with two inputs: inverting and non-inverting. A pair of adjustable-bias MOSFET gain stages take the place of the 12AX7 vacuum tube, otherwise the circuit is true to the original schematic. The Dual Electronics XPR84D 2/1 High Performance Power MOSFET Class D Car Amplifier with 1,000-Watts of Dynamic Peak Power is a force to be reckoned with. The DDD falls on the classic-rock side of distortion, with a presence and mid kick aimed at the stage. The metal-oxide-semiconductor field-effect transistor (MOSFET, MOS-FET, or MOS FET), also known as the metal-oxide-silicon transistor (MOS transistor, or MOS), is a type of insulated-gate field-effect transistor that is fabricated by the controlled oxidation of a semiconductor, typically silicon. The LEDs show circuit function - MOSFET (13 used) circuit health, gain of the amplifiers, oscillator drive, mixer gain, switching functons, and 12V power - it is the easiest superhet receiver to diagnosis and repair ever!. •Specified 50 Volts, 30 MHz Characteristics Output Power = 150 Watts Power Gain = 17 dB (Typ) Efficiency = 45% (Typ) •Superior High Order IMD •IMD(d3) (150 W PEP) — –32 dB (Typ). High stability: Load VSWR Low power control current: 400 µA Thin package: 5 mmt. MOSFET Channel Thermal Noise For MOS devices operating in saturation region the channel noise can be modeled by a current source connected between the drain and source terminals and expressed as, $$\overline{i_{nd}^{\tiny 2} \over \Delta f} = 4kT \gamma g_{do}$$ wh. Too low a Q causes waveform distortion and increased generation of harmonics. Omron's G3VM MOSFET relay family includes more than 160 devices that handle a wide range of voltages and currents. If you are thinking about building your own guitar pedals, a wonderful place to start is by building DIY guitar pedal kits. A versatile MOSFET drive circuit that can deliver everything from subtle, light overdrive to fully saturated distortion forms the pedal’s core. 10 Watt Portable Guitar Amp With Distortion 7 Steps With Single Chip 25w Amplifier Project 72 500w Rms Power Amplifier Based Mosfet Electronic Schematic Diagram. The circuit differs slightly from the one above it, as I was exploring different ways to reduce distortion. It gives a nice, bluesy, soft clipping. with PCB, Frequency:10Hz -150K. , Lubbers, W. The JFET has much less distortion operated as if it were a MOSFET but it's not a MOSFET and has characteristics which allow for even better performance. Characteristics of MOSFET Question Based on MOSFET Biasing 00:12:01 00:11:23 00:10:21 Chapter 04 Current Mirror Circuit Lecture 01 Lecture 02 Lecture 03 Lecture 04 Lecture 05 Lecture 06 Lecture 07 Lecture 08 Introduction to CMC Concept of CMC CMC for High Value of Beta MOSFET CMC Widlar CMC Wilson Current Mirror Circuit Multiple-Copy CMC. parts of the circuit match. 10 VMOS and UMOS Power and MOSFETs 410 6. It is written such that no prior Multisim knowledge is required. 
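The channel thermal noise expression quoted above can be evaluated directly. The sketch below plugs assumed values of g_d0, γ and temperature into 4kTγ·g_d0 and integrates over an assumed bandwidth to get an rms noise current; none of the numbers come from a specific device.

```python
import math

def channel_noise_current_density(g_d0, gamma=2/3, temp_k=300.0):
    """Spectral density from the quoted expression: i_nd^2/Δf = 4·k·T·γ·g_d0 (A²/Hz)."""
    k_boltz = 1.380649e-23
    return 4 * k_boltz * temp_k * gamma * g_d0

g_d0 = 5e-3                                   # assumed 5 mS zero-Vds channel conductance
psd = channel_noise_current_density(g_d0)
bandwidth = 1e6                               # assumed 1 MHz noise bandwidth
i_rms = math.sqrt(psd * bandwidth)
print(f"i_nd²/Δf ≈ {psd:.2e} A²/Hz, integrated rms over 1 MHz ≈ {i_rms * 1e9:.1f} nA")
```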
simulate this circuit – Schematic created using CircuitLab. This is a design circuit for Low Distortion Crystal Oscillator circuit. Further fine tuning of the various controls may be necessary to obtain best results. The second mode is shown in Figure 3, where the line voltage is greater than half of the output voltage and the MOSFET is turned on in a zero-current switched transition. Many distortion devices tend to over-hype attack transients, giving each note the same "chirp" regardless of what you're doing with the pick. Both tubes and transistors amplify signals by passing current from one side of the device to the other, sculpting it along the way to the same shape as a much weaker input signal. [1] The example at right shows how a load line is used to determine the current and voltage in a simple diode circuit. His exit functions in Class A, having as active charge the BC308 and resistance 39R. Electro-Harmonix Big Muff Pi Schematic. The Mosfet Booster has a 10M input impedance that will not load down any guitar that is plugged into it, and the moderately low output Z is capable of driving almost any circuit that follows. It overestimated by large amounts, between 40%-80% in our experiments, the impact of MOSFET mismatch on circuit performance. Wampler Black '65 - In my opinion, Brian Wampler makes some of the best sounding pedals bar none, and this one is no exception. Hi – Adjusts the treble content; Lo – Adjusts the bass content; Drive – Controls the amount of gain. , 45V max output. The circuit construction is physically optimized to reduce distortion and retain signal integrity while minimizing reflected power from the circuit and load. Electro-Harmonix Muff Fuzz (Opamp Version) Schematic. The input DC power is then converted into the output AC power with 110Vrms and 60Hz. Distortion has long been integral to the sound of the electric guitar in rock and roll music, and is important to other music genres such as electric blues and jazz fusion. Find the input values V1 and V2, where V1 produces VOUT1 of 4 volts (which implies a Q1 drain current of 1 mA) and V2 produces VOUT2 of -4 volts (Q2 drain current of -1 mA). When three-cycle 26 dB m input power was applied, the second, third, fourth, and fifth harmonic distortion components of a 75 MHz transducer driven by the HVPA with power MOSFET linearizer (−48. I am using lithium ion battery to power up the. Each device consists of an Aluminum Gallium Arsenide (AlGaAs) Light-Emitting Diode (LED) optically coupled to an integrated circuit with a high-speed driver for push-pull MOSFET output stage. Ibanez MT10 Mostortion - Mosfet Distortion: Ibanez 10 series » distortion pedal ». Excellent 2 Ohm driving capability 3. PF0030 MOSFET Power Amplifier. I am writing this instructable as i myself experienced a lot of. Capacitor C2 supplies extra charge during ‘switching on’ operations. There' s only one way for Clarion' s main units to deliver unyielding power output and linearity: MOS-FET amplification, or Metal Oxide Semiconductor Field Effect Transistor amplification. The gate input voltage V GS is taken to an appropriate positive voltage level to turn the device and therefore the lamp load either "ON", ( V GS = +ve ) or at a zero voltage level that turns. The characteristic curve (curved line),. configuration. In connection bridge as in the circuit, we can output 18W at 4 Ohm load, with 0. This amp might be the best-designed piece of electronics in your vehicle- your WHOLE vehicle. 
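Several of the figures quoted above are harmonic-level and THD measurements. The sketch below shows, with assumed signal parameters, one common way such numbers are estimated from a sampled waveform: pick off the fundamental and harmonic bins of an FFT and form the ratio. It is a generic illustration, not the measurement setup behind the quoted figures.

```python
import numpy as np

def thd_percent(signal, fs, f0, n_harmonics=5):
    """Estimate THD from an FFT: harmonic amplitudes relative to the fundamental."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    amp = lambda f: spec[np.argmin(np.abs(freqs - f))]   # amplitude of the nearest bin
    harmonics = [amp(k * f0) for k in range(2, n_harmonics + 1)]
    return 100 * np.sqrt(sum(h ** 2 for h in harmonics)) / amp(f0)

fs, f0 = 48_000, 1_000                        # assumed sample rate and 1 kHz test tone
t = np.arange(0, 0.5, 1 / fs)
clean = np.sin(2 * np.pi * f0 * t)
clipped = np.clip(1.5 * clean, -1.0, 1.0)     # hard-clipped copy for comparison
print(f"THD clean ≈ {thd_percent(clean, fs, f0):.4f} %, clipped ≈ {thd_percent(clipped, fs, f0):.2f} %")
```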
Looking at the specs of the current commercial Rotel, NAD, etc amplifiers in the 50-100W range, I see they all have really disappointing distortion figures, around 0. Before you test it, you first have to determine if it is. ISSN 1057-7130. With the right schematic drawing program, it is a simple matter to swap out one MOSFET for another component and compare the performance of each circuit. •Suitable for low frequency, low Q applications •Significant improvement in linearity compared to MOSFET-C •Needs tuning. The “Magnitude” section of Life Pedal is a simple all-discrete MOSFET booster designed to blast your preamp tubes and drive them wild. Background. 8V 0 วงจรแหล่งจ่ายไฟ1. The Pro Co Rat circuit can be broken down into four simpler blocks: Power Supply, Clipper Amplifier, Tone Control and Output Stage: The design is based on the LM308 single op-amp. Total harmonic distortion is less than 0. The 400W MOSFET-amplifier has four key stages of amplification. Improved small-signal equivalent circuit model and large-signal state equation. 1% Unconditionally stable on capacitive loads. You only need to simulate 1 cycle. It is clear, however, that if you want to know what the performance will be for a given device in a circuit, you would best be measuring it yourself, not only because the manufacturer is not likely to be duplicating your exact circuit, but also because there is often wide variation between devices. This is not meant to discourage you from trying the other. 2Kohms D2= 6. 1 and determine the CMRR (in dB) of the amplifier. EDIT - Simulate the following circuit. Plotting MOSFET Characteristic is usually the first experiment taken up by the students. Theory Behind Power Amplifier Circuit: Two important aspects of this circuit are class AB amplifiers and class A voltage amplifiers. Harmonic distortion 0. PF0030 MOSFET Power Amplifier. A Tetrode configured guitar amp will draw current through the screens as the amp is driven into distortion. Comparing the curves of the 12AX7 and 6DJ8 clearly revealed the superiority of the 6DJ8 for constancy of mu. A little further down the list are accessories like the Fulltone Gold Standard Cables, power adapters and apparel. I first seriously considered this question of simple distortion in 1978 in a single triode stage and realized that distortion is due simply to the change of mu with signal. , one octave. The Schematic Diagram is a basic MOSFET amplifier. The single-ended Class A output stage is “second harmonic” in character, and it uses about half the feedback of a comparable MOSFET circuit but with half the distortion and twice the bandwidth. pdf version of the real thing, including power supply decoupling and gate protection zeners. Only the third harmonic has any significance at -90dB. The edited versions of the Holy Holton AVXXX series amplifier circuits are of high quality PCB designs. 12 MESFETs 412 6. Unlike a standard bipolar transistor, which depends on current, a MOSFET depends on voltage. 2019 - 200 watt mosfet amplifier circuit up to 300 watt on Class G. 2) If total output voltage v O()t becomes too large, the MOSFET will enter cutoff. The high operating voltage range of the output stage provides the drive voltages required by gate-controlled devices. 1, 1998) and the Zen output stage with differential power supply (Reinhard Hoffmann, Audio Electronics num. 101 Spring 2020 Lecture 5 4 out in n in n out in out n g e e in out v v v v v v v v v v v. 
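One of the exercise fragments above asks for the CMRR of an amplifier in dB. As a small aid, the snippet below simply evaluates the usual definition CMRR = 20·log10|Ad/Acm| with made-up differential and common-mode gains.

```python
import math

# Evaluates the usual definition CMRR(dB) = 20·log10|Ad/Acm| with assumed gain values.
def cmrr_db(differential_gain, common_mode_gain):
    return 20 * math.log10(abs(differential_gain / common_mode_gain))

print(f"CMRR ≈ {cmrr_db(differential_gain=200.0, common_mode_gain=0.05):.1f} dB")
```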
Built on the even clipping style of a Mosfet circuit, the DOD Boneshaker offers touch sensitivity and reactive playing dynamics with a two-band EQ and controls for gain and level. 20080231359: POWER DIVIDER/COMBINER AND POWER DIVIDING/COMBINING METHOD USING THE SAME: September, 2008: Tanimoto: 20030011428: Feedforward amplifier: January, 2003: Yamakawa et al. c Cross Over Distortion. 1200w HF Linear Amplifier board MOSFET 4x: 200W MOSFET Amplifier KIT: 2 x QB4/1100 HF Power Amplifier, SP5GJN: 1KW LDMOS for 144 MHz. The distortion curves for the circuit shown in the patent cover sheet are: (A) the intrinsic distortion of each half of the real example circuit, (B) the distortion of the differential output lowered due to the intrinsic matching between the circuits, (C) the distortion of each half with Su-Sy applied, and (D) the differential distortion with. See full list on elprocus. Why Circuit Simulation? Differential-Amplifier Stage Vin1 Vin2 Vout1 Vout2 VDD N1 N2 N3 N4 0 DVin (V) V o u t 1 S. 9kΩ represent the mismatch). Electro-Harmonix LPB-1 Booster Schematic. The FX10 is a more complicated circuit with FET switches incorporated. Though some people will tell you that stage-to-stage distortion cancellation in a push-pull amp does not work, I can assure you it does. In this case, it is more convenient to use the Norton equivalence. Smaller and more efficient than conventional power supplies, a MOS-FET amplification circuit delivers power with less distortion and zero On/Off switching noise. Even if simple the circuit, plirej' all condition, regarding the distortion and the response of frequency. Mosfet power amplifier circuit diagram pdf. A versatile MOSFET drive circuit that can deliver everything from subtle, light overdrive to fully saturated distortion forms the pedal’s core. Note that not all of these schematics are guaranteed to work. Hi all, I am using a muscle wire based actuator that I am trying to control with an Arduino. A certain fet transfer function can be written as: Id = k. Here is the schematic for the Distortion III. The distortion is produced using a variable gain circuit with diodes clipping the waveform. Some wear from use. The two following stages use hand picked jfets. 00 Page 1 of 10 Dec 05, 2011 Preliminary Data Sheet μPA2375T1P N-CHANNEL MOSFET FOR SWITCHING DESCRIPTION The μPA2375T1P is a switching device, which can be driven directly by a 2. Hi-Fi class distortion 4. Background. Forge growling overdriven bass lines with a MOSFET distortion circuit. 7 Vac max to give VRL max 9. This is a Class B amplifier, or push-pull follower. 1, but the crossover distortion created by the non-linear section of the transistor’s input characteristic curve, near to cut off in class B is overcome. 1, using connections as short as possible, and with the voltage of PS#1 set to zero for now. Dave's Guitar Shop Dave's Guitar Shop is an authorized retailer of guitars and guitar accessories from Fender, Gibson, Gretsch, Hamer, and more!. This circuit have Power output is 200 Wrms in 8 ohms or 350 Wrms into 4 ohms. 02% total harmonic distortion at 30 watts with a ±25v power supply into 8 ohms. MOSFET Driver Vishay Semiconductors Notes (1) This load condition approximates the gate load of a 1200 V/25 A IGBT. The MOSFET Driver is based on the legendary BK Butler Tube Driver. 1 Ohms using 10pcs 1 Ohm resistors in parallel-I used other general purpose power diode as the protection diode for the inductor. MOSFET, Q2, turns on, the SW pin is pulled to ground. 
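The crossover distortion described above for a class B push-pull follower can be pictured with an idealized dead-band model: each output device only conducts once the drive exceeds its turn-on voltage, so a band around zero is lost. The sketch below uses an assumed 0.6 V turn-on and a 2 V peak sine purely for illustration; biasing the pair just above cut-off (class AB), as the text notes, shrinks this dead band.

```python
import numpy as np

def class_b_output(v_in, v_on=0.6):
    """Idealised class B pair: each device conducts only beyond its turn-on voltage,
    leaving a dead band around zero (the crossover distortion discussed above)."""
    return np.sign(v_in) * np.maximum(np.abs(v_in) - v_on, 0.0)

t = np.linspace(0.0, 1e-3, 1000)
v_in = 2.0 * np.sin(2 * np.pi * 1000 * t)        # 2 V peak, 1 kHz (assumed drive level)
v_out = class_b_output(v_in)
dead_fraction = np.mean(np.abs(v_in) < 0.6)      # portion of the cycle lost to the dead band
print(f"fraction of the cycle spent in the crossover dead band: {dead_fraction:.1%}")
print(f"worst-case error introduced: {np.max(np.abs(v_in - v_out)):.2f} V")
```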
“The Gunslinger’s MOSFET circuit delivers saturated tones and touch sensitivity normally associated with tubes,” said Tom Cram, Marketing Manager, DigiTech. Even an ideal FET will produce more distortion than a hot wire, but there are things we can do to minimize it. Many distortion devices tend to over-hype attack transients, giving each note the same "chirp" regardless of what you're doing with the pick. AN4350 Circuit description and design guidelines 34 The VR reflected voltage is selected as a trade-off between efficiency (higher V R means lower switching losses on the flyback Power MOSFET) and the absolute voltage on the primary side switching node (higher VR means higher spike voltage on the Power MOSFET's drain). Capacitor C2 supplies extra charge during ‘switching on’ operations. 6V and 600-700mA for 0. 02% total harmonic distortion at 30 watts with a ±25v power supply into 8 ohms. Examples would be subwoofer amplifier should FOH stage Amplifiers, surround a canal a very powerful sound amplifier, etc. 2 volts (200 W / 8 ohms). Integrated schematic editor and simulator The hierarchical schematic editor makes it easy to sketch a circuit. Hook up the circuit of Fig. High stability: Load VSWR Low power control current: 400 µA Thin package: 5 mmt. An adequate MOSFET model for distortion analysis should not only provide accurate current-voltage characteristics, but should also exhibit good agreement with higher order derivatives of the drain current, which determine the main contributions to higher-order harmonics [1-3]. The Gunslinger employs a newly-designed MOSFET circuit to deliver a wide range of distortion tones from a touch of overdriven grit to full-bore high-gain saturation. The input source is a battery tank of four series-connected LiFePO4 batteries. This circuit is fed a power from the 5A dual power supply. Creation of new library and cellview is… Read more. Even if simple the circuit, plirej' all condition, regarding the distortion and the response of frequency. I took a basic Fuzzface-type circuit and modified the component values so that a mosfet transistor could be used for Q2. The design is as simple as it could be and the components are easily available. Sound: Class-A 2SK1058 MOSFET Amplifier. 100 Watt Amp - Here is a simple and cheap amp to make. For schematics of the AEM6000, you’ll have to visit the library. A MOSFET-RESISTOR INVERTER 1. 101 Spring 2020 Lecture 5 2 Three Stage Amplifier – Crossover Distortion Hole Feedback Crossover Distortion Analysis 6. That's only the beginning. It uses a 2N7000 mosFET in series with a BAT41 for "one half" of the signal, with a 2N7000 mosFET and two BAT41's all in series for the other halfWhat you end up with in the ZD is a forward voltage pair of roughly 1000mV/1400mV. The prototype was analyzed only for signal frequencies around 1 kHz. From high velocity leads to explosive chord work, it possesses the string-separation, saturation, and touch sensitivity to perform all your dirty work. When there is no input, neither MOSFET is conducting. Electro-Harmonix LPB-1 Booster Schematic. The “Magnitude” section of Life Pedal is a simple all-discrete MOSFET booster designed to blast your preamp tubes and drive them wild. Design Your Own Distortion By Rikupetteri Salminen History of Distortions. The only real difference is the number of output devices to the device. 0 July 2002 1999 Cadence Design Systems, Inc. Recent Articles. A certain fet transfer function can be written as: Id = k. 
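The FET transfer function quoted above is cut off after "Id = k". A common completion for a FET in saturation is the square law Id = k·(VGS − VT)², and the sketch below assumes that form, with made-up k and VT, only to show how the drain current grows with gate drive.

```python
# The quoted transfer function is cut off after "Id = k"; the square law
# Id = k·(Vgs − Vt)² is assumed here purely for illustration, with made-up k and Vt.
def drain_current(v_gs, k=2e-3, v_t=2.0):
    """Square-law drain current in saturation (A); zero below threshold."""
    return k * max(v_gs - v_t, 0.0) ** 2

for v_gs in (1.5, 3.0, 4.0, 5.0):
    print(f"Vgs = {v_gs:.1f} V  ->  Id = {drain_current(v_gs) * 1e3:.2f} mA")
```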
9-5: MOSFET Analogue Switching The analogue switch A basic n-channel MOSFET analog switch is shown in the figure. In this paper, a single phase dc to ac inverter with a low cost driver circuit was developed. I may have it set too mellow. This circuit ensures that the fan speed control depending on temperature. ibanez mt10 mostortion mosfet distortion schematic [88 KB] ibanez od855 overdrive ii schematic [37 KB] ibanez pc10 prime dual chorus schematic [147 KB] ibanez pd7 phat hed bass overdrive schematic [49 KB] ibanez ph7 phaser schematic [39 KB] ibanez ph10 bi mode phaser schematic [89 KB] ibanez pl5 powerlead schematic [105 KB]. 5 V power source. Mizoguchi et al. MOSFET Overdrive Distortion Guitar Effects Pedal What's Included : Electro-Harmonix Glove MOSFET Overdrive Distortion Guitar Effects Pedal Condition : Previously owned. And wtf is a 1M resistor doing across the bipolar totem-pole mosfet, nothing much I suspect. They are perfectly suited for Automated Test Equipment, Medical Equipment, Instrumentation, Security Equipment, Automated Meter Reading, Automotive Diagnostic Equipment and Communications. Distortion Footswitch - JCM800 Channel. The Boss MT-2 Metal Zone Distortion Pedal gives you tube warmth and crunch with your super sustain. Harmonic distortion 0. 100W mosfet power amplifier circuit. 10 Best Mosfet Distortion Reviews. A little further down the list are accessories like the Fulltone Gold Standard Cables, power adapters and apparel. The Simplified DRF1200 Circuit Diagram is illustrated above. This simplified version of the circuit object of application, present dual power supply batteries (B1, B2) or equivalent, six active devices of amplification (Q1, Q2, Q3, Q4, Q5, Q6) including bipolar transistors (BJT) or unipolar transistors (JFET, VFET, MOSFET, SIT and the like), one constant current source (I), two trimmers (VR1, VR2) two. 101 Spring 2020 Lecture 5 4 out in n in n out in out n g e e in out v v v v v v v v v v v. Very inductive loads 3. Spectre Circuit Simulator Reference July 2002 1 Product Version 5. VINTAGE DISTORTION modified Sovtek Big Muff - Behringer came out with a visually blatant, China made knockoff of the BMP in 2005, called the VD-1 Vintage Distortion. A pair of adjustable-bias MOSFET gain stages take the place of the 12AX7 vacuum tube, otherwise the circuit is true to the original schematic. MJR7-Mk3 Mosfet Audio Power Amplifier 70W output power at 60v. The distortion is mostly second harmonic. DOD Gunslinger Mosfet Distortion Features: Mosfet Distortion Circuit Independent Gain, Low, High and Level Controls True Bypass 9 to 18V Operation Crisp Blue Status LED Aluminum Chassis Category. Comment on the distortion, e. Protections: 1. The gate coupling capacity C4 blocks any DC level the square wave input signal may have. 66 AbstractPlus | Full Text: PDF(32 KB) IEEE CNF fedcg 3. The metal-oxide-semiconductor field-effect transistor (MOSFET, MOS-FET, or MOS FET), also known as the metal-oxide-silicon transistor (MOS transistor, or MOS), is a type of insulated-gate field-effect transistor that is fabricated by the controlled oxidation of a semiconductor, typically silicon. A duty-cycle controlled clock is applied to the MOSFET gate to control the average resistance. The drive amplifiers can be accessed from the output of existing amplifiers. 14 Summary 414 6. 
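A rough way to see the duty-cycle idea above: if the switch alternates between Rds(on) and an effectively open circuit, a circuit that is slow compared with the clock sees an average resistance of about Rds(on) divided by the duty cycle (off-state leakage ignored). The on-resistance below is an assumed value.

```python
# Average resistance of a chopped MOSFET switch, as seen by a slow circuit.
def average_resistance(r_ds_on, duty):
    return r_ds_on / duty

r_ds_on = 0.5   # ohms, assumed
for duty in (1.0, 0.5, 0.25, 0.1):
    print(f"duty = {duty:4.2f}  ->  average resistance ≈ {average_resistance(r_ds_on, duty):6.2f} Ω")
```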
When VGS is below VT, the MOSFET is cut-off and acts as an open circuit, therefore no current goes through the resistor and all the voltage from the supply is dropped across the transistor as shown in Fig. The circuit was designed and sold as a card by a purveyor of surplus components but, even using mostly manufacturer's rejected transistors, we managed to get about 0. Most of microcontrollers work within 5 volt environment and the I/O port can only handle current up to 20mA; therefore if we want to attach the microcontroller’s I/O port to different voltage level circuit or to drive devices with more than 20mA; we need to use the interface circuit. The same basic tilt EQ is used in quite a few DOD overdrive and distortion pedals. This circuit is fed a power from the 5A dual power supply. The characteristic curve (curved line),. Design Your Own Distortion A non-inverting preamp is a circuit where the input is connected to the non-inverting (+) input of the opamp and feedback loop is between the inverted input (-) and the output. The Gunslinger packs features like separate Low and High tone controls, a wide range of gain and tons of output as well as your choice of 9V or 12V operation to keep you in control. From explosive chord work to high velocity leads, the new DOD Gunslinger Mosfet Distortion has the touch sensitivity, string-separation and saturation to do all your dirty work. I may have it set too mellow. I use mainly single coil pups and they’re useful to up the gain (without effecting the tone too much) of pedals that prefer humbuckers. Excellent 2 Ohm driving capability 3. For a PMOS, VT is negative (VT -3V). This insulates the gate terminal from the source and drain channel. Silicon Transistors. i think its a 1N4007 diode-I didn't connect the current limiter, it was messing with. Each device consists of an Aluminum Gallium Arsenide (AlGaAs) Light-Emitting Diode (LED) optically coupled to an integrated circuit with a high-speed driver for push-pull MOSFET output stage. The BUZ901 is an excellent lateral MOSFET that beats the pants off anything I have seen from International Rectifier. As a result, the average listening on a very low distortion is provided. The input DC power is then converted into the output AC power with 110Vrms and 60Hz. 101 Spring 2020 Lecture 5 2 Three Stage Amplifier – Crossover Distortion Hole Feedback Crossover Distortion Analysis 6. 7 dB, compared with that of a conventional MOSFET-based bootstrapped switch. We discuss what it takes to make a good model for circuit simulation (among other things: a lot of caution and care, and about 20,000 lines of code!), and how you. The 1kw design has 20 O/P devices, while the AV amplifier has 14 O/P devices. With the right schematic drawing program, it is a simple matter to swap out one MOSFET for another component and compare the performance of each circuit. 3 - VOH Test Circuit Fig. Their ruggedness and self-protective capability enables the DH-120 to deliver very high currents into very low impedances, even into a short circuit. The current paths of the boost converter are show in Figure 8, while Figure 9 shows noteworthy waveforms in the boost power stage. Colorsound (Sola Sound) Fuzz (Stellan's Schematics); Overdriver - OK!I like this pedal! Initially designed to do clean boost. MOSFET body diodes generally have a long reverse recovery time compared to the performance of the MOSFETs themselves. An RF model could well predict the distortion behavior of MOSFETs if it can. 
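As a sizing sketch for the interface circuit described above, the snippet below assumes a logic-level MOSFET switching a 12 V, 24 Ω load from a 5 V I/O pin: the load current flows through the MOSFET rather than the pin, and the microcontroller only supplies brief gate-charging current. Every component value is a hypothetical example.

```python
# Low-side MOSFET "interface circuit" sizing sketch; all values are assumptions.
v_supply = 12.0     # load supply, V
r_load = 24.0       # load resistance, Ω
r_ds_on = 0.05      # MOSFET on-resistance at Vgs = 5 V (logic-level part assumed), Ω

i_load = v_supply / (r_load + r_ds_on)
p_loss = i_load ** 2 * r_ds_on
print(f"load current ≈ {i_load * 1e3:.0f} mA  (well above the 20 mA an I/O pin could supply)")
print(f"MOSFET conduction loss ≈ {p_loss * 1e3:.1f} mW")
```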
Wampler Black '65 - In my opinion, Brian Wampler makes some of the best sounding pedals bar none, and this one is no exception. Though some people will tell you that stage-to-stage distortion cancellation in a push-pull amp does not work, I can assure you it does. but it has a very fine grain and delicately textured quality. configuration. See full list on reverb. Other BJT limitation are associated with the contribution of electrons and holes to conduction. Pro Co Rat Schematic. I have added notes in red to the schematics believed to have errors. Less than 0. The two following stages use hand picked jfets. MJR7-Mk3 Mosfet Audio Power Amplifier 70W output power at 60v. As a JFET is a device that controls the amount of current going through it via an input voltage, the first application circuit is obvious: a switch. At 9v with the same setting, there was a noticeable buzzing to the distortion that I found unpleasant. However, few actually deliver on that promise. (2-3) The experiments in this exercise will use Circuit #2 constructed in In-Lab Exercise 2-2 to explore the limits of saturation operation of the amplifier by observing clipping of an output waveform and by listening to distortion in music output. in 12inch sub using 240 and 9240 MOSFET circuit using 27-0-27 transformer…this is the basic circuits used in 5. The amplifier will take 88W from the power supply all the time. Matsumoto et al. 20080231359: POWER DIVIDER/COMBINER AND POWER DIVIDING/COMBINING METHOD USING THE SAME: September, 2008: Tanimoto: 20030011428: Feedforward amplifier: January, 2003: Yamakawa et al. A guitar pedal kit allows you to understand the basics behind building guitar pedals, without having to have a vast knowledge of how circuitry and effect pedals work. However for most real world nonlinear circuits, small signal results understate the distortion at medium to large waveform amplitudes. Whether your Spice program uses a "small signal" or a "large signal" method to calculate distortion, the following limitations apply. Hi – Adjusts the treble content; Lo – Adjusts the bass content; Drive – Controls the amount of gain. CHAOS EXXTREME MOSFET Amplifier User’s Manual - page 20 distortion. NOT AVAILABLE AT GYUITAR PEDAL SHOPPES'S PLYMOUTH, MA LOCATION* From explosive chord work to high velocity leads, the new DOD Gunslinger Mosfet Distortion has the touch sensitivity, string-separation and saturation to do all your dirty work. This MOSFET set is based on the January 2007 QST article, "High Sensitivity Crystal Set". 1, 1998) and the Zen output stage with differential power supply (Reinhard Hoffmann, Audio Electronics num. MOSFET Amplifier Distortion (contd. 2 mA Vo 10V Vs SV = 1 VOD 15V RD R1 = 1M ohm VD = Vout Vo BS170 HH vaig VAMPL = 50mV FREQ5 HZ Figure 4. In this case, the equivalent base resistance of the circuit is 2 || 2 2 2 R R B =R R =. Related Post – 100W MOSFET Power Amplifier Circuit. The board also includes an amplifier circuit for the fan control. The included LED drive circuit is comprised of a high voltage linear regulator and TPS92411 floating switches arranged to toggle three segments of a high-voltage LED string. Orland Park, IL. Crunch is not a specific type of distortion, but a mild overdrive or distortion. The distortion is produced using a variable gain circuit with diodes clipping the waveform. See full list on sweetwater. This amplifier circuit can be used as universal HiFi amplifier and guitar amplifiers, etc. Excellent 2 Ohm driving capability 3. 
We are going to use this circuit diagram. Overdrive is a natural and smooth sound, while a distortion is more rough. - Mosfet Distortion Circuit - Independent Gain, Low, High and Level Controls - True Bypass - 9 to 18V Operation - Crisp Blue Status LED - Aluminum Chassis zZounds is an authorized dealer of DOD products. Power amp 400W IRFP448 Circuit Amplifier circuit today,We would like to show you for the MOSFET 400 watt amplifier is amplifier on my kW shares the same circuit and basic PCB layout. in 12inch sub using 240 and 9240 MOSFET circuit using 27-0-27 transformer…this is the basic circuits used in 5. 0𝑉,the MOSFET will at times enter cutoff, and even more distortion. In-Ga-As quantum-well MOSFETs scaling study is carried out in [2] by considering excess off-state current. MOSFET output power stage 2. A 100W MOSFET power amplifier circuit based on IRFP240 and IRFP9240 MOSFETs is shown here. And wtf is a 1M resistor doing across the bipolar totem-pole mosfet, nothing much I suspect. The FX10 is a more complicated circuit with FET switches incorporated. 10 kΩ and 9. step2: Apply a 1Vdc (or 1-kHz sinusoidal signal of 2V peak-to-peak) to both inputs. [1] The example at right shows how a load line is used to determine the current and voltage in a simple diode circuit. 6V and 600-700mA for 0. It uses a TL082 JFet chip with two 2N7000's Mosfet transistors for the clipping circuit giving it a tube amp-like response. The Schematic Diagram is a basic MOSFET amplifier. Author: Groenewold, G. The input DC power is then converted into the output AC power with 110Vrms and 60Hz. : no bad figure even in these days of MOSFET and ICs. An amplifier electronic amplifier or informally amp is an electronic device that can increase the power of a signal a time varying voltage or current. Two are used for stereo. “The Gunslinger’s MOSFET circuit delivers saturated tones and touch sensitivity normally associated with tubes,” said Tom Cram, Marketing Manager, DigiTech. The power amplifier circuit SOCL 504, 500-2000 Watt is one of a high power amplifier power circuit that can be used for field or outdor power amplifier so as to enable rental of your sound system either in high position in tweeter or line of midle or bass array on 15 inch or it could be on a sub low at 18 inches. In this case the MOSFET drain voltage resonates downwards to a valley, where it is switched on. The Gunslinger employs a newly-designed MOSFET circuit to deliver a wide range of distortion tones from a touch of overdriven grit to full-bore high-gain saturation. Ir2110 application circuit. In order to operate a MOSFET as a switch, it must be operated in cut-off and linear (or triode) region. The Mosfet Booster has a 10M input impedance that will not load down any guitar that is plugged into it, and the moderately low output Z is capable of driving almost any circuit that follows. Introduction to Operational Amplifiers. An amplifier electronic amplifier or informally amp is an. A pair of adjustable-bias MOSFET gain stages take the place of the 12AX7 vacuum tube, otherwise the circuit is true to the original schematic. The main components which a sample and hold circuit involves is an N-channel Enhancement type MOSFET, a capacitor to store and hold the electric charge and a high precision operational amplifier. I first seriously considered this question of simple distortion in 1978 in a single triode stage and realized that distortion is due simply to the change of mu with signal. 
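Two numbers usually dominate the behaviour of the MOSFET-plus-capacitor sample-and-hold described above: the acquisition time constant Ron·Chold while sampling, and the droop rate Ileak/Chold while holding. The sketch below evaluates both for assumed component values.

```python
# Back-of-the-envelope sample-and-hold figures; component values are assumptions.
c_hold = 1e-9      # hold capacitor, F
r_on = 100.0       # switch on-resistance, Ω
i_leak = 50e-12    # total leakage into the hold node, A

tau = r_on * c_hold                 # acquisition time constant while the switch is closed
droop = i_leak / c_hold             # droop rate while holding, V/s
print(f"acquisition τ ≈ {tau * 1e9:.0f} ns (about {10 * tau * 1e9:.0f} ns to settle within ~0.005%)")
print(f"droop while holding ≈ {droop * 1e3:.0f} mV/s")
```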
Gain an access to analog electronic circuit video tutorials through Electrodiction, an online platform to learn more about electronics. Clipping Section: Switchable between silicon clipping, as in the classic RAT2, and a mosfet/germanium clipping section, which is entirely new to the RAT line. The RF MOSFET Line N–Channel Enhancement–Mode Designed primarily for linear large–signal output stages up to150 MHz frequency range. Our equivalent circuit of a simulated MOSFET is even simpler than the MOSFET simulation model we used in Figure 1. The Mosfet version features a switchable Mosfet circuit for a fatter sound and more growl. 5 V I S I 1 I 1 Let us consider, we are using 5V supply voltage (V1). I've been using 2N7000's just because I have tons of them but other mosfets like BS170 should work as well. Omron's G3VM MOSFET relay family includes more than 160 devices that handle a wide range of voltages and currents. Prepare a table showing calculated, simulated and measured results. The circuit consists of several fairly standard JFET common-source amplifier "stages" cascaded one after the other. 3V output from its I/O port. simulate this circuit – Schematic created using CircuitLab. The Drain is fed from the B+560V through a 2k2 5 Watt wire wound resistor. Integrated schematic editor and simulator The hierarchical schematic editor makes it easy to sketch a circuit. The minimum distortion with BUZ900P is at the bias point 30V DC and 900mA at any output level. The schematic editor features stepping, scaling, panning, multiple-object selection, three axes. The high frequency (HF) noise and distortion modeling issues are also discussed by showing the. Latest product release: G3VM-21MT The world’s first MOSFET relay module with T-type circuit structure. “The Gunslinger’s MOSFET circuit delivers saturated tones and touch sensitivity normally associated with tubes,” said Tom Cram, Marketing Manager, DigiTech. 2 volts (200 W / 8 ohms). The signal at the drain is connected to the source when the MOSFET is turned on by a positive VGS and is disconnected when VGS is 0, as indicated. 3: Inverting Mode Op Amp Stage Eq. The Gunslinger packs features like separate Low and High tone controls, a wide range of gain, tons of output, and your choice of 9V or 18V operation. Electro-Harmonix LPB-1 Booster Schematic. The Mosfet circuit exhibits a minimum of this tendency. Similarly, s-space method did not produce accurate results for circuit performance (e. The Gunslinger packs features like separate Low and High tone controls, a wide range of gain and tons of output as well as your choice of 9V or 12V operation to keep you in control. There is no voltage gain, but it amplifies current 100x. Here's a listing of Fulltone Standard Line products, including the new Plimsoul as well as the legendary FullDrive-Mosfet, Mini DejáVibe and Clyde Wah. mosfet vs jfet JFETs can only be operated in the depletion mode whereas MOSFETs can be operated in either depletion or in enhancement mode. They are words describing the type of distortion an amp or an effect gives out. Examples would be subwoofer amplifier should FOH stage Amplifiers, surround a canal a very powerful sound amplifier, etc. It does its thing into pretty much any amp, at any volume. The two following stages use hand picked jfets. O1 is connected as a phase splitter with anti-phase signals·’ appearing at its collector and emitter. So we ask ourselves: what component has an ON voltage close to a VBE drop? The answer is the PN juntion diode. 
The input DC power is then converted into the output AC power with 110Vrms and 60Hz. 10 Best Paddle Boat Cushions Reviews. Capacitor C2 supplies extra charge during ‘switching on’ operations. This system is widely used to give on/off control of high-current loads such as electric heaters, etc. 10 Best Shoei X Tec Reviews. TDA2030 is a high current output ic with low distortion. • The rest of the circuit remains unchanged except that ideal constant dc voltage sources are replaced by short circuits. 2V and the bandwidth is from 4Hz to 4 KHz. This insulates the gate terminal from the source and drain channel. Circuit diagram. zero, thereby compensating for distortion. The UNETTO is a "minimalist" Class-AB audio power amplifier that uses only three amplifier stages and a couple of IGBT output devices to deliver up to 200W RMS (400W musical) on 4 ohms or 100W RMS (200W musical) on 8 ohms with a very low THD distortion. This is a simple three transistor frequency doubler circuit to raise an audio frequency by a factor of two i. Fuzz is a metallic and very rough type of distortion that turns the sound of a guitar into a fuzzy sound. In this paper, a single phase dc to ac inverter with a low cost driver circuit was developed. Germanium vs. Overdrive is a natural and smooth sound, while a distortion is more rough. The result was a high gain over-the-top distortion. The Arduino has 3. As a result, the sampling linearity is improved and the distortion at the output can be decreased. The DDD falls on the classic-rock side of distortion, with a presence and mid kick aimed at the stage. O1 is connected as a phase splitter with anti-phase signals·’ appearing at its collector and emitter. The red trace is the input signal. You only need to simulate 1 cycle. Digitech says that the Gunslinger employs a newly-designed MOSFET circuit to deliver a wide range of distortion tones from a touch of overdriven grit to full-bore high-gain saturation. , one octave. The DOD Gunslinger Mosfet Distortion is devised to react to your playing dynamics. Its a simple but effective solution to the distortion problem. NOT AVAILABLE AT GYUITAR PEDAL SHOPPES'S PLYMOUTH, MA LOCATION* From explosive chord work to high velocity leads, the new DOD Gunslinger Mosfet Distortion has the touch sensitivity, string-separation and saturation to do all your dirty work. MOSFET S&H Circuit 3/14/2011 Insoo Kim. Examples would be Sub-woofer amp, FOH stage amplifier, One channel of. When there is no input, neither MOSFET is conducting. This is a simple circuit where a n-Channel Enhancement mode MOSFET will turn ON or OFF a light. Featuring a Vintage mode for a midrange heavy overdrive and an FM (flat mids) mode for a more transparent overdrive, the Fulltone Fulldrive 2 Mosfet has become an industry standard for guitarists. Its intermodulation and harmonic distortion products are well below 40 dB down from its maximum power output, and its tendency toward parasitic oscillations is so low that a parasitic plate choke is unnecessary. -I substituted IRF3710 mosfet in place of the mosfet used in the schematic-I changed the value of the current sense resistor to 0. This circuit is built successfully, however, with 55V you can not expect 300W rms. Built on the even clipping style of a Mosfet circuit, the DOD Boneshaker offers touch sensitivity and reactive playing dynamics with a two-band EQ and controls for gain and level. 2005 Page(s):15 Digital Object Identifier 10. achieve low input current distortion. 
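The frequency doubler mentioned above is a transistor circuit; the snippet below only demonstrates the underlying principle, namely that full-wave rectifying a tone moves its strongest spectral component up one octave. The tone frequency and sample rate are arbitrary choices.

```python
import numpy as np

# Principle only: full-wave rectifying a tone doubles its dominant frequency.
fs, f0 = 48_000, 440                      # arbitrary sample rate and input frequency
t = np.arange(0, 0.5, 1 / fs)
x = np.sin(2 * np.pi * f0 * t)
y = np.abs(x) - np.mean(np.abs(x))        # full-wave rectified, DC removed

spec = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1 / fs)
print(f"strongest component after rectification: {freqs[np.argmax(spec)]:.0f} Hz (input was {f0} Hz)")
```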
A compact MOSFET model for distortion analysis in analog circuit design Citation for published version (APA): Langevelde, van, R. Once a circuit is created, you can do Transient, AC, DC Transfer Function, Distortion, Stability, Smoke / Stress, or Worst Case analysis. Selected Stompbox Schematics. 0 V, the MOSFET will (momentarily) leave the saturation and enter the cutoff region! In summary: 1) If, 𝑉𝑖>1. Some wear from use. By including the driver high speed by-pass capacitor (C1), the contribution to the internal parasitic loop inductance of the driver output is greatly reduced. What we need is a way to make up for the 0. 10 Best Mens Gym Shorts With Built In Underwear Reviews. You can put together basic op amp circuits to build mathematical models that predict complex, real-world behavior. The UNETTO is a "minimalist" Class-AB audio power amplifier that uses only three amplifier stages and a couple of IGBT output devices to deliver up to 200W RMS (400W musical) on 4 ohms or 100W RMS (200W musical) on 8 ohms with a very low THD distortion. 2020 Apr 5 - Jelajahi papan Schematic and PCB layout milik antosusanto779, yang diikuti oleh 124 orang di Pinterest. There' s only one way for Clarion' s main units to deliver unyielding power output and linearity: MOS-FET amplification, or Metal Oxide Semiconductor Field Effect Transistor amplification. 8V 0 วงจรแหล่งจ่ายไฟ1. If signals of a single frequency are specified as the input to the circuit, the complex values of the second and third harmonics are determined at every point in the circuit. Place the EQ before or after distortion for an array of interesting tones, or EQ your dry tone and blend to taste. The class AB push-pull output circuit is slightly less efficient than class B because it uses a small quiescent current flowing, to bias the transistors just above cut off as shown in Fig. broadcast band. However the circuit's open loop gain was found to be practically constant within the entire audio frequency range. Component Modeling and Circuit Simulation Harmonic Distortion. The circuit differs slightly from the one above it, as I was exploring different ways to reduce distortion. 0𝑉,the MOSFET will at times enter cutoff, and even more distortion. Here the schematic diagram of 800 watt audio amplifier with MOSFET. If the body diode of one MOSFET conducts when the opposing device is on, a short circuit arises resembling the shoot-through condition. This is a simple three transistor frequency doubler circuit to raise an audio frequency by a factor of two i. When there is an AC input, each MOSFET is conducting for only 50% of the time. Circuit Analyses Involving MOSFET SPICE Models. Lowest possible closed loop gain. Distortion reduction in an op amp circuit is proportional to the amount of feedback, and this corresponds to lower gain circuits having reduced distortion. The single-ended Class A output stage is “second harmonic” in character, and it uses about half the feedback of a comparable MOSFET circuit but with half the distortion and twice the bandwidth. Many electronic devices, such as diodes, transistors and vacuum tubes, whose function is processing time-varying signals, also require a steady (DC) current or voltage at their terminals to operate correctly. MOSFET Driver Vishay Semiconductors Notes (1) This load condition approximates the gate load of a 1200 V/25 A IGBT. 2020 Apr 5 - Jelajahi papan Schematic and PCB layout milik antosusanto779, yang diikuti oleh 124 orang di Pinterest. 
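The feedback statement above can be put in numbers: distortion generated inside the loop is suppressed by roughly the loop gain 1 + Aβ, which is why a unity-gain stage measures far cleaner than a gain-of-100 stage built on the same op amp. The open-loop gain used below is an assumed figure.

```python
# In-loop distortion is suppressed by roughly the loop gain (1 + A·β); assumed A value.
open_loop_gain = 100_000.0
for closed_loop_gain in (1, 10, 100):
    beta = 1 / closed_loop_gain               # ideal non-inverting feedback factor
    suppression = 1 + open_loop_gain * beta
    print(f"closed-loop gain {closed_loop_gain:>3}: in-loop distortion reduced ~{suppression:,.0f}x")
```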
Electro-Harmonix Muff Fuzz (Transistor Version) Schematic. When the driver is disabled, the high-side gate is held low. Many mosfet power amp designs are made to mimic the characteristics of tube power amps especially in the area of distortion when overdriven. 8, R11 and C7 filter the power supply line to prevent ripple and transients from modulating the bias, causing noise and distortion. This simple circuit suffers from cross over distortion. One of the better setups IMO is in the Zendrive. Likewise for a Vienna Rectifier Module with MOSFET rated at V DSS = 500 V and/or 600 V, I D=130 A, it is possible to have up to 40 KW of AC to DC power conversion and a 14. Significant distortion is visible when the load current is 0. The Mosfet Booster has a 10M input impedance that will not load down any guitar that is plugged into it, and the moderately low output Z is capable of driving almost any circuit that follows. The Gunslinger packs features like separate Low and High tone controls, a wide range of gain, tons of output and your choice of 9 or 18v operation to keep you on target. 02% total harmonic distortion at 30 watts with a ±25v power supply into 8 ohms. In Part 1 of this five-part series, we examined FET voltage controlled resistors, basic voltage controlled resistor circuits, and a balanced or push pull voltage controlled. The distortion is produced using a variable gain circuit with diodes clipping the waveform. Electro-Harmonix Big Muff Pi Schematic. ) that also depends on the parameter variation of individual. Ibanez MT10 Mostortion - Mosfet Distortion: Ibanez 10 series » distortion pedal ». I used a Radio Shack 2N3904 (hfe=233) for Q1 but a silicon transistor with less gain might sound better. This MOSFET set is based on the January 2007 QST article, "High Sensitivity Crystal Set". The main power supply rails for this project is ±54VDC. The MOSFET is replaced by the capacities Cgd, Cgs and a voltage controlled switch. The edited versions of the Holy Holton AVXXX series amplifier circuits are of high quality PCB designs. Javascript is disabled on your browser. Your Class D amplifier was engineered with specific linear circuitry that improves sound quality and power output while reducing distortion and refining efficiency for unrivaled sound. With higher input impedance, the MOSFET draws in less input current than a JFET; thus, it doesn't load the circuit powering it barely at all. of an equivalent circuit into account (transconductance, output conductance and capacitances). Improved small-signal equivalent circuit model and large-signal state equation. 017% Total harmonic distortion @10KHz: 1W 0. A versatile pedal that knows when to play nice and when to sink its teeth into tone, this effect goes from tube-like overdrive to '80s distortion shriek with a strum. The minimum distortion with BUZ900P is at the bias point 30V DC and 900mA at any output level. The original diodes were 1N34's and the. The result was a high gain over-the-top distortion. Introduction to Operational Amplifiers. When three-cycle 26 dB m input power was applied, the second, third, fourth, and fifth harmonic distortion components of a 75 MHz transducer driven by the HVPA with power MOSFET linearizer (−48. So we ask ourselves: what component has an ON voltage close to a VBE drop? The answer is the PN juntion diode. 10 Watt Portable Guitar Amp With Distortion 7 Steps With Single Chip 25w Amplifier Project 72 500w Rms Power Amplifier Based Mosfet Electronic Schematic Diagram. 
20090009245: Circuit for Adjusting an Impedance: January, 2009. (2) Pulse width distortion (PWD) is defined as |tPHL - tPLH| for any given device. I used 1N34A, but bretty much any old germanium diode will work here and produce similar results. From explosive chord work to high velocity leads, the new DOD Gunslinger Mosfet Distortion has the touch sensitivity, string-separation and saturation to do all your dirty work. The manufacturer told me that the muscle wire needs 3V - 3. The circuit consists of an N-Channel MOSFET voltage follower T1 (common Drain) and current source T2 (NPN Darlington). In connection bridge as in the circuit, we can output 18W at 4 Ohm load, with 0. Further fine tuning of the various controls may be necessary to obtain best results. The Gunslinger packs features like separate Low and High tone controls, a wide range of gain, tons of output and your choice of 9 or 18v operation to keep you on target. Overdrive is a natural and smooth sound, while a distortion is more rough. Examples would be subwoofer amplifier should FOH stage Amplifiers, surround a canal a very powerful sound amplifier, etc. This means it has played as loud as possible without distortion for almost 6 hours, in a 3 ohms load __ Designed by Raphaël Assénat. Circuit Analyses Involving MOSFET SPICE Models. The Gunslinger employs a newly-designed MOSFET circuit to deliver a wide range of distortion tones from a touch of overdriven grit to full-bore high-gain saturation.
2020-11-29 04:43:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.2569015920162201, "perplexity": 5151.743567902786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141196324.38/warc/CC-MAIN-20201129034021-20201129064021-00480.warc.gz"}
http://www.goddesssalon.com/dr-brenda-izqfe/1ce181-two-parameter-exponential-distribution-sufficient-statistic
#### Get social! two parameter exponential distribution sufficient statistic 18305 ., Xn be a random sample from a two-parameter exponential distribution, Xi ~ EXP().a) Assuming it is known that =150, find a pivotal quanitity for the parameter based on the sufficient statistic.. b) Using the data of Exercise 5, find a one-sided lower 95% confidence limit for . What happens if a probability distribution has two parameters, $$\theta_1$$ and $$\theta_2$$, say, for which we want to find sufficient statistics, $$Y_1$$ and $$Y_2$$? Lorem ipsum dolor sit amet, consectetur adipisicing elit. [2]). The two parameter exponential distribution is also a very useful component in reliability engineering. So even if you don't know what the $\theta$ is you can compute those. The 1-parameter exponential pdf is obtained by setting , and is given by: where: 1. Inserting what we know to be the probability density function of a normal random variable with mean $$\theta_1$$ and variance $$\theta_2$$, the joint p.d.f. 40, 1998, pp. The authors contributed equally to this work. Lett., Vol. 337-349. Note: One should not be surprised that the joint pdf belongs to the exponen-tial family of distribution. Nagaraja, A First Course in Order Statistics, SIAM, Philadelphia, PA, USA, 2008. (20–22), we have, Suppose that counting random variables K−(n,k,a) and K+(n,k,b) be independent. Let's try applying the extended exponential criterion to our previous example. Let $$X_1, X_2, \ldots, X_n$$ denote a random sample from a normal distribution $$N(\theta_1, \theta_2$$. 197-210. In Chapter 2 we consider the CEM and when the lifetime distributions of the experimental units follow different distributions. 375-395. T ( X 1 n ) = ∑ i = 1 n X i. the function $$h(x_1, ... , x_n)$$ does not depend on either of the parameters $$\theta_1$$ or $$\theta_2$$. into two functions, one ($$\phi$$) being only a function of the statistics $$Y_1=\sum_{i=1}^{n}X^{2}_{i}$$ and $$Y_2=\sum_{i=1}^{n}X_i$$, and the other (h) not depending on the parameters $$\theta_1$$ and $$\theta_2$$: Therefore, the Factorization Theorem tells us that $$Y_1=\sum_{i=1}^{n}X^{2}_{i}$$ and $$Y_2=\sum_{i=1}^{n}X_i$$ are joint sufficient statistics for $$\theta_1$$ and $$\theta_2$$. That is, $$\theta_1$$ denotes the mean $$\mu$$ and $$\theta_2$$ denotes the variance $$\sigma^2$$. = constant rate, in failures per unit of measurement, (e.g., failures per hour, per cycle, etc.) Then, the statistics $$Y_1=\sum_{i=1}^{n}K_1(X_i)$$ and $$Y_2=\sum_{i=1}^{n}K_2(X_i)$$ are jointly sufficient for $$\theta_1$$ and $$\theta_2$$. The exponential distribution is often concerned with the amount of time until some specific event occurs. If. 1.1. Order statistics is a kind of statistics distribution commonly used in statistical theory and application of which there are many research [1-6]. The decay parameter is expressed in terms of time (e.g., every 10 mins, every 7 years, etc. Assume that X has exponential distribution. Further, (31) and (32) imply that T2 is a consistent estimator for e−σa. Plan. We conclude that in all examples of a location family of distributions, statistics Yi are ancillary for the location parameter θ. Inf., Vol. 1.1. Further, its performance is compared with the maximum likelihood estimator (MLE) through simulation. [2]. • The partition of a minimal sufficient statistic is the coarsest. Aha! 80, 2010, pp. 1100-1116. Now, we present an asymptotic confidence interval for e−σa based on counting random variable K+(n,k,a) which is stated in the following remark. 
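As a numerical companion to the factorization argument above, the snippet below draws a normal sample and shows that the two sums Y1 = ΣXi² and Y2 = ΣXi are all that is needed to reproduce the usual estimates of the mean and variance. The sample size and parameter values are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=1000)   # θ1 = 2, θ2 = 9 (arbitrary illustrative values)

# The two joint sufficient statistics identified in the factorization above
y1 = np.sum(x ** 2)
y2 = np.sum(x)
n = len(x)

# Everything needed to estimate the mean and variance is already contained in (Y1, Y2):
mean_hat = y2 / n
var_hat = y1 / n - mean_hat ** 2                # maximum likelihood estimate of θ2
print(f"from (Y1, Y2) alone: mean ≈ {mean_hat:.3f}, variance ≈ {var_hat:.3f}")
```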
It is known that only for the exponential distribution, any two non-overlapping spacings will be independent (See, e.g., Arnold et al. We offer world-class services, fast turnaround times and personalised communication. Use the Exponential Criterion to find joint sufficient statistics for $$\theta_1$$ and $$\theta_2$$. Substituting it in Eqs. Basu’s Theorem. 36, 2007, pp. 3-24. Using (18) and (19), we have. 28, 2001, pp. Exponential distribution. Also, more characterization results of exponential distribution can be seen in Galambos and Kotz [4] and Ahsanullah and Hamedani [5]. X 1 , … , X n. {\displaystyle X_ {1},\dots ,X_ {n}} are independent and exponentially distributed with expected value θ (an unknown real-valued positive parameter), then. An estimator of e−σa is introduced in section 4 and some properties of this estimator are discussed. Plan. Lett., Vol. Two-parameter exponential distribution is often used to model the lifetime of a product. Because the observations are … sufficient statistic whenever and are two data values such that ( ) ( ), then ( ) ( ). Meth., Vol. 37-49. Plan. The exponential distribution is the probability distribution of the time or space between two events in a Poisson process, where the events occur continuously and independently at a constant rate \lambda.. Upcoming Events 2020 Community Moderator Election A.G. Pakes and Y. Li, Stat. That is, $$\theta_1$$ denotes the mean $$\mu$$ and $$\theta_2$$ denotes the variance $$\sigma^2$$. If k is unkown, then we can write n ∑ i = 1(yi − k) = n ∑ i = 1((yi − min) + ( min − k)) = ( n ∑ i = 1(yi − min)) + n( min − k). This is an exponential family distribution so T = X2 1 + + X2 n is a complete su cient statistic; moreover, since it’s a scale parameter problem, U= X2 1 =(X 2 1 + + X n) is an ancillary statistic. Stat. 179-193. Theor. 138, 2008, pp. 69, 2004, pp. Lett., Vol. There exists a unique relationship between the exponential distribution and the Poisson distribution. Therefore the sum ∑ni = 1(yi − k) is sufficient if k is known. A.G. Pakes, Adv. Browse other questions tagged self-study mathematical-statistics sufficient-statistics or ask your own question. Similar to the proof of Theorem 2.2, F¯(x)=ce−σx is the most general solution of (17) and this completes the proof. 54, 2009, pp. Stat., Vol. The parameters . Math. The final section contains a discussion of the family of distributions obtained from the distributions of Theorem 2 and their limits as γ → ± ∞. . We refer the reader to Higgins [25] for Hilbert space and complete sequence function. We have just extended the Factorization Theorem. J. Aczél, Lectures on Functional Equations and Their Applications, Academic Press, London, England, New York, NY, USA, 1966. Excepturi aliquam in iure, repellat, fugiat illum The exponential distribution. Let $$X_1, X_2, \ldots, X_n$$ denote random variables with a joint p.d.f. We have factored the joint p.d.f. S. Müller, Methodol. J. This study considers the … Nagaraja, Order Statistics, John Wiley-Sons, New York, NY, USA, 2003. J. Then the cumulative distribution function (CDF) of X is, According to (1), the probability mass function (pmf) of K+(n,k,a) for any j=0,1,⋯,n−k, have been obtained as (See Dembińska et al. ), which is a reciprocal (1/λ) of the rate (λ) in Poisson. into two functions, one (ϕ) being only a function of the statistic Y = X ¯ and the other (h) not depending on the parameter μ: Therefore, the Factorization Theorem tells us that Y = X ¯ is a sufficient statistic for μ. Stat., Vol. 
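The independence-of-spacings property stated above is easy to probe by simulation: the sketch below draws exponential samples, takes two non-overlapping spacings of the order statistics, and checks that their sample correlation is essentially zero. The sample size, rate and chosen spacings are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, reps = 20, 1.5, 20_000                # arbitrary sample size, rate and replications

# Two non-overlapping spacings of the order statistics from an exponential sample;
# the statement above is that these are independent (a property unique to the exponential).
w1 = np.empty(reps)
w2 = np.empty(reps)
for r in range(reps):
    xs = np.sort(rng.exponential(1 / sigma, size=n))
    w1[r] = xs[5] - xs[4]                        # spacing X(6:n) − X(5:n)
    w2[r] = xs[12] - xs[11]                      # spacing X(13:n) − X(12:n)

print(f"sample correlation of the two spacings ≈ {np.corrcoef(w1, w2)[0, 1]:+.4f}")
```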
So, the obtained results show that with choosing appropriate k, the estimator T2 can be considered as a good estimator for parameter e−σa. Y. Li and A. Pakes, Insur. Therefore, K+(n,k,a) is a sufficient and complete statistic for e−σa. 2. a dignissimos. Comput. 1. The other factor, the exponential function, depends on y1, …, yn only through the given sum. 21, 2017, pp. Thus, a sufficient and complete statistics function for θ, is : ∑ i = 1 n ln (x i − 1) ⟶ T (x) = ∑ i = 1 n ln According to expectation of K+(n,k,a), an unbiased estimator for e−σa is equal to, So, the estimator T2 is uniformly minimum-variance unbiased estimator (UMVUE) and its variance or minimum square error (MSE) is as follows. are also joint sufficient statistics for $$\theta_1$$ and $$\theta_2$$. = operating time, life, or age, in hours, cycles, miles, actuations… In the next theorem, we show an another characterization for exponential distribution based on independent near-order statistics. According to Müntz theorem that is stated in Theorem 2.1, the all results of this section are true for any increasing subsequence {nj,j≥1} which satisfies in (9) instead of for all n≥1. The trick is to look at -Statistic examples are sample mean, min, max, median, order statistics... etc. The sequence {xn, n≥1} is the most important complete sequence function. 85-97. A sequence {Φn}n≥1 of elements of a Hilbert space H is called complete if the only element which is orthogonal to every {Φn} is the null element, that is. {\displaystyle T (X_ {1}^ {n})=\sum _ {i=1}^ {n}X_ {i}} is a sufficient statistic for θ. It is shown that its probability mass function and its first moment can characterize the exponential distribution. sufficient statistic is characterized in the following result. The densi ties of the two exponential distributions are written as . 10, 2007, pp. The sufficient statistic of a set of independent identically distributed data observations is simply the sum of individual sufficient statistics, and encapsulates all the information needed to describe the posterior distribution of the parameters, given the data (and hence to derive any desired estimate of the parameters). Let $$X_1, X_2, \ldots, X_n$$ be a random sample from a distribution with a p.d.f. CHARACTERIZATION BASED ON DEPENDENCY ASSUMPTIONS, 4. Arcu felis bibendum ut tristique et egestas quis: In each of the examples we considered so far in this lesson, there is one and only one parameter. 42, 1971, pp. P(X = x | T(X) = t) does (or joint p.m.f. Because $$X_1, X_2, \ldots, X_n$$ is a random sample, the joint probability density function of $$X_1, X_2, \ldots, X_n$$ is, by independence: $$f(x_1, x_2, ... , x_n;\theta_1, \theta_2) = f(x_1;\theta_1, \theta_2) \times f(x_2;\theta_1, \theta_2) \times ... \times f(x_n;\theta_1, \theta_2) \times$$. 39, 1997, pp. It is shown that the joint distribution of m-generalized order statistics has a representation as a regular exponential family in the model parameters, as it is the case for the comprising model. Let's try the extended theorem out for size on an example. Econ., Vol. Econ., Vol. But it is difficult to calculate MSE of T1 theoretically. J. The results are proved through properties of completeness sequence function. E. Hashorva, Stat. Math. H.A. Stat. Probab. same distributions for prior and posterior distributions), and the posterior predictive distribution has always a closed-form solution (provided that the normalizing factor can also be stated in closed-form), both important properties for Bayesian statistics. 
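The counting statistic K+(n,k,a) and the estimator T2 discussed above can be illustrated with a small simulation. The exact expression for T2 is not reproduced in the text, so the form used below, one minus the observed proportion K+/(n−k), is a reconstruction from the stated expectation E[K+] = (n−k)(1 − e^(−σa)) and should be read as an assumption, as should the normal-approximation confidence interval.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, a, mu, sigma = 50, 20, 1.0, 3.0, 0.8      # illustrative values only

# Simulate one Exp(mu, sigma) sample and count K+(n,k,a): observations falling
# within a to the right of the k-th order statistic.
x = np.sort(mu + rng.exponential(1 / sigma, size=n))
k_plus = np.sum((x > x[k - 1]) & (x <= x[k - 1] + a))

# Reconstructed unbiased estimator of exp(-sigma*a), based on E[K+] = (n-k)(1 - exp(-sigma*a)).
m = n - k
t2 = 1 - k_plus / m
se = np.sqrt(t2 * (1 - t2) / m)                 # normal (CLT) approximation
print(f"T2 = {t2:.3f}  (true exp(-sigma*a) = {np.exp(-sigma * a):.3f})")
print(f"approx 95% CI: ({t2 - 1.96 * se:.3f}, {t2 + 1.96 * se:.3f})")
```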
199-210. The quantity (26) shows that the spacings W1 and W2 are independent. Key Definitions: Sufficient, Complete, and Ancillary Statistics. Desu, Ann. Let X be a random variable having two-parameter exponential distribution with parameters μ and σ, denoted by Exp(μ,σ). In this section, we will show that Eqs. MSE of two estimators T1 and T2 with respect to e−σa under n=50, a=1, μ=3 and different k. Our simulation results demonstrate that the performance of T1 and T2 has little differences with increasing a. J.R. Higgins, Completeness and Basis Properties of Sets of Special Functions, Cambridge University Press, New York, NY, USA, 1977. (4), we conclude easily that K+(n,k,a) has binomial distribution with parameters (n−k) and (1−e−σa), that is. Odit molestiae mollitia is: $$f(x_1, x_2, ... , x_n;\theta_1, \theta_2) = \dfrac{1}{\sqrt{2\pi\theta_2}} \text{exp} \left[-\dfrac{1}{2}\dfrac{(x_1-\theta_1)^2}{\theta_2} \right] \times ... \times = \dfrac{1}{\sqrt{2\pi\theta_2}} \text{exp} \left[-\dfrac{1}{2}\dfrac{(x_n-\theta_1)^2}{\theta_2} \right]$$. Sufficient Statistics1: (Intuitively, a sufficient statistics are those statistics that in some sense contain all the information aboutθ) A statistic T(X) is called sufficient for θif the conditional distribution of the data X given T(X) = t does not depend on θ (i.e. Y. Nikitin, ACUTM, Vol. A. Dembińska, Statistics, Vol. So the conditions of central limit theorem for random variable T2 hold and we have, Therefor from (33), we can construct asymptotically confidence interval for e−σa by solving following inequality. 1.6 Organization of the monograph. Except where otherwise noted, content on this site is licensed under a CC BY-NC 4.0 license. Atlantis Press is a professional publisher of scientific, technical and medical (STM) proceedings, journals and books. The probability density function of a normal random variable with mean $$\theta_1$$ and variance $$\theta_2$$ can be written in exponential form as: Therefore, the statistics $$Y_1=\sum_{i=1}^{n}X^{2}_{i}$$ and $$Y_2=\sum_{i=1}^{n}X_i$$ are joint sufficient statistics for $$\theta_1$$ and $$\theta_2$$. N. Balakrishnan and A. Stepanov, J. Stat. At first, Pakes and Stutel [6] defined the number of observations within a of the sample maximum Xn:n as, Then, this definition was developed for the number of observations falling in the open left and right a–vicinity of the kth order statistics by Pakes and Li [7] and Balakrishnan and Stepanov [8], respectively. the Fisher–Neyman factorization theorem implies is a sufficient statistic for . See, Nikitin [27] for more details on application of characterization in goodness-of-fit test. In this paper, we have shown some applications of counting random variable K+(n,k,a) for two-parameter exponential distribution. 18.1 One Parameter Exponential Family Exponential families can have any flnite number of parameters. Desu [3] proved that distribution of population is exponential if and only if nX1:n=dX1, for all n≥1, where the notation =d states the equality in distribution. Exponential distribution [edit | edit source] If are independent and exponentially distributed with expected value θ (an unknown real-valued positive parameter), then is a sufficient statistic for θ. In summary, we have factored the joint p.d.f. To see this, consider the joint probability density function of . Conversely, let (10) holds, then, The above inequality shows that η(u)∈L2(0,1). 117-128. Look at that! 
This is an expression of the form of the Exponential Distribution Family and since the support does not depend on θ, we can conclude that it belongs in the exponential distribution family. Theor. The authors would like to thank the Editor in Chief, the Associate Editor and two anonymous reviewer for their valuable comments. Stat., Vol. NZ. Theorem 6.2.24 (Basu’s theorem) Let V and T be two statistics of X from a population indexed by q 2 . 46, 2012, pp. We have just shown that the intuitive estimators of $$\mu$$ and $$\sigma^2$$ are also sufficient estimators. Substituting in Eq. M.M. So, we compare them numerically. By the way, can you propose several other ancillary statistics? In this paper, we will prove some characterization results of two-parameter exponential distribution based on these counting random variables which are stated in sections 2 and 3. Probab. A. Dembińska, Stat. 309-323. What's Sufficient Statistic? of the exponential form: $$f(x;\theta_1,\theta_2)=\text{exp}\left[K_1(x)p_1(\theta_1,\theta_2)+K_2(x)p_2(\theta_1,\theta_2)+S(x) +q(\theta_1,\theta_2) \right]$$. minimal statistic for θ is given by T(X,Y) m j=1 X2 j, n i=1 Y2 i, m j=1 X , n i=1 Y i. A. Dembińska, J. Stat. So, the proof is completed. Partition Interpretation for Minimal Sufficient Statistics: • Any sufficient statistic introduces a partition on the sample space. Also, this results are obtained based on 2000 bootstrap samples. M. Ahsanullah and G.G. It is important to know the probability density function, the distribution function and the quantile function of the exponential distribution. So far, more results of characterization of exponential distribution have been obtained that some of them are based on order statistics. For example, Lawless [ 1 151-160. Since the time length 't' is independent, it cannot affect the times between the current events. Steutel, Aust. 34, 2005, pp. Relationship between the Poisson and the Exponential Distribution. A continuous random variable x (with scale parameter λ > 0) is said to have an exponential distribution only if its probability density function can be expressed by multiplying the scale parameter to the exponential function of minus scale parameter and x for all x greater than or equal to zero, otherwise the probability density function is equal to zero. Let X1,X2,⋯Xn be independent and continuous random variables. f t t i i i i ( ) = exp − , , = 1 12 θ θ. It is enough to show that joint pgf of K−(n,k,a) and K+(n,k,b) is equal with multiplication of their pgfs. Now, the Exponential Criterion can also be extended to accommodate two (or more) parameters. Let's start by extending the Factorization Theorem. Appl., Vol. 54, 2012, pp. Also, an estimator based on near-order statistics is introduced for tail thickness of exponential distribution. 1. That seems like a good thing! S(X) is a statistic if it does NOT depend on any unknown quantities including $\theta$, which means you can actually compute S(X). voluptates consectetur nulla eveniet iure vitae quibusdam? A. Dembińska, Aust. 837-838. Then, the statistics $$Y_1=u_1(X_1, X_2, ... , X_n)$$ and $$Y_2=u_2(X_1, X_2, ... , X_n)$$ are joint sufficient statistics for $$\theta_1$$ and $$\theta_2$$ if and only if: $$f(x_1, x_2, ... , x_n;\theta_1, \theta_2) =\phi\left[u_1(x_1, ... , x_n), u_2(x_1, ... , x_n);\theta_1, \theta_2 \right] h(x_1, ... , x_n)$$. Finally, the exponential families have conjugate priors (i.e. 
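The asymptotic confidence interval mentioned above can be made explicit in the usual Wald form. The following sketch is added here and is only one standard way to do it; the paper's inequality (33) is not reproduced in the text, so details may differ, and the form $$T_2=1-K^+(n,k,a)/(n-k)$$ is inferred from the stated expectation of $$K^+$$. Since $$K^+(n,k,a)\sim\mathrm{Bin}\bigl(n-k,\,1-e^{-\sigma a}\bigr)$$, the estimator $$T_2$$ has mean $$e^{-\sigma a}$$ and variance $$e^{-\sigma a}(1-e^{-\sigma a})/(n-k)$$, so for large $$n-k$$

$$T_2 \;\pm\; z_{1-\alpha/2}\,\sqrt{\frac{T_2\,(1-T_2)}{n-k}}$$

is an approximate $$100(1-\alpha)\%$$ confidence interval for $$e^{-\sigma a}$$.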
According to distribution of K+(n,k,a), it can be considered as sum of independent and identically distributed random variables from binomial 1,1−e−σa. So, the 100(1−α)% interval confidence for e−σa is given by. 9.Write X i = Z i where Z i ˘N(0;1). voluptate repellendus blanditiis veritatis ducimus ad ipsa quisquam, commodi vel necessitatibus, harum quos θ. i. are interpreted as the average failure times, the mean time to failure (MTTF), or the mean time between failures (MTBF) of the two groups. Simplifying by collecting like terms, we get: $$f(x_1, x_2, ... , x_n;\theta_1, \theta_2) = \left(\dfrac{1}{\sqrt{2\pi\theta_2}}\right)^n \text{exp} \left[-\dfrac{1}{2}\dfrac{\sum_{i=1}^{n}(x_i-\theta_1)^2}{\theta_2} \right]$$. The authors declare that there is no potential conflict of interest related to this study. 1-14. Similarly we have the quantity, Further, using Minkowski's inequality for the quantity. Inf., Vol. Let X1,X2,…,Xn be continuous random variables with CDF F. Then F has Exp(μ,σ) if and only if K−(n,k,a) and K+(n,k,b) be independent for a fixed k≥1 and for any a>0 and b>0. Math. So far, more results of characterization of exponential distribution have been obtained that some of them are based on order statistics. Let $$X_1, X_2, \ldots, X_n$$ denote a random sample from a normal distribution $$N(\theta_1, \theta_2)$$. One-parameter exponential distribution has been considered by different authors since the work of … After that, following two random variables have been considered in the literature. J. Galambos and S. Kotz, Characterizations of Probability Distributions, Springer-Verlag, New York, NY, USA, 1978. https://books.google.com/books?id=BkcRRgAACAAJ. Lesson 2: Confidence Intervals for One Mean, Lesson 3: Confidence Intervals for Two Means, Lesson 4: Confidence Intervals for Variances, Lesson 5: Confidence Intervals for Proportions, 6.2 - Estimating a Proportion for a Large Population, 6.3 - Estimating a Proportion for a Small, Finite Population, 7.5 - Confidence Intervals for Regression Parameters, 7.6 - Using Minitab to Lighten the Workload, 8.1 - A Confidence Interval for the Mean of Y, 8.3 - Using Minitab to Lighten the Workload, 10.1 - Z-Test: When Population Variance is Known, 10.2 - T-Test: When Population Variance is Unknown, Lesson 11: Tests of the Equality of Two Means, 11.1 - When Population Variances Are Equal, 11.2 - When Population Variances Are Not Equal, Lesson 13: One-Factor Analysis of Variance, Lesson 14: Two-Factor Analysis of Variance, Lesson 15: Tests Concerning Regression and Correlation, 15.3 - An Approximate Confidence Interval for Rho, Lesson 16: Chi-Square Goodness-of-Fit Tests, 16.5 - Using Minitab to Lighten the Workload, Lesson 19: Distribution-Free Confidence Intervals for Percentiles, 20.2 - The Wilcoxon Signed Rank Test for a Median, Lesson 21: Run Test and Test for Randomness, Lesson 22: Kolmogorov-Smirnov Goodness-of-Fit Test, Lesson 23: Probability, Estimation, and Concepts, Lesson 28: Choosing Appropriate Statistical Methods, Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris, Duis aute irure dolor in reprehenderit in voluptate, Excepteur sint occaecat cupidatat non proident, $$\phi$$ is a function that depends on the data $$(x_1, x_2, ... , x_n)$$ only through the functions $$u_1(x_1, x_2, ... , x_n)$$ and $$u_2(x_1, x_2, ... , x_n)$$, and. If V is ancillary and T is boundedly complete and sufficient for q, then V and T are independent with respect to Pq for any q 2 . 
In this study, we explore the MSE of T1 and T2 under different μ, a and k which are stated in Figure 1. Fortunately, the definitions of sufficiency can easily be extended to accommodate two (or more) parameters. A.G. Pakes, Extremes, Vol. Let us define two spacings W1 and W2 as follows, From (1) and (2), one can obtain easily the probability generating functions (pgf) of K−(n,k,a) and K+(n,k,b) as follows (see Balakrishnan and Stepanov [8]), It also follows that the joint pgf K−(n,k,a) and K+(n,k,b) is. Two-parameter exponential distribution is the simplest lifetime distributions that is useable in survival analysis and reliability theory. (6) and (7) can characterize exponential distribution. Use the Factorization Theorem to find joint sufficient statistics for $$\theta_1$$ and $$\theta_2$$. Inf., Vol. Received 24 January 2017, Accepted 7 August 2017, Available Online 3 March 2020. So, we firstly define complete sequence function and recall some well-known theorems. 5, 2003, pp. 32, 2003, pp. An exact confidence interval for e−σa when a is known can be obtained by this fact that a confidence interval is available for σ in two-parameter exponential distribution. The two-parameter exponential distribution with density: 1 (; , ) =  − e x p − , (1. If the parameters of a two-parameter exponential family of distributions may be taken to be location and scale parameters, then the distributions must be normal. Recently, the problem of number of observations near the order statistics is considered. So, one estimator for e−σa based on MLE can be considered as, Following, we introduce an estimator for e−σa based on near-order statistic. 142, 2012, pp. or p.m.f. Arnold, N. Balakrishnan, and H.N. Stat., Vol. Let X 1, X 2, ⋯ X n be independent and continuous random variables. That is, the data contain no more information than the estimators $$\bar{X}$$ and $$S^2$$ do about the parameters $$\mu$$ and $$\sigma^2$$! For more information, please contact us at: Department of Statistics, University of Mazandaran, Babolsar, Mazandaran 47416-95447, Iran, Department of Statistics, University of Birjand, Birjand, 97175-615, Iran, This is an open access article distributed under the CC BY-NC 4.0 license (. Hamedani, Exponential Distribution—Theory and Methods, Nova Science Publications Inc., New York, NY, USA, 2009. CHARACTERIZATION BASED ON DISTRIBUTIONAL RESULTS, 3. 1) where < is the threshold parameter, and > 0 is the scale parameter, is widely used in applied statistics. A.G. Pakes, Aust. Math., Vol. It is stated here without proof. The probability density function of a normal random variable with mean θ 1 and variance θ 2 can be written in exponential form as: Therefore, the statistics Y 1 = ∑ i = 1 n X i 2 and Y 2 = ∑ i = 1 n X i are joint sufficient statistics for θ 1 and θ 2. AN ESTIMATOR BASED ON NEAR-ORDER STATISTIC, https://doi.org/10.2991/jsta.d.200224.001, http://creativecommons.org/licenses/by-nc/4.0/. [10]), Further, it is easy to verify that the pmf of K−(n,k,a) for any j=0,1,⋯,k−1 is, Now, assume that F(⋅) has a form as (3). G. Iliopoulos, A. Dembińska, and N. Balakrishnan, Statistics, Vol. 851-867. Let X1,X2,…,Xn be continuous random variables with CDF F. Then F has exponential distribution Exp(μ,σ) if and only if, If X has exponential distribution, then Eq. which depends on the parameters $$\theta_1$$ and $$\theta_2$$. 134, 2005, pp. Higgins ([25], p. 95) The set {xn1,xn2,⋯;1≤n10, following quantity holds. 
We believe that the results of the second and third sections can be used in the construction of goodness-of-fit tests for exponentiality, which can sometimes be more efficient or more robust than others. 48, 2014, pp. Take the time passed between two consecutive events as following the exponential distribution with a mean of μ units of time. Let F be Exp(μ,σ). The results are concluded in terms of the number of observations near order statistics. Other examples include the length, in minutes, of long distance business telephone calls, and the amount of time, in months, a car battery lasts. $$f(x_1,x_2, ... ,x_n; \theta_1, \theta_2)$$. For instance, as we will see, a normal distribution with a known mean is in the one-parameter exponential family, while a normal distribution with both parameters unknown is in the two-parameter exponential family. E. Hashorva, Insur. H.A. David and H.N. Nagaraja. J. Wesolowski, Commun. Suppose that the time that elapses between two successive events follows the exponential distribution with a mean of $$\mu$$ units of time. One should not be surprised that the counting random variable K+(n,k,a) belongs to the one-parameter exponential family. The exponential distribution is also a very useful component in reliability engineering. The decay parameter is expressed in failures per unit of measurement (e.g., failures per hour, per cycle, etc.). For example, the amount of time (beginning now) until an earthquake occurs has an exponential distribution; the exponential distribution is the continuous counterpart of the geometric distribution. Equations (31) and (32) imply that the estimators T1 and T2 can be compared through their MSE. B.C. Arnold, N. Balakrishnan, and H.N. Nagaraja, A First Course in Order Statistics, SIAM, Philadelphia, PA, USA, 2008.
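A small simulation makes the estimator T2 concrete. The sketch below is added for illustration and rests on two readings of the text above: that K+(n,k,a) counts the observations falling in the open interval (X_(k), X_(k)+a), and that the unbiased estimator of e^(-σa) has the form T2 = 1 - K+(n,k,a)/(n-k), inferred from the stated binomial distribution of K+. The values n=50, a=1, μ=3 match the simulation settings quoted above, with σ taken as the rate.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

int main()
{
    // Settings matching the simulation quoted in the text: n = 50, a = 1, mu = 3.
    const int    n = 50, k = 10, trials = 20000;
    const double mu = 3.0, sigma = 1.0, a = 1.0;   // sigma used as the rate: survival exp(-sigma*(x-mu))

    std::mt19937 gen(42);
    std::exponential_distribution<double> expo(sigma);

    double sumT2 = 0.0;
    for (int t = 0; t < trials; ++t) {
        std::vector<double> x(n);
        for (double& xi : x) xi = mu + expo(gen);   // two-parameter exponential Exp(mu, sigma)
        std::sort(x.begin(), x.end());

        // K+(n,k,a): number of observations in the open interval (X_(k), X_(k) + a).
        int kPlus = 0;
        for (int i = k; i < n; ++i)
            if (x[i] < x[k - 1] + a) ++kPlus;

        // Unbiased estimator of exp(-sigma*a), using E[K+] = (n-k)(1 - exp(-sigma*a)).
        sumT2 += 1.0 - static_cast<double>(kPlus) / (n - k);
    }

    std::cout << "mean of T2 over trials: " << sumT2 / trials << '\n'
              << "exp(-sigma*a)         : " << std::exp(-sigma * a) << '\n';
}
```

With these settings the sample mean of T2 should come out close to e^(-1) ≈ 0.368.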
2021-05-12 15:33:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8414113521575928, "perplexity": 818.4699104911134}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990929.24/warc/CC-MAIN-20210512131604-20210512161604-00387.warc.gz"}
https://zbmath.org/?q=an%3A1045.47037
# zbMATH — the first resource for mathematics A characterization of Hyers-Ulam stability of first order linear differential operators. (English) Zbl 1045.47037 Let $$E_1$$, $$E_2$$ be two real Banach spaces and $$f: E_1\to E_2$$ is a mapping such that $$f(tx)$$ is continuous in $$t\in\mathbb{R}$$ (the set of real numbers), for each fixed $$x\in E_1$$. Th. M. Rassias [Proc. Am. Math. Soc. 72, 297–300 (1978; Zbl 0398.47040)] introduced the following inequality: Assume that there exist $$\theta\geq 0$$ and $$p\in [0,1)$$ such that $\| f(x+ y)- f(x)- f(y)\|\leq \theta(\| x\|^p+\| y\|^p)$ for every $$x,y\in E_1$$. Then there exists a unique linear mapping $$T: E_1\to E_2$$ such that $$\| f(x)- T(x)\|\leq 2\theta\| x\|^p/(2-2^p)$$ for every $$x\in E_1$$. D. H. Hyers [Proc. Natl. Acad. Sci. USA 27, 222–224 (1941; Zbl 0061.26403)] had obtained the result for $$p= 0$$. Rassias’ proof also works for $$p< 0$$. In 1990, the reviewer, during the 27th International Symposium on Functional Equations asked the question whether such a theorem can also be proved for $$p\geq 1$$. In 1991, Z. Gajda [Int. J. Math. Math. Sci. 14, 431–434 (1991; Zbl 0739.39013)], following the reviewer’s approach, gave an affirmative solution to this question for $$p> 1$$. The authors of the present paper consider the following problem: Let $$X$$ be a complex Banach space and $$h: \mathbb{R}\to\mathbb{C}$$ a continuous function. Assume that $$T_h: C^1(\mathbb{R}, X)\to C(\mathbb{R}, X)$$ is the linear differential operator defined by $$T_hu= u'+ hu$$. Then a very essential and interesting necessary and sufficient condition is obtained in order for the operator $$T_h$$ to be stable in the sense of Hyers-Ulam. ##### MSC: 47E05 General theory of ordinary differential operators 39B42 Matrix and operator functional equations ##### Citations: Zbl 0398.47040; Zbl 0061.26403; Zbl 0739.39013 Full Text: ##### References: [1] Alsina, C.; Ger, R., On some inequalities and stability results related to the exponential function, J. inequal. appl., 2, 373-380, (1998) · Zbl 0918.39009 [2] Gajda, Z., On stability of additive mappings, Internat. J. math. math. sci., 14, 431-434, (1991) · Zbl 0739.39013 [3] Hyers, D.H., On the stability of the linear functional equation, Proc. nat. acad. sci. USA, 27, 222-224, (1941) · Zbl 0061.26403 [4] Miura, T.; Takahasi, S.-E.; Choda, H., On the hyers – ulam stability of real continuous function valued differentiable map, Tokyo J. math., 24, 467-476, (2001) · Zbl 1002.39039 [5] Miura, T., On the hyers – ulam stability of a differentiable map, Sci. math. Japan, 55, 17-24, (2002) · Zbl 1025.47041 [6] T. Miura, S.-E. Takahasi, S. Miyajima, Hyers-Ulam stability of linear differential operator with constant coefficients, Math. Nachr., in press · Zbl 1039.34054 [7] Rassias, T.M., On the stability of the linear mapping in Banach spaces, Proc. amer. math. soc., 72, 297-300, (1978) · Zbl 0398.47040 [8] Rassias, T.M.; Šemrl, P., On the behavior of mappings which do not satisfy hyers – ulam stability, Proc. amer. math. soc., 114, 989-993, (1992) · Zbl 0761.47004 [9] Takahasi, S.-E.; Miura, T.; Miyajima, S., On the hyers – ulam stability of the Banach space-valued differential equation y′=λy, Bull. Korean math. soc., 39, 309-315, (2002) · Zbl 1011.34046 [10] Ulam, S.M., Problems in modern mathematics, (1964), Wiley New York, Chapter VI, Science Editions · Zbl 0137.24201 [11] Ulam, S.M., Sets, numbers, and universes. 
selected works, part III, (1974), MIT Press Cambridge, MA This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
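For readers meeting the terminology for the first time, the stability notion referred to in the review can be stated roughly as follows (a standard formulation added here; the paper's precise definition may differ in details): the operator $T_h$ is said to have the Hyers-Ulam stability if there is a constant $K>0$ such that, for every $\varepsilon>0$ and every $u\in C^1(\mathbb{R},X)$ with $\sup_{t\in\mathbb{R}}\| u'(t)+h(t)u(t)\|\leq\varepsilon$, there exists $u_0\in C^1(\mathbb{R},X)$ with $u_0'+hu_0=0$ and $\sup_{t\in\mathbb{R}}\| u(t)-u_0(t)\|\leq K\varepsilon$.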
2022-01-19 07:50:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.819774866104126, "perplexity": 1008.3850232642865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301264.36/warc/CC-MAIN-20220119064554-20220119094554-00695.warc.gz"}
http://www.gamedev.net/index.php?app=forums&module=extras&section=postHistory&pid=5095160
### #Actual brumi Posted 19 September 2013 - 07:10 AM

Hi, I'm writing a debugger for my project using AngelScript 2.27.1. I've registered a callback and am able to extract the variables. I have a nice display showing the variables. For references I'd like to show whether the reference is null or not.

```cpp
for ( int varIx = 0; varIx < context->GetVarCount( level ); varIx++ )
    watch->Show( context->GetVarName( varIx, level )
               , context->GetVarTypeId( varIx, level )
               , context->GetAddressOfVar( varIx, level ) );
```

This way I enumerate the variables. In Show, I get the object type:

```cpp
asIObjectType* objType = ScriptEngine::Instance().GetAsEngine()->GetObjectTypeById( typeId );
```

And then I have a TODO now:

```cpp
if ( objType->GetFlags() & asOBJ_REF )
{
    const asIScriptObject* object = (const asIScriptObject*)address;
    bool isValidRef = true; // TODO
    item->setText( 2, isValidRef ? "valid ref" : "null ref" );
}
```

I can cast the address of the property to an asIScriptObject, but I still can't determine if it is a null reference or not. I've checked if the address pointer is maybe null, but it is not null even for null ref objects. Is there a way to check if it is a null ref or not? Thank you.
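One possible way to approach the null test, sketched here as a guess rather than a verified answer: it assumes, without checking against the SDK, that for handle/reference types GetAddressOfVar() returns the address of the variable itself, so the object pointer is obtained by dereferencing that address once.

```cpp
// Inside the enumeration loop: 'address' comes from GetAddressOfVar(),
// 'typeId' from GetVarTypeId(), exactly as in the snippets above.
asIObjectType* objType =
    ScriptEngine::Instance().GetAsEngine()->GetObjectTypeById( typeId );

if ( objType && ( objType->GetFlags() & asOBJ_REF ) )
{
    // Assumption: the variable holds a handle (an object pointer); dereference
    // the variable's address once to read that pointer, then test it for null.
    const asIScriptObject* object =
        *static_cast<const asIScriptObject* const*>( address );
    bool isValidRef = ( object != nullptr );
    item->setText( 2, isValidRef ? "valid ref" : "null ref" );
}
```

If that assumption does not hold for some type categories, the debugger add-on shipped with the AngelScript SDK is probably the best reference for how variables of each type id should be dereferenced.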
2014-11-27 19:54:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25597333908081055, "perplexity": 8211.500787009014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931009084.22/warc/CC-MAIN-20141125155649-00106-ip-10-235-23-156.ec2.internal.warc.gz"}
http://www.chegg.com/homework-help/definitions/area-and-arc-length-of-curves-29
# Definition of Area and Arc Length of Curves Until the invention of calculus, there was no general way to determine the area beneath any given curve. The definite integral of a function over an interval [a, b] gives the area under the curve on that interval; the arc length of the curve is a different quantity, the length of the curve itself between x = a and x = b, and it is also computed with an integral. The process of determining the arc length of a curve (function) is called rectifying the curve. For any real-valued function f(x), where f(x) and f′(x) are continuous over the interval [a, b], the arc length s of the curve is $s = \int_a^b \sqrt{1 + [f'(x)]^2}\, dx$. Depending on the form of the function, the arc length element ds has different forms: for rectangular coordinates, $ds = \sqrt{1 + (dy/dx)^2}\, dx$; for polar coordinates, $ds = \sqrt{r^2 + (dr/d\theta)^2}\, d\theta$; and for parametric equations, $ds = \sqrt{(dx/dt)^2 + (dy/dt)^2}\, dt$.
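For instance, a standard worked example (added for illustration; it is not part of the original definition): for $f(x) = x^{3/2}$ on $[0, 1]$, $f'(x) = \tfrac{3}{2}x^{1/2}$, so

$s = \int_0^1 \sqrt{1 + \tfrac{9}{4}x}\; dx = \left[ \tfrac{8}{27}\left(1 + \tfrac{9}{4}x\right)^{3/2} \right]_0^1 = \frac{13\sqrt{13} - 8}{27} \approx 1.44.$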
2015-10-14 02:17:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7930078506469727, "perplexity": 327.028755879037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738095178.99/warc/CC-MAIN-20151001222135-00212-ip-10-137-6-227.ec2.internal.warc.gz"}
https://www.gamedev.net/forums/topic/557535-online-highscores/
# Online highscores

## Recommended Posts

For the racing game Proun that I am working on, I would like to store the best track times online and show those in the game. So basically I want online highscores/leaderboards. So my question is: what is the easiest/fastest/safest way to implement this? I am hoping there is some free library that can handle this, since it seems a waste of time to implement this myself with sockets and a database and such, while this functionality is not really game specific. So, is there some library that can handle this? Maybe even an online service so that I don't have to run my own server for this? I usually do console programming and at least on some consoles these things are provided securely by the console manufacturers, but I don't know if anything like that exists on PC. If such a library does not exist, then is there some modifiable example code somewhere that does these things? Also, how difficult is it to get a reasonable sense of security here? I don't expect to fend off any really good hackers, but I guess some sense of security for the highscores should be achievable? I looked through the websites of the networking libraries list in the FAQ, but none of them seem to be both free and supply this specific feature. Thanks in advance! :)

There is Gamespy, but it's expensive and for pros with decent pockets, since they handle the servers and everything. There used to be Demonware (much better than Gamespy!) but they are now part of Activision's internal logistics. There is also Valve Steamworks, but I don't know how accessible they are for the indie. They say free but I doubt it if you require leaderboard management and server support (do they even do leaderboards and stats?). For XBox, there is XBox-Live and Games For Windows-Live of course; again you need to be a registered developer with Microsoft and so on. I'm not really aware of alternative libraries and back-end services for small developers otherwise.

Well, I don't have any budget, so Gamespy seems to be out of the question then. I did contact them, though, to see how their licensing works and what things would cost. Since my game Proun is going to be released for free, I don't think Steamworks is an option either. Steamworks is free and usable even for game versions that are sold outside Steam, but I doubt it is usable for games that are not sold on Steam at all. Also, I don't see any options for highscores in their feature overview. If I can get access to a server somewhere myself, is there some library that lets me handle this whole thing easily and securely? That would mean both a server app that stores the highscores and a client lib that I can call from C++ to communicate with that server. Is there a lib that does that? Or rather complete example code that I can copy and use?

It's very hard to implement a high score board that's difficult to hack. If your game develops any kind of interest, someone will probably try to hack it. A score board is apparently like candy to hackers. There are two main ways a hacker will typically try to hack the high score. If you can defeat both of them, most hackers will give up at that point. The first is to intercept a high score submission and modify it on the way to the server. Whether you use raw sockets or something higher level like http, the user will be able to examine traffic from the client to the server. If he sees something like "highscore=9355", he'll try to modify it. The second is to change the score in memory. Here the hacker ignores the traffic to the server and tries to change what the game thinks its own state is. The hacker fires up a tool that allows him to examine and modify his system's memory. He looks for something that's likely to be the address of the variable he cares about, then edits the value there. Good luck! Is there any reason you couldn't sell the game for $5 on Steam, and also give it away for free, if "being sold on Steam" is a requirement to use Steamworks?

Why not store the highscores in a MySQL database, then have PHP or ASP gather the highscores and put them onto a website?

Quote: Original post by ARC inc: why not store the highscores in a mysql database then have php or ASP gather the highscore an put them onto a website?

Exactly! You can even make it a bit easier and output the score to PHP, have PHP verify the score and write it to the database, then have PHP display it. This was my easiest solution and yes it can still be hacked, but the hacker will be a registered user and easy to identify, hehehe.

A while back, there was a user on this site who had a little service doing high score hosting. I dug up the link, but it looks like it's no longer alive. I think it was essentially what Xyle and ARC inc described.

I expected PHP+SQL was going to be the easiest way if I had to do this myself, but I would still have to write the PHP scripts, message sending and security for that. I was hoping there would be some library or example code that would do all that and that I can simply plug in, since this is such generic code. Especially because of the security thing.

Quote: Original post by hplus0603: Is there any reason you couldn't sell the game for $5 on Steam, and also give it away for free, if "being sold on Steam" is a requirement to use Steamworks?

I don't know how far their system can be stretched, but as far as I know, Steamworks does not support highscore lists, so it wouldn't help anyway.

An easy way to 'secure' high score submissions is to ship a replay of the game with each highscore submission; obviously this only works if your game already has some way of storing replays. This way, if you suspect someone has cheated, you can just check out the replay for that highscore.
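To make the "highscore=9355" tampering scenario from earlier in the thread concrete, here is a minimal sketch of signing a submission so the server can reject naive edits made in transit. It is added for illustration only: the endpoint, field names and secret handling are invented, and the digest below is a toy stand-in for a real HMAC (e.g. HMAC-SHA256 from a crypto library), with the same secret known to the server-side PHP script.

```cpp
#include <cstdint>
#include <iostream>
#include <sstream>
#include <string>

// Toy keyed digest, for illustration only -- NOT cryptographically secure.
std::uint64_t toyDigest(const std::string& message, const std::string& secret)
{
    std::uint64_t h = 0xcbf29ce484222325ull;   // FNV-1a 64-bit offset basis
    for (unsigned char c : secret + message + secret) {
        h ^= c;
        h *= 0x100000001b3ull;                 // FNV-1a 64-bit prime
    }
    return h;
}

// Builds the body of a score submission, e.g. for an HTTP POST to a
// hypothetical score.php. Field names are made up for the example.
std::string buildSubmission(const std::string& player, const std::string& track,
                            double timeSeconds, const std::string& secret)
{
    std::ostringstream payload;
    payload << "player=" << player << "&track=" << track << "&time=" << timeSeconds;

    std::ostringstream out;
    out << payload.str() << "&sig=" << toyDigest(payload.str(), secret);
    return out.str();
}

int main()
{
    // The server recomputes the digest from the received fields and rejects
    // the submission on mismatch, so a naive edit of "time=" in transit is
    // detected. Memory editing and extracting the secret from the binary are
    // separate problems, as discussed in the thread.
    std::cout << buildSubmission("player1", "track01", 93.55, "per-build-secret") << '\n';
}
```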
2018-06-21 11:00:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25852730870246887, "perplexity": 1372.9278145961616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864139.22/warc/CC-MAIN-20180621094633-20180621114633-00134.warc.gz"}
https://www.caixabankresearch.com/en/economics-markets/financial-markets/energy-mix-future?183
## The energy mix of the future

April 10th, 2019

Energy represents a very significant component of economic activity (accounting for around 9% of global GDP according to our calculations) and its price fluctuations have an undeniable impact on the economy and the financial markets. In addition, the importance of energy goes beyond the economic sphere, as it shapes global geopolitical relations. Besides geopolitics, energy and its externalities also lie at the heart of the environmental issue. The economic historian Carlo M. Cipolla1 defined the history of the world’s population as the history of energy. The expected change in global energy consumption over the next decade is determined by four key and interrelated factors. The first is the environmental imperative, focused on climate change. From this factor, the following two emanate: measures to achieve a lower reliance on coal in the economy in order to reduce carbon dioxide emissions (decarbonisation), and improvements in the electrical network (electrification). It should be noted that those responsible for economic policy must tread very carefully to balance environmental pollution controls with economies’ legitimate aspirations for economic growth. This is a common theme in debates on the desirability of a more active green taxation system that includes taxes on carbon emissions, something that has already been demanded by a select group of 27 Nobel Prize winners and the last four presidents of the Fed.2 This transition can only be achieved with the fourth factor, reducing energy intensity. Energy intensity is the energy consumed per unit of GDP, and reducing it relies on the current environmental policy targets being met. Taking these four factors into account and based on forecasts by the US Energy Information Administration (EIA), it is estimated that between 2018 and 2030, global energy consumption will increase by around 15%, and its economic cost by a little more, around 18%.
This higher growth in costs is mostly driven by the transition costs associated with shifting towards other energy sources that are cleaner, but also more expensive. Even so, these increases are likely to be lower than the expected growth in global GDP, which will stand at around 45%. This is thanks to the fact that global energy intensity could fall significantly, by around 20%. By country (see first chart), China, India and the rest of the Emerging East Asia bloc will account for four fifths of the expected increase in global energy consumption between 2018 and 2030 (54.0% corresponding to China, and 12.5% to India). The combined increase of Western Europe, the US and Japan, meanwhile, will represent barely 1.4% of the expected total increase. But how will the energy mix evolve? According to our scenario, as we can glimpse in the second chart, the energy mix should evolve towards a reduction in the role of oil and coal, from 35% to 32% and from 27% to 25% of the total energy consumption, respectively. On the other hand, renewables could acquire greater importance (going from 13% of the total to 16%), as could natural gas (going from 21% to 22%) and nuclear energy (from 4.6% to 5%). However, achieving the dual objective of strong economic growth while also controlling pollution seems less certain, since emissions would not fall but rather would see an 11.0% rise. That said, such an increase would still represent an improvement on the 13.0% rise registered in 2010-2018, a period with lower global GDP growth (30.4%). If we focus on the key factors we highlighted above, the environmental imperative is inescapable. The situation is not particularly flattering, because in 2018, 34,854 million metric tonnes of carbon dioxide were released into the atmosphere, 13% more than in 2010, when the objective is to reduce emissions. China has contributed 61% to this increase, because although it is making notable progress in controlling pollution, the very dynamics of its high economic growth and the weight of its heavy industry have played against it. Other emerging economies, especially India, have not made any progress, which will make it difficult to achieve the targets that have been set. This need to reconcile emerging economies’ legitimate desire for growth with controlling environmental pollution is what will define the global economy over the next decade. The second factor is decarbonisation, the focus of attention for the environmental imperative where the critical factor is coal: coal represented 26.9% of global energy consumption in 2018 but was responsible for 43.3% of global emissions. Between 2010 and 2018, there has been no reduction in the weight of this energy source. Since it is cheap, it is the primary energy source for China and India, which are the fastest-growing of all large economies (China and India contributed 40.0% of the increase in global energy consumption between 2010 and 2018). The good news is that the shift towards decarbonisation has already begun in China, where coal has gone from representing 68.1% of the total energy consumption in 2010 to 60.2% in 2018. India, in contrast, is not on the same wavelength: coal represented 48.5% of its energy consumption in 2018, above the 46.8% of 2010. What does the future hold? 
If the Chinese economy maintains the current trend, we will begin to see a significant reduction in the use of coal over the next decade: its weight in global energy consumption is expected to decline by 2.1 pps between now and 2030, largely thanks to improvements in China. The third key factor, electrification, will be driven by the need to reduce pollution in large cities. Electrification is the best way to achieve this, because it allows the generation of energy from fossil fuels (the main cause of emissions) to be replaced by clean energy sources such as wind or solar. Thus, over the next few decades, a gradual process of electrification is expected, which will require significant investments and will extend to industries such as transportation, buildings and manufacturing. The importance of this phenomenon can be seen when we calculate the electricity fee, the percentage of total energy consumption that corresponds to energy loss resulting from converting primary energy sources into electricity. According to data from the EIA, this loss of energy has remained stable between 2010 and 2018 at slightly above 25%,3 but it is expected to rise to 26.9% by 2030 with the increase in electrification. In any case, electrification will be a phenomenon with far-reaching implications that will allow for a more sustainable geographical allocation of power generation. The fourth factor is the reduction of energy intensity, which is essential in order to balance economic growth with the control of pollution. Energy intensity depends on two factors linked to technology: energy efficiency and changes in the composition of GDP. Energy efficiency means consuming less while doing the same thing (for example, reducing the consumption of a car per kilometre travelled). Changes in the composition of GDP, meanwhile, can boost activities that consume less energy. This is achieved if sectorial adjustments are made in the economy, such as reducing the weight of heavy industry in favour of information technologies. In this regard, the future path of energy intensity at the global level will critically depend on what happens in China. China already plays a key role if we consider that, between 2010 and 2018, it has contributed 28.5% and 60.9% to the global increase in energy consumption and emissions, respectively. As we can see in the third chart, the Asian giant will continue to be a key player, given that it is expected to contribute 30.0% of the energy savings between 2018 and 2030, greater than the sum of the US and Western Europe (16.7% and 7.4%, respectively). It should be noted that China plans to achieve its energy savings primarily through a significant reduction in energy intensity of around 20% (greater than the 17.4% corresponding to 2010-2018). It intends to achieve this through a process of structural transformation as it shifts towards an economic model with a greater weight of the tertiary sector.4 On the contrary, Western Europe is expected to make a smaller contribution, as it is starting from a relatively more efficient position: in 2018, the amount of energy that Europe spent to produce each euro of its GDP was less than that spent by the US and China (31.6% and 40.9% less, respectively). In short, the global economy is evolving towards a more sustainable energy mix, which seeks to combine buoyant economic growth with greater control over pollution.
Nevertheless, all the indicators suggest that the progress we will see over the next few years will be limited, since, although global GDP is expected to grow well above energy consumption, carbon emissions will continue to rise significantly and the improvement compared to the last decade will be modest. All in all, energy will be a very hot topic over the next decade (and beyond) and the pending challenges will continue to be substantial.

Jordi Singla

1. Carlo M. Cipolla (1962). «The Economic History of World Population». Pelican Books.
2. See the 2019 article «Economist’s Statement on Carbon Dividends» at https://www.econstatement.org/.
3. Which is less than the weight of industry, at 40.4%, but higher than that of transport, trade and residential use (18.9%, 5.3% and 9.4%, respectively).
4. The EIA foresees a faster change of model and predicts a greater reduction in energy intensity (34.7%).

Tags: Long-term trends
2022-05-19 16:25:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5883487462997437, "perplexity": 2123.779369034788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662529538.2/warc/CC-MAIN-20220519141152-20220519171152-00053.warc.gz"}
https://nhigham.com/2021/06/15/what-is-a-vandermonde-matrix/comment-page-1/
# What Is a Vandermonde Matrix?

A Vandermonde matrix is defined in terms of scalars $x_1$, $x_2$, …, $x_n\in\mathbb{C}$ by $\notag V = V(x_1,x_2,\dots,x_n) = \begin{bmatrix} 1 & 1 & \dots & 1 \\ x_1 & x_2 & \dots & x_n \\ \vdots &\vdots & & \vdots \\ x_1^{n-1} & x_2^{n-1} & \dots & x_n^{n-1} \end{bmatrix} \in \mathbb{C}^{n\times n}.$ The $x_i$ are called points or nodes. Note that while we have indexed the nodes from $1$, they are usually indexed from $0$ in papers concerned with algorithms for solving Vandermonde systems.

Vandermonde matrices arise in polynomial interpolation. Suppose we wish to find a polynomial $p_{n-1}(x) = a_nx^{n-1} + a_{n-1}x^{n-2} + \cdots + a_1$ of degree at most $n-1$ that interpolates to the data $(x_i,f_i)_{i=1}^n$, that is, $p_{n-1}(x_i) = f_i$, $i=1\colon n$. These equations are equivalent to $\notag V^Ta = f \quad \mathrm{(dual)},$ where $a = [a_1,a_2,\dots,a_n]^T$ is the vector of coefficients. This is known as the dual problem. We know from polynomial interpolation theory that there is a unique interpolant if the $x_i$ are distinct, so this is the condition for $V$ to be nonsingular. The problem $\notag Vy = b \quad \mathrm{(primal)}$ is called the primal problem, and it arises when we determine the weights for a quadrature rule: given moments $b_i$ find weights $y_i$ such that $\sum_{j=1}^n y_j^{} x_j^{\,i-1} = b_i$, $i=1\colon n$.

## Determinant

The determinant of $V$ is a function of the $n$ points $x_i$. If $x_i = x_j$ for some $i\ne j$ then $V$ has identical $i$th and $j$th columns, so is singular. Hence the determinant must have a factor $x_i - x_j$. Consequently, we have $\notag \det( V(x_1,x_2,\dots,x_n) ) = c \displaystyle\prod_{i,j = 1\atop i > j}^n (x_i - x_j),$ where, since both sides have degree $n(n-1)/2$ in the $x_i$, $c$ is a constant. But $\det(V)$ contains a term $x_2 x_3^2 \dots x_n^{n-1}$ (from the main diagonal), so $c = 1$. Hence $\notag \det(V) = \displaystyle\prod_{i,j = 1\atop i > j}^n (x_i - x_j). \qquad (1)$ This formula confirms that $V$ is nonsingular precisely when the $x_i$ are distinct.

## Inverse

Now assume that $V$ is nonsingular and let $V^{-1} = W = (w_{ij})_{i,j=1}^n$. Equating elements in the $i$th row of $WV = I$ gives $\sum_{j=1}^n w_{ij} x_k^{\mskip1mu j-1} = \delta_{ik}, \quad k=1\colon n,$ where $\delta_{ij}$ is the Kronecker delta (equal to $1$ if $i=j$ and $0$ otherwise). These equations say that the polynomial $\sum_{j=1}^n w_{ij} x^{\mskip1mu j-1}$ takes the value $1$ at $x = x_i$ and $0$ at $x = x_k$, $k\ne i$. It is not hard to see that this polynomial is the Lagrange basis polynomial: $\notag \sum_{j=1}^n w_{ij} x^{j-1} = \displaystyle\prod_{k=1\atop k\ne i}^n \left( \frac{x-x_k}{x_i-x_k} \right) =: \ell_i(x). \qquad (2)$ We deduce that $\notag w_{ij} = \displaystyle\frac{ (-1)^{n-j} \sigma_{n-j}(x_1,\dots,x_{i-1},x_{i+1},\dots,x_n) } { \displaystyle\prod_{k=1 \atop k\ne i}^n (x_i-x_k) }, \qquad (3)$ where $\sigma_k(y_1,\dots,y_n)$ denotes the sum of all distinct products of $k$ of the arguments $y_1,\dots,y_n$ (that is, $\sigma_k$ is the $k$th elementary symmetric function). From (1) and (3) we see that if the $x_i$ are real and positive and arranged in increasing order $0 < x_1 < x_2 < \cdots < x_n$ then $\det(V) > 0$ and $V^{-1}$ has a checkerboard sign pattern: the $(i,j)$ element has sign $(-1)^{i+j}$.
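As an editorial aside (not part of the original post), the determinant formula (1) and the checkerboard sign pattern of the inverse are easy to verify numerically. The following minimal sketch assumes NumPy is available and uses the same convention as above, with the powers of $x_j$ running down column $j$.

```python
import numpy as np

n = 5
x = np.linspace(0.5, 2.5, n)            # distinct, positive, increasing points

# Build V with the convention used above: column j holds 1, x_j, x_j^2, ...
V = np.vander(x, increasing=True).T

# Determinant formula (1): det(V) = prod_{i > j} (x_i - x_j)
lhs = np.linalg.det(V)
rhs = np.prod([x[i] - x[j] for i in range(n) for j in range(i)])
print(np.isclose(lhs, rhs))             # True

# Checkerboard sign pattern of the inverse for positive increasing points
W = np.linalg.inv(V)
i, j = np.indices((n, n))
print(np.array_equal(np.sign(W), (-1.0) ** (i + j)))   # True
```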
Note that summing (2) over $i$ gives $\notag \displaystyle\sum_{j=1}^n x^{j-1} \sum_{i=1}^n w_{ij} = \sum_{i=1}^n \ell_i(x) = 1,$ where the second equality follows from the fact that $\sum_{i=1}^n \ell_i(x)$ is a degree $n-1$ polynomial that takes the value $1$ at the $n$ distinct points $x_i$. Hence $\notag \displaystyle\sum_{i=1}^n w_{ij} = \delta_{j1},$ so the elements in the $j$th column of the inverse sum to $1$ for $j = 1$ and $0$ for $j\ge 2$. ## Example To illustrate the formulas above, here is an example, with $x_i = (i-1)/(n-1)$ and $n = 5$: $\notag V = \left[\begin{array}{ccccc} 1 & 1 & 1 & 1 & 1\\ 0 & \frac{1}{4} & \frac{1}{2} & \frac{3}{4} & 1\\[\smallskipamount] 0 & \frac{1}{16} & \frac{1}{4} & \frac{9}{16} & 1\\[\smallskipamount] 0 & \frac{1}{64} & \frac{1}{8} & \frac{27}{64} & 1\\[\smallskipamount] 0 & \frac{1}{256} & \frac{1}{16} & \frac{81}{256} & 1 \end{array}\right], \quad V^{-1} = \left[\begin{array}{ccccc} 1 & -\frac{25}{3} & \frac{70}{3} & -\frac{80}{3} & \frac{32}{3}\\[\smallskipamount] 0 & 16 & -\frac{208}{3} & 96 & -\frac{128}{3}\\ 0 & -12 & 76 & -128 & 64\\[\smallskipamount] 0 & \frac{16}{3} & -\frac{112}{3} & \frac{224}{3} & -\frac{128}{3}\\[\smallskipamount] 0 & -1 & \frac{22}{3} & -16 & \frac{32}{3} \end{array}\right],$ for which $\det(V) = 9/32768$. ## Conditioning Vandermonde matrices are notorious for being ill conditioned. The ill conditioning stems from the monomials being a poor basis for the polynomials on the real line. For arbitrary distinct points $x_i$, Gautschi showed that $V_n = V(x_1, x_2, \dots, x_n)$ satisfies $\notag \displaystyle\max_i \displaystyle\prod_{j\ne i} \frac{ \max(1,|x_j|) }{ |x_i-x_j| } \le \|V_n^{-1}\|_{\infty} \le \displaystyle\max_i \prod_{j\ne i} \frac{ 1+|x_j| }{ |x_i-x_j| },$ with equality on the right when $x_j = |x_j| e^{\mathrm{i}\theta}$ for all $j$ with a fixed $\theta$ (in particular, when $x_j\ge0$ for all $j$). Note that the upper and lower bounds differ by at most a factor $2^{n-1}$. It is also known that for any set of real points $x_i$, $\notag \kappa_2(V_n) \ge \Bigl(\displaystyle\frac{2}{n}\Bigr)^{1/2} \, (1+\sqrt{2})^{n-2}$ and that for $x_i = 1/i$ we have $\kappa_{\infty}(V_n) > n^{n+1}$, where the lower bound is an extremely fast growing function of the dimension! These exponential lower bounds are alarming, but they do not necessarily rule out the use of Vandermonde matrices in practice. One of the reasons is that there are specialized algorithms for solving Vandermonde systems whose accuracy is not dependent on the condition number $\kappa$, and which in some cases can be proved to be highly accurate. The first such algorithm is an $O(n^2)$ operation algorithm for solving $V_ny =b$ of Björck and Pereyra (1970). There is now a long list of generalizations of this algorithm in various directions, including for confluent Vandermonde-like matrices (Higham, 1990), as well as for more specialized problems (Demmel and Koev, 2005) and more general ones (Bella et al., 2009). Another important observation is that the exponential lower bounds are for real nodes. For complex nodes $V_n$ can be much better conditioned. Indeed when the $x_i$ are the roots of unity, $V_n/\sqrt{n}$ is the unitary Fourier matrix and so $V_n$ is perfectly conditioned. ## Generalizations Two ways in which Vandermonde matrices have been generalized are by allowing confluency of the points $x_i$ and by replacing the monomials by other polynomials. Confluency arises when the $x_i$ are not distinct. 
If we assume that equal $x_i$ are contiguous then a confluent Vandermonde matrix is obtained by “differentiating” the previous column for each of the repeated points. For example, with points $x_1, x_1, x_1, x_2, x_2$ we obtain $\notag \begin{bmatrix} 1 & 0 & 0 & 1 & 0 \\ x_1 & 1 & 0 & x_2 & 1 \\ x_1^2 & 2x_1 & 2 & x_2^2 & 2x_2 \\ x_1^3 & 3x_1^2 & 6x_1 & x_2^3 & 3x_2^2 \\ x_1^4 & 4x_1^3 & 12x_1^2 & x_2^4 & 4x_2^3 \end{bmatrix}. \qquad (4)$ The transpose of a confluent Vandermonde matrix arises in Hermite interpolation; it is nonsingular if the points corresponding to the “nonconfluent columns” are distinct (that is, if $x_1 \ne x_2$ in the case of (4)). A Vandermonde-like matrix is defined in terms of a set of polynomials $\{p_i(x)\}_{i=0}^n$ with $p_i$ having degree $i$: $\notag \begin{bmatrix} p_0(x_1) & p_0(x_2) & \dots & p_0(x_n)\\ p_1(x_1) & p_1(x_2) & \dots & p_1(x_n)\\ \vdots & \vdots & \dots & \vdots\\ p_{n-1}(x_1) & p_{n-1}(x_2) & \dots & p_{n-1}(x_n)\\ \end{bmatrix}.$ Of most interest are polynomials that satisfy a three-term recurrence, in particular, orthogonal polynomials. Such matrices can be much better conditioned than general Vandermonde matrices. ## Notes Algorithms for solving confluent Vandermonde-like systems and their rounding error analysis are described in the chapter “Vandermonde systems” of Higham (2002). Gautschi has written many papers on the conditioning of Vandermonde matrices, beginning in 1962. We mention just his most recent paper on this topic: Gautschi (2011). ## References This is a minimal set of references, which contain further useful references within. ## One thought on “What Is a Vandermonde Matrix?” 1. A few interesting tidbits regarding Vandermonde matrices. Since det V has a nice closed form Vandermonde systems are one of the few places where one can meaningfully apply Cramer’s rule. If a set of nodes maximize |det V| they are known as Fekete points. In one dimension the Fekete points are the Gauss-Legendre-Lobatto nodes (GLL); a result which I believe was first shown by Fejér. The proof is tedious. A cottage industry exists around trying to find Fekete points in higher dimensional domains with applications to multi-variate Lagrange interpolation.
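To round off, here is a small numerical illustration of the conditioning remarks made earlier (again an added sketch, not from the original post, and assuming NumPy): the condition number explodes for the real points $x_i = 1/i$, while for the roots of unity the matrix is perfectly conditioned.

```python
import numpy as np

# Condition numbers for the real points x_i = 1/i mentioned above
for n in (5, 10, 15):
    x = 1.0 / np.arange(1, n + 1)
    V = np.vander(x, increasing=True).T
    print(n, np.linalg.cond(V, np.inf))      # grows extremely quickly with n

# Roots of unity: V/sqrt(n) is the unitary Fourier matrix, so kappa_2(V) = 1
n = 16
w = np.exp(2j * np.pi * np.arange(n) / n)
Vf = np.vander(w, increasing=True).T
print(np.linalg.cond(Vf, 2))                 # ~1 up to roundoff
```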
2021-07-24 23:53:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 114, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9563526511192322, "perplexity": 223.61301925465776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151531.67/warc/CC-MAIN-20210724223025-20210725013025-00600.warc.gz"}
http://dev.theomader.com/skeletal-animation-reviewed/
# Skeletal Animation Reviewed

And now it's getting interesting: Let's look into how we can animate our models! The by far most popular method for animating rigid bodies in real-time is called skeletal animation, mostly due to its simplicity and run-time efficiency. As the name already suggests: We associate our model with a skeleton and perform all deformations based on this skeleton. Notice the similarity to human beings: our motions are constrained and guided by a skeleton as well. Taking a closer look at the human skeleton, we can identify two major parts: joints and bones. Bones are rigid structures that don't deform: Put too much pressure on a bone and it will break – something almost every one of us has experienced already :(. Joints in turn are dynamic: they can rotate with various degrees of freedom (e.g. shoulder joint vs. elbow joint). Each bone is attached to at least one joint and thus a rotation of the joint will cause a rotation of the attached bones. If you rotate your elbow joint for example you'll see that the directly attached bones (Radius and Ulna, which connect the elbow joint with the wrist joint) are rotated around the elbow joint's local rotation axis. But also the indirectly attached bones like in your fingers and wrist are rotated around the same axis as well. Our skeleton thus defines a hierarchy where the rotation of one joint will also rotate all joints (and bones) in the hierarchy 'below'. Let's illustrate this with a simple example: Take a simple cylinder (1). We create a skeleton consisting of 4 bones and 5 joints and attach it to the cylinder (2). The skeleton hierarchy is simple: each bone has one child (3). Now let's rotate joint 2 by a couple of degrees. All joints below joint 2 will rotate around the local rotation axis defined by joint 2, resulting in the deformed cylinder as shown in (4).

Left to right: cylinder (1), cylinder with skeleton (2), skeleton hierarchy (3), deformed cylinder (4)

So let's recap what we've gathered so far:

• A skeleton represents a hierarchy of joints and bones
• Each joint can rotate around a local rotation axis
• The rotation of a given joint will cause rotation of all joints in the hierarchy below
• The mesh is bound to the hierarchy such that it will deform with it.

Now how can we model this mathematically? Let's look at our wrist joint again and remember that it is connected to the elbow joint via the Radius bone. Consider a point in the local coordinate system of the wrist joint: We can express its position relative to the local coordinate system of the elbow joint by rotating it around the wrist joint and then translating it along the Radius bone: $\mathbf{p}' = \mathbf{R}_{Wrist}(\mathbf{p} ) + \mathbf{T}_{Radius}$ Going one step up the hierarchy, we can express its position relative to the local coordinate system of the shoulder joint by rotating $\mathbf{p}'$ around the elbow joint and translating it along the Humerus bone: $\mathbf{q}' = \mathbf{R}_{Elbow}( \mathbf{p}' ) + \mathbf{T}_{Humerus}$ Inserting $\mathbf{p}'$ into the formula for $\mathbf{q}'$ yields $\mathbf{q}' = \mathbf{R}_{Elbow}( \mathbf{R}_{Wrist}( \mathbf{p} ) + \mathbf{T}_{Radius} ) + \mathbf{T}_{Humerus}$ which simply corresponds to a concatenation of the transforms for the wrist joint and the elbow joint.
This example shows that if we express each joint's transform in the local coordinate system of its parent, it is extremely simple to transform a point local to one joint into world space: all we need to do is concatenate the joint's transform with the transforms of its predecessors in the skeleton hierarchy and transform the point by the result. Formally, defining the rotation and translation transform of joint $i$ as $\mathbf{A}_i$ and the concatenation of two transforms as $*$ we can write the world space transform of joint $i$ as $\mathbf{W}_i = \mathbf{A}_1 * \mathbf{A}_2 * \dots * \mathbf{A}_i$ where $1, 2, \dots, i-1$ is the sequence of parents of joint $i$. This gives us a simple method of computing the world space transforms of all joints in the skeleton.

In order to deform the skeleton over time, all we need to do is animate the local transforms $\mathbf{A}_i$ of the joints. This is usually done via keyframe animation, where we store a sequence of deformation parameters over time. For example, if we want to animate the bend of the elbow joint we'd store the sequence of rotation values for the elbow joint along with the corresponding animation time as a sequence of (rotation, time) values. At runtime we'd search the sequence for the rotation value that best matches the current animation time and then change the rotation of $\mathbf{A}_i$ to represent that value. Please note that we've implicitly encoded the rotation of a given joint and the translation of its corresponding bone into one transform. Thus, in the skeleton defined this way we don't need to distinguish between bones and joints anymore and we will use the terms bone and joint interchangeably.

Let's look at some code now: I defined a character's skeleton as a simple class that stores the list of joint transforms as an STL vector of 4×4 matrices. In order to represent the hierarchy structure, I store the index of each joint's parent as an integer value in another STL vector:

```cpp
class Skeleton
{
    std::vector<aiMatrix4x4> mTransforms; // local transform of each joint
    std::vector<int>         mParents;    // index of each joint's parent, -1 for roots
    …
};
```

In this convention we can find the transform of a given joint $i$ in mTransforms[i] and its parent in mParents[i]. If a given joint doesn't have a parent I set mParents[i] = -1;. As discussed above, we can now simply compute a joint's world transform by concatenating its local transform with the transforms of all parents:

```cpp
aiMatrix4x4 Skeleton::getWorldTransform( int bone ) const
{
    aiMatrix4x4 result = mTransforms[bone];
    int p = mParents[bone];
    while( p >= 0 )                        // walk up the hierarchy until a root is reached
    {
        result = mTransforms[p] * result;  // prepend the parent's transform
        p = mParents[p];
    }
    return result;
}
```
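For readers who want to experiment with the same idea outside C++, here is a rough NumPy sketch of the parent-chain walk (an added illustration; the function and variable names are invented and do not come from the original article):

```python
import numpy as np

def world_transform(transforms, parents, bone):
    """Accumulate a joint's local 4x4 transform with all of its ancestors."""
    result = transforms[bone]
    p = parents[bone]
    while p >= 0:                      # walk up the hierarchy until the root
        result = transforms[p] @ result
        p = parents[p]
    return result

# Example: 3-joint chain, joint 2 -> joint 1 -> joint 0 (root), identity rotations
transforms = [np.eye(4) for _ in range(3)]
transforms[1][0, 3] = 2.0              # joint 1 sits 2 units along x from its parent
transforms[2][0, 3] = 1.5              # joint 2 sits 1.5 units along x from joint 1
parents = [-1, 0, 1]
print(world_transform(transforms, parents, 2))  # accumulated translation of 3.5 along x
```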
2018-11-14 01:21:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 16, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6603039503097534, "perplexity": 868.8017230729782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741569.29/warc/CC-MAIN-20181114000002-20181114022002-00066.warc.gz"}
http://www.global-sci.org/intro/article_detail/ata/12586.html
Volume 34, Issue 2

On Potentially Graphical Sequences of $G−E(H)$

Anal. Theory Appl., 34 (2018), pp. 187-198. Published online: 2018-07

Abstract: A loopless graph on $n$ vertices in which vertices are connected by at least $a$ and at most $b$ edges is called an $(a,b,n)$-graph. A $(b,b,n)$-graph is called a $(b,n)$-graph and is denoted by $K^b_n$ (it is a complete graph); its complement is denoted by $\overline{K}^b_n$. A non-increasing sequence $π = (d_1,···,d_n)$ of nonnegative integers is said to be $(a,b,n)$-graphic if it is realizable by an $(a,b,n)$-graph. We say a simple graphic sequence $π = (d_1,···,d_n)$ is potentially $K_4−K_2\cup K_2$-graphic if it has a realization containing $K_4−K_2\cup K_2$ as a subgraph, where $K_4$ is a complete graph on four vertices and $K_2\cup K_2$ is a set of independent edges. In this paper, we find the smallest degree sum such that every $n$-term graphical sequence contains $K_4−K_2\cup K_2$ as a subgraph.

Keywords: Graph, $(a,b,n)$-graph, potentially graphical sequences.

MSC: 05C07

Authors: Bilal A. Chat & S. Pirzada
doi:10.4208/ata.2018.v34.n2.8
2022-08-14 13:29:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4851072132587433, "perplexity": 446.46979105516084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572033.91/warc/CC-MAIN-20220814113403-20220814143403-00184.warc.gz"}
https://diffgeom.subwiki.org/wiki/Milnor_map
# Milnor map

Let $f$ be a complex polynomial in $(n+1)$ variables. $f$ can be viewed as a map $\R^{2n+2} \to \R^2$. Let $V_f$ denote the zero set of $f$. Then, on the complement of $V_f$, we can define a map $f/|f|$ from $\R^{2n+2} \setminus V_f$ to $S^1$. The Milnor map of $f$ at radius $r$ is the restriction of this map to the sphere of radius $r$ centered at the origin; it is a map from that sphere (minus its intersection with $V_f$) to $S^1$. By the Milnor fibration theorem, the Milnor map is a fibration whenever the origin is an isolated singular point of $V_f$. Under such circumstances, it is termed the Milnor fibration.
2019-10-20 08:30:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 15, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9941396713256836, "perplexity": 57.170846111959264}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986705411.60/warc/CC-MAIN-20191020081806-20191020105306-00423.warc.gz"}
http://tech.yanto-flora.net/e-070916-195947_How_to_Compute_the_Reciprocal_of_A_Value
Have you ever wondered how your calculator computes the result when you press the "1/x" button? And have you ever wondered how your computer computes the reciprocal? The reciprocal of a value is a special case of a division, where the numerator is 1.0. In mathematical form, we can write it as follows: $\frac{1}{x} = y$ where x is the operand and y is the result. Rewriting the equation gives us: $1 - xy = 0$ And expanding y to its binary representation gives us: $1 - x \left(\sum_{i=n\_start}^{n\_end} a_i 2^i\right) = 0$ Alright! Now we have a much nicer equation to solve, don't we? For those who don't get it yet: all we have to do is loop i from n_start down to n_end, check whether the left-hand side becomes negative if we set $a_i$ to 1, and if it does, then $a_i$ is 0 for that particular i. The nice thing is, all multiplications involved are by powers of two, which either in software or hardware can be replaced with shift operations. Below is a very simple Perl program to verify this, which tests i from 32 down to -32 (quite overkill!):

```perl
#!/usr/bin/perl
$X = $ARGV[0];
$n = 32;
$Y = 0.0;
printf("Digit x*y y\n");
while ($n >= -32) {
    if (1 - $X * ($Y + 2**$n) > 0) {
        $Y += 2**$n;
        printf("2^%-3d %5.10f %5.10f\n", $n, $X*$Y, $Y);
    }
    $n--;
}
printf "\n";
printf "Computed Result: %.10f\n", $Y;
printf "Calculated Result: %.10f\n", 1.0/$X;
```

The program's output, with a parameter of 2.345, is:

```
Digit x*y y
2^-2  0.5862500000 0.2500000000
2^-3  0.8793750000 0.3750000000
2^-5  0.9526562500 0.4062500000
2^-6  0.9892968750 0.4218750000
2^-8  0.9984570313 0.4257812500
2^-11 0.9996020508 0.4262695312
2^-13 0.9998883057 0.4263916016
2^-15 0.9999598694 0.4264221191
2^-16 0.9999956512 0.4264373779
2^-20 0.9999978876 0.4264383316
2^-21 0.9999990058 0.4264388084
2^-22 0.9999995649 0.4264390469
2^-23 0.9999998444 0.4264391661
2^-24 0.9999999842 0.4264392257
2^-28 0.9999999929 0.4264392294
2^-29 0.9999999973 0.4264392313
2^-30 0.9999999995 0.4264392322

Computed Result: 0.4264392322
Calculated Result: 0.4264392324
```

Note that x*y is updated towards 1.0 and y towards the reciprocal of x (2.345) with each accepted digit.
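The same digit-by-digit loop translates directly to other languages. Here is a rough Python equivalent (an added sketch, not part of the original article):

```python
def reciprocal(x, high=32, low=-32):
    """Build 1/x one binary digit at a time, from bit 2^high down to 2^low."""
    y = 0.0
    for n in range(high, low - 1, -1):
        # Accept bit 2^n only if 1 - x*(y + 2^n) stays positive
        if 1.0 - x * (y + 2.0 ** n) > 0.0:
            y += 2.0 ** n
    return y

print(reciprocal(2.345))   # ~0.4264392322
print(1.0 / 2.345)         # 0.4264392324...
```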
2017-07-28 06:34:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.988910436630249, "perplexity": 5629.551722669983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549448095.6/warc/CC-MAIN-20170728062501-20170728082501-00192.warc.gz"}
http://openstudy.com/updates/4dd26204199b8b0b2f002579
## anonymous 5 years ago Find the center of the ellipse (x-4)^(2)+(y-1)^(2)/(16)=1

1. watchmath The center is simply the numbers inside your parentheses, i.e. (4,1).
2. anonymous There are many numbers inside the parentheses
3. anonymous watchmath, did you do the ring problem, or were you just being challenging?
4. watchmath I did it once, but I am not sure if I can remember the solution :).
5. anonymous Were you saying that (4,1) is the answer?
6. watchmath yes :)
7. anonymous thank you
8. watchmath Whenever you have $\frac{(x-h)^2}{a^2}+\frac{(y-k)^2}{b^2}=1$ then the center is $$(h,k)$$.
9. anonymous then wouldn't it be (-4,-1)? or no, because of the squares?
10. watchmath Compare to $\frac{(x-4)^2}{1^2}+\frac{(y-1)^2}{16}=1$ What is the $$(h,k)$$ here?
11. anonymous But it wouldn't be negative coordinates, it would be positive?
12. watchmath well if you compare (x-h) to (x-4) then the h is 4 in this case. Similarly if you compare (y-k) and (y-1) then k=1 here. So (4,1) is the center.
13. anonymous okay thanks
2017-01-24 15:50:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7942861914634705, "perplexity": 3680.723784312462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00037-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/how-to-calculate-vertical-velocity.12063/
How to calculate vertical velocity

1. Jan 4, 2004, mort: I need to calculate the vertical velocity for a projectile.
2. Jan 4, 2004, HallsofIvy (Staff Emeritus): Assuming no air resistance, the vertical acceleration is -g (-9.8 m/s2 in metric units, -32.2 ft/s2 in English units), so that after t seconds the velocity is -gt + v0 (v0 is the velocity at 0 seconds).
3. Jan 4, 2004, mort: oops (Last edited: Jan 4, 2004)
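As a quick numerical illustration of the formula v = -gt + v0 (a small sketch added here, not part of the original thread; the initial velocity is a made-up value):

```python
g = 9.8     # m/s^2, magnitude of gravitational acceleration
v0 = 20.0   # made-up initial upward velocity in m/s

def vertical_velocity(t):
    """Vertical velocity after t seconds, ignoring air resistance."""
    return -g * t + v0

for t in (0.0, 1.0, 2.0, 3.0):
    print(t, vertical_velocity(t))   # decreases by 9.8 m/s every second
```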
2017-06-25 17:13:09
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8831634521484375, "perplexity": 9555.05395943665}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320545.67/warc/CC-MAIN-20170625170634-20170625190634-00617.warc.gz"}
https://socratic.org/questions/how-do-you-graph-find-the-zeros-intercepts-domain-and-range-of-f-x-abs-x-2-absx
# How do you graph, find the zeros, intercepts, domain and range of f(x)=abs(x+2)-absx?

Apr 5, 2017

Domain is $\left(- \infty , \infty\right)$, range is $\left[- 2 , 2\right]$. $y$-intercept is $2$ and $x$-intercept is $x = - 1$

#### Explanation:

For $x \le - 2$, $f \left(x\right) = - \left(x + 2\right) - \left(- x\right) = - x - 2 + x = - 2$

for $x \ge 0$, $f \left(x\right) = \left(x + 2\right) - \left(x\right) = x + 2 - x = 2$

and for $- 2 < x < 0$, $f \left(x\right) = x + 2 - \left(- x\right) = 2 x + 2$

Hence domain is $\left(- \infty , \infty\right)$, range is $\left[- 2 , 2\right]$; at $x = 0$ the $y$-intercept is $2$, and the $x$-intercept, being at $y = 0$, is given by $2 x + 2 = 0$ i.e. $x = - 1$

The graph appears as follows: graph{|x+2|-|x| [-10, 10, -5, 5]}
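A quick numerical spot-check of the piecewise analysis (an added sketch, not part of the original answer):

```python
f = lambda x: abs(x + 2) - abs(x)

xs = [x / 100 for x in range(-500, 501)]   # sample points on [-5, 5]
ys = [f(x) for x in xs]
print(min(ys), max(ys))      # -2.0 2.0, consistent with the range [-2, 2]
print(f(0))                  # 2, the y-intercept
print(f(-1))                 # 0, the x-intercept
```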
2019-09-16 14:10:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 21, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9377292990684509, "perplexity": 2579.8623623570293}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572744.7/warc/CC-MAIN-20190916135948-20190916161948-00221.warc.gz"}
http://on-the-t.com/2016/11/19/aoleaderboard-reaction-time/
# AO Leaderboard - Reaction Time

In my last post I discussed the idea of return pressure and suggested that it could be measured by looking at the amount of time a receiver gives the server to prepare for their second shot. Players who exert more pressure with their return of serve are expected to get a quicker jump on the ball and make a more powerful shot inside the baseline. In this post I want to focus on one part of that equation, the quick jump, by looking at reaction times on the serve return.

Reaction time in these summaries represents the expected seconds a returner takes to make impact with a serve traveling at average speed from the time the serve passes the net. The average speed of serve is the same for all players on the same tour, so the reaction times are all compared against the same standard, using a ridge regression approach. It would be unfair to say a returner was slow to react just because he or she had seen more slow serves than other players, so this approach tries to avoid that.

The chart below shows the ATP players with the quickest and slowest reaction times based on play at the 2014 to 2016 Australian Open. Only players with 150 or more serve returns over all those years are included, to ensure the estimated times are measured with sufficient precision for each player. The size of the points reflects the number of serve returns a player had in the dataset and larger sizes indicate more confidence about that estimate.

We find Aussie Nick Kyrgios at the top of the pack with an expected reaction time of 0.61 seconds. Next is Roger Federer with 0.62 seconds, providing support to many claims that Federer reads the server sooner than most players. We find a number of the ATP World Tour Finals participants with quick reaction times. Gael Monfils and Novak Djokovic are at the top with times of 0.64 seconds. Andy Murray eliminated Stan Wawrinka from the Tour Finals on Friday and also edges him out on his reaction time, taking 0.64 seconds to return, on average, compared to 0.65 seconds for Wawrinka. Murray will face Milos Raonic in the semifinals and might be somewhat reassured to find that Raonic is something of an outlier when it comes to returning serve, taking 0.70 seconds to return serve on average. That could be a key advantage for the World No. 1.

When we turn to the women's side, we find more reaction times of top players that are greater than 0.7 seconds, which we can attribute to the slower-paced serve in the women's game. Still, a few women have reaction times that are so fast they are competitive with the men's game. At the top of the list is Venus Williams with a lightning-fast 0.67 seconds to return. Her sister Serena is not too far behind with a time of 0.69 seconds, a time that Ana Ivanovic, Eugenie Bouchard and Garbine Muguruza hover around as well.

On the slower reaction side, we see some of the players with a more defensive style of play including Caroline Wozniacki, with a time of 0.76 seconds, and Sara Errani, with a time of 0.8 seconds. I was surprised to see Madison Keys and Simona Halep also in the group with 0.78 seconds or more to return as I think of them as being more aggressive on the return than those numbers would imply. It may be that, while some of the women in this group hit the ball hard on the return, they lose some time by being positioned further from the baseline. Still, it would take some more digging into the data to see whether that conclusion held up.
2022-01-20 08:52:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36865025758743286, "perplexity": 1174.7994314681844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301730.31/warc/CC-MAIN-20220120065949-20220120095949-00191.warc.gz"}
https://www.trustudies.com/question/2463/Q-1-Construct-triangle-ABC-given-m-angle-A-60-m-angle-B-30-and-AB-5-8-cm/
Q.1 Construct $$\triangle$$ABC, given m $$\angle$$A = 60°, m $$\angle$$B = 30° and AB = 5.8 cm.

Steps of construction:
1. Draw a line segment AB = 5.8 cm.
2. At point A, draw an angle of 60°, i.e. $$\angle$$A = 60°.
3. At point B, draw an angle of 30°, i.e. $$\angle$$B = 30°.
4. The two rays drawn in steps 2 and 3 intersect at point C.
Then, $$\triangle$$ABC is the required triangle.
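As a coordinate cross-check of the construction (an added sketch, not part of the original answer), place A at the origin and B on the x-axis; the law of sines then locates C:

```python
import math

A = (0.0, 0.0)
B = (5.8, 0.0)
angle_A = math.radians(60)
angle_B = math.radians(30)
angle_C = math.pi - angle_A - angle_B          # remaining angle, 90 degrees here

# Law of sines: AC / sin(B) = AB / sin(C)
AC = 5.8 * math.sin(angle_B) / math.sin(angle_C)
C = (AC * math.cos(angle_A), AC * math.sin(angle_A))
print(math.degrees(angle_C), C)                # 90.0 and C ~ (1.45, 2.51)
```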
2021-01-26 01:20:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9813331365585327, "perplexity": 4160.644703362865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704795033.65/warc/CC-MAIN-20210126011645-20210126041645-00294.warc.gz"}
https://binfalse.de/page7/
## Sync the clock w/o NTP The network time protocol (NTP) is a really smart and useful protocol to synchronize the time of your systems, but even if we are in two-thousand-whatever there are reasons why you need to seek for alternatives... You may now have some kind of »what the [cussword of your choice]« in mind, but I have just been in an ugly situation. All UDP traffic is dropped and I don't have permissions to adjust the firewall.. And you might have heard about the consequences of time differences between servers. Long story short, there is a good solution to sync the time via TCP, using the Time Protocol and a tool called rdate . ## Time Master First off all you need another server having a correct time (e.g. NTP sync'ed), which can be reached at port 37. Let's call this server $MASTER . To enable the Time Protocol on $MASTER you have to enable the time service in (x)inetd. For instance to enable the TCP service for a current xinetd you could create a file in /etc/xinetd.d/time with the following contents: Such a file may already exist, so you just have to change the value of the disable -key to no . Still using inetd? I'm sure you'll find your way to enable the time server on your system :) ## Time Slave On the client, which is not allowed to use NTP (wtfh!?), you need to install rdate : Just call the following command to synchronize the time of the client with $MASTER : Since rdate immediately corrects the time of your system you need to be root to run this command. Finally, to readjust the time periodically you might want to install a cronjob. Beeing root call crontab -e to edit root's crontab and append a line like the following: This will synchronize the time of your client with the time of $MASTER every six hours. (Don't forget to substitute $MASTER using your desired server IP or DNS.) ## Notes Last but not least I want you to be aware that this workaround just keeps the difference in time between both systems less than 0.5 secs. Beyond all doubt, looking at NTP that's very poor. Nevertheless, 0.5 secs delay is much better than several minutes or even hours! If it is also not permitted to speak to port 37 you need to tunnel your connections or you have to tell the time server to listen to another, more common port (e.g. 80, 443, or 993), as long as they are not already allocated by other services.. ## Bash Wildcards I wanted to publish this summary about wildcards in the bash (and similar shells) some time ago, but didn’t finish it. But finally it gets published. The shell handles words or patterns containing a wildcard as a template. Available filenames are tested to see if they fit this template. This evaluation is also called globbing. Let’s have a look at a small example: In this example * is replaced by appropriate characters, and the list of matching files are passed to the ls command. This set of files will be used in the following examples. ## Encode for a single character: ? The question mark can be replaced by a single character. So if you want to get the files aaa1 , aaa2 , aaa3 and aaab you can use the following pattern: So you see, the ? is replaced by exactly one character. That is, both aaa and aaaa1 won’t match. ## Encode for a an arbitrary number of characters: * To match any number of characters you can use the asterix * . It can replace 0 to n characters, n is limited by the max length of the file name and depends on the file system you’re using. Adapting the previous snippet you’ll now also get aaa and aaaa1 : ## Encode for a set of characters: [...] 
Most of the common tasks can be done with the previous templates, but there are cases when you need to define the characters that should be replaced. You can specify this set of characters using brackets, e.g. [3421] can be replaced by 3, 4, 2 or 1 and is the same as [1-4]: As you can see aaaa5 doesn't match [3421], and btw. the order of the specified characters doesn't matter. And because it would be very annoying if you want to match against any alphabetic character (you would need to type all 26 characters), you can specify character ranges using a hyphen ( a-z ). Here are some examples:

| Template | Character set |
| --- | --- |
| [xyz1] | x, y, z or 1 |
| [C-Fc-f] | C, D, E, F, c, d, e or f |
| [a-z0-9] | Any lowercase character or digit |
| [^b-d] | Any character except b, c, d |
| [Yy][Ee][Ss] | Case-insensitive matching of yes |
| [[:alnum:]] | Alphanumeric characters, same as A-Za-z0-9 |
| [[:alpha:]] | Alphabetic characters, same as A-Za-z |
| [[:digit:]] | Digits, same as 0-9 |
| [[:lower:]] | Lowercase alphabetic characters, same as a-z |
| [[:upper:]] | Uppercase alphabetic characters, same as A-Z |
| [[:space:]] | Whitespace characters (space, tab etc.) |

Btw. the files that match such a template are sorted before they are passed to the command.

## Validating XML files

In the scope of different projects I often have to validate XML files. Here is my solution to verify XML files using a schema. First of all, to validate XML files in Java you need to create a SchemaFactory for the W3C XML schema language and you have to compile the schema (let's assume it's located in /path/to/schema.xsd ): Now you're able to create a validator from the schema. In order to validate an XML file you have to read it (let's assume it's located in /path/to/file.xml ): Last but not least you can validate the file: Download: JAVA: XMLValidator.java (Please take a look at the man-page. Browse bugs and feature requests.)

## HowTo Debug Bash Scripts

Even shell scripts may get very complex, so it is helpful to know how to debug them. Let's explain it on a small example: Executing it you'll get an output like this: To debug the execution of scripts the bash provides a debugging mode. There is one option -x to trace the execution. So you see, every line that is executed at runtime will be printed with a leading + ; comments are ignored. There is another option -v to enable verbose mode. In this mode each line that is read by the bash will be printed before it is executed: Of course you can combine both modes, so the script is sequentially printed and the commands are traced: These modes will help you to find some errors. To modify the output of the tracing mode you may configure the PS4 : This will also print the file name of the executing script, the line number of the current command that is executed and the respective function name: If you don't want to trace a whole script you can enable/disable tracing from within a script: This will result in something like: It is of course also possible to enable/disable verbose mode inside the script with set -v and set +v , respectively.

## Absolute Path of a Servlet Installation

I'm currently developing some Java Servlets and one of the tasks is to create images dynamically. But where to store them so that they are accessible for users? If you want to show the user for example a graph of some stuff that changes frequently you need to generate the image dynamically. The rendering of the graphic is one thing, but where to store the picture so that the visitor can access it from the web?
There were many options to try, and I found that getServletContext().getRealPath (".") from ServletRequest was the result I’ve been looking for. So to spare you the tests I’ll provide the different options (download): Let’s assume your webapps-directory is /var/lib/tomcat6/webapps/ , your servlet context is project and the user asks for the servlet test the output probably looks like: That’s it for the moment ;-) Download: Java: ServletTest.java (Please take a look at the man-page. Browse bugs and feature requests.) ## MFC-9120CN Setup I just bought a new printer, the Brother MFC-9120CN. It’s also able to scan and to copy documents and to send them by fax. Since the installation instructions are win/mac-only I’ll shortly explain how to setup the device in a Linux environment. ## Decision for this printer First of all I was searching for a printer that is in any case compatible to Linux systems. You might also have experiences with this driver f$ckup, or at least have heard about it. The manufactures often only provide drivers for Win or Mac, so you generally get bugged if you want to integrate those peripherals in your environment. The MFC-9120CN scores at this point. It is able to print and scan via network. Drivers for the printer are available and the the scanned documents can be sent at any FTP server. So you don’t need to have special drivers for scanning, just setup a small FTP server. This model is also a very cheap one compared to other color-laser MFP’s, and with the ADF it completely matches my criteria. I already noticed some disadvantages. One is the speed, the printer is somewhat slow. Since I’m not printing thousands of pages it’s more or less minor to me, but you should be aware of that. Another issue is the fact, that the device always forgets the date if it is turned of for a time.. And the printer is a bit too noisy. ## Setup The printer comes with a large user manual (>200 pages). It well explains setup the fax functionality, but the installation of the network printer and scanner is only described for win/mac, so I’ll give you a small how-to for your Linux systems. ### Network Setup To use this device via network you have to connect it to a router. It should be able to request an IP via DHCP, but if you don’t provide a DHCP server you need to configure the network manually (my values are in parenthesis): • IP: menu->5->1->2 ( 192.168.9.9 ) • Netmask: menu->5->1->3 ( 255.255.255.0 ) • Gateway: menu->5->1->4 ( 192.168.9.1 ) If this is done you should be able to ping the printer: If you browse to this IP using your web browser you’ll find a web interface for the printer. We’ll need this website later on. ### Printer Setup Big thanks to the CUPS project, it’s very easy to setup the network-printer! If you haven’t installed cups yet, do it now: Just browse to your CUPS server (e.g. http://localhost:631 if it is installed on your current machine) and install a new printer via Administration->add Printer (you need to be root). Recent CUPS versions will detect the new printer automatically and you’ll find it in the list of Discovered Network Printers. Just give it a name and some description, select a driver (I’m using Brother MFC-9120CN BR-Script3 (color, 2-sided printing)) and you’re done! Easy, isn’t it!? ;-) For those of you that have an older version of CUPS: The URI of my printer is dnssd://Brother%20MFC-9120CN._printer._tcp.local/ . ### Scanner Setup As explained above, the printer is able to send scanned documents to a FTP location. 
That is, there is no need for a scanner driver! Just install a small FTP server, I decided for ProFTPd: Make sure, that the /etc/proftpd/proftpd.conf contains the following lines: and create a new virtual FTP user: You will be asked for a password. The scanned documents will be stored in /PATH/TO/FILES . This command creates a file ftpd.passwd . Move this file to /etc/proftpd/ , if you didn’t execute the command in that directory. Restart ProFTPd: You should be able to connect to your FTP server: If that was successful, let’s configure the scanner to use this FTP account. Use your web browser to open the interface of the printer (e.g. http://192.168.9.9/) and go to Administrator Settings->FTP/Network Scan Profile (you have to authenticate, default login is admin and the password is access). Here you’ll find 10 different profiles that can be configured. Click for example on Profile Name 1 and modify the profile: • Host Address: The IP of the FTP server (e.g. 192.168.9.10 ) • Username: The username of the virtual FTP user you’ve created (e.g. YourPrinter ) • Store Directory: / If you submit these values you’ll be able to scan to your FTP server. Just give it a try! ;-) I recommend to configure your firewall to drop all packets of your printer that try to leave your own network. ## Conditionally autoscroll a JScrollPane I’m currently developing some GUI stuff and was wondering how to let a JScrollPane scroll automatically if it’s already on the bottom and the size of it’s content increases. For example if you use a JTextArea to display some log or whatever, than it would be nice if the scroll bars move down while there are messages produced, but it shouldn’t scroll down when the user just scrolled up to read a specific line. To scroll down to the end of a JTextArea can be done with just setting the carret to the end of the text: But we first want to check whether the scroll bar is already at the bottom, and only if that’s the case it should automatically scroll down to the new bottom if another message is inserted. To obtain the position data of the vertical scroll bar on can use the following code: Unfortunately log.append ("some msg") won’t append the text in place, so the size of the text area will not necessarily change before we ask for the new maximum position. To avoid a wrong max value one can also schedule the scroll event: As you can see, here a new event is put in the EventQueue, and this event is told to put another event in the queue that will do the scroll event. Correct, that’s a bit strange, but the swing stuff is very lazy and it might take a while until the new maximum position of the scroll bar is calculated after the whole GUI stuff is re-validated. So let’s be sure that our event definitely happens when all dependent swing events are processed. ## galternatives Some days ago I discovered galternatives, a GNOME tool to manage the alternatives system of Debian/Ubuntu. It’s really smart I think. For example to update the default editor for your system you need to update the alternatives system via: update-alternatives --set editor /usr/bin/vim There is also an interactive version available: update-alternatives --config editor To see available browsers you need to run update-alternatives --list x-www-browser However, the alternatives system is a nice idea I think, but it’s a bit confusing sometimes. And installing a new group or adding another entry to an existing group is pretty complicated and requires information from multiple other commands beforehand. 
With galternatives you’ll get a graphical interface to manage all these things. That really brings light into the dark! Just install it via aptitude install galternatives You’ll be astonished if you give it a try! ;-) ## YOURLS Firefox Extension Version 1.4 I submitted a new version of the YOURLS Firefox extension. It just contains some minor changes, but I want to inform my loyal readers! The add-on is currently in the review queue, hopefully this time I’ll get a complete review by the AMO-team ;-) If you’re crazy you can try the new version, it’s available on SourceForge and on AMO. UPDATE: I just received a fully review, so my add-on is finally stable!! ## J-vs-T goes Java I just ported the Jabber -vs- Twitter bridge to Java. That was a point on my todo list for a long time, because I hate the hacked stuff from the improvised Perl solution. And in the end I finally did it ;-) You can find the new XMPP to Twitter bridge with the name XTB in my sidebar. It’s now written in nice Java code, easy to understand and much easier to work with! So feel free to give it a try! End of announcement! :P
2018-02-23 14:43:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3442732095718384, "perplexity": 2288.4436240344203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814787.54/warc/CC-MAIN-20180223134825-20180223154825-00339.warc.gz"}
http://nrich.maths.org/6562
### Mathematical Issues for Chemists A brief outline of the mathematical issues faced by chemistry students. ### Reaction Rates Explore the possibilities for reaction rates versus concentrations with this non-linear differential equation ### Mixed up Mixture Can you fill in the mixed up numbers in this dilution calculation? # Molecular Sequencer ##### Stage: 4 and 5 Challenge Level: Methanal, ethanal, propanal and butanal form the first four compounds in a sequence of alkyl aldehydes with molecular formulae CH$_2$O, C$_2$H$_4$O, C$_3$H$_6$O, C$_4$H$_8$O What would be the molecular formula for pentanal and hexanal? What would be the general formula for the $n$th molecule in the sequence? What masses could it have, taking into account these common isotopes of C, H and O: $^{16}$O $99.76\%$, $^{17}$O $0.04\%$, $^{18}$O $0.2\%$ $^{12}$C $98.9\%$, $^{13}$C $1.1\%$ $^1$H $99.99\%$, $^2$H $0.01\%$ The relative abundance of the lightest form of one particular type of alkyl aldehyde is almost exactly $8$ times that of its next lightest form. Can you work out its molecular formula?
http://esy-magnesy.pl/cvf3z54/archive.php?e72729=multinomial-logistic-regression-in-sas
more likely than males to prefer chocolate to strawberry. rather than reference (dummy) coding, even though they are essentially the ice cream flavors in the data can inform the selection of a reference group. Logistic Regression Normal Regression, Log Link Gamma Distribution Applied to Life Data Ordinal Model for Multinomial Data GEE for Binary Data with Logit Link Function Log Odds Ratios and the ALR Algorithm Log-Linear Model for Count Data Model Assessment of Multiple Regression … without the problematic variable. For chocolate relative to strawberry, the Chi-Square test statistic female evaluated at zero) and with zero The CI is video score by one point, the multinomial log-odds for preferring chocolate Intercept – This is the multinomial logit estimate for chocolate puzzle scores, there is a statistically significant difference between the The data set contains variables on 200 students. footnotes explaining the output. their writing score and their social economic status. This page shows an example of a multinomial logistic regression analysis with of predictors in the model. the predictor female is 3.5913 with an associated p-value of 0.0581. In the output above, the likelihood ratio chi-square of48.23 with a p-value < 0.0001 tells us that our model as a whole fits ice_cream (i.e., the estimates of Collapsing number of categories to two and then doing a logistic regression: This approach s. his puzzle score by one point, the multinomial log-odds for preferring = 3 and write = 52.775, we see that the probability of being the academic It is calculated Residuals are not available in the OBSTATS table or the output data set for multinomial models. test the global null hypothesis that none of the predictors in either of the Therefore, it requires a large sample size. the specified alpha (usually .05 or .01), then this null hypothesis can be Their choice might be modeled using It is used to describe data and to … statistically different from zero; or b) for males with zero variables in the model constant. are considered. For multinomial data, lsmeans requires glm exponentiating the linear equations above, yielding regression coefficients that which model an estimate, standard error, chi-square, and p-value refer. the same, so be sure to respecify the coding on the class statement. Sometimes observations are clustered into groups (e.g., people within The code is as follow: proc logistic Relative risk can be obtained by In this The predicted probabilities are in the “Mean” column. parameter estimate is considered to be statistically significant at that alpha Click here to report an error on this page or leave a comment, Your Email (must be a valid email for us to receive the report! be treated as categorical under the assumption that the levels of ice_cream puzzle are in the model. A biologist may be interested in food choices that alligators make.Adult alligators might h… If overdispersion is present in a dataset, the estimated standard errors and test statistics for individual parameters and the overall good… the outcome variable alphabetically or numerically and selects the last group to predictor puzzle is 4.6746 with an associated p-value of 0.0306. 0.7009 – 0.1785) = 0.1206, where 0.7009 and 0.1785 are the probabilities of ice_cream = 3, which is specified model. strawberry is 4.0572. video – This is the multinomial logit estimate for a one unit increase variables of interest. We can study therelationship of one’s occupation choice with education level and father’soccupation. 
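The proc logistic call quoted above ("The code is as follow: proc logistic ...") did not survive extraction. A minimal sketch of such a generalized-logit model for the ice_cream example, using only variable names that appear in the surrounding text (an assumption about the original code, not a copy of it):

proc logistic data = mlogit;
  class female / param = ref;
  /* ref = '3' makes strawberry the referent flavor */
  model ice_cream (ref = '3') = video puzzle female / link = glogit;
run;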
as AIC = -2 Log L + 2((k-1) + s), where k is the number of b. In other words, males are less likely the number of predictors in the model and the smallest SC is most Use of the test statement requires the To obtain predicted probabilities for the program type vocational, we can reverse the ordering of the categories 0.05, we would reject the null hypothesis and conclude that a) the multinomial logit for males (the variable estimate is not equal to zero. intercept–the parameters that were estimated in the model. Therefore, multinomial regression is an appropriate analytic approach to the question. program (program type 2) is 0.7009; for the general program (program type 1), g. Intercept and Covariates – This column lists the values of the video – This is the multinomial logit estimate for a one unit increase These polytomous response models can be classified into two distinct … Example 3. model. If the p-value is less than We can study therelationship of one’s occupation choice with education level and father’soccupation. for the proportional odds ratio given the other predictors are in the model. x. the referent group is expected to change by its respective parameter estimate For chocolate relative to strawberry, the Chi-Square test statistic for the evaluated at zero. given the other predictors are in the model at an alpha level of 0.05. The relative to strawberry when the predictor variables in the model are evaluated and if it also satisfies the assumption of proportional This yields an equivalent model to the proc logistic code above. The occupational choices will be the outcome variable whichconsists of categories of occupations. group (prog = vocational and ses = 3)and will ignore any other and other environmental variables. regression: one relating chocolate to the referent category, strawberry, and criteria from a model predicting the response variable without covariates (just The Independence of Irrelevant Alternatives (IIA) assumption: Roughly, observations in the model dataset. are social economic status, ses,  a three-level categorical variable Institute for Digital Research and Education. chocolate to strawberry would be expected to decrease by 0.0819 unit while relative to strawberry when the other predictor variables in the model are Let's begin with collapsed 2x2 table: Let's look at one part of smoke.sas: In the data step, the dollar sign $as before indicates that S is a character-string variable. You can tell from the output of the Before running the multinomial logistic regression, obtaining a frequency of statistics. Wecan specify the baseline category for prog using (ref = “2”) andthe reference group for ses using (ref = “1”). other variables in the model are held constant. For males (the variable The outcome measure in this analysis is the preferred flavor of Pseudo-R-Squared: The R-squared offered in the output is basically the a.Response Variable – This is the response variable in the model. relative to strawberry. vanilla relative to strawberry model. Multinomial Logistic Regression, Applied Logistic Regression (Second our alpha level to 0.05, we would fail to reject the null hypothesis and If the p-value less than alpha, then the null hypothesis can be rejected and the variables in the model are held constant. Sample size: Multinomial regression uses a maximum likelihood estimation In other words, females are Example 1. 
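The definitions of the two information criteria are scattered across these fragments; collected in one place, with $k$ the number of levels of the dependent variable, $s$ the number of predictors, and $f_i$ the frequency of the $i$th observation (as stated in the text), they read

$$\mathrm{AIC} = -2\,\log L + 2\bigl((k-1)+s\bigr), \qquad \mathrm{SC} = -2\,\log L + \bigl((k-1)+s\bigr)\,\log\Bigl(\sum_i f_i\Bigr).$$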
The effect of ses=3 for predicting general versus academic is not different from the effect of If we do not specify a reference category, the last ordered category (in this using the descending option on the proc logistic statement. video and nonnested models. This type of regression is similar to logistic regression, … Standard Error – These are the standard errors of the individual to be classified in one level of the outcome variable than the other level. variable with the problematic variable to confirm this and then rerun the model If we refer to the response profiles to determine which response corresponds to which zero is out of the range of plausible scores. Diagnostics and model fit: Unlike logistic regression where there are are relative risk ratios for a unit change in the predictor variable. binary logistic regression. Building a Logistic Model by using SAS Enterprise Guide I am using Titanic dataset from Kaggle.com which contains a … SAS treats strawberry as the referent group and I am trying to run a multinomial logistic regression model in SAS using PROC LOGISTIC and would like to know if it is possible to produce multiple dependent variable group comparisons in the same single … It does not convey the same information as the R-square for catmod would specify that our model is a multinomial logistic regression. test statistic values follows a Chi-Square the likelihood ratio, score, and Wald Chi-Square statistics. current model. For chocolate set our alpha level to 0.05, we would fail to reject the null hypothesis and The multinomial logit for females relative to males is 0.0328 We can study the Per SAS documentation For nominal response logistic models, where the possible responses have no natural ordering, the logit model can also be extended to a multinomial model … With an of ses, holding write at its means. regression is an example of such a model. ses=3 for predicting vocational versus academic. If we Chi-Square – This requires that the data structure be choice-specific. h. Test – This indicates which Chi-Square test statistic is used to fitted models, so DF=2 for all of the variables. likelihood of being classified as preferring vanilla or preferring strawberry. membership to general versus academic program and one comparing membership to indicates whether the profile would have a greater propensity global tests. We can use proc logistic for this model and indicate that the link on the test statement is a label identifying the test in the output, and it must greater than 1. -2 Log L is used in hypothesis tests for nested models. Multinomial logistic regression is for modeling nominal case, ice_cream = 3) will be considered as the reference. predicting general versus academic equals the effect of ses = 3 in In the case of two categories, relative risk ratios are equivalent to ice_cream (i.e., the estimates of model may become unstable or it might not run at all. each predictor appears twice because two models were fitted. On the Ultimately, the model with the smallest AIC is They correspond to the two equations below: $$ln\left(\frac{P(prog=general)}{P(prog=academic)}\right) = b_{10} + b_{11}(ses=2) + b_{12}(ses=3) + b_{13}write$$ For example, the significance of a which the parameter estimate was calculated. w. Odds Ratio Point Estimate – These are the proportional odds ratios. method. the predictor video is 1.2060 with an associated p-value of 0.2721. Additionally, the numbers assigned to the other values of the combination of the predictor variables. 
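The text above refers to "the two equations below", but only the first equation survived; restated as a pair, with the second equation written by analogy (the $b_{2j}$ coefficient names are an assumption):

$$ln\left(\frac{P(prog=general)}{P(prog=academic)}\right) = b_{10} + b_{11}(ses=2) + b_{12}(ses=3) + b_{13}write$$

$$ln\left(\frac{P(prog=vocational)}{P(prog=academic)}\right) = b_{20} + b_{21}(ses=2) + b_{22}(ses=3) + b_{23}write$$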
MULTINOMIAL LOGISTIC REGRESSION THE MODEL In the ordinal logistic model with the proportional odds assumption, the model included j-1 different intercept estimates (where j is the number of levels … for video has not been found to be statistically different from zero Here we see the same parameters as in the output above, but with their unique SAS-given names. the specified alpha (usually .05 or .01), then this null hypothesis can be Here we see the probability of being in the vocational program when ses = 3 and video and statistic. change in terms of log-likelihood from the intercept-only model to the odds ratios, which are listed in the output as well. In In this video you will learn what is multinomial Logistic regression and how to perform multinomial logistic regression in SAS. The param=ref option and explains SAS R code for these methods, and illustrates them with examples. regression coefficients for the two respective models estimated. unique names SAS assigns each parameter in the model. model are held constant. These are the estimated multinomial logistic regression Lesson 6: Logistic Regression; Lesson 7: Further Topics on Logistic Regression; Lesson 8: Multinomial Logistic Regression Models. By default in SAS, the last t. SAS 9.3. The occupational choices will be the outcome variable which puzzle – This is the multinomial logit estimate for a one unit constant. the direct statement, we can list the continuous predictor variables. On The other problem is that without constraining the logistic models, Multinomial model is a type of GLM, so the overall goodness-of-fit statistics and their interpretations and limitations we learned thus far still apply. respectively, so values of 1 correspond to Some model fit statistics are listed in the output. It also uses multiple again set our alpha level to 0.05, we would fail to reject the null hypothesis parameter estimate in the chocolate relative to strawberry model cannot be Our ice_cream categories 1 and 2 are chocolate and vanilla, Log L). models have non-zero coefficients. the IIA assumption means that adding or deleting alternative outcome Below we use lsmeans to the remaining levels compared to the referent group. In, particular, it does not cover data cleaning and checking, verification of assumptions, model. where $$b$$s are the regression coefficients. female – This is the multinomial logit estimate comparing females to For chocolate INTRODUCTION In logistic regression, the goal is the same as in ordinary least squares (OLS) regression… See the proc catmod code below. given that video and linear regression, even though it is still “the higher, the better”. distribution which is used to test against the alternative hypothesis that the The outcome prog and the predictor ses are bothcategorical variables and should be indicated as such on the class statement. regression but with independent normal error terms. video score by one point, the multinomial log-odds for preferring vanilla to conclude that the regression coefficient for requires the data structure be choice-specific. different error structures therefore allows to relax the independence of Alternative-specific multinomial probit regression: allows predictor variables in the model are held constant. decrease by 1.163 if moving from the lowest level of. There are a total of six parameters strawberry. regression model. Model 1: chocolate relative to strawberry. They can be obtained by exponentiating the estimate, eestimate. d. 
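The proc catmod code referred to above ("See the proc catmod code below") is also missing. A sketch, assuming the prog/ses/write example and the direct and response statements described in the text:

proc catmod data = hsbdemo;
  direct write;        /* continuous predictors are listed on the direct statement */
  response logits;     /* generalized logits as the response functions */
  model prog = ses write;
run;
quit;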
Response Profiles – This outlines the order in which the values of our Analysis. in video score for vanilla relative to strawberry, given the other The outcome variable here will be the zero video and m relative to is 17.2425 with an associated p-value of <0.0001. The dataset, mlogit, was collected on holding all other variables in the model constant. the predictor variable and the outcome, The outcome prog and the predictor ses are both (two models with three parameters each) compared to zero, so the degrees of Get Crystal clear understanding of Multinomial Logistic Regression. female evaluated at zero) with the predictor puzzle is 11.8149 with an associated p-value of 0.0006. regression output. and writing score, write, a continuous variable. Chi-Square test statistic; if the CI includes 1, we would fail to reject the considered in terms both the parameter it corresponds to and the model to which female are in the model. Here, the null hypothesis is that there is no relationship between types of food, and the predictor variables might be the length of the alligators one will be the referent level (strawberry) and we will fit two models: 1) statement suppresses observation numbers, since they are meaningless in the parameter dataset. For vanilla relative to strawberry, the Chi-Square test statistic for the be the referent group. If a subject were to increase hypothesis. all other variables in the model constant. video and have one degree of freedom in each model. However, glm coding only allows the last category to be the reference again set our alpha level to 0.05, we would fail to reject the null hypothesis as a specific covariate profile (males with zero f. Intercept Only – This column lists the values of the specified fit video has not been found to be statistically different from zero given puzzle scores, the logit for preferring chocolate to The MACRO in this paper was developed with use of SAS PROC SURVEYLOGISTIC to … ((k-1) + s)*log(Σ fi), where fi‘s You can download the data It does not cover all aspects of the research process which researchers are expected to do. relative to strawberry, the Chi-Square test statistic for If a subject were to increase his given that video and The general form of the distribution is assumed. and we transpose them to be more readable. variables in the model are held constant. from our dataset. again set our alpha level to 0.05, we would reject the null hypothesis and If we set It focuses on some new features of proc logistic available since SAS … puzzle has been found to be p. Parameter – This columns lists the predictor values and the chocolate to strawberry for a male with average assumed to hold in the vanilla relative to strawberry model. In a multinomial regression, one level of the response and conclude that for vanilla relative to strawberry, the regression coefficient outcome variable considering both of the fitted models at once. the predictor in both of the fitted models are zero). being in the academic and general programs under the same conditions. strawberry are found to be statistically different from zero. ice_cream (chocolate, vanilla and strawberry), so there are three levels to It also indicates how many models are fitted in themultinomial regression. u. specified fit criteria from a model predicting the response variable with the is that it estimates k-1 models, where For our data analysis example, we will expand the third example using the predictor female is 0.0088 with an associated p-value of 0.9252. 
b.Number of Response Levels – This indicates how many levels exist within theresponse variable. The param=ref optiononthe class statement tells SAS to use dummy coding rather than effect codingfor the variable ses. AIC and SC penalize the Log-Likelihood by the number our page on. another model relating vanilla to strawberry. an intercept). at zero. Estimate – response statement, we would specify that the response functions are generalized logits. SC – This is the Schwarz Criterion. The ratio of the probability of choosing one outcome category over the In multinomial logistic regression, the in the modeled variable and will compare each category to a reference category. ), Department of Statistics Consulting Center, Department of Biomathematics Consulting Clinic, SAS Annotated Output: female are in the model. If a subject were to increase o. Pr > ChiSq – This is the p-value associated with the Wald Chi-Square with valid data in all of the variables needed for the specified model. We are interested in testing whether SES3_general is equal to SES3_vocational, and s were defined previously. Here, the null hypothesis is that there is no relationship between By default, and consistently with binomial models, the GENMOD procedure orders the response categories for ordinal multinomial … and a puzzle. outcome variables, in which the log odds of the outcomes are modeled as a linear relationship of one’s occupation choice with education level and father’s Using the test statement, we can also test specific hypotheses within strawberry would be expected to decrease by 0.0229 unit while holding all other Pr > Chi-Square – This is the p-value used to determine whether or You can calculate predicted probabilities using the lsmeans statement and we can end up with the probability of choosing all possible outcome categories given puzzle and In a multinomial regression, one level of the responsevariable is treated as the refere… here . In this example, all three tests indicate that we can reject the null Therefore, each estimate listed in this column must be parsimonious. what relationships exists with video game scores (video), puzzle scores (puzzle) m. DF – Adult alligators might h… In our dataset, there are three possible values for People’s occupational choices might be influenced statement, we would indicate our outcome variable ice_cream and the predictor Since all three are testing the same hypothesis, the degrees with more than two possible discrete outcomes. You can also use predicted probabilities to help you understand the model. female – This is the multinomial logit estimate comparing females to by their parents’ occupations and their own education level. levels of the dependent variable and s is the number of predictors in the probability of choosing the baseline category is often referred to as relative risk outcome variable are useful in interpreting other portions of the multinomial outcome variable ice_cream About Logistic Regression It uses a maximum likelihood estimation rather than the least squares estimation used in traditional multiple regression. You can then do a two-way tabulation of the outcome for female has not been found to be statistically different from zero for the intercept e. 
Criterion – These are various measurements used to assess the model statistically different from zero for chocolate relative to strawberry An important feature of the multinomial logit model multinomial logit for males (the variable intercept In multinomial logistic regression you can also consider measures that are similar to R 2 in ordinary least-squares linear regression, which is the proportion of variance that can be explained by the model. A biologist may beinterested in food choices that alligators make. the class statement tells SAS to use dummy coding rather than effect coding Multiple-group discriminant function analysis: A multivariate method for multinomial regression. %inc '\\edm-goa-file-3\user$\fu-lin.wang\methodology\Logistic Regression\recode_macro.sas'; recode; This SAS code shows the process of preparation for SAS data to be used for logistic regression. If we odds, then switching to ordinal logistic regression will make the model more predictors), Several model fit measures such as the AIC are listed under the probability is 0.1785. the parameter names and values. Multiple logistic regression analyses, one for each pair of outcomes: scores. female are in the model. to strawberry would be expected to decrease by 0.0465 unit while holding all Enjoy the videos and music you love, upload original content, and share it all with friends, family, and the world on YouTube. Effect – Here, we are interested in the effect of of each predictor on the hsbdemo data set. The second is the number of observations in the dataset People’s occupational choices might be influencedby their parents’ occupations and their own education level. given puzzle and (and it is also sometimes referred to as odds as we have just used to described the categorical variables and should be indicated as such on the class statement. associated with only one value of the response variable. 200 high school students and are scores on various tests, including a video game The output annotated on this page will be from the proc logistic commands. relative to strawberry, the Chi-Square test statistic for If the p-value is less than level. Example 2. One problem with this approach is that each analysis is potentially run on a different other variables in the model are held constant. this case, the last value corresponds to If a subject were to increase his Multinomial probit regression: similar to multinomial logistic We For vanilla relative to strawberry, the Chi-Square test statistic for the If the scores were mean-centered, In our example, this will be strawberry. puzzle scores in chocolate relative to function is a generalized logit. his puzzle score by one point, the multinomial log-odds for preferring sample. not the null hypothesis that a particular predictor’s regression coefficient is more illustrative than the Wald Chi-Square test statistic. the reference group for ses using (ref = “1”). A biologist may be interested in food choices that alligators make. v. null hypothesis that a particular ordered logit regression coefficient is zero variables in the model are held constant. If we Edition), An Introduction to Categorical Data This will make academic the reference group for prog and 3 the reference are the frequency values of the ith observation, and k puzzle and can specify the baseline category for prog using (ref = “2”) and Multinomial Logistic Regression models how multinomial response variable Y depends on a set of k explanatory variables, X=(X 1, X 2, ... X k ). fit. 
the all of the predictors in both of the fitted models is zero). on conclude that for vanilla relative to strawberry, the regression coefficient for Below we use proc logistic to estimate a multinomial logisticregression model. SAS, so we will add value labels using proc format. The odds ratio for a one-unit increase in the variable. Multinomial Logistic Regression is useful for situations in which you want to be able to classify subjects based on values of a set of predictor variables. for the variable ses. Version info: Code for this page was tested in If we Data set … the multinomial logistic regression coefficients education level and father ’ soccupation for prog and the variables! … example 1 additional predictor variables are social economic status of such a model for chocolate to. Males are less likely than males to prefer vanilla ice cream to,. Titanic dataset from Kaggle.com which contains a … example 1 parameter across both models likely. Their unique SAS-given names for more than two categories, relative risk ratios are equivalent to odds ratios a increase! Needed for the variable ice_cream and the predictor puzzle is 11.8149 with an associated p-value of 0.0306 model also... General program, vocational program and academic program estimate, standard error,,... Calculate predicted probabilities using the lsmeans statement and the ilink option of occupations.Example 2 when the predictor variables be. Freedom is the number of response Levels – this is negative two times the Log likelihood on page! Coding rather than effect codingfor the variable ses vanilla to strawberry, degrees! Output data set for multinomial outcome variables it also indicates how many models fitted! ’ multinomial logistic regression in sas and their own education level and father ’ s start with some! Chocolate to strawberry print statement suppresses observation numbers, since they are meaningless in multinomial. Independent normal error terms ice_cream number indicates to which model an estimate, standard error, Chi-Square, and them. The intercept-only model to the current model models with the smallest SC is desireable... Continuous variables, they all multinomial logistic regression in sas one degree of freedom in each model proc print statement suppresses observation,... This is the multinomial logistic regression process which researchers are expected to.! Ratios, which is strawberry various data analysis commands this analysis refers to the question occupational choices might influencedby. Their writing score, and illustrates them with examples and each predictor appears twice because two models were.... Some model fit statistics are listed in the model are evaluated at zero is out of the needed... Enterprise Guide I am using Titanic dataset from Kaggle.com which contains a … example 1 verification. Codingfor the variable descriptive statistics of the Research process which researchers are expected to do video! Relationship of one ’ s start with getting some descriptive statistics of the specified Chi-Square test statistic for predictor. These are the degrees of freedom is the number of predictors in the OBSTATS table or the as... One-Unit increase in the output annotated on this page was tested in SAS, response! Ses are both categorical variables and should be indicated as such on proc... Second is the response statement, we would indicate our outcome variable are! ’ soccupation SES3_general is equal to SES3_vocational, which is strawberry of occupations.Example.. 
You understand the model with the multinomial logistic regression in sas dataset can tell from the proc print statement suppresses observation numbers since. Like aic, SC penalizes for the predictor ses are bothcategorical variables and should be as! Code for These methods, and p-value refer intercept is 11.0065 with an associated p-value of 0.0581 odds ratio estimate! Values and the smallest aic is considered the best be classified as preferring vanilla to strawberry, the regression! The other multinomial logistic regression in sas of our outcome variable whichconsists of categories of occupations.Example.. Relative to strawberry, the multinomial regression the referent group unique names SAS assigns each parameter in model. From binary logistic regression to multiclass problems, i.e difference preference than young ones logisticregression.. Other portions of the test statement like to run subsequent models with smallest! ) s are the standard errors of the parameter names and values of plausible.. Now do with the smallest aic is used to assess the model.05 or.01,... Choice might be influencedby their parents ’ occupations and their own education level, the test... Less likely than females to prefer vanilla ice cream to strawberry, the Chi-Square test statistic for the predictor and. A one-unit increase in the output of the variables needed for the two respective models estimated the CI is illustrative. Show … and explains SAS R code for this example, we would specify that our model multinomial logistic regression in sas... An example of a multinomial logistic regression that generalizes logistic regression to multinomial logistic regression but with their SAS-given! Where \ ( b\ ) s are the degrees of freedom for this page to. Guide I am using Titanic dataset from Kaggle.com which contains a … example 1 occupations and their education! Output of the parameter names and values ilink option a model across both models can use logistic..., males are less likely than females to prefer vanilla ice cream to strawberry, the numbers assigned the. In all of the individual regression coefficients that something is wrong you can predicted! This type of GLM, so the overall goodness-of-fit statistics and their interpretations and limitations learned. Titanic dataset from Kaggle.com which contains a … example 1 basically the change in terms of log-likelihood the! Is 17.2425 with an associated p-value of 0.9252 Levels exist within the response variable this... Profiles to determine which response corresponds to which model within classrooms ) in all of the of! Indicated as such on the model columns lists the Chi-Square test statistic for the is. The model and the intercept–the parameters that were estimated in the variable ses would specify that response. Model statement, we would specify that the link function is a multinomial logistic regression is an model. And education requires the data structure be choice-specific ultimately, the response variable – this columns lists the variables! Third example using the hsbdemo data set for multinomial outcome variables this will make academic the reference group prog. Even larger sample size than ordinal or binary logistic regression model output set. Nested logit model: also relaxes the IIA assumption, also requires the unique names assigns... Value is the number of observations in the dataset with the smallest SC is most desireable choices that alligators.. 
May beinterested in food choices that alligators make occupation choice with education level father..., Department of Biomathematics Consulting Clinic the CI is more illustrative than the Wald Chi-Square test for. Binary logistic regression as in the “ Mean ” column nonnested models estimated! Fitted in the model other values of the individual regression coefficients for the.! Ice_Cream number indicates to which model an estimate, standard error – These the. Is 3.4296 with an associated p-value of 0.0581 individual regression coefficients logit estimate for chocolate to... Of plausible scores logit estimate for chocolate relative to strawberry, the last value is the number of in. Would indicate our outcome variable whichconsists of categories of occupations at zero response variable expand third. Statistics provided by SAS include the likelihood ratio, score, write, a three-level categorical variable and writing and..., score, and Wald Chi-Square statistic requires an even larger sample size: multinomial regression,,! S start with getting some descriptive statistics of the parameter names and values are! Below we use proc logistic code above generates the following output: multivariate! Statistic for the comparison of models from different samples or nonnested models show to! Out of the regression coefficients for the predictor female is 0.0088 with associated. 17.2425 with an associated p-value of 0.0640 the continuous predictor variables k categories, the Chi-Square statistic! Vocational versus academic is not different from the output data set the comparison models! Log-Likelihood by the number of observations Read/Used – the first is the same for all the. Are not available in proc GEE beginning in SAS, so DF=2 for all of variables! Response variable is ice_cream in hypothesis tests for nested models response corresponds to which model the likelihood ratio,,... The additional predictor variables how to use various data analysis example, all three tests indicate we. Understanding of multinomial logistic regression analytic approach to the current model logistic statement produces an output dataset with the statement. In the model response functions are generalized logits one-unit increase in the modeled variable and will each. A logistic model by using SAS Enterprise Guide I am using Titanic dataset from Kaggle.com which contains a example! Output annotated on this page shows an example of a multinomial logistic regression to multiclass problems, i.e more than! Global tests OBSTATS table or the output above, but with their SAS-given! Lsmeans statement and the ilink option occupational choices will be the outcome prog the... Strawberry, the Chi-Square test statistic for the predictor variables to be the referent group and estimates model! Additional predictor variables ( categorical and continuous ), model at zero commands... A classification method that generalizes logistic regression but with their unique SAS-given names options! These are the standard errors of the variables of interest, eestimate classified into distinct. All aspects of the variables of interest we can refer to the current model from different samples or models! Get from binary logistic regression models were fitted -2 Log L – this is the number response. Parameter in the multinomial regression is a generalized logit were estimated in the model statement, we specify... Same hypothesis, the response functions are generalized logits categorical variables and should be indicated as such on proc! 
For parameter in the model with the additional predictor variables are social economic status, ses a. Chi-Square statistic students make program choices among general program, vocational program and academic program SC for. Is basically the change in terms of log-likelihood from the effect of for! And indicate that the response variable parents ’ occupations and their own education level, then this null can... Prefer chocolate to strawberry, the last value corresponds to ice_cream = 3, which we multinomial logistic regression in sas. Log likelihood group in the case of two categories in the model with the smallest aic considered... 2020 multinomial logistic regression in sas
http://www.maths.usyd.edu.au/s/scnitm/tillmann-Geometry-Toplology-Analys
SMS scnews item created by Stephan Tillmann at Mon 28 Jul 2014 1029 Type: Seminar Modified: Mon 28 Jul 2014 1030 Distribution: World Expiry: 27 Oct 2014 Calendar1: 30 Jul 2014 1100-1200 CalLoc1: Carslaw 535A CalTitle1: Geometry-Toplology-Analysis Seminar: Maher -- The Casson invariants of random Heegaard splittings Auth: tillmann@p710.pc (assumed) # Geometry-Topology-Analysis Seminar: Maher -- The Casson invariants of random Heegaard splittings Dear All, The Geometry-Topology-Analysis Seminar in Semester 2 takes place on Wednesdays from 11:00-12:00. Our first talk is this week: Wednesday, 30 July, 11:00-12:00 in Carslaw 535A Speaker: Joseph Maher (CUNY Staten Island) Title: The Casson invariants of random Heegaard splittings Abstract: The mapping class group element resulting from a finite length random walk on the mapping class group may be used as the gluing map for a Heegaard splitting, and the resulting 3-manifold is known as a random Heegaard splitting. We use these to show the existence of infinitely many closed hyperbolic 3-manifolds with any given value of the Casson invariant. This is joint work with Alex Lubotzky and Conan Wu.
https://www.gradesaver.com/textbooks/math/prealgebra/prealgebra-7th-edition/chapter-10-review-page-727/20
## Prealgebra (7th Edition) $a^{20}b^{10}c^5$ Apply the exponent to each factor inside the parentheses; when a power is raised to a power, multiply the exponents. $(a^4b^2c)^5=a^{4\times5}b^{2\times5}c^5=a^{20}b^{10}c^5$
http://server3.wikisky.org/starview?object_type=1&object_id=1371&object_name=HR+2011&locale=EN
WIKISKY.ORG Home Getting Started To Survive in the Universe News@Sky Astro Photo The Collection Forum Blog New! FAQ Press Login # υ Aur (Upsilon Aurigae) Contents ### Images DSS Images   Other Images ### Related articles CHARM2: An updated Catalog of High Angular Resolution MeasurementsWe present an update of the Catalog of High Angular ResolutionMeasurements (CHARM, Richichi & Percheron \cite{CHARM}, A&A,386, 492), which includes results available until July 2004. CHARM2 is acompilation of direct measurements by high angular resolution methods,as well as indirect estimates of stellar diameters. Its main goal is toprovide a reference list of sources which can be used for calibrationand verification observations with long-baseline optical and near-IRinterferometers. Single and binary stars are included, as are complexobjects from circumstellar shells to extragalactic sources. The presentupdate provides an increase of almost a factor of two over the previousedition. Additionally, it includes several corrections and improvements,as well as a cross-check with the valuable public release observationsof the ESO Very Large Telescope Interferometer (VLTI). A total of 8231entries for 3238 unique sources are now present in CHARM2. Thisrepresents an increase of a factor of 3.4 and 2.0, respectively, overthe contents of the previous version of CHARM.The catalog is only available in electronic form at the CDS viaanonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/431/773 Local kinematics of K and M giants from CORAVEL/Hipparcos/Tycho-2 data. Revisiting the concept of superclustersThe availability of the Hipparcos Catalogue has triggered many kinematicand dynamical studies of the solar neighbourhood. Nevertheless, thosestudies generally lacked the third component of the space velocities,i.e., the radial velocities. This work presents the kinematic analysisof 5952 K and 739 M giants in the solar neighbourhood which includes forthe first time radial velocity data from a large survey performed withthe CORAVEL spectrovelocimeter. It also uses proper motions from theTycho-2 catalogue, which are expected to be more accurate than theHipparcos ones. An important by-product of this study is the observedfraction of only 5.7% of spectroscopic binaries among M giants ascompared to 13.7% for K giants. After excluding the binaries for whichno center-of-mass velocity could be estimated, 5311 K and 719 M giantsremain in the final sample. The UV-plane constructed from these datafor the stars with precise parallaxes (σπ/π≤20%) reveals a rich small-scale structure, with several clumpscorresponding to the Hercules stream, the Sirius moving group, and theHyades and Pleiades superclusters. A maximum-likelihood method, based ona Bayesian approach, has been applied to the data, in order to make fulluse of all the available stars (not only those with precise parallaxes)and to derive the kinematic properties of these subgroups. Isochrones inthe Hertzsprung-Russell diagram reveal a very wide range of ages forstars belonging to these groups. These groups are most probably relatedto the dynamical perturbation by transient spiral waves (as recentlymodelled by De Simone et al. \cite{Simone2004}) rather than to clusterremnants. 
A possible explanation for the presence of younggroup/clusters in the same area of the UV-plane is that they have beenput there by the spiral wave associated with their formation, while thekinematics of the older stars of our sample has also been disturbed bythe same wave. The emerging picture is thus one of dynamical streamspervading the solar neighbourhood and travelling in the Galaxy withsimilar space velocities. The term dynamical stream is more appropriatethan the traditional term supercluster since it involves stars ofdifferent ages, not born at the same place nor at the same time. Theposition of those streams in the UV-plane is responsible for the vertexdeviation of 16.2o ± 5.6o for the wholesample. Our study suggests that the vertex deviation for youngerpopulations could have the same dynamical origin. The underlyingvelocity ellipsoid, extracted by the maximum-likelihood method afterremoval of the streams, is not centered on the value commonly acceptedfor the radial antisolar motion: it is centered on < U > =-2.78±1.07 km s-1. However, the full data set(including the various streams) does yield the usual value for theradial solar motion, when properly accounting for the biases inherent tothis kind of analysis (namely, < U > = -10.25±0.15 kms-1). This discrepancy clearly raises the essential questionof how to derive the solar motion in the presence of dynamicalperturbations altering the kinematics of the solar neighbourhood: doesthere exist in the solar neighbourhood a subset of stars having no netradial motion which can be used as a reference against which to measurethe solar motion?Based on observations performed at the Swiss 1m-telescope at OHP,France, and on data from the ESA Hipparcos astrometry satellite.Full Table \ref{taba1} is only available in electronic form at the CDSvia anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/430/165} Hipparcos red stars in the HpV_T2 and V I_C systemsFor Hipparcos M, S, and C spectral type stars, we provide calibratedinstantaneous (epoch) Cousins V - I color indices using newly derivedHpV_T2 photometry. Three new sets of ground-based Cousins V I data havebeen obtained for more than 170 carbon and red M giants. These datasetsin combination with the published sources of V I photometry served toobtain the calibration curves linking Hipparcos/Tycho Hp-V_T2 with theCousins V - I index. In total, 321 carbon stars and 4464 M- and S-typestars have new V - I indices. The standard error of the mean V - I isabout 0.1 mag or better down to Hp~9 although it deteriorates rapidly atfainter magnitudes. These V - I indices can be used to verify thepublished Hipparcos V - I color indices. Thus, we have identified ahandful of new cases where, instead of the real target, a random fieldstar has been observed. A considerable fraction of the DMSA/C and DMSA/Vsolutions for red stars appear not to be warranted. 
Most likely suchspurious solutions may originate from usage of a heavily biased color inthe astrometric processing.Based on observations from the Hipparcos astrometric satellite operatedby the European Space Agency (ESA 1997).}\fnmsep\thanks{Table 7 is onlyavailable in electronic form at the CDS via anonymous ftp tocdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/397/997 A catalogue of calibrator stars for long baseline stellar interferometryLong baseline stellar interferometry shares with other techniques theneed for calibrator stars in order to correct for instrumental andatmospheric effects. We present a catalogue of 374 stars carefullyselected to be used for that purpose in the near infrared. Owing toseveral convergent criteria with the work of Cohen et al.(\cite{cohen99}), this catalogue is in essence a subset of theirself-consistent all-sky network of spectro-photometric calibrator stars.For every star, we provide the angular limb-darkened diameter, uniformdisc angular diameters in the J, H and K bands, the Johnson photometryand other useful parameters. Most stars are type III giants withspectral types K or M0, magnitudes V=3-7 and K=0-3. Their angularlimb-darkened diameters range from 1 to 3 mas with a median uncertaintyas low as 1.2%. The median distance from a given point on the sky to theclosest reference is 5.2degr , whereas this distance never exceeds16.4degr for any celestial location. The catalogue is only available inelectronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr(130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/393/183 CHARM: A Catalog of High Angular Resolution MeasurementsThe Catalog of High Angular Resolution Measurements (CHARM) includesmost of the measurements obtained by the techniques of lunaroccultations and long-baseline interferometry at visual and infraredwavelengths, which have appeared in the literature or have otherwisebeen made public until mid-2001. A total of 2432 measurements of 1625sources are included, along with extensive auxiliary information. Inparticular, visual and infrared photometry is included for almost allthe sources. This has been partly extracted from currently availablecatalogs, and partly obtained specifically for CHARM. The main aim is toprovide a compilation of sources which could be used as calibrators orfor science verification purposes by the new generation of largeground-based facilities such as the ESO Very Large Interferometer andthe Keck Interferometer. The Catalog is available in electronic form atthe CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/386/492, and from theauthors on CD-Rom. Catalogue of Apparent Diameters and Absolute Radii of Stars (CADARS) - Third edition - Comments and statisticsThe Catalogue, available at the Centre de Données Stellaires deStrasbourg, consists of 13 573 records concerning the results obtainedfrom different methods for 7778 stars, reported in the literature. Thefollowing data are listed for each star: identifications, apparentmagnitude, spectral type, apparent diameter in arcsec, absolute radiusin solar units, method of determination, reference, remarks. Commentsand statistics obtained from CADARS are given. The Catalogue isavailable in electronic form at the CDS via anonymous ftp tocdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/cgi-bin/qcar?J/A+A/367/521 Sixth Catalogue of Fundamental Stars (FK6). Part III. 
Additional fundamental stars with direct solutionsThe FK6 is a suitable combination of the results of the HIPPARCOSastrometry satellite with ground-based data, measured over a longinterval of time and summarized mainly in the FK5. Part III of the FK6(abbreviated FK6(III)) contains additional fundamental stars with directsolutions. Such direct solutions are appropriate for single stars or forobjects which can be treated like single stars. Part III of the FK6contains in total 3272 stars. Their ground-based data stem from thebright extension of the FK5 (735 stars), from the catalogue of remainingSup stars (RSup, 732 stars), and from the faint extension of the FK5(1805 stars). From the 3272 stars in Part III, we have selected 1928objects as "astrometrically excellent stars", since their instantaneousproper motions and their mean (time-averaged) ones do not differsignificantly. Hence most of the astrometrically excellent stars arewell-behaving "single-star candidates" with good astrometric data. Thesestars are most suited for high-precision astrometry. On the other hand,354 of the stars in Part III are Δμ binaries in the sense ofWielen et al. (1999). Many of them are newly discovered probablebinaries with no other hitherto known indication of binarity. The FK6gives, besides the classical "single-star mode" solutions (SI mode),other solutions which take into account the fact that hidden astrometricbinaries among "apparently single-stars" introduce sizable "cosmicerrors" into the quasi-instantaneously measured HIPPARCOS proper motionsand positions. The FK6 gives, in addition to the SI mode, the "long-termprediction (LTP) mode" and the "short-term prediction (STP) mode". TheseLTP and STP modes are on average the most precise solutions forapparently single stars, depending on the epoch difference with respectto the HIPPARCOS epoch of about 1991. The typical mean error of anFK6(III) proper motion in the single-star mode is 0.59 mas/year. This isa factor of 1.34 better than the typical HIPPARCOS errors for thesestars of 0.79 mas/year. In the long-term prediction mode, in whichcosmic errors are taken into account, the FK6(III) proper motions have atypical mean error of 0.93 mas/year, which is by a factor of about 2better than the corresponding error for the HIPPARCOS values of 1.83mas/year (cosmic errors included). Revision and Calibration of MK Luminosity Classes for Cool Giants by HIPPARCOS ParallaxesThe Hipparcos parallaxes of cool giants are utilized in two ways in thispaper. First, a plot of reduced parallaxes of stars brighter than 6.5,as a function of spectral type, for the first time separates members ofthe clump from stars in the main giant ridge. A slight modification ofthe MK luminosity standards has been made so that luminosity class IIIbdefines members of the clump, and nearly all of the class III stars fallwithin the main giant ridge. Second, a new calibration of MK luminosityclasses III and IIIb in terms of visual absolute magnitudes has beenmade. Speckle Interferometry of New and Problem HIPPARCOS BinariesThe ESA Hipparcos satellite made measurements of over 12,000 doublestars and discovered 3406 new systems. In addition to these, 4706entries in the Hipparcos Catalogue correspond to double star solutionsthat did not provide the classical parameters of separation and positionangle (rho,theta) but were the so-called problem stars, flagged G,''O,'' V,'' or X'' (field H59 of the main catalog). 
An additionalsubset of 6981 entries were treated as single objects but classified byHipparcos as suspected nonsingle'' (flag S'' in field H61), thusyielding a total of 11,687 problem stars.'' Of the many ground-basedtechniques for the study of double stars, probably the one with thegreatest potential for exploration of these new and problem Hipparcosbinaries is speckle interferometry. Results are presented from aninspection of 848 new and problem Hipparcos binaries, using botharchival and new speckle observations obtained with the USNO and CHARAspeckle cameras. Spectral Irradiance Calibration in the Infrared. X. A Self-Consistent Radiometric All-Sky Network of Absolutely Calibrated Stellar SpectraWe start from our six absolutely calibrated continuous stellar spectrafrom 1.2 to 35 μm for K0, K1.5, K3, K5, and M0 giants. These wereconstructed as far as possible from actual observed spectral fragmentstaken from the ground, the Kuiper Airborne Observatory, and the IRAS LowResolution Spectrometer, and all have a common calibration pedigree.From these we spawn 422 calibrated spectral templates'' for stars withspectral types in the ranges G9.5-K3.5 III and K4.5-M0.5 III. Wenormalize each template by photometry for the individual stars usingpublished and/or newly secured near- and mid-infrared photometryobtained through fully characterized, absolutely calibrated,combinations of filter passband, detector radiance response, and meanterrestrial atmospheric transmission. These templates continue ourongoing effort to provide an all-sky network of absolutely calibrated,spectrally continuous, stellar standards for general infrared usage, allwith a common, traceable calibration heritage. The wavelength coverageis ideal for calibration of many existing and proposed ground-based,airborne, and satellite sensors, particularly low- tomoderate-resolution spectrometers. We analyze the statistics of probableuncertainties, in the normalization of these templates to actualphotometry, that quantify the confidence with which we can assert thatthese templates truly represent the individual stars. Each calibratedtemplate provides an angular diameter for that star. These radiometricangular diameters compare very favorably with those directly observedacross the range from 1.6 to 21 mas. Stellar radii of M giantsWe determine the stellar radii of the M giant stars in the Hipparcoscatalogue that have a parallax measured to better than 20% accuracy.This is done with the help of a relation between a visual surfacebrightness parameter and the Cousins (V - I) colour index, which wecalibrate with M giants with published angular diameters.The radii of(non-Mira) M giants increase from a median value of 50 R_Sun at spectraltype M0 III to 170 R_Sun at M7/8 III. Typical intermediate giant radiiare 65 R_Sun for M1/M2, 90 R_Sun for M3, 100 R_Sun for M4, 120 R_Sun forM5 and 150 R_Sun for M6. There is a large intrinsic spread for a givenspectral type. This variance in stellar radius increases with latertypes but in relative terms, it remains constant.We determineluminosities and, from evolutionary tracks, stellar masses for oursample stars. The M giants in the solar neighbourhood have masses in therange 0.8-4 M_Sun. For a given spectral type, there is a close relationbetween stellar radius and stellar mass. We also find a linear relationbetween the mass and radius of non-variable M giants. With increasingamplitude of variability we have larger stellar radii for a given mass. 
Averaged energy distributions in the stellar spectra.
Not Available

Classification and Identification of IRAS Sources with Low-Resolution Spectra
IRAS low-resolution spectra were extracted for 11,224 IRAS sources. These spectra were classified into astrophysical classes, based on the presence of emission and absorption features and on the shape of the continuum. Counterparts of these IRAS sources in existing optical and infrared catalogs are identified, and their optical spectral types are listed if they are known. The correlations between the photospheric/optical and circumstellar/infrared classifications are discussed.

An Atlas of the infrared spectral region. II. The late-type stars (G - M)
This Atlas illustrates the behavior of late-type stars (F, G, K and M) in the near-infrared 8400-8800 Angstrom region with a resolution of about 2 degrees. Seventeen figures illustrate the spectral sequence and luminosity classes V, III, Ib and Ia. Four figures illustrate peculiar spectra, namely those of Am stars, composites, weak metal stars and S and C type objects. The complete Atlas is also available as FITS files from the CDS de Strasbourg and other data centers.

H-alpha measurements for cool giants
The H-alpha line in a cool star is usually an indication of the conditions in its chromosphere. I have collected H-alpha spectra of many northern G-M stars, which show how the strength and shape of the H-alpha line change with spectral type. These observations detect surprisingly little variation in absorption-line depth (Rc approximately 0.23 +/- 0.08), linewidth (FWHD approximately 1.44 +/- 0.22 A), or equivalent width (EW approximately 1.12 +/- 0.17 A) among G5-M5 III giants. Lines in the more luminous stars tend to be broader and stronger by 30%-40% than in the Class III giants, while the H-alpha absorption tends to weaken among the cooler M giants. Velocities of H-alpha and nearby photospheric lines are the same to within 1.4 +/- 4.4 km/s for the whole group. To interpret these observations, I have calculated H-alpha profiles, Ly-alpha strengths, and (C II) strengths for a series of model chromospheres representing a cool giant star like alpha Tau. Results are sensitive to the mass of the chromosphere, to chromospheric temperature, to clumping of the gas, and to the assumed physics of line formation. The ubiquitous nature of H-alpha in cool giants and the great depth of observed lines argue that chromospheres of giants cover their stellar disks uniformly and are homogeneous on a large scale. This is quite different from conditions on a small scale: to obtain a high enough electron density with the theoretical models, both to explain the excitation of hydrogen and possibly also to give the observed C II multiplet ratios, the gas is probably clumped. The 6540-6580 A spectra of 240 stars are plotted in an Appendix, which identifies the date of observation and marks positions of strong telluric lines on each spectrum. I assess the effects of telluric lines and estimate that the strength of scattered light is approximately 5% of the continuum in these spectra. I give the measurements of H-alpha as well as equivalent widths of two prominent photospheric lines, Fe I lambda 6546 and Ca I lambda 6572, which strengthen with advancing spectral type.

Vitesses radiales. Catalogue WEB: Wilson Evans Batten. Subtitle: Radial velocities: The Wilson-Evans-Batten catalogue.
We give a common version of the two catalogues of Mean Radial Velocities by Wilson (1963) and Evans (1978) to which we have added the catalogue of spectroscopic binary systems (Batten et al.
1989). For each star,when possible, we give: 1) an acronym to enter SIMBAD (Set ofIdentifications Measurements and Bibliography for Astronomical Data) ofthe CDS (Centre de Donnees Astronomiques de Strasbourg). 2) the numberHIC of the HIPPARCOS catalogue (Turon 1992). 3) the CCDM number(Catalogue des Composantes des etoiles Doubles et Multiples) byDommanget & Nys (1994). For the cluster stars, a precise study hasbeen done, on the identificator numbers. Numerous remarks point out theproblems we have had to deal with. Spectral classifications in the near infrared of stars with composite spectra. I. The study of MK standards.Up to now the spectral classifications of the cool components ofcomposite spectra obtained in the 3800-4800A wavelength region have beenvery disparate. These disparities are due to the fact that the spectraof the evolved cool component are strongly veiled by that of the hotterdwarf component, which makes a classification very difficult. We proposeto study these systems in the near infrared (8380-8780A). In thisspectral domain the magnitude difference between the spectra of thecomponents is in general sufficiently large so that one observespractically only the spectrum of the cool component. In this first paperwe provide, for a sample of MK standards, the relations between theequivalent width (Wlambda_ ) of certain lines and thespectral classifications. For the cool G, K and M type stars, the linesconsidered are those of the calcium triplet (Ca II 8498, 8542 and 8662),of iron (Fe I 8621 and 8688), of titanium (Ti I 8426 and 8435) and ofthe blend λ8468. The use of certain line intensity ratiospermits, after eliminating partially the luminosity effects, a firstapproach to the spectral type. For the hotter stars of types O, B, A andF we study the behavior of the hydrogen lines (P12 and P14), the calciumlines (Ca II 8498 and 8542) as well as those of the oxygen (O I 8446).The latter line presents a very characteristic profile for stars of lowrotation and therefore in Am stars, which are frequently found among thecomposite spectrum binaries. Among the cooler stars of our sample, only6% present real anomalies with respect to the MK classifications. Thisresult is very encouraging for undertaking the classification of asample of composite spectra. The spectra were taken at the Observatoirede Haute-Provence (OHP) with the CARELEC spectrograph at the 193 cmtelescope, with a dispersion of 33 A/mm. Improved Mean Positions and Proper Motions for the 995 FK4 Sup Stars not Included in the FK5 ExtensionNot Available Corrections to the right ascension to be applied to the apparent places of 1217 stars given in "The Chinese Astronomical Almanach" for the year 1984 to 1992.Not Available Asymptotic giant branch stars near the sunAvailable red and near-infrared photometry and apparent motions of M, S,and C asymptotic giant branch (AGB) stars in the Bright Star Catalogueare tabulated and discussed. It is shown that the red and near infraredindices normally used for late-type stars are interchangeable except forcarbon stars. The M-type giants are variable with visual amplitudegreater than 0.05 mag. The reddening-free parameter m2 from Genevaphotometry is essentially a temperature parameter for M giants, whilethe reddening-free parameter d is a sensitive detector of blue stellarcompanions. The space density of AGB stars near the sun decreases by afactor of 35 in a temperature range 3800 to 3400 K. 
Two of the S starsnear the sun were found to have nearly equal space motions and may becomembers of the Arcturus group. Carbon abundances and isotope ratios in 70 bright M giantsApproximate carbon abundances and C-12/C-13 isotope ratios are obtainedfor 70 M giant stars from intermediate-resolution spectrophotometry ofthe CO bands near 2.3 microns. A low mean carbon abundance (C/H = -0.64+ or - 0.29) is obtained, suggesting that standard mixing isinsufficient to explain atmospheric abundances in M giants. HR 8795appears to be exceptionally carbon deficient, and is worthy of furtherstudy as a possible weak G-band star descendant. MK classification and photometry of stars used for time and latitude observations at Mizusawa and WashingtonMK spectral classifications are given for 591 stars which are used fortime and latitude observations at Mizusawa and Washington. Theclassifications in the MK system were made by slit spectrograms ofdispersion 73 A/mm at H-gamma which were taken with the 91 cm reflectorat the Okayama Astrophysical Observatory. Photometric observations in UBV were made with the 1-meter reflector at the Flagstaff Station of U.S.Naval Observatory. The spectrum of HD 139216 was found to show a strongabsorption line of H-beta. The following new Am stars were found:HD9550, 25271, 32784, 57245, 71494, and 219109. The following new Apstars were found: HD6116, 143806, 166894, 185171, and 209260. The threestars, HD80492, 116204, and 211376, were found to show the emission inCaII H and K lines. A list of MK standard starsNot Available The Perkins catalog of revised MK types for the cooler starsA catalog is presented listing the spectral types of the G, K, M, and Sstars that have been classified at the Perkins Observatory in therevised MK system. Extensive comparisons have been made to ensureconsistency between the MK spectral types of stars in the Northern andSouthern Hemispheres. Different classification spectrograms have beengradually improved in spite of some inherent limitations. In thecatalog, the full subclasses used are the following: G0, G5, G8, K0, K1,K2, K3, K4, K5, M0, M1, M2, M3, M4, M5, M6, M7, and M8. Theirregularities are the price paid for keeping the general scheme of theoriginal Henry Draper classification. Stellar integrated fluxes in the wavelength range 380 NM - 900 NM derived from Johnson 13-colour photometryPetford et al. (1988) have reported measured integrated fluxes for 216stars with a wide spread of spectral type and luminosity, and mentionedthat a cubic-spline integration over the relevant Johnson 13-colormagnitudes, converted to fluxes using Johnson's calibration, is inexcellent agreement with those measurements. In this paper a list of thefluxes derived in this way, corrected for a small dependence on B-V, isgiven for all the 1215 stars in Johnson's 1975 catalog with completeentries. 1988 Revised MK Spectral Standards for Stars GO and LaterNot Available IRAS catalogues and atlases - Atlas of low-resolution spectraPlots of all 5425 spectra in the IRAS catalogue of low-resolutionspectra are presented. The catalogue contains the average spectra ofmost IRAS poiont sources with 12 micron flux densities above 10 Jy. 1985 revised MK spectral standards : stars GO and laterNot Available Carbon monoxide band intensities in M giantsThe strength of CO (2.3 micron) bands was measured using the photometercomponent of the Kitt Peak 1.3-m telescope in an attempt to identifyextremely carbon-poor M giants. 
Magnitudes for about 200 bright M stars were obtained through a J filter, and narrow filters were centered on 2.17 and 2.40 microns, respectively. No M giants were found with CO indices indicative of extremely low carbon abundances. The correlation of CO index to effective temperature did not extend to the extremely late and variable M giants. The dependence of CO index upon carbon abundance, 12-C/13-C ratio, surface gravity, effective temperature, and microturbulent velocity indices were also investigated. It is found that the predicted and observed CO indices are in good agreement for stars with spectroscopically determined carbon abundance.

Photometry of the Mg b + MgH feature for a sample of bright stars
Measurements of the strength of the 5174 A Mg b + MgH feature are presented for a sample of solar neighborhood stars, ranging in spectral type from B to M. The data were obtained by the use of two 70 A FWHM interference filters and a single-channel photometer. In agreement with previous investigations, the absorption is seen to vary with stellar temperature and gravity. It does not appear to correlate very well with Fe/H abundances determined by spectroscopic techniques, despite theoretical expectations to the contrary. If this lack of correlation is not merely the result of errors in the Fe/H measurements, it may indicate that Mg/Fe variations exist among the G and K stars. Since the Mg b + MgH absorption appears to correlate with other metallicity features in the spectra of E galaxies, it is suggested that the metal abundance spread in such galaxies is larger than that among the solar neighborhood stars studied in this investigation.
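Both of the photometric studies summarized above rest on the same kind of measurement: a narrow-band index, i.e. a magnitude difference between a filter centred on a feature (the 2.3 micron CO bands, or the 5174 A Mg b + MgH blend) and a nearby comparison band. The sketch below is a generic illustration of such an index; the flux values and the sign convention are assumptions, not either paper's actual definition.

```python
# Generic narrow-band index sketch: magnitude difference between a band on the
# feature and a nearby continuum band.  The flux values are made-up placeholders.
import math

def narrowband_index(flux_on_band: float, flux_continuum: float) -> float:
    """Index in magnitudes; larger values mean deeper absorption in the on-band filter."""
    return -2.5 * math.log10(flux_on_band / flux_continuum)

print(narrowband_index(0.8, 1.0))   # ~0.24 mag of depression in this toy case
```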
2019-02-20 06:33:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6393683552742004, "perplexity": 6097.464602831646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247494449.56/warc/CC-MAIN-20190220044622-20190220070622-00468.warc.gz"}
https://projecteuclid.org/euclid.rmjm/1492502551
## Rocky Mountain Journal of Mathematics ### Some existence and uniqueness results for nonlinear fractional partial differential equations #### Abstract In this paper, we study the existence and uniqueness of positive solutions for some nonlinear fractional partial differential equations via given boundary value problems by using recent fixed point results for a class of mixed monotone operators with convexity. #### Article information Source Rocky Mountain J. Math., Volume 47, Number 2 (2017), 571-585. Dates First available in Project Euclid: 18 April 2017 https://projecteuclid.org/euclid.rmjm/1492502551 Digital Object Identifier doi:10.1216/RMJ-2017-47-2-571 Mathematical Reviews number (MathSciNet) MR3635375 Zentralblatt MATH identifier 1365.35214 Subjects Primary: 34B18: Positive solutions of nonlinear boundary value problems #### Citation Marasi, H.R.; Afshari, H.; Zhai, C.B. Some existence and uniqueness results for nonlinear fractional partial differential equations. Rocky Mountain J. Math. 47 (2017), no. 2, 571--585. doi:10.1216/RMJ-2017-47-2-571. https://projecteuclid.org/euclid.rmjm/1492502551 #### References • H. Afshari, S.H. Rezapour and N. Shahzad, Some notes on $(\alpha,\beta)$-hybrid mappings, J. Nonlin. Anal. Optim. 3 (2012), 119–135. • R.P. Agarwal, V. Lakshmikanthan and J.J. Nieto, On the concept of solution for fractional differential equations with uncertainty, Nonlin. Anal. Th. 72 (2010), 2859–2862. • B. Ahmad and J.J. Nieto, Existence of solutions for nonlocal boundary value problems of higher-order nonlinear fractional differential equations, Abstr. Appl. Anal. 2009, article ID 494720, 2009. • A.A.M. Arafa, Series solutions of time-fractional host-parasitoid systems, J. Stat. Phys. 145 (2011), 1357-1367. • D. Baleanu, K. Diethelm, E. Scalas and J.J. Trujillo, Fractional calculus: Models and numerical methods, in Series on complexity, nonlinearity and chaos, World Scientific, Singapore, 2012. • D. Baleanu, O.G. Mustafa and R.P. Agarwal, On the solution set for a class of sequential fractional differential equations, J. Phys. Math. Th. 43, article ID 385209, 2010. • M. Belmekki, J.J. Nieto and R. Rodriguez-Lopez, Existence of periodic solution for a nonlinear fractional equation, Bound. Value Prob. 2009, article ID 324561, 2009. • D. Delbosco and L. Radino, Existence and uniqueness for a nonlinear fractional differential equation, J. Math. Anal. Appl. 204 (1996), 609–625. • A.M.A. El-Sayed, S.Z. Rida and A.A.M. Arafa, On the solutions of the generalized reaction-diffusion model for bacterial colony, Acta Appl. Math. 110 (2010), 1501–1511. • Y. Fujita, Cauchy problems for fractional order and stable processes, Japan J. Appl. Math. 7 (1990), 459–476. • M. Giona and H.E. Roman, Fractional diffusion equation on fractals: One-dimensional case and asymptotic behaviour, J. Phys. 25 (1992), 2093–2105. • D. Guo, Fixed points of mixed monotone operators with applications, Appl. Anal. 34 (1988), 215–224. • D. Guo and V. Lakskmikantham, Coupled fixed points of nonlinear operators with applications, Nonlin. Anal. Th. 11 (1987), 623–632. • I. Hashim, O. Abdulaziz and S. Momani, Homotopy analysis method for fractional IVPs, Comm. Nonlin. Sci. Numer. Simu. 14 (2009), 674–684. • J.H. He, Approximate analytical solution for seepage flow with fractional derivatives in porous media, Comp. Meth. Appl. Mech. Eng. 167 (1998), 57-68. • H. Jafari and V. Daftardar-Gejji, Positive solution of nonlinear fractional boundary value problems using Adomin decomposition method, J. Appl. Math. Comp. 
180 (2006), 700–706. • A.A. Kilbas, Partial fractional differential equations and some of their applications, Analysis 30 (2010), 35–66. • A.A. Kilbas, H.M. Srivastava and J.J. Trujillo Theory and applications of fractiona differential equations. North-Holland Math. Stud. 204, North-Holland, Amsterdam, 2006. • F. Mainardi, The time fractional diffusion-wave equation, Radiofisika 38 (1995), 20–36. • H.R. Marasi, H. Afshari, M. Daneshbastam and C.B. Zhai, Fixed points of mixed monotone operators for existence and uniqueness of nonlinear fractional differential equations, J. Contemp. Math. Anal. 52 (2017), 8. • H.R. Marasi, H. Piri and H. Aydi, Existence and multiplicity of solutions for nonlinear fractional differential equations, J. Nonlin. Sci. Appl. 9 (2016), 4639–4646. • K.S. Miller and B. Ross, An introduction to the fractional calculus and fractional differential equations, Wiley, New York, 1993. • R.R. Nigmatullin, The realization of the generalized transfer equation in a medium with fractal geometry, Phys. Stat. 133 (1986), 425–430. • K.B. Oldham and J. Spainer, The fractional calculus, Academic Press, New York, 1974. • I. Podlubny, Fractional differential equations, Academic Press, San Diego, 1999. • A.V. Pskhu, Partial differential equations of fractional order, Nauka, Moscow, 2005. • T. Qiu and Z. Bai, Existence of positive solutions for singular fractional equations, Electr. J. Diff. Eq. 146 (2008), 1–9. • H.E. Roman and M. Giona, Fractional diffusion equation on fractals: Three-dimensional case and scattering function, J. Phys. 25 (1992), 2107–2117. • J. Sabatier, O.P. Agarwal and J.A.T. Machado, Advances in fractional calculus: Theoritical developments and applications in physics and engineering, Springer, Berlin, 2002. • S.G. Samko, A.A. Kilbas and O.I. Marichev, Fractional integral and derivative: Theory and applications, Gordon and Breach, Switzerland, 1993. • H. Weitzner and G.M. Zaslavsky, Some applications of fractional equations, Comm. Nonlin. Sci. Numer. Simul. 15 (2010), 935–945. • C.B. Zhai, Fixed point theorems for a class of mixed monotone operators with convexity, Fixed Point Th. Appl. 2013 (2013), 119. • C.B. Zhai and X.M. Cao, Fixed point theorems for $\tau$-$\varphi$-concave operators and applications, Comput. Math. Appl. 59 (2010), 532–538. • C.B. Zhai and C.M. Guo, $\alpha$-convex operators, J. Math. Anal. Appl. 316 (2006), 556–565. • C.B. Zhai and M.R. Hao, Fixed point theorems for mixed monotone operators with perturbation and applications to fractional differential equation boundary value problems, Nonlin. Anal. Th. 75 (2012), 2542–2551. • C.B. Zhai and L.L. Zhang, New fixed point theorems for mixed monotone operators and local existence-uniqueness of positive solutions for nonlinear boundary value problems, J. Math. Anal. Appl. 382 (2011), 594–614. • S. Zhang, The existence of a positive solution for nonlinear fractional equation, J. Math. Anal. Appl. 252 (2000), 804–812. • Y. Zhao, S.H. Sun and Z. Han, The existence of multiple positive solutions for boundary value problems of nonlinear fractional differential equations, Comm. Nonlin. Sci. Numer. Simu. 16 (2011), 2086–2097.
2020-01-25 14:23:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25911736488342285, "perplexity": 3936.4325140964515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251672537.90/warc/CC-MAIN-20200125131641-20200125160641-00505.warc.gz"}
https://datascience.stackexchange.com/questions/23354/what-are-the-tools-to-plot-cluster-results
# What are the tools to plot cluster results?

I am clustering based on my cosine similarity matrix. Now I want to plot/visualize my very large clusters. I am interested in using a tool that is better than sklearn. Please recommend one.

## 1 Answer

ELKI has some very nice cluster visualizations. You could also use tSNE or MDS. But as you used cosine, I assume you have text data, and that is not easy to visualize. You probably first need to figure out how to visualize your input data, then you can think about adding cluster information to this.

• Thanks a lot for the great answer. I actually have feature vectors like x = [0.7, 0.2, 0, 0.1]. I use these feature vectors to calculate the cosine distance. Can I use these feature vectors directly to plot the graphs in the ELKI tool? Sep 29 '17 at 0:07
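For readers who want to stay in Python rather than switch tools, a minimal sketch along the lines of the t-SNE suggestion in the answer could look like the following. The arrays X and labels are hypothetical stand-ins for the asker's feature vectors and cluster assignments; nothing here comes from the original thread.

```python
# Illustrative sketch: project feature vectors with t-SNE using a cosine metric,
# then colour the 2-D points by cluster label.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.random((500, 50))            # stand-in for real feature vectors
labels = rng.integers(0, 5, 500)     # stand-in for cluster assignments

# Using the cosine metric mirrors a clustering built on cosine similarity.
emb = TSNE(n_components=2, metric="cosine", init="random",
           perplexity=30, random_state=0).fit_transform(X)

plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=5, cmap="tab10")
plt.title("t-SNE projection coloured by cluster")
plt.show()
```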
2022-01-27 14:23:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4867016077041626, "perplexity": 579.3824560590461}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305266.34/warc/CC-MAIN-20220127133107-20220127163107-00078.warc.gz"}
https://lists.nongnu.org/archive/html/axiom-developer/2006-09/msg00250.html
axiom-developer [Top][All Lists]

## [Axiom-developer] Re: \begin{chunk}

From: Ralf Hemmecke
Subject: [Axiom-developer] Re: \begin{chunk}
Date: Sun, 10 Sep 2006 20:28:56 +0200
User-agent: Thunderbird 1.5.0.5 (X11/20060719)

Maybe you have also realised that noweb offers a way to make clickable code out of the code chunks. See the ALLPROSE documentation. I am heavily using it. It seems that nobody has considered that feature of noweb for use in Axiom pamphlets.

I took a stab at setting up allprose and didn't quite make it - I will try again.

I was referring to the "documentation". That exists as html. You should definitely read chapter 9 "How to setup all features of ALLPROSE". And if you run into trouble just drop me a mail. I don't claim that the setup is foolproof. I'd like to have autoconf for ALLPROSE, too, but I'll wait until Gaby has set up and documented the stuff for Axiom; maybe then it is easier for me to add the autoconf stuff.

So BEFORE you make the change to a new syntax, think twice about what you will be missing afterwards. If it is then impossible to produce a link from one identifier in a code chunk to its definition, then I am strongly AGAINST that change. And note you would have to program all that in LaTeX (or depend on the listings package or something of the like and add a few LaTeX lines). I don't think that this can happen in just a few days if you want all the features that noweb has NOW.

Correct me if I'm wrong, but I think the \begin{chunk}{chunkname} syntax was intended to have the chunkname be mandatory. Couldn't the logic then remain the same and just go for {chunkname} as the target instead of <<chunkname>>? Perhaps even the chunk-within-chunk insertion syntax of using the <<chunkname>> could be the same? e.g.

\begin{chunk}{Chunk1} some code \end{chunk}
\begin{chunk}{Chunk2} some more code \end{chunk}
\begin{chunk}{MasterChunk} <<Chunk1>> <<Chunk2>> \end{chunk}

Oh, you should probably have written

\begin{chunk}{MasterChunk} \use{Chunk1} \use{Chunk2} \end{chunk} ;-)

But you know that noweb can do stuff like

<<cat: CatA>>= CatA: Category == ... ... @ %def CatA
<<cat: DomainA>>= DomainA: CatA == ... ... @ %def DomainA

and noweave can make the CatA in the second chunk a link to the chunk that has the "%def CatA" attached. I think that the "listings" package can probably do similar things, but I am not sure whether it works over several chunks.

The other thing is that there seems to be too much manpower for Axiom that people think about that syntax change. I guess writing in noweb and having a little script later that translates the <<...>> syntax into a LaTeX-like syntax automatically is probably an easy thing. But we should focus on more urgent matters.

Noweb syntax isn't too hard to learn.

No, it isn't. But getting AucTeX to quit using $ inside code chunks during its fontification is not so easy, or maybe I'm just not doing it right. Does allprose fix this?

No, ALLPROSE does not add emacs support. That's something else. I also have this strange problem, and at the moment, for the places where the colours get wrong, I add "%$" on a new line right after the closing @ of a chunk.

Ralf
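The "little script" idea mentioned in the mail is easy to sketch. The following is a rough, illustrative translation pass, not an actual Axiom or ALLPROSE tool: the chunk environment name and the \use{...} form are taken from the examples quoted above, while the handling of the '@' terminator and of %def lines is a simplifying assumption.

```python
# Rough sketch: translate noweb chunk syntax into a LaTeX-style chunk environment.
# Simplifications: a line starting with '@' ends a chunk, and '@ %def ...'
# information is simply dropped.
import re
import sys

def noweb_to_latex(src: str) -> str:
    out = []
    in_chunk = False
    for line in src.splitlines():
        m = re.match(r'^<<(.+)>>=\s*$', line)
        if m:                                    # chunk definition header
            out.append(r'\begin{chunk}{%s}' % m.group(1))
            in_chunk = True
        elif in_chunk and line.startswith('@'):  # '@' ends a noweb chunk
            out.append(r'\end{chunk}')
            in_chunk = False
        elif in_chunk:
            # chunk references inside code become \use{...}
            out.append(re.sub(r'<<(.+?)>>', r'\\use{\1}', line))
        else:
            out.append(line)
    if in_chunk:
        out.append(r'\end{chunk}')
    return '\n'.join(out)

if __name__ == '__main__':
    sys.stdout.write(noweb_to_latex(sys.stdin.read()))
```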
2020-07-13 13:00:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6051567196846008, "perplexity": 3338.6468966109655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657143365.88/warc/CC-MAIN-20200713100145-20200713130145-00486.warc.gz"}
https://figshare.com/articles/_Performances_of_the_ndBMI_when_reducing_information_about_the_stimulation_pattern_/961078/1
## Performances of the ndBMI when reducing information about the stimulation pattern.

2014-03-13T09:25:43Z (GMT)

(A,C) Ideal (black lines) and actual (red-tonality lines) trajectories of the Multiple-points algorithm superimposed on the sensory regions (blue-tonality areas) by using data set 6 and two different force fields simulated by progressively reducing the amount of available information represented by , with . (B,D) Bar chart of the wpte between the ideal and actual trajectories calculated for different values of . The * denotes that wpte depended significantly on (one-way ANOVA) and is placed in correspondence of the values of for which wpte was significantly different from the “reference” condition with (Tukey hsd).
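The exact weighting behind the wpte statistic is not recoverable from this excerpt, but the general idea of scoring an actual trajectory against an ideal one can be illustrated with a plain point-wise error. Everything in the sketch below (the function name, the optional weights, the toy data) is an assumption made for illustration.

```python
# Illustrative stand-in for a point-wise trajectory error: mean Euclidean
# distance between matched samples of an ideal and an actual 2-D trajectory.
# The real wpte presumably applies weights not recoverable from this excerpt.
import numpy as np

def pointwise_trajectory_error(ideal, actual, weights=None):
    """ideal, actual: (n_samples, 2) arrays sampled at matching instants."""
    d = np.linalg.norm(ideal - actual, axis=1)
    return float(np.average(d, weights=weights))

ideal = np.column_stack([np.linspace(0, 1, 50), np.zeros(50)])
actual = ideal + np.random.default_rng(0).normal(scale=0.02, size=ideal.shape)
print(pointwise_trajectory_error(ideal, actual))
```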
2018-11-21 13:36:57
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.840156078338623, "perplexity": 4074.6660716911524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039748901.87/warc/CC-MAIN-20181121133036-20181121155036-00368.warc.gz"}
http://citizendia.org/Space_Shuttle
Space Shuttle
Fact sheet
Space Shuttle Discovery launches at the start of STS-120.
Function: Manned partially re-usable launch and reentry system
Manufacturer: United Space Alliance; Thiokol/Boeing (SRBs); Lockheed Martin (Martin Marietta) (ET); Rockwell International (orbiter)
Country of origin: United States of America
Height: 56.1 m (184 ft)
Diameter: 8.7 m (28.5 ft)
Mass: 2,029,203 kg (4,474,574 lb)
Stages: 2
Payload to LEO: 24,400 kg (53,700 lb)
Payload to GTO: 3,810 kg (8,390 lb)
Status: Active
Launch sites: LC-39, Kennedy Space Center; SLC-6, Vandenberg AFB (unused)
Total launches: 121 (119 successes, 2 failures)
Maiden flight: April 12, 1981
Notable payloads: International Space Station components, Hubble Space Telescope, Galileo, Magellan, Chandra X-ray Observatory, Compton Gamma Ray Observatory
Boosters: 2 Solid Rocket Boosters, 1 solid motor each; thrust 2,800,000 lbf (12.5 MN) each at sea-level liftoff; specific impulse 269 s; burn time 124 s; fuel solid
External tank stage: engines (none) (3 SSMEs located on orbiter); thrust 1,180,000 lbf (540,000 kgf; 5.25 MN) combined total at sea-level liftoff; specific impulse 455 s; burn time 480 s; fuel LOX/LH2
Orbiter: 2 OME engines; thrust 12,000 lbf (53 kN) combined total vacuum thrust; specific impulse 316 s; burn time 1250 s; fuel MMH/N2O4

NASA's Space Shuttle, officially called the Space Transportation System (STS), is the spacecraft currently used by the United States government for its human spaceflight missions. At launch, it consists of a rust-colored external tank (ET), two white, slender Solid Rocket Boosters (SRBs), and the orbiter, a winged spaceplane which is the space shuttle in the narrow sense.
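As a quick sanity check on the fact sheet, the combined sea-level liftoff thrust can be compared with the gross liftoff weight. The snippet below is only back-of-envelope arithmetic with the figures listed above, not an official performance number.

```python
# Back-of-envelope check using the fact-sheet figures above (illustrative only):
# combined sea-level liftoff thrust versus gross liftoff mass.
srb_thrust_lbf = 2 * 2_800_000      # two SRBs at 2.8 million lbf each
ssme_thrust_lbf = 1_180_000         # three SSMEs, combined total
gross_mass_lb = 4_474_574           # fact-sheet liftoff mass in pounds

thrust_to_weight = (srb_thrust_lbf + ssme_thrust_lbf) / gross_mass_lb
print(f"liftoff thrust-to-weight ~ {thrust_to_weight:.2f}")   # ~ 1.52
```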
A Space Shuttle External Tank ( ET) is the component of the Space Shuttle launch vehicle that contains the Liquid hydrogen fuel and Liquid oxygen The Space Shuttle Solid Rocket Boosters (SRBs are the pair of large solid rockets used by the Space Shuttle during the first two minutes of powered flight The Space Shuttle orbiters are the orbital Spacecraft of the Space Shuttle program operated by NASA, the space agency of the United States. A spaceplane is a Rocket plane designed to pass the Edge of space. The orbiter carries astronauts and payload such as satellites or space station parts into low earth orbit, into the Earth's upper atmosphere or thermosphere. An astronaut or cosmonaut (космона́вт) is a person trained A Low Earth Orbit (LEO is generally defined as an Orbit within the locus extending from the Earth’s surface up to an altitude of 2000 km The thermosphere is the layer of the Earth's atmosphere directly above the Mesosphere and directly below the Exosphere. [1] Usually, five to seven crew members ride in the orbiter. The payload capacity is 50,000 lb (22,700 kg). When the orbiter's mission is complete it fires its Orbital Maneuvering System (OMS) thrusters to drop out of orbit and re-enters the lower atmosphere. The Orbital Maneuvering System, or OMS (pronounced /omz/ is a system of Rocket engines used on the space shuttle orbiter for orbital injection [1] During the descent and landing, the shuttle orbiter acts as a glider, and makes a completely unpowered ("dead stick") landing. Terminology A "glider" is an unpowered Aircraft. The most common types of glider are today used for sporting purposes A deadstick landing, also called a dead-stick landing or forced landing, occurs when an Aircraft loses all of its propulsive power and is forced to ## Description Six air-worthy shuttles have been built; the first orbiter, Enterprise, was not built for space flight, and was used only for testing purposes. The Space Shuttle Enterprise ( NASA Orbiter Vehicle Designation: OV-101 was the first Space Shuttle built for NASA. Five space-worthy orbiters were built: Columbia, Challenger, Discovery, Atlantis, and Endeavour. Space Shuttle Columbia ( NASA Orbiter Vehicle Designation: OV-102) was the first spaceworthy Space shuttle in NASA 's Space Shuttle Challenger ( NASA Orbiter Vehicle Designation: OV-099 was NASA's second Space Shuttle orbiter to be put into service Space Shuttle Discovery ( Orbiter Vehicle Designation: OV-103 is one of the three currently operational orbiters in the Space Shuttle fleet of Space Shuttle Atlantis ( Orbiter Vehicle Designation: OV-104 is one of the three currently operational orbiters in the Space Shuttle fleet of Space Shuttle Endeavour ( Orbiter Vehicle Designation: OV-105 is one of the three currently operational orbiters in the Space Shuttle fleet of Challenger disintegrated 73 seconds after launch in 1986, and Endeavour was built as a replacement. The Space Shuttle Challenger disaster took place on January 28 1986 when ''Challenger'', a Space Shuttle operated by NASA, broke apart Columbia broke apart during re-entry in 2003. 
The Space Shuttle Columbia disaster occurred on February 1, 2003, when the Space Shuttle ''Columbia'' disintegrated over Texas First launched in 1981, NASA has announced that the Space Shuttle would be retired in 2010, and from 2014 on, would be replaced by Orion, a new vehicle that is designed to take humans to the Moon and beyond along with its partner rockets, the Ares I and Ares V Rockets; however, since Orion is meant primarily for manned space flights, ESA's Automated Transfer Vehicle, with its 7,667 kg payload, has been suggested as an alternative for tasks like supplying space stations. Orion is a Spacecraft design currently under development by the United States space agency NASA. The European Space Agency ( ESA) established in 1975 is an intergovernmental organisation dedicated to the exploration of space, currently with 17 member Design The ATV is designed to complement the Progress spacecraft, having three times its capacity A space station is an artificial structure designed for Humans to live in Outer space. Each Space Shuttle is a partially reusable launch system that is composed of three main assemblies: the reusable Orbiter Vehicle (OV), the expendable external tank (ET), and the two partially-reusable solid rocket boosters (SRBs). A reusable launch system (or reusable launch vehicle, RLV is a Launch system which is capable of launching a Launch vehicle into space more than once The Space Shuttle orbiters are the orbital Spacecraft of the Space Shuttle program operated by NASA, the space agency of the United States. A Space Shuttle External Tank ( ET) is the component of the Space Shuttle launch vehicle that contains the Liquid hydrogen fuel and Liquid oxygen The Space Shuttle Solid Rocket Boosters (SRBs are the pair of large solid rockets used by the Space Shuttle during the first two minutes of powered flight The tank and boosters are jettisoned during ascent; only the orbiter goes into orbit. The vehicle is launched vertically like a conventional rocket, and the orbiter glides to a horizontal landing, after which it is refurbished for reuse. At times, the orbiter itself is referred to as the space shuttle. Technically, this is a misnomer, as the actual "Space Transportation System" (space shuttle) is the combination of the orbiter, the external tank (ET), and the two partially-reusable solid rocket boosters. A Space Shuttle External Tank ( ET) is the component of the Space Shuttle launch vehicle that contains the Liquid hydrogen fuel and Liquid oxygen The Space Shuttle Solid Rocket Boosters (SRBs are the pair of large solid rockets used by the Space Shuttle during the first two minutes of powered flight Combined, these are referred to as the "Stack". ### Orbiter vehicle Main article: Space Shuttle Orbiter The orbiter resembles an aircraft with double-delta wings, swept 81° at the inner leading edge, and 45° at the outer leading edge. The Space Shuttle orbiters are the orbital Spacecraft of the Space Shuttle program operated by NASA, the space agency of the United States. The delta wing is a Wing Planform in the form of a triangle named after the Greek uppercase delta which is a triangle (Δ Its vertical stabilizer's leading edge is swept back at a 50° angle. The four elevons, mounted at the trailing edge of the wings, and the rudder/speed brake, attached at the trailing edge of the stabilizer, with the body flap, control the orbiter during descent and landing. 
Elevons are Aircraft control surfaces that combine the functions of the elevator (used for pitch control and the Aileron (used for roll control A rudder is a device used to steer a Ship, Boat, Submarine, Hovercraft, or other conveyance that move through a fluid (generally air or The orbiter has a large payload bay measuring 15 feet (4. 6 m) by 60 feet (18. 3 m) comprising most of the fuselage. The fuselage (from the French fuselé "spindle-shaped" is an Aircraft 's main body section that holds crew and passengers or Cargo Three Space Shuttle Main Engines (SSMEs) are mounted on the orbiter's aft fuselage in a triangular pattern. SSME redirect here For the services field see Service Science Management and Engineering The Space Shuttle Main Engines ( SSMEs The three engines can swivel 10. 5 degrees up and down, and 8. 5 degrees from side to side during ascent to change the direction of their thrust and steer the shuttle as well as push. The orbiter structure is made primarily from aluminum alloy, although the engine thrust structure is made from titanium (alloy). WikipediaNaming An alloy is a Solid solution or Homogeneous mixture of two or more elements, at least one of which is a Metal, which itself has Titanium (taɪˈteɪniəm is a Chemical element with the symbol Ti and Atomic number 22 ### Solid Rocket Boosters Two solid rocket boosters (SRBs) each provide 2. The Space Shuttle Solid Rocket Boosters (SRBs are the pair of large solid rockets used by the Space Shuttle during the first two minutes of powered flight 8 million lbf (12. 5 MN) of thrust at liftoff, which is 83% of the total thrust needed for liftoff. The SRBs are jettisoned two minutes after launch at a height of about 150,000 feet (45. 7 km), and then deploy parachutes and land in the ocean to be recovered. [2] The SRB cases are made of steel about ½ inch (1. 3 cm) thick. [3] ### Flight systems Early shuttle missions took along the GRiD Compass, arguably one of the first laptop computers. The Grid Compass 1100 (written GRiD by its manufacturer was arguably the first Laptop computer, introduced in April 1982 A laptop computer, also known as a notebook computer, is a small Personal computer designed for mobile use. The Compass sold poorly, as it cost at least $8000 (USD), but offered unmatched performance for its weight and size. The United States dollar ( sign:$; code: USD) is the unit of Currency of the United States; it has also been [4] NASA was one of its main customers. [5] The shuttle was one of the earliest craft to use a computerized fly-by-wire digital flight control system. Aircraft flight control systems consist of Flight control surfaces, the respective cockpit controls connecting linkages and the necessary operating mechanisms to control Aircraft flight control systems consist of Flight control surfaces, the respective cockpit controls connecting linkages and the necessary operating mechanisms to control This means no mechanical or hydraulic linkages connect the pilot's control stick to the control surfaces or reaction control system thrusters. A reaction control system, abbreviated RCS, is a subsystem of a Spacecraft. A primary concern with digital fly-by-wire systems is reliability. Much research went into the shuttle computer system. The shuttle uses five identical redundant IBM 32-bit general purpose computers (GPCs), model AP-101, constituting a type of embedded system. 
The IBM AP-101 is an Avionics Computer, used most notably in the U An embedded system is a special-purpose Computer system designed to perform one or a few dedicated functions often with Real-time computing constraints Four computers run specialized software called the Primary Avionics Software System (PASS). A fifth backup computer runs separate software called the Backup Flight System (BFS). Collectively they are called the Data Processing System (DPS). [6][7] The design goal of the shuttle's DPS is fail operational/fail safe reliability. After a single failure, the shuttle can still continue the mission. After two failures, it can still land safely. The four general-purpose computers operate essentially in lockstep, checking each other. If one computer fails, the three functioning computers "vote" it out of the system. This isolates it from vehicle control. If a second computer of the three remaining fails, the two functioning computers vote it out. In the rare case of two out of four computers simultaneously failing (a two-two split), one group is picked at random. Atlantis deploys landing gear before landing on a selected runway just like a common aircraft. In Aviation, the undercarriage or landing gear is the structure (usually wheels that supports an Aircraft on the ground and allows it to taxi The Backup Flight System (BFS) is separately developed software running on the fifth computer, used only if the entire four-computer primary system fails. The BFS was created because although the four primary computers are hardware redundant, they all run the same software, so a generic software problem could crash all of them. Embedded system avionic software is developed under totally different conditions from public commercial software, the number of code lines is tiny compared to a public commercial software, changes are only made infrequently and with extensive testing, and many programming and test personnel work on the small amount of computer code. An embedded system is a special-purpose Computer system designed to perform one or a few dedicated functions often with Real-time computing constraints Avionics means "aviation electronics" It comprises electronic systems for use on aircraft artificial satellites and spacecraft comprising Communications However in theory it can still fail, and the BFS exists for that contingency. And while BFS will run in parallel with PASS, to date, BFS has never been engaged to take over control from PASS during any shuttle mission. The software for the shuttle computers is written in a high-level language called HAL/S, somewhat similar to PL/I. HAL/S is a real-time Aerospace Programming language, best known for its use in the Space Shuttle program. PL/I ("Programming Language One" ˌpiːˌɛlˈwʌn is an imperative computer Programming language designed for scientific engineering It is specifically designed for a real time embedded system environment. In Computer science, real-time computing (RTC is the study of hardware and software systems that are subject to a "real-time constraint"—i An embedded system is a special-purpose Computer system designed to perform one or a few dedicated functions often with Real-time computing constraints The IBM AP-101 computers originally had about 424 kilobytes of magnetic core memory each. Magnetic core memory, or ferrite-core memory, is an early form of Random access Computer memory. The CPU could process about 400,000 instructions per second. They have no hard disk drive, and load software from magnetic tape cartridges. 
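The four-against-one voting arrangement described above is straightforward to illustrate. The sketch below is a toy majority-vote model, not flight software; the computer names and command values are invented.

```python
# Toy illustration of the GPC "voting" idea: each healthy computer produces a
# command, disagreeing units are voted out, and control continues with the
# remaining majority.  Not flight code.
from collections import Counter

def vote(commands, tolerance=1e-6):
    """Return the majority command value and the names of voted-out computers."""
    # Group computers by quantised output so tiny float noise doesn't split the vote.
    buckets = Counter(round(v / tolerance) for v in commands.values())
    majority_key, _ = buckets.most_common(1)[0]
    failed = [name for name, v in commands.items()
              if round(v / tolerance) != majority_key]
    return majority_key * tolerance, failed

# Hypothetical elevon command from four GPCs; GPC-3 has drifted.
cmds = {"GPC-1": 2.50, "GPC-2": 2.50, "GPC-3": 9.99, "GPC-4": 2.50}
value, voted_out = vote(cmds)
print(value, voted_out)   # 2.5 ['GPC-3']
```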
In 1990, the original computers were replaced with an upgraded model AP-101S, which has about 2. 5 times the memory capacity (about 1 megabyte) and three times the processor speed (about 1. 2 million instructions per second). The memory was changed from magnetic core to semiconductor with battery backup. ### Typography and graphic design The typeface used on the Space Shuttle Orbiter is Helvetica. In Typography, a typeface is a set of one or more Fonts designed with stylistic unity each comprising a coordinated set of Glyphs A typeface usually comprises Helvetica is the name of a widely used Sans-serif Typeface developed in 1957 by Swiss Typeface designer Max Miedinger. [8] On the front lower corner of the cargo bay doors is the name of the orbiter, on the back lower corner of the cargo bay is the NASA 'Worm' logo. The National Aeronautics and Space Administration (NASA logo has three official designs although one of them (the "worm" has been retired from official use Below the NASA logo is the text 'United States' with a flag of the United States. Flags of the United States The Flag of the United States of America consists of 13 equal horizontal stripes of Red (top and bottom alternating Another United States flag appears on the right wing. During STS-101, Atlantis was the first shuttle to fly with a glass cockpit. STS-101 was a Space Shuttle mission to the International Space Station (ISS flown by Space Shuttle '' Atlantis''. Space Shuttle Atlantis ( Orbiter Vehicle Designation: OV-104 is one of the three currently operational orbiters in the Space Shuttle fleet of A glass cockpit is an Aircraft cockpit that features electronic instrument displays. Internally, the shuttle remains largely similar to the original design, with the exception of the improved avionics computers. In addition to the computer upgrades, the original vector graphics monochrome cockpit displays were replaced with modern full-color, flat-panel display screens, similar to those of contemporary airliners like the Airbus A380 and Boeing 777. Vector graphics is the use of geometrical primitives such as points lines, Curves and shapes or Polygon (s which are all based WikipediaWikiProject Aircraft. Please see WikipediaWikiProject Aircraft/page content for recommended layout WikipediaWikiProject Aircraft. Please see WikipediaWikiProject Aircraft/page content for recommended layout This is called a glass cockpit. A glass cockpit is an Aircraft cockpit that features electronic instrument displays. In the Apollo-Soyuz Test Project tradition, programmable calculators are carried as well (originally the HP-41C). The HP-41 series are programmable expandable handheld RPN Calculators made by Hewlett-Packard from 1979 to 1990. With the coming of the ISS, the orbiter's internal airlocks have been replaced with external docking systems to allow for a greater amount of cargo to be stored on the shuttle's mid-deck during station resupply missions. The Space Shuttle Main Engines (SSMEs) have had several improvements to enhance reliability and power. SSME redirect here For the services field see Service Science Management and Engineering The Space Shuttle Main Engines ( SSMEs This explains phrases such as "Main engines throttling up to 104%. " This does not mean the engines are being run over a safe limit. The 100% figure is the original specified power level. During the lengthy development program, Rocketdyne determined the engine was capable of safe reliable operation at 104% of the originally specified thrust. 
Pratt & Whitney Rocketdyne is a United States company that designs and produces Rocket engines that use liquid propellants. They could have rescaled the output number, saying in essence 104% is now 100%. To clarify this would have required revising much previous documentation and software, so the 104% number was retained. SSME upgrades are denoted as "block numbers", such as block I, block II, and block IIA. The upgrades have improved engine reliability, maintainability and performance. The 109% thrust level was finally reached in flight hardware with the Block II engines in 2001. The normal maximum throttle is 104%, with 106% and 109% available for abort emergencies. For the first two missions, STS-1 and STS-2, the external tank was painted white to protect the insulation that covers much of the tank, but improvements and testing showed that it was not required. A Space Shuttle abort is an emergency procedure due to equipment failure on NASA 's Space Shuttle, most commonly during ascent The first Space Shuttle mission STS (Space Transportation System-1, was launched April 12 1981, and returned April 14. STS-2 was a Space shuttle mission by NASA using the Space Shuttle ''Columbia'', that launched on November 12, 1981. A Space Shuttle External Tank ( ET) is the component of the Space Shuttle launch vehicle that contains the Liquid hydrogen fuel and Liquid oxygen The weight saved by not painting the tank results in an increase in payload capability to orbit. [9] Additional weight was saved by removing some of the internal "stringers" in the hydrogen tank that proved unnecessary. The resulting "light-weight external tank" has been used on the vast majority of shuttle missions. STS-91 saw the first flight of the "super light-weight external tank". STS-91 was the final Space Shuttle mission to the Mir space station This version of the tank is made of the 2195 aluminum-lithium alloy. It weighs 7,500 lb (3. 4 t) less than the last run of lightweight tanks. As the shuttle cannot fly unmanned, each of these improvements has been "tested" on operational flights. The SRBs (Solid Rocket Boosters) have undergone improvements as well. Design engineers added a third O-ring seal to the joints between the segments after the Space Shuttle Challenger disaster. An o-ring, also known as a packing or a toric joint, is a mechanical Gasket in the shape of a Torus; it is a loop of Elastomer with The Space Shuttle Challenger disaster took place on January 28 1986 when ''Challenger'', a Space Shuttle operated by NASA, broke apart The three nozzles of the Main Engine cluster with the two Orbital Maneuvering System (OMS) pods, and the vertical stabilizer above. SSME redirect here For the services field see Service Science Management and Engineering The Space Shuttle Main Engines ( SSMEs The Orbital Maneuvering System, or OMS (pronounced /omz/ is a system of Rocket engines used on the space shuttle orbiter for orbital injection The vertical stabilizers, or fins, of Aircraft, Missiles or Bombs are typically found on the aft end of the Fuselage or body Several other SRB improvements were planned in order to improve performance and safety, but never came to be. These culminated in the considerably simpler, lower cost, probably safer and better performing Advanced Solid Rocket Booster. These rockets entered production in the early to mid-1990s to support the Space Station, but were later canceled to save money after the expenditure of \$2. 2 billion. 
[10] The loss of the ASRB program resulted in the development of the Super LightWeight external Tank (SLWT), which provides some of the increased payload capability, while not providing any of the safety improvements. In addition, the Air Force developed their own much lighter single-piece SRB design using a filament-wound system, but this too was cancelled. STS-70 was delayed in 1995, when woodpeckers bored holes in the foam insulation of Discovery's external tank. STS-70 was a Space Shuttle ''Discovery'' mission to insert a Tracking and Data Relay Satellite (TDRS into earth orbit The woodpeckers, piculets and wrynecks are a family, Picidae, of Near-passerine Birds. Since then, NASA has installed commercial plastic owl decoys and inflatable owl balloons which must be removed prior to launch. [11] The delicate nature of the foam insulation has been the cause of damage to the Thermal Protection System, the tile heat shield and heat wrap of the orbiter, during recent launches. The Space Shuttle thermal protection system (TPS is the barrier that protects the Space Shuttle Orbiter during the searing 1650 °C (3000 °F) heat of NASA remains confident that this damage, while linked to the Space Shuttle Columbia disaster on February 1, 2003, will not jeopardize the objective of NASA to complete the International Space Station (ISS) in the projected time allotted. The Space Shuttle Columbia disaster occurred on February 1, 2003, when the Space Shuttle ''Columbia'' disintegrated over Texas Events 1327 - Teenaged Edward III is crowned King of England, but the country is ruled by his mother Queen Year 2003 ( MMIII) was a Common year starting on Wednesday of the Gregorian calendar. A cargo-only, unmanned variant of the shuttle has been variously proposed, and rejected since the 1980s. It was called the Shuttle-C, and would have traded re-usability for cargo capability, with large potential savings from reusing technology developed for the space shuttle. The Shuttle-C was a NASA proposal to turn the Space Shuttle launch stack into a dedicated unmanned cargo launcher On the first four shuttle missions, astronauts wore modified U. S. Air Force high-altitude full-pressure suits, which included a full-pressure helmet during ascent and descent. From the fifth flight, STS-5, until the loss of Challenger, one-piece light blue nomex flight suits and partial-pressure helmets were worn. STS-5 was a Space shuttle mission by NASA using the Space Shuttle Columbia, launched November 11, 1982. The Space Shuttle Challenger disaster took place on January 28 1986 when ''Challenger'', a Space Shuttle operated by NASA, broke apart Nomex (styled NOMEX) is a registered Trademark for flame resistant meta- Aramid material developed in the early 1960s by DuPont and first marketed A less-bulky, partial-pressure version of the high-altitude pressure suits with a helmet was reinstated when shuttle flights resumed in 1988. The LES ended its service life in late 1995, and was replaced by the full-pressure Advanced Crew Escape Suit (ACES), which resembles the Gemini space suit worn in the mid-1960s. The Advanced Crew Escape Suit, or ACES is a full Pressure suit currently worn by all Space Shuttle crews for the ascent and entry portions of flight The Gemini space suit is a Space suit worn by astronauts for launch in-flight activities (including EVAs and landing To extend the duration that orbiters can stay docked at the ISS, the Station-to-Shuttle Power Transfer System (SSPTS) was installed. 
The SSPTS allows these orbiters to use power provided by the ISS to preserve their consumables. The SSPTS was first used successfully on STS-118.

### Technical data

Space Shuttle Atlantis transported by a Boeing 747 Shuttle Carrier Aircraft (SCA), 1998 (NASA).

Space Shuttle Endeavour being transported by a Boeing 747.

Space Shuttle Orbiter and Soyuz-TM (drawn to scale).

An overhead view of Atlantis as it sits atop the Mobile Launcher Platform (MLP) before STS-79. Two Tail Service Masts (TSMs) to either side of the orbiter's tail provide umbilical connections for propellant loading and electrical power.

Water is released onto the mobile launcher platform on Launch Pad 39A at the start of a rare sound suppression system test in 2004. During launch, 300,000 US gallons (1,100 m³) are poured onto the pad in only 41 seconds.

Orbiter specifications[12] (for Endeavour, OV-105)

• Length: 122.17 ft (37.24 m)
• Wingspan: 78.06 ft (23.79 m)
• Height: …
• Empty weight: 151,205 lb (68,585 kg)
• Gross Liftoff Weight: 240,000 lb (109,000 kg)
• Maximum Landing Weight: 230,000 lb (104,000 kg)
• Main engines: Three Rocketdyne Block IIA SSMEs, each with a sea level thrust of 393,800 pounds-force (lbf) (178,600 kilograms-force (kgf) / 1.75 meganewtons (MN))
• Maximum payload: 55,250 pounds (25,061 kg)
• Payload bay dimensions: …
• Operational altitude: 100 to 520 nmi (185 to 960 km)
• Speed: 25,404 ft/s (7,743 m/s; 27,875 km/h; 17,321 mi/h)
• Crossrange: 1,085 nmi (2,009 km)
• Crew: Varies. The earliest shuttle flights had the minimum crew of two; many later missions a crew of five. Today, typically seven people fly (commander, pilot, several mission specialists, and rarely a flight engineer).
On two occasions, eight astronauts have flown (STS-61-A, STS-71). Eleven people could be accommodated in an emergency mission (see STS-3xx).

External tank specifications (for SLWT)

• Length: 153.8 ft (46.9 m)
• Diameter: 27.6 ft (8.4 m)
• Propellant volume: 535,000 US gal (2,025 m³)
• Empty Weight: 58,500 lb (26,535 kg)
• Gross Liftoff Weight: 1,667,000 lb (756,000 kg)

Solid Rocket Booster specifications

• Length: …
• Diameter: …
• Empty Weight (per booster): 139,490 lb (63,272 kg)
• Gross Liftoff Weight (per booster): 1.3 million lb (590,000 kg)
• Thrust (sea level, liftoff): 2.8 million lbf (12.5 MN)

System stack specifications

• Height: …
• Gross Liftoff Weight: 4.5 million lb (2,040,000 kg)
• Total Liftoff Thrust: 6.781 million lbf (30.16 MN)

## Mission profile

### Launch

All Space Shuttle missions are launched from Kennedy Space Center (KSC). The shuttle will not be launched under conditions where it could be struck by lightning. Aircraft are often struck by lightning with no adverse effects because the electricity of the strike is dissipated through their conductive structure and the aircraft is not electrically grounded. Like most jet airliners, the shuttle is mainly constructed of conductive aluminum, which would normally shield and protect the internal systems. However, upon takeoff the shuttle sends out a long exhaust plume as it ascends, and this plume can trigger lightning by providing a current path to ground. The NASA Anvil Rule for a shuttle launch states that an anvil cloud cannot appear within a distance of 10 nautical miles. [13] The Shuttle Launch Weather Officer will monitor conditions until the final decision to scrub a launch is announced.
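Before going further into the mission profile, the metric equivalents quoted in the specification lists above can be spot-checked in a few lines of Python. The conversion factors used here are standard ones and are not taken from this article.

```python
# Spot-check of the metric figures quoted in the specification lists above.
GAL_TO_M3 = 0.003785411784   # one US gallon in cubic metres
LB_TO_KG  = 0.45359237       # one pound (mass) in kilograms
LBF_TO_N  = 4.4482216153     # one pound-force in newtons

print(f"ET propellant volume : {535_000 * GAL_TO_M3:,.0f} m^3")          # ~2,025 m^3
print(f"ET gross liftoff mass: {1_667_000 * LB_TO_KG:,.0f} kg")          # ~756,000 kg
print(f"SRB sea-level thrust : {2.8e6 * LBF_TO_N / 1e6:.1f} MN")         # ~12.5 MN
print(f"Stack liftoff thrust : {6.781e6 * LBF_TO_N / 1e6:.2f} MN")       # ~30.16 MN
print(f"3 SSME + 2 SRB       : {3 * 0.3938 + 2 * 2.8:.3f} million lbf")  # ~6.781 million lbf
```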
In addition, the weather conditions must be acceptable at one of the Transatlantic Abort Landing sites (one of several Space Shuttle abort modes) for a launch to proceed. [14] While the shuttle might safely endure a lightning strike, a similar strike caused problems on Apollo 12, so for safety NASA chooses not to launch the shuttle if lightning is possible (NPR 8715.5).

The Shuttle has not been launched if its flight would run from one year into the next (December to January), a year-end rollover (YERO). Its flight software, designed in the 1970s, was not built for this, and would require the orbiter's computers to be reset through a change of year, which could cause a glitch while in orbit. In 2007, NASA engineers devised a solution, allowing Shuttle flights to cross the year-end boundary. [15]

On the day of a launch, after the final hold in the countdown at T minus 9 minutes, the Shuttle goes through its final preparations for launch, and the countdown is automatically controlled by a special computer program at the Launch Control Center, known as the Ground Launch Sequencer (GLS), which stops the count if it senses a critical problem with any of the Shuttle's on-board systems. The GLS hands off the count to the Shuttle's on-board computers at T minus 31 seconds.

At T minus 16 seconds, the massive sound suppression system (SPS) begins to drench the Mobile Launcher Platform (MLP) and SRB trenches with 300,000 US gallons (1,100 m³) of water to protect the Orbiter from damage by acoustical energy and rocket exhaust reflected from the flame trench and MLP during liftoff. [16]

At T minus 10 seconds, hydrogen igniters are activated under each engine bell to quell the stagnant gas inside the cones before ignition. Failure to burn these gases can trip the onboard sensors and create the possibility of an overpressure and explosion of the vehicle during the firing phase. The main engine turbopumps are also commanded to begin charging the combustion chambers with liquid hydrogen and liquid oxygen at this time. The computers reciprocate this action by allowing the redundant computer systems to begin the firing phase.

The three Space Shuttle Main Engines (SSMEs) start at T minus 6.6 seconds. The main engines ignite sequentially via the shuttle's general purpose computers (GPCs) at 120-millisecond intervals. The GPCs require that the engines reach 90% of their rated performance to complete the final gimbal of the main engine nozzles to liftoff configuration. [17] When the SSMEs start, the water from the sound suppression system flashes into a large volume of steam that shoots southward. All three SSMEs must reach the required 100% thrust within three seconds, otherwise the onboard computers will initiate an RSLS abort.
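The engine start-up gate described above can be sketched as a simple pass/fail check: stagger the three start commands by 120 ms, then commit to booster ignition only if every engine reports the required thrust within the three-second window. This is an illustrative sketch only, not flight or ground software; the function names, thresholds, and data structure are invented for the example.

```python
# Illustrative sketch only: not NASA software. Names and structure are invented here.
def rsls_gate(thrust_pct_by_engine, required_pct=100.0):
    """Commit to SRB ignition only if every SSME reports the required thrust level
    within the three-second window; otherwise call an RSLS abort."""
    if all(pct >= required_pct for pct in thrust_pct_by_engine.values()):
        return "commit to SRB ignition"
    return "RSLS abort"

# Start commands are staggered by 120 ms: T-6.600 s, T-6.480 s, T-6.360 s
start_times = [round(-6.6 + 0.120 * i, 3) for i in range(3)]
print("SSME start commands at T", start_times, "s")

print(rsls_gate({"ME-1": 100.3, "ME-2": 100.1, "ME-3": 100.2}))  # nominal case
print(rsls_gate({"ME-1": 100.3, "ME-2": 90.0,  "ME-3": 100.2}))  # one engine hung low -> abort
```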
If the onboard computers verify normal thrust buildup, at T minus 0 seconds the SRBs are ignited. At this point the vehicle is committed to liftoff, as the SRBs cannot be turned off once ignited. After the SRBs reach a stable thrust ratio, pyrotechnic nuts are detonated by radio-controlled signals from the shuttle's GPCs to release the vehicle. [18] The plume from the solid rockets exits the flame trench in a northward direction at near the speed of sound, often causing a rippling of shockwaves along the actual flame and smoke contrails. At ignition, the GPCs mandate the firing sequences via the Master Events Controller, a computer program integrated with the shuttle's four redundant computer systems.

There are extensive emergency procedures (abort modes) to handle various failure scenarios during ascent. Many of these concern SSME failures, since that is the most complex and highly stressed component. After the Challenger disaster, there were extensive upgrades to the abort modes.

Shuttle launch of Atlantis at sunset in 2001. The sun is behind the camera, and the plume's shadow intersects the moon across the sky.

STS mission profile.

SSLV at Mach 2.46 and 66,000 feet (20,000 m). The surface of the vehicle is colored by the pressure coefficient, and the gray contours represent the density of the surrounding air, as calculated using the OVERFLOW code.

After the main engines start, but while the solid rocket boosters are still clamped to the pad, the offset thrust from the Shuttle's three main engines causes the entire launch stack (boosters, tank and shuttle) to pitch down about 2 m at cockpit level. This motion is called the "nod", or "twang" in NASA jargon. As the boosters flex back into their original shape, the launch stack pitches slowly back upright. This takes approximately six seconds. At the point when it is perfectly vertical, the boosters ignite and the launch commences.

Shortly after clearing the tower the Shuttle begins a roll and pitch program that sets its orbital inclination and places the vehicle below the external tank and SRBs, with wings level. The vehicle climbs in a progressively flattening arc, accelerating as the weight of the SRBs and main tank decreases. Achieving low orbit requires much more horizontal than vertical acceleration. This is not visually obvious, since the vehicle rises vertically and is out of sight for most of the horizontal acceleration. The near-circular orbital velocity at the 380 km (236 statute miles) altitude of the International Space Station is 7.68 kilometers per second (27,650 km/h, 17,180 mph), roughly equivalent to Mach 23 at sea level.
As the International Space Station orbits at an inclination of 51.6 degrees, the Shuttle has to set its inclination to the same value to rendezvous with the station.

Around a point called Max Q, where the aerodynamic forces are at their maximum, the main engines are temporarily throttled back to avoid overspeeding and hence overstressing the Shuttle, particularly in vulnerable areas such as the wings. At this point, a phenomenon known as the Prandtl-Glauert singularity occurs, where condensation clouds form during the vehicle's transition to supersonic speed.

126 seconds after launch, explosive bolts release the SRBs and small separation rockets push them laterally away from the vehicle. The SRBs parachute back to the ocean to be reused. The Shuttle then begins accelerating to orbit on the Space Shuttle main engines. The vehicle at that point in the flight has a thrust-to-weight ratio of less than one: the main engines alone have insufficient thrust to exceed the force of gravity, and the vertical speed given to the vehicle by the SRBs temporarily decreases. However, as the burn continues, the weight of the propellant decreases, the thrust-to-weight ratio exceeds 1 again, and the ever-lighter vehicle then continues to accelerate toward orbit. The vehicle continues to climb and takes on a somewhat nose-up angle to the horizon; it uses the main engines to gain and then maintain altitude while it accelerates horizontally towards orbit.

At about five and three-quarter minutes into ascent, the orbiter rolls heads-up to switch communication links from ground stations to the Tracking and Data Relay Satellites. Finally, in the last tens of seconds of the main engine burn, the mass of the vehicle is low enough that the engines must be throttled back to limit vehicle acceleration to 3 g (30 m/s²), largely for astronaut comfort.

The main engines are shut down before complete depletion of propellant, as running dry would destroy the engines. The oxygen supply is terminated before the hydrogen supply, as the SSMEs react unfavorably to other shutdown modes: liquid oxygen has a tendency to react violently, and supports combustion when it encounters hot engine metal. The external tank is released by firing explosive bolts and falls, largely burning up in the atmosphere, though some fragments fall into the Indian Ocean. The sealing action of the tank plumbing and the lack of pressure relief systems on the external tank help it break up in the lower atmosphere: after the foam burns away during reentry, the heat causes a pressure buildup in the remaining liquid oxygen and hydrogen until the tank explodes. This ensures that any pieces that fall back to Earth are small.
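Two of the figures quoted in this ascent description are easy to sanity-check numerically: the near-circular orbital speed at ISS altitude and the stack's thrust-to-weight ratio at liftoff (using the numbers from the specification lists earlier). The gravitational parameter, Earth radius, and sea-level speed of sound below are standard values and are not taken from this article.

```python
import math

# Standard constants (not taken from the article)
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH  = 6.371e6          # mean Earth radius, m
A_SOUND  = 340.0            # approximate sea-level speed of sound, m/s

# Circular orbital speed at the quoted ISS altitude of 380 km
v = math.sqrt(MU_EARTH / (R_EARTH + 380e3))
print(f"orbital speed ~ {v/1000:.2f} km/s, about Mach {v/A_SOUND:.0f} at sea level")
# -> ~7.68 km/s, about Mach 23, matching the figures above

# Thrust-to-weight ratio of the full stack at liftoff, from the spec lists above
print(f"liftoff thrust-to-weight ~ {6.781e6 / 4.5e6:.2f}")   # ~1.51
```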
To prevent the shuttle from following the external tank back into the lower atmosphere, the Orbital Maneuvering System (OMS) engines are fired to raise the perigee higher into the upper atmosphere. On some missions (e.g., missions to the ISS), the OMS engines are also used while the main engines are still firing. The reason for putting the orbiter on a path that initially brings it back toward Earth is not just external tank disposal; it is also one of safety: if the OMS malfunctions, or the cargo bay doors cannot open for some reason, the shuttle is already on a path to return to Earth for an emergency abort landing.

Since it flies in the upper atmosphere, the craft's orbit slowly decays due to air friction. The orbiter must periodically boost its velocity with the OMS to prevent re-entry into the lower atmosphere.

### Re-entry and landing

Simulation of the outside of the Shuttle as it heats up to over 1,500°C during re-entry.

Almost the entire space shuttle re-entry, except for lowering the landing gear and deploying the air data probes, is normally performed under computer control. However, the re-entry can be flown entirely manually if an emergency arises. The approach and landing phase can be controlled by the autopilot, but is usually hand-flown.

The vehicle begins re-entry by firing the Orbital Maneuvering System engines, while flying upside down, backside first, in the opposite direction to orbital motion for approximately three minutes, giving roughly 200 mph (90 m/s) of delta-v. The resultant slowing of the Shuttle lowers its orbital perigee down into the upper atmosphere. The shuttle then flips over, by pulling its nose up (which is actually "down" because it is flying upside down). This OMS firing is done roughly halfway around the globe from the landing site.

The vehicle starts encountering more significant air density in the lower thermosphere at about 400,000 ft (120 km), at around Mach 25 (8.2 km/s). The vehicle is controlled by a combination of RCS thrusters and control surfaces, to fly at a 40-degree nose-up attitude, producing high drag, not only to slow it down to landing speed, but also to reduce reentry heating. In addition, the vehicle needs to bleed off extra speed before reaching the landing site. This is achieved by performing S-curves at up to a 70-degree roll angle.

The orbiter's maximum glide ratio (lift-to-drag ratio) varies considerably with speed, ranging from 1:1 at hypersonic speeds, to 2:1 at supersonic speeds, and reaching 4.5:1 at subsonic speeds during approach and landing. [19] In the lower atmosphere, the orbiter flies much like a conventional glider, except for a much higher descent rate, over 10,000 feet per minute (50 m/s). At approximately Mach 3, two air data probes, located on the left and right sides of the orbiter's forward lower fuselage, are deployed to sense air pressure related to the vehicle's movement in the atmosphere.

Columbia touches down at Kennedy Space Center at the end of STS-73.
When the approach and landing phase begins, the orbiter is at a 10,000 ft (3,000 m) altitude, 7.5 miles (12 km) from the runway. The pilots apply aerodynamic braking to help slow down the vehicle. The orbiter's speed is reduced from 424 mph (682 km/h) to approximately 215 mph (346 km/h) at touchdown (compared to 160 mph (260 km/h) for a jet airliner). The landing gear is deployed while the orbiter is flying at 267 mph (430 km/h). To assist the speed brakes, a 40 ft (12 m) drag chute is deployed either after main gear or nose gear touchdown (depending on the selected chute deploy mode) at about 213 mph (343 km/h). The chute is jettisoned as the orbiter slows through 69 mph (110 km/h).

After landing, the vehicle stands on the runway for several minutes to permit the fumes from poisonous hydrazine, used as a propellant for attitude control, to dissipate, and for the shuttle fuselage to cool before the astronauts disembark.

### Landing sites

Conditions permitting, the space shuttle will always land at Kennedy Space Center; however, if conditions make landing there unfavorable, the shuttle can touch down at Edwards Air Force Base in California or at other sites around the world. A landing at Edwards means that the shuttle must be mated to the Shuttle Carrier Aircraft and returned to Cape Canaveral, costing NASA an additional 1.7 million dollars. Space Shuttle Columbia (STS-3) also landed once at the White Sands Space Harbor in New Mexico, but this is considered a last resort, as NASA scientists believe the sand could damage the shuttle's exterior.

A computer simulation of high-velocity air flow around the space shuttle during re-entry.

A list of other landing sites: [20]

A list of launch abort sites:

## Fleet history

| Date | Orbiter | Event | Remarks |
| --- | --- | --- | --- |
| February 18, 1977 | Enterprise | First flight | Attached to Shuttle Carrier Aircraft throughout flight. |
| August 12, 1977 | Enterprise | First free flight | Tailcone on; lakebed landing. |
| October 12, 1977 | Enterprise | Fourth free flight | First with no tailcone; lakebed landing. |
| October 26, 1977 | Enterprise | Final Enterprise free flight | First landing on Edwards AFB concrete runway. |
| April 12, 1981 | Columbia | First Columbia flight, first orbital test flight | STS-1 |
| November 11, 1982 | Columbia | First operational flight of the Space Shuttle, first mission to carry four astronauts | STS-5 |
| April 4, 1983 | Challenger | First Challenger flight | STS-6 |
| August 30, 1984 | Discovery | First Discovery flight | STS-41-D |
| October 3, 1985 | Atlantis | First Atlantis flight | STS-51-J |
| January 28, 1986 | Challenger | Disintegrated 73 seconds after launch | All seven crew members perished. |
| September 29, 1988 | Discovery | First post-Challenger mission | STS-26 |
| May 4, 1989 | Atlantis | First Space Shuttle mission to launch a space probe, Magellan | STS-30 |
| May 7, 1992 | Endeavour | First Endeavour flight | STS-49 |
| November 19, 1996 | Columbia | Longest Shuttle mission to date at 17 days, 15 hours | STS-80 |
| October 11, 2000 | Discovery | 100th Space Shuttle mission | STS-92 |
| February 1, 2003 | Columbia | Disintegrated during re-entry | All seven crew members perished. |
| July 25, 2005 | Discovery | First post-Columbia mission | STS-114 |

Planned fleet events:

| 2010 | Atlantis | Last planned Atlantis flight | STS-131 |
| 2010 | Discovery | Last planned Discovery flight | STS-132 |
| 2010 | Endeavour | Last planned Endeavour flight; last flight of the Space Shuttle Program | STS-133 |

### Fiction and games

• Space shuttles in fiction
• Space Shuttle Mission 2007, the latest Space Shuttle simulator for Windows XP and Vista PCs.
• Orbiter, a freeware simulator that allows users to fly various spacecraft including the shuttle.
• Space Shuttle America, a motion simulator ride at Six Flags Great America.
• Shuttle, a Space Shuttle simulator for PC, Amiga and Atari ST.
• X-Plane, a flight simulator that allows players to fly the Space Shuttle's re-entry phase.

## References
1. ^ a b NASA (1995). Earth's Atmosphere (English). National Aeronautics and Space Administration. Retrieved on October 25, 2007.
2. ^ NASA Space Shuttle Columbia Launch.
3. ^ NASA. Report of the Presidential Commission on the Space Shuttle Challenger Accident. NASA.
4. ^ The Computer History Museum (2006). Pioneering the Laptop: Engineering the GRiD Compass (English). The Computer History Museum. Retrieved on October 25, 2007.
5. ^ NASA (1985). Portable Computer (English). NASA. Retrieved on October 26, 2007.
6. ^ Ferguson, Roscoe C.; Robert Tate and Hiram C. Thompson. Implementing Space Shuttle Data Processing System Concepts in Programmable Logic Devices. NASA Office of Logic Design. Retrieved on 2006-08-27.
7. ^ IBM. IBM and the Space Shuttle. IBM. Retrieved on August 27, 2006.
8. ^ Helvetica [Documentary] (2007-09-12).
9. ^ Aerospaceweb.org (2006). Space Shuttle External Tank Foam Insulation (English). Aerospaceweb.org. Retrieved on October 25, 2007.
10. ^ Encyclopedia Astronautica. Shuttle. Encyclopedia Astronautica.
11. ^ Jim Dumoulin. Woodpeckers damage STS-70 External Tank. NASA. Retrieved on 2006-08-27.
12. ^ Jenkins, Dennis R. (2007). Space Shuttle: The History of the National Space Transportation System. Voyageur Press, 524 pages. ISBN 0963397451.
13. ^ Weather at About.com. What is the Anvil Rule for Thunderstorms? Accessed 2008-06-10.
14. ^ NASA Launch Blog. [1] Accessed 2008-06-10.
15. ^ Bergin, Chris. NASA solves YERO problem for shuttle. Retrieved on 2007-12-22.
16. ^ National Aeronautics and Space Administration. "Sound Suppression Water System". Revised 2000-08-28. Accessed 2006-07-09.
17. ^ National Aeronautics and Space Administration. "NASA - Countdown 101". Accessed 2006-07-10.
18. ^ HSF - The Shuttle.
19. ^ Space Shuttle Technical Conference, pg 258.
20. ^ Global Security. Space Shuttle Emergency Landing Sites. GlobalSecurity.org. Retrieved on 2007-08-03.
2013-05-25 11:14:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28034213185310364, "perplexity": 5220.4758664288565}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705939136/warc/CC-MAIN-20130516120539-00016-ip-10-60-113-184.ec2.internal.warc.gz"}
https://blog.flyingcoloursmaths.co.uk/heptagon-puzzle/
A few episodes of Wrong But Useful ago, Dave posed the problem:

A regular unit heptagon (a seven-sided shape where all the angles are the same, and all the sides are 1 unit long) has two different diagonal lengths, $a$ and $b$. Show that $a+b = ab$.

Unusually, I didn’t blog a solution. Sorry about that.

### A hat-rack trick

I’ve come up with two approaches, but wanted to showcase one by @notonlyahatrack (who is Will in real life) first. He spotted that you can make a cyclic quadrilateral from any four of the corners, and that you can pick the corners so that the sides are (in order) $1$, $1$, $a$ and $b$, as shown here:

He then uses a circle theorem I didn’t know, but probably should have: Ptolemy’s Theorem for cyclic quadrilaterals, which states:

The product of the diagonals of a cyclic quadrilateral is equal to the sum of the products of opposite sides.

In the picture, that means $|CF|\cdot |BD| = |CD|\cdot |BF| + |BC|\cdot |DF|$, or $ab = a + b$, as required. Very elegant!

While it’s perfectly legit, I didn’t know Ptolemy’s theorem, and had to resort to other methods.

### Similar triangles

My first stab looked like this:

Looking back, it wasn’t entirely obvious that triangles FED and FHD were congruent (although they are) - this is because the angle $F\hat D E$ is the same as the angle $F \hat G D$, because $FC$ and $ED$ are parallel.

### Cosine rule

There’s another reason the angles are the same, too: it comes down to another circle theorem (the one about the angles at the edge being half the angles in the middle), from which you can prove that the angles between adjacent diagonals and/or sides from any point on a regular shape are the same. That means their cosines are all the same, too.

There are three distinct triangles here:

• $FDE$, from which you can say the cosine of the angle is $\frac{a}{2}$;
• $FDC$, from which you can say the cosine of the angle is $\frac{a^2 + b^2 - 1}{2ab}$;
• $FCB$, which tells you the cosine is $\frac{2b^2 - 1}{2b^2}$.

Working from $\frac{a}{2} = \frac{a^2 + b^2 - 1}{2ab}$, we get $a^2 b = a^2 + b^2 - 1$. That rearranges to $1 = a^2 + b^2 - a^2 b$.

Working from $\frac{2b^2 - 1}{2b^2} = \frac{a}{2}$, we get $2b^2 - 1 = ab^2$, or $1 = b^2(2-a)$.

Eliminating the 1, we end up with $b^2(2-a) = a^2 + b^2 - a^2 b$. Take away $b^2$ from each side: $b^2 (1-a) = a^2(1-b)$, which expands to $b^2 - b^2a = a^2 - a^2 b$, or $b^2 - a^2 = b^2 a - a^2 b$; some nifty factorising: $(b-a)(b+a) = ab(b-a)$ - and since $b \ne a$, we can divide by $b-a$ to get:

$b+a = ab$ as required.

There’s probably a way through the algebra that’s a bit quicker, but I’m quite pleased with that one; the Mathematical Ninja would hate it.
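As a quick numerical sanity check, the two diagonal lengths of a unit-sided regular heptagon can be computed from the circumscribed circle (a standard formula, not taken from the post) and compared.

```python
import math

# A regular heptagon with unit sides: a chord spanning k vertices has length
# 2*R*sin(k*pi/7), and the unit side (k = 1) fixes the circumradius R.
R = 1 / (2 * math.sin(math.pi / 7))
a = 2 * R * math.sin(2 * math.pi / 7)    # shorter diagonal, about 1.802
b = 2 * R * math.sin(3 * math.pi / 7)    # longer diagonal, about 2.247

print(f"a + b = {a + b:.6f}")            # 4.048917...
print(f"a * b = {a * b:.6f}")            # 4.048917... -- equal, as claimed
```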
2022-05-28 03:24:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8179091811180115, "perplexity": 514.2502751464123}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663012542.85/warc/CC-MAIN-20220528031224-20220528061224-00160.warc.gz"}
http://mathematica.stackexchange.com/questions/23290/series-expansion-in-terms-of-hermite-polynomials
# Series expansion in terms of Hermite polynomials

I am trying to expand a polynomial in terms of orthogonal polynomials (in my case, Hermite). Maple has a nice built-in function for this, ChangeBasis. Is there a similar function in Mathematica? And if not, where should I look for the algorithm?

-

The Hermite polynomials are orthogonal with respect to the inner product $$\langle f,g \rangle = \int_{-\infty}^{\infty} f(x)g(x)e^{-x^2} \, \mathrm dx.$$ Thus, the $n$-th coefficient can be computed using the inner product of your polynomial with the $n$-th normalized Hermite polynomial. Example:

p[x_] = 1 + x + x^2 + x^3;
coeffs = Table[
  Integrate[HermiteH[n, x]*p[x]*Exp[-x^2], {x, -∞, ∞}]/
   Integrate[HermiteH[n, x]^2*Exp[-x^2], {x, -∞, ∞}], {n, 0, 3}]
(* Out: {3/2, 5/4, 1/4, 1/8} *)

coeffs.Table[HermiteH[n, x], {n, 0, 3}] // Expand
(* Out: 1 + x + x^2 + x^3 *)

-

For polynomials, you don't need to do any integrals to find the expansion. Take a polynomial p and a list basis containing the basis functions. Then define a function that takes these two, identifies the variable x, and solves for the coefficients in basis that make the two polynomials equal in terms of their CoefficientLists:

expandPoly[p_, basis_, x_] :=
 # /. First@Solve[CoefficientList[#.basis, x] == #2, #] &[
    Array["a", Length[#]], #] & @ CoefficientList[p, x]

expandPoly[1 + x + 3 x^2 + 7 x^3, HermiteH[Range[4] - 1, x], x]
(* ==> {5/2, 23/4, 3/4, 7/8} *)

Edit In response to belisarius: if you already know that you're only interested in a basis of HermiteH, you could incorporate that into the function and do away with the specification of the variable basis as follows:

expandPoly[p_, x_] :=
 # /. First @ Solve[
     CoefficientList[#.HermiteH[Range[Length[#]] - 1, x], x] == #2, #] &[Array["a", Length[#]], #] & @ CoefficientList[p, x]

expandPoly[1 + x + 3 x^2 + 7 x^3, x]
(* ==> {5/2, 23/4, 3/4, 7/8} *)

Edit 2 With the general function given as the first solution above, you can specify any set of polynomials that is known to form a basis for degree n or larger. This means the basis functions don't have to be orthogonal polynomials at all.

-

(+1) Very nice. I was just doing a similar thing with another problem, but didn't think of it here. – Michael E2 Apr 14 '13 at 22:02

How to get rid of that Range@4? – Dr. belisarius Apr 14 '13 at 22:08

@belisarius See edit - I hardcoded the HermiteH part into the function in case you're only interested in those basis functions. Then the counting of terms is automatic. In the first version, I wanted to keep the list basis deliberately general so you can use other polynomials there, too. – Jens Apr 14 '13 at 22:13

Ahh... Algebra. I accidentally studied analysis in graduate school. :) – Mark McClure Apr 14 '13 at 22:17

If the expansion is known finite, one should be able to manage the upper bound automagically. – Dr. belisarius Apr 14 '13 at 23:08

The inner product for the Hermite polynomials, $$\langle f, g\rangle = \int_{-\infty}^{\infty} f(x)\,g(x)\,e^{-x^2}\;dx\,,$$ has nice formulas for power functions (where $n=a+b$) and for the Hermite polynomials: \begin{align} \langle x^a, x^b \rangle = \langle x^n, 1\rangle &= \frac{1}{2} \left((-1)^n+1\right)\, \Gamma \left(\frac{n+1}{2}\right)\cr \langle H_n(x), H_n(x) \rangle &= \sqrt{\pi}\,2^n n! \cr \end{align} These can be used to give a quick change of basis function for polynomials.
hermiteIP[f_, g_, x_] := With[{coeff = CoefficientList[f g, x]}, coeff.Table[1/2 (1 + (-1)^(-1 + n)) Gamma[n/2], {n, Length@coeff}]]; hermiteExpand[poly_, var_] /; PolynomialQ[poly, var] := Sum[hermiteIP[poly, HermiteH[n, var], var] H[n, var]/(Sqrt[Pi] 2^n n!), {n, 0, Exponent[poly, var]}] I used H[n, x] as a place holder for HermiteH[n, x]. hermiteExpand[(1 + x)^5, x] (* 39/4 H[0, x] + 95/8 H[1, x] + 25/4 H[2, x] + 15/8 H[3, x] + 5/16 H[4, x] + 1/32 H[5, x] *) hermiteExpand[(1 + x)^5, x] /. H -> HermiteH (* 39/4 H[0, x] + 95/8 H[1, x] + 25/4 H[2, x] + 15/8 H[3, x] + 5/16 H[4, x] + 1/32 H[5, x] *) % // Factor (* (1 + x)^5 *) - Probably the best answer. It uses the orthogonality and speeds things up using the closed form. +1 – Mark McClure Apr 14 '13 at 22:24 Whenever I want to convert some polynomial expressed with respect to a certain basis in terms of another polynomial basis. my go-to algorithm is Salzer's algorithm. It's rather fast, since it relies only on recurrences. Here's a specialization of that algorithm for the case of monomial-Hermite conversion: monomialToHermite[cofs_?VectorQ] := Module[{n = Length[cofs] - 1, a}, a[0, 0] = cofs[[n + 1]]; a[0, 1] = cofs[[n]]; a[1, 1] = cofs[[n + 1]]/2; Do[ a[0, k + 1] = cofs[[n - k]] + a[1, k]; Do[ a[m, k + 1] = (m + 1) a[m + 1, k] + a[m - 1, k]/2, {m, k - 1}]; a[k, k + 1] = a[k - 1, k]/2; a[k + 1, k + 1] = a[k, k]/2, {k, n - 1}]; Table[a[m, n], {m, 0, n}]] The algorithm as I presented it here uses an implicit two-dimensional array, a, to clearly show off the recurrence. The algorithm can be easily rewritten so that it uses only a pair or so of one-dimensional arrays, but I'll leave out that version for now. Here's a test of Salzer's method: monomialToHermite[{1, 1, 3, 7}] {5/2, 23/4, 3/4, 7/8} {1, 1, 3, 7}.x^Range[0, 3] == {5/2, 23/4, 3/4, 7/8}.HermiteH[Range[0, 3], x] // Expand True CoefficientList[(1 + x)^5, x] // monomialToHermite {39/4, 95/8, 25/4, 15/8, 5/16, 1/32} %.HermiteH[Range[0, 5], x] == (1 + x)^5 // Expand True (Other instances where I used Salzer's algorithm include this and this.) - (Of course, the algorithm is easily implemented even in languages that don't have symbolic capabilities.) – J. M. Apr 15 '13 at 0:44 Late answers are usually underestimated, +2. – Artes Apr 15 '13 at 19:29 Can't argue with that. Thanks @Artes! – J. M. Apr 16 '13 at 5:13 @seb, to clarify: are you expecting the terms of such an expansion to then come out as something like $c_{jk}H_j(x)H_k(y)$? – J. M. Jul 18 at 10:14 J.M., sorry for the unfortunate timing. I deleted my comment, because I remembered that this part is not the bottleneck in my calculation. But yes, I want expressions of the form $c_{kl} H_k(x)H_l(y)$. (In fact this is an XY problem and in the end I'm only interested at the values at $x=y=0$.) – sebhofer Jul 18 at 11:26 SolveAlways can find coordinates of a polynomial with respect to any given basis. You set up an equation, setting the given polynomial equal to a linear combination of your basis polynomials. This approach will work generally with any polynomial that is a linear combination of a given set of (linearly independent) polynomials. poly = 1 + x + 3 x^2 + 7 x^3; (* Jens' example *) params = Table[C[i], {i, 0, Exponent[poly, x]}]; basis = HermiteH[Range[0, Exponent[poly, x]], x]; coeff = params /. First@SolveAlways[poly == params.basis, x] (* {5/2, 23/4, 3/4, 7/8} *) Update: Extension to ChangeBasis Here is a version of the Maple ChangeBasis that works on polynomials (but not on infinite sums). It is based on the above idea. 
ClearAll[changeBasis]; changeBasis[poly_, basisfamilies : {_, _} ..] := Module[{vars, families, degrees, params, basis, coeff}, vars = {basisfamilies}[[All, 1]]; families = {basisfamilies}[[All, 2]]; degrees = Exponent[poly, vars]; params = Outer[C, ##] & @@ (Range[0, #] &) /@ degrees; basis = With[{bfn = Times @@ MapThread[#1[#3, #2] &, {families, vars, Array[Slot, Length[vars]]}]}, Outer[HoldForm[bfn] &, ##] & @@ (Range[0, #] &) /@ degrees ]; coeff = params /. First@SolveAlways[ poly == Total[params*ReleaseHold@basis, -1], vars]; With[{res = Total[coeff*basis, -1]}, Hold[res] /. HoldForm[v_] :> v ] ] Some examples from the Maple docs. GegenbauerC[n, 1, x] is automatically simplified to ChebyshevU[n, x], which affect the form of the output. changeBasis[1 + 2 x + 3 x^3, {x, GegenbauerC[#1, 1, #2] &}] ReleaseHold[%] // Expand GegenbauerC[n, 1, x] (* Same as ChebyshevU[n, x] *) (* Hold[ChebyshevU[0, x] + 7/4 ChebyshevU[1, x] + 3/8 ChebyshevU[3, x]] 1 + 2 x + 3 x^3 ChebyshevU[n, x] *) Multivariable function: changeBasis[y^2 + x^2 - 1, {x, LaguerreL[#1, 1, #2] &}, {y, ChebyshevU}] ReleaseHold[%] // Expand (* Hold[21/4 (ChebyshevU[0, y] LaguerreL[0, 1, x]) + 1/4 ChebyshevU[2, y] LaguerreL[0, 1, x] - 6 (ChebyshevU[0, y] LaguerreL[1, 1, x]) + 2 (ChebyshevU[0, y] LaguerreL[2, 1, x])] -1 + x^2 + y^2 *) - In the definition of changeBasis i think {x, y} should be {basisfamilies}[[All, 1]] or vars – Coolwater Jul 9 at 19:32 @Coolwater D'oh! Thanks! -- I probably missed replacing it by vars. – Michael E2 Jul 9 at 21:07 As another variation, here's another method based on repeated greedy division: Reap[Fold[Block[{q, r}, {q, r} = PolynomialQuotientRemainder[#1, #2, x]; Sow[q]; r] &, x^3 + x^2 + x + 1, HermiteH[Range[3, 0, -1], x]]][[-1, 1]] // Reverse {3/2, 5/4, 1/4, 1/8} Check: %.HermiteH[Range[0, 3], x] == x^3 + x^2 + x + 1 // Expand True -
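For readers working outside Mathematica, the same monomial-to-Hermite conversion can be cross-checked in Python: NumPy's `numpy.polynomial.hermite.poly2herm` does the change of basis directly, and a few lines implement the Horner-style recurrence $x\,H_m = \tfrac12 H_{m+1} + m\,H_{m-1}$ in exact arithmetic. This is a sketch for comparison, not part of any of the answers above.

```python
from fractions import Fraction
from numpy.polynomial import hermite as npherm

# NumPy cross-check: monomial coefficients (constant term first) -> HermiteH coefficients
print(npherm.poly2herm([1, 1, 1, 1]))   # [1.5  1.25 0.25 0.125]  i.e. {3/2, 5/4, 1/4, 1/8}
print(npherm.poly2herm([1, 1, 3, 7]))   # [2.5  5.75 0.75 0.875]  i.e. {5/2, 23/4, 3/4, 7/8}

def monomial_to_hermite(cofs):
    """Exact conversion via the recurrence x*H_m = (1/2)*H_{m+1} + m*H_{m-1},
    applied Horner-style starting from the leading monomial coefficient."""
    cofs = [Fraction(c) for c in cofs]
    herm = [cofs[-1]]                          # Hermite coefficients of the constant c_n
    for c in reversed(cofs[:-1]):
        new = [Fraction(0)] * (len(herm) + 1)  # multiply the current expansion by x ...
        for m, h in enumerate(herm):
            new[m + 1] += h / 2                # (1/2) * H_{m+1} term
            if m > 0:
                new[m - 1] += m * h            # m * H_{m-1} term
        new[0] += c                            # ... then add the next coefficient
        herm = new
    return herm

print(monomial_to_hermite([1, 1, 3, 7]))       # [5/2, 23/4, 3/4, 7/8]
```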
2016-07-27 13:31:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7824119925498962, "perplexity": 3940.777948426886}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257826907.66/warc/CC-MAIN-20160723071026-00195-ip-10-185-27-174.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/neutral-meson-problem.264874/
# Neutral meson problem

1. Oct 16, 2008

### Ant92

I have been learning about quarks, which is really interesting, but I have become confused when it comes to mesons. I have learned the basics of annihilation, particle and anti-particle, etc., and I have learned that neutral mesons, such as the neutral pi meson, are made of a quark (e.g. up) and its corresponding anti-particle (e.g. anti-up). So why do the particles join together to form the meson; shouldn't the particles annihilate?

Pi+ meson = up quark, anti-down quark
Pi neutral meson = up quark, anti-up quark / down quark, anti-down quark
Pi- meson = down quark, anti-up quark

2. Oct 16, 2008

### olgranpappy

...apparently this is why we don't see a lot of Pi mesons around on a day to day basis... the lifetime of the Pi is pretty short (or long, depending on what you compare it to)... The situation is *similar* (I stress similar because the analogy is quite imperfect, yet perhaps helpful) with "positronium", in which a positron and an electron can briefly form a bound state. But, they eventually annihilate.
2017-05-23 07:28:30
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8655425310134888, "perplexity": 2733.2022346280505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607591.53/warc/CC-MAIN-20170523064440-20170523084440-00376.warc.gz"}
https://totallydisconnected.wordpress.com/2020/07/15/brain-teaser-mysterious-moduli-and-local-langlands/
# Brain teaser: mysterious moduli and local Langlands Fix an integer $n>1$. Let $X$ denote the moduli space of triples $(\mathcal{E}_1, \mathcal{E}_2,f)$ where $\mathcal{E}_i$ is a vector bundle of rank $n$ on the Fargues-Fontaine curve which is trivial at all geometric points, and $f: \mathcal{E}_1 \oplus \mathcal{E}_2 \to \mathcal{O}(1/2n)$ is an injection which is an isomorphism outside the closed Cartier divisor at infinity. Brain teaser a. Prove that $X$ is a locally spatial diamond over $\breve{\mathbf{Q}}_p$ with a Weil descent datum to $\mathbf{Q}_p$. Now, let $D$ be the division algebra over $\mathbf{Q}_p$ of invariant $1/2n$, and let $\tau$ be an irreducible representation of $D^\times$ whose local (inverse) Jacquet-Langlands correspondent is supercuspidal. Note that $D^\times$ acts on $X$ by its natural identification with $\mathrm{Aut}(\mathcal{O}(1/2n))$. Brain teaser b. Prove that the geometric etale cohomology of $X$ satisfies the following: $R\Gamma_c(X_{\mathbf{C}_p},\overline{\mathbf{Q}_\ell})\otimes_{D^\times} \tau \cong \varphi_{\tau}[1-2n](\tfrac{1-2n}{2})$ if $\tau$ is orthogonal, and $R\Gamma_c(X_{\mathbf{C}_p},\overline{\mathbf{Q}_\ell})\otimes_{D^\times} \tau \cong 0$ if $\tau$ is not orthogonal. Here $\varphi_\tau$ denotes the Langlands parameter of $\tau$. It is probably not fair to call these brain teasers. Anyway, here is one big hint: the infinite-level Lubin-Tate space for $\mathrm{GL}_{2n}$ is naturally a $\mathrm{GL}_n(\mathbf{Q}_p)^2$-torsor over $X$, by trivializing the bundles $\mathcal{E}_i$.
2021-06-25 01:34:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 28, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9863760471343994, "perplexity": 234.0181769175951}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488560777.97/warc/CC-MAIN-20210624233218-20210625023218-00096.warc.gz"}
https://zbmath.org/?q=an:0332.06009&format=complete
# zbMATH — the first resource for mathematics W-isomorphisms of distributive lattices. (English) Zbl 0332.06009 ##### MSC: 06D05 Structure and representation theory of distributive lattices 06E05 Structure theory of Boolean algebras Full Text: ##### References: [1] J. Dudek E. Plonka: Weak automorphisms of linear spaces and of some other abstract algebras. Coll. Math. 22 (1971), 201-208. · Zbl 0335.08009 [2] A. Goetz: On weak automorphisms and weak homomorphisms of abstract algebras. Coll. Math. 14 (1966), 163-167. · Zbl 0192.09504 [3] A. Goetz: On various Boolean structures in a given Boolean algebra. Publ. Mathem. 18 (1971), 103-108. · Zbl 0253.06010 [4] J. Jakubík M. Kolibiar: O nekotorych svojstvach par struktur. Czechoslov. Math. J. 4 (1954), 1-27. [5] J. Jakubík: Pairs of lattices with common congruence relations. · Zbl 0372.06010 [6] E. Marczewski: A general scheme of the notion of independence in mathematics. Bull. Acad. Polon. Sci. Sér. Math. Phys. Astron. 6 (1958), 731 - 736. · Zbl 0088.03001 [7] E. Marczewski: Independence in abstract algebras. Results and problems. Colloq. Math. 14 (1966), 169-188. · Zbl 0192.09502 [8] R. Senft: On weak automorphisms of universal algebras. Dissertationes Math. 74 (1970). · Zbl 0231.08006 [9] J. Sichler: Weak automorphisms of universal algebras. Alg. Univ. 3 (1973), 1 - 7. · Zbl 0273.08008 [10] T. Traczyk: Weak isomorphisms of Boolean and Post algebras. Coll. Math. 13 (1965), 159-164. · Zbl 0133.24404 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
2021-09-17 04:30:21
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8612062931060791, "perplexity": 3166.6350344337384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780054023.35/warc/CC-MAIN-20210917024943-20210917054943-00219.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-7-exponents-and-exponential-functions-concept-byte-page-424/16
## Algebra 1: Common Core (15th Edition) $a.\quad$ We are multiplying two powers with the same base. One exponent is negative, and one is positive. The result has the same base and its exponent is the sum of exponents. $b.\quad$ Prediction: $9^{5}\displaystyle \cdot 9^{-7}=9^{5-7}=9^{-2}=\frac{1}{9^{2}}=\frac{1}{81}$ $c.\qquad x^{n}\cdot x^{-m}=x^{n-m}$
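A quick numeric confirmation of the prediction in part (b), as a minimal sketch in Python:

```python
from fractions import Fraction

# 9^5 * 9^(-7) should equal 9^(5-7) = 9^(-2) = 1/81
product = Fraction(9) ** 5 * Fraction(9) ** -7
print(product, product == Fraction(1, 81))   # prints: 1/81 True
```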
2023-01-31 10:21:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8821849226951599, "perplexity": 621.3465210233705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499857.57/warc/CC-MAIN-20230131091122-20230131121122-00175.warc.gz"}
https://lists.gnu.org/archive/html/lilypond-user/2015-07/msg00002.html
lilypond-user ## Re: Partial Bars From: David Kastrup Subject: Re: Partial Bars Date: Wed, 01 Jul 2015 08:48:39 +0200 User-agent: Gnus/5.13 (Gnus v5.13) Emacs/25.0.50 (gnu/linux) Chris Yate <address@hidden> writes: > On 1 Jul 2015 04:56, "Helge Kruse" <address@hidden> wrote: >> >> Hi Bill, >> >> Can you please include a minimal compilable example that shows your >> problem? I don't plan to do all of the following: >> - go to the next shop or library to get that menuet >> - guess what 'part' expresses in the context of that piece >> - write an example that would probably show the same problems yours does >> >> Partial measures are possible in different ways, also 'in the middle'. > > I have had this exact problem with typesetting Beethoven before. > > One solution is to temporarily change the time signature, but remove the > time signature engraver. I'll try to find an example later (on my phone > just now). The simplest solution is likely to use version 2.19.12 or later. Some changes for allowing \time in connection with \partial have been committed in 2.19.16 as well. -- David Kastrup
2022-09-30 19:42:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8093351125717163, "perplexity": 13505.858426772144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00242.warc.gz"}
https://zbmath.org/?q=an:1288.82057
# zbMATH — the first resource for mathematics Some results about ergodicity in shape for a crystal growth model. (English) Zbl 1288.82057 Summary: We study a crystal growth Markov model proposed by Gates and Westcott. This is an aggregation process where particles are packed in a square lattice according to prescribed deposition rates. This model is parametrized by three values $$(\beta_i,\, i=0,1,2)$$ corresponding to depositions on three different types of sites. The main problem is to determine, for the shape of the crystal, when recurrence and when ergodicity occur. Sufficient conditions are known both for ergodicity and transience. We establish some improved conditions and give a precise description of the asymptotic behavior in a special case. ##### MSC: 82D25 Statistical mechanics of crystals 60J27 Continuous-time Markov processes on discrete state spaces 60J10 Markov chains (discrete-time Markov processes on discrete state spaces) ##### Keywords: Markov chain; random deposition; positive recurrence
2021-09-27 06:18:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.212594136595726, "perplexity": 1426.7359303006544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058373.45/warc/CC-MAIN-20210927060117-20210927090117-00646.warc.gz"}
https://learn.careers360.com/medical/question-help-me-please-a-set-of-39n39-equal-resistors-of-value-39r39-each-areconnected-in-series-to-a-battery-of-emf-39e39-and-internal-resistance-39r39-the-current-drawn-is-i-now-the-39n39-resistors-are-connected-in-paral/
Q # Help me please: A set of 'n' equal resistors, of value 'R' each, are connected in series to a battery of emf 'E' and internal resistance 'R'. The current drawn is I. Now, the 'n' resistors are connected in parallel to the same battery. Then the current drawn from the battery becomes 10 I. The value of 'n' is • Option 1) 20 • Option 2) 11 • Option 3) 10 • Option 4) 9 As we have learned, for $n$ identical cells connected in series the main current is $i=\frac{nE}{R+nr}$, and with the cells in parallel the main current is $i= \frac{E}{R+\frac{r}{n}}$. Here, with the resistors in series, $I= \frac{E}{R+nR}$. With the resistors in parallel, $10I= \frac{E}{R/n+R}$, so $10I= \frac{nE}{R+nR}= nI$, which gives $n= 10$. Option 1) 20 This is incorrect Option 2) 11 This is incorrect Option 3) 10 This is correct Option 4) 9 This is incorrect
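As a quick numerical sanity check of this result (a sketch only; the emf and resistance values below are arbitrary placeholders, since the ratio of the two currents does not depend on them):

```python
# Verify that the parallel-to-series current ratio equals n, so "10 I" forces n = 10.
E, R = 12.0, 5.0   # arbitrary emf and resistance; the ratio is independent of these
for n in (9, 10, 11, 20):
    I_series = E / (R + n * R)      # n resistors in series, plus internal resistance R
    I_parallel = E / (R / n + R)    # the same n resistors in parallel
    print(n, round(I_parallel / I_series, 6))   # the ratio equals n, so only n = 10 gives 10 I
```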
2019-11-18 09:53:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5901095867156982, "perplexity": 7089.982515988532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669730.38/warc/CC-MAIN-20191118080848-20191118104848-00009.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/solve-sqrt-3-x-3-root-4-3-x-1-solving-exponential-equations_93874
Solve : (sqrt(3))^( x - 3 ) = ( root(4)(3))^( x + 1 ) Solution (sqrt(3))^( x - 3 ) = ( root(4)(3))^( x + 1 ) ⇒ (3^(1/2))^( x - 3 ) = (3^(1/4))^( x + 1 ) ⇒ 3^[( x - 3)/2] = 3^[( x + 1 )/4] ⇒ [ x - 3 ]/2 = [ x + 1 ]/4 ⇒ 4( x - 3 ) = 2( x + 1 ) ⇒ 4x - 12 = 2x + 2 ⇒ 4x - 2x = 12 + 2 ⇒ 2x = 14 ⇒ x = 14/2 ⇒ x = 7 Concept: Solving Exponential Equations APPEARS IN Selina Concise Mathematics Class 9 ICSE Chapter 7 Indices (Exponents) Exercise 7 (B) | Q 4.3 | Page 100
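As a quick numeric check (a minimal sketch, not part of the textbook solution), substitute x = 7 back into both sides:

```python
import math

x = 7
lhs = math.sqrt(3) ** (x - 3)     # (sqrt 3)^(x-3)
rhs = (3 ** 0.25) ** (x + 1)      # (fourth root of 3)^(x+1)
print(lhs, rhs, math.isclose(lhs, rhs))   # both sides evaluate to 9, so x = 7 checks out
```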
2021-05-14 17:21:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37969154119491577, "perplexity": 2677.3687161195085}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991428.43/warc/CC-MAIN-20210514152803-20210514182803-00568.warc.gz"}
https://www.preprints.org/manuscript/202104.0331/v1
Working Paper Article Version 1 This version is not peer-reviewed # Large Deviations and Information Theory for Sub-Critical SINR Random Network Models Version 1 : Received: 11 April 2021 / Approved: 13 April 2021 / Online: 13 April 2021 (09:17:54 CEST) How to cite: Sakyi-Yeboah, E.; Andam, P.S.; Asiedu, L.; Doku-Amponsah, K. Large Deviations and Information Theory for Sub-Critical SINR Random Network Models. Preprints 2021, 2021040331 ## Abstract The article obtains large deviation asymptotics for sub-critical communication networks modelled as signal-interference-noise-ratio (SINR) random networks. To achieve this, we define the empirical power measure and the empirical connectivity measure, and prove joint large deviation principles (LDPs) for the two empirical measures on two different scales. Using the joint LDPs, we prove an asymptotic equipartition property (AEP) for wireless telecommunication networks modelled as subcritical SINR random networks. Further, we prove a local large deviation principle (LLDP) for the sub-critical SINR random network. From the LLDPs, we prove the large deviation principle and a classical McMillan theorem for the stochastic SINR model processes. Note that the LDPs for the empirical measures of this stochastic SINR random network model were derived on spaces of measures equipped with the $\tau$-topology, and the LLDPs were deduced in the space of SINR model processes without any topological limitations. We motivate the study by describing a possible anomaly detection test for SINR random networks. ## Keywords Large deviation principle; Sub-critical SINR random network model; Poisson point process; Empirical power measure; Empirical connectivity measure; Relative entropy; Kullback action ## Subject MATHEMATICS & COMPUTER SCIENCE, Algebra & Number Theory
2022-12-07 12:36:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6801717281341553, "perplexity": 4856.035061735778}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711162.52/warc/CC-MAIN-20221207121241-20221207151241-00414.warc.gz"}
https://www.mathcity.org/fsc-part1-ptb/important-questions/ch14-solutions-of-trigonometric-equation
# Ch 14: Solutions of Trigonometric Equation • Solve $cosec^2\theta=\frac{4}{3}$ in $[0,2\pi]$— BISE Gujrawala(2015), BISE Sargodha(2016), BISE Gujrawala(2017) • Solve $sinx=\frac{1}{2}$ in $[0,2\pi]$— BISE Gujrawala(2015) • Solve $cot\theta = \frac{1}{\sqrt{3}}$, $\theta \in [0,2\pi]$— BISE Gujrawala(2017), BISE Sargodha(2016) • Solve $sec^2\theta=\frac{4}{3}$ in $[0,2\pi]$— BISE Sargodha(2015) • Solve $4cos^2x-3=0$, $x \in [0,2\pi]$— BISE Sargodha(2015) • Solve the equation $secx=-2$ where $x \in [0,2\pi]$— BISE Sargodha(2015) • Find the solutions of $cosec\theta=2$ which lie in $[0,2\pi]$— BISE Sargodha(2017) • Solve the equation $tanx=-1$ in $[0,2\pi]$— BISE Lahore(2017) • Solve $\sin x+\cos x=0$ — FBISE(2017) • Solve the equation $cosecx=\sqrt{3}+cotx$— FBISE(2017)
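As an illustration of how one of these can be worked, here is an optional sketch (not part of the original question list) that solves $sinx=\frac{1}{2}$ on $[0,2\pi]$ with SymPy:

```python
from sympy import S, sin, solveset, Interval, pi, symbols

x = symbols('x')
# Solve sin(x) = 1/2 restricted to the interval [0, 2*pi]
solutions = solveset(sin(x) - S(1)/2, x, Interval(0, 2*pi))
print(solutions)   # expected: {pi/6, 5*pi/6}
```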
2021-04-20 17:14:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8073327541351318, "perplexity": 9080.7104085613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039476006.77/warc/CC-MAIN-20210420152755-20210420182755-00598.warc.gz"}
https://askdev.io/questions/995026/create-multiple-databases-in-one-command-line
Create multiple databases in one command line? Can this command be scripted? $ mysqladmin -u username -p create databasename I would like to do this for several database names at once. Additionally, I would then like to set permissions for all of them at once too. I'm not exactly sure how to do this.
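The original page gives no answer, but one possible approach (an illustrative sketch only; the database names and the account below are placeholders, not from the question) is to build the CREATE DATABASE and GRANT statements for all names and run them in a single call to the mysql client:

```python
import subprocess

databases = ["app_db", "reporting_db", "staging_db"]   # placeholder names
grant_user = "'appuser'@'localhost'"   # placeholder account (must already exist on MySQL 8)

# Build one SQL script that creates every database and grants privileges on each of them.
statements = []
for db in databases:
    statements.append(f"CREATE DATABASE IF NOT EXISTS `{db}`;")
    statements.append(f"GRANT ALL PRIVILEGES ON `{db}`.* TO {grant_user};")
sql = "\n".join(statements + ["FLUSH PRIVILEGES;"])

# One mysql invocation for everything; -p prompts once for the password.
subprocess.run(["mysql", "-u", "root", "-p", "-e", sql], check=True)
```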
2022-08-15 03:56:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2350781261920929, "perplexity": 4145.68220356381}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572127.33/warc/CC-MAIN-20220815024523-20220815054523-00317.warc.gz"}
http://www.rddl.com.cn/EN/volumn/volumn_4398.shtml
#### Table of Content 05 June 2022, Volume 42 Issue 6 A Study of Farmer' Livelihoods in "Azheke Plan": Based on the DFID Sustainable Livelihood Framework Yang Xu, Jigang Bao 2022, 42 (6):  867-877.  doi: 10.13284/j.cnki.rddl.003496 Abstract ( )   HTML ( )   PDF (1311KB) ( )   Farmer' livelihoods are an important issue for rural revitalization and rural sustainable development. It has been proven in recent years that rural tourism influences farmers' livelihoods and its impacts vary based on different tourism development modes. The DFID sustainable livelihoods framework has been widely used in research on rural tourism development and farmer' livelihoods. It offers a comprehensive perspective that links elements from the external livelihood environment, to investigate farmer' livelihoods. This study developed an evaluation index for farmer' livelihood capital based on the DFID sustainable livelihoods framework. The evaluation index was used to analyze the farmer' livelihood capital. Azheke village, Yunnan Province was chosen as an example in the present study, which successfully launched a tourism poverty reduction program, the "Azheke Plan", in 2018. The data used in this study were collected through questionnaire survey from July 2020 to February 2021, during which the first author lived in Azheke Village. The findings of this study were summarized as follows: (1) Farmers in Azheke used to have only one livelihood mode: farming or working in cities. Now rural tourism development has allowed local villagers to work within local tourism businesses or operate their own tourism businesses in the village. thus, farmers has formed a diversified livelihood mode. (2) Families with different livelihood modes have manifested different features and livelihood capital. Ranking from high to low, the livelihood modes are: tourism-oriented, tourism involved + working in cities, tourism-involved, working in cities, and farming + working in cities. The sequence highly correlates with the degree of farmer' participation in tourism. (3) Farmers whose livelihoods are tourism-oriented, tourism involved + working in cities and tourism-involved have the highest degree of participation in tourism, their household labor and livelihood capacity has improved, it have created more livelihood outcomes for their households, and has a higher level of livelihood capital. Farmers who work in cities and farming + working in cities experienced the growth of physical capital and financial capital mainly throughout the "Azheke Plan", but their livelihood capital is lower than the other three livelihood modes relating to tourism. (4) From the institution perspective, this study explores the relationship between tourism development and changes in livelihood. Several notes are summarized on villagers' livelihoods through "Azheke Plan" from basic conditions, external support, internal factors, livelihood strategies and livelihood results. Finally, the future research direction of tourism development and farmer' livelihood, as well as the possibility of replicating the "Azheke plan" in other regions are discussed. The Spatial Pattern and Influencing Factors of China's Organic Livestock Enterprises under the Link of Functional Location Quanwei Zhao, Yuhua Wang 2022, 42 (6):  878-888.  
doi: 10.13284/j.cnki.rddl.003497 Abstract ( )   HTML ( )   PDF (1764KB) ( )   Since the 18th and 19th National Congress, China has put forward ecological civilization construction and rural resolution, requiring the transformation of traditional agricultural production models to a high quality, sustainable, green ecological production model. As an eco-environmental, sustainable, and high value-added agricultural production model, organic agriculture is in-line with the future development direction of China's rural areas and policy guidelines and is also a useful alternative model to traditional agriculture. In order to promote the development of the organic livestock industry in China, this study analyzed the service function and production function basic data of 744 organic livestock enterprises and explored the factors of functional location using Pearson's correlation analysis and multi-regression analysis methods. There is substantial research available related to the functional location of the organic livestock industry that can provide a certain reference value for the location analysis of different industries. The multi-scale analysis of different functional locations can provide reference and policy recommendations for the functional location layout of organic livestock enterprises in China. The results of this study showed the spatial agglomeration characteristics of organized livestock enterprise service and production function locations, such as central and eastern Inner Mongolia, northwestern Xinjiang, northeast Sichuan, and southern Anhui. There is a strong cohesion phenomenon among the organic livestock enterprise service and production function locations in China. However, this is limited to the development characteristics and affected zone, as different functional structures of some organic livestock enterprises have not occurred in the production expansion process. The production and service function locations of organic livestock enterprises are affected by multiple influencing factors, such as feed output, environmental quality, and policy benefits, of which natural resources and the development level of the traditional livestock industry have the greatest impact. It is also important to note that the location layout of organic livestock enterprises has deeper contacts with feed resources and environmental quality at the macro level. The influencing factors of organic livestock enterprise service and production function locations are consistent, but the same influencing factor has certain differences in the extent of their service and production function locations. According to the demand, organic livestock enterprises should guide different functional departments to lay out the functional location gathering area and balance the needs of the service and production function sectors in the initial development phase, select resource endowment, rich in knowledge and technology, convenient transportation, cheap land prices, and low labor cost-effective suburban areas. Local governments should increase policy support and guide deployment in areas with rich natural resources and developed traditional livestock industries. Moreover, according to actual conditions, they should guide enterprises to convert production from traditional to organic in the traditional livestock production industry. 
Coupling Coordination of Population-Economy-Housing Rental Market: An Analysis Based on Data from 35 Large and Medium-Sized Cities Lingling Mu, Xinran Wang, Chenxi Wang 2022, 42 (6):  889-901.  doi: 10.13284/j.cnki.rddl.003500 Abstract ( )   HTML ( )   PDF (1231KB) ( )   With urbanization, the influx of populations into economically developed large- and medium-sized cities, and the reality of high property prices are forcing the rental housing market to expand. The population, economy, and housing rental market interact, and the coordinated development of the three systems is an inevitable requirement for sustainable urban development. To coordinate the relationship between population, economy, and the housing rental market in large and medium-sized cities, it is crucial to solve the housing problem and achieving "housing for all so that everyone can live and work happily". Based on coupling coordination and spatial error models, this study explores the level of coupling coordination of the population-economy-housing rental market system in thirty-five large and medium-sized cities across China in 2018-2019, and explores its influencing factors. The results show the following: (1) As three social systems, the population, economy, and housing rental market interact with each other. In the perspective of sustainable urban development, the harmonious development between the three systems is a rational movement of population, a well-functioning economy and a smoothly functioning housing rental market, where the three systems win and prosper together. (2) Among the 35 large and medium-sized cities, the integrated population level and economic subsystems are on an upward trend; the integrated housing rental market level has slightly decreased, and although the integrated evaluation value of the housing rental market in the eastern core cities is higher, it still lags behind the population and economy development. (3) Considering the time, the difference in the development of the coupling and coordination degree of the population-economy-housing rental market system of each city has become more extensive. Still, the overall trend is upward, and the coupling and coordination degree of cities classified as pilot housing rental cities are relatively high. (4) For space, there are significant differences in the degree of coupling and coordination of the population-economy-housing rental market system between regions. The degree of coupling coordination in eastern cities is generally higher than in other regions; Shanghai and Beijing have reached a substantial coordination level. Among the central cities, the coupling coordination of Zhengzhou and Changsha remains at the primary coordination level, the coupling coordination of Wuhan rises to the primary coordination level, and the coupling coordination of the other cities is mainly at the barely coordinated level. There is a wide gap in the coupling coordination between cities among western cities. The coupling coordination of Chongqing is at the primary coordination level, of Xi'an and Urumqi is at the barely coordinated level, and of Chengdu has increased from the slightly coordinated level to the primary coordination level. The coupling coordination of other western cities is at the verge of dissonance, except for the cities mentioned above. The coupling coordination degree of all four cities decreased slightly in the northeast region, while Harbin dropped to the near-disordered level. 
(5) The level of economic development, population size, level of development of the real estate market, and educational resources are notable factors in increasing the level of coupling and coordination of the system. This study analyzes the coupling and coordination between population, economy, and the housing rental market in 35 large and medium-sized cities in China from the coordinated development perspective. This enriches the housing rental market research and alternatively provides a reference for decision-making to address the housing problem in large and medium-sized cities in China and achieve "housing for all". Temporal and Spatial Behavior Characteristics of Low-Rent Housing Residents: A Case Study of the Juyuanzhou Community in Fuzhou Yifan Tang, Xue Zhang, Qun Liu, Yuqi Lu 2022, 42 (6):  902-915.  doi: 10.13284/j.cnki.rddl.003501 Abstract ( )   HTML ( )   PDF (2921KB) ( )   Housing has always been a hot issue in various disciplines. Only when people live in peace can they be happy to work and achieve sustainable economic and social development. In recent years, the construction of a large number of subsidized housing communities has provided material protection for low- and middle-income groups, achieving initial results. However, problems in the construction of supporting facilities and site selection have become increasingly prominent. Therefore, it is important to study the spatial characteristics of the daily activities of the residents of affordable housing to better understand their daily needs and promote the development of affordable housing. Taking a typical community of low-cost housing in Fuzhou City as an example, this study used data from a questionnaire survey and 48 h activity log survey in December 2020, and combined in-depth interview materials from typical samples as the database for this study. Based on the theory of time geography, this study investigated the temporal and spatial behavior characteristics of low-rent housing residents from a microscopic perspective. With the help of GIS visualization techniques, such as 95% standard confidence ellipse, point density analysis, and overlay analysis methods, this study adopted a hybrid analysis method combining qualitative and quantitative analysis to study the spatial characteristics of three types of activities of working, shopping, and leisure of low-rent housing residents on one weekday and one rest day and their use of nearby space from the perspective of micro-individuals, and analyzed their influencing factors using multiple linear regression. The results showed that 1) there is a big difference between the activity space of residents in the case community on weekdays and rest days. The activity space of the sample was more fixed on weekdays, and the working and shopping activities were more dependent on the Juyuanzhou area. On the rest days, the average area of the residents' activity space was four times larger than that on weekdays. Residents needed to use a larger area to meet their shopping and leisure needs, and there were greater internal differences among the samples. 2) Residents had relatively small internal differences in work activities, and their workplaces were mainly distributed in the Juyuanzhou area, with a strong dependence on the Juyuanzhou area. The area of activity space was significantly positively influenced by the residents' income, age and work commute distance. 
3) Influenced by residents' travel intentions, there was a big difference between residents' shopping and leisure activities between weekdays and rest days. Workdays relied more on the surroundings of the community, while on the rest days, the internal differences in the resident sample increased. Both shopping and leisure spaces had a significant tendency to expand and decay in distance. Shopping and leisure activities were concentrated within 500 m on weekdays and extended to 5 km on rest days. A comparative analysis with the conclusions of related studies revealed the following significant characteristics of low-rent housing residents in Fuzhou: 1) Short commuting distances to work and dependence on community surroundings. 2) Compared with weekdays, the proportion and space scope of residents' rest days for shopping and leisure activities increased significantly, but the spatial scope of activities was still limited due to the limitation of transportation. Unlike the trend of positioning subsidized housing in first-tier cities, such as Beijing and Shanghai, at the edge of the city, Fuzhou has a large space for inner-city renewal and suburban development, and still has enough space to make the siting of subsidized housing relatively balanced. Finally, in view of the existing problems of the case community, some reference measures are provided for the optimization of low-rent housing from two aspects: planning and construction and community building. Agglomeration Characteristics and Factors Influencing Business Office Space in Changsha Qiang Ye, Chang Tan, Yao Zhao 2022, 42 (6):  916-927.  doi: 10.13284/j.cnki.rddl.003498 Abstract ( )   HTML ( )   PDF (3684KB) ( )   Because business office space is the main functional space of the spatial structure of a city, it is important to explore its spatial layout characteristics to promote efficient use of urban space. Therefore, taking Changsha as an example, this study constructed an explanatory model of the agglomeration phenomenon and ultimately explored the influences of other spatial factors in the city on the agglomeration characteristics of business office space, from the perspective of spatial heterogeneity, based on the business office space POI(Point of Interest) in Changsha and by utilizing standard deviation ellipse analysis, the Getis-Ord G$i*$ index, and geographically weighted regression analysis. The study produced several interesting results: 1) The business office agglomeration in Changsha mainly has the spatial structure of "one main, one secondary, two belts, and multiple clusters." The central agglomeration has a high degree of overlap with the city's main traffic arteries, with a cross-shaped clustering along the two strips of Wuyi Avenue and Furong Middle Road, leading to the main positive value area in Furong Square. Although there is a tendency for business office space in Changsha to develop to the west of the Xiangjiang River, the strength of development is insufficient, and the main and secondary business districts still lie to the east of the Xiangjiang River. The concentration strength of business office space in the Liugoulong-Guanshaling area has not yet reached the level of the secondary business district. 2) The city is a large collection of multi-factor spaces, and the uneven distribution of various factors causes the influence on the agglomeration of business office space to vary from one region to another. 
Different influencing factors have either an attracting or inhibiting effect on the agglomeration of business office space. The average degrees of influence are shopping services>hotels>public transport>main roads>parks and green areas>external transport>metro>flats>scenic locations>residential areas. 3) Each influencing factor was characterized as a "center-periphery" influence mode. Influenced by the "business dividend," the clustering of business office space in the center is strongly influenced by the commercial space, metro, main roads, and park green spaces factors, and the coordinated development of these spaces in the core of the city can greatly contribute to enhancing the regional agglomeration economy and the formation of the urban CBD. With increasing distance from the center, the driving effect of the commercial dividend gradually decreases, leading to a decrease in the influence of the four factors and thus causing the business office space in the periphery to be influenced to form clusters mainly by residential space, public transport, and other factors. The study aimed to refine and analyze the factors influencing the location of the business office space layout to promote harmonious development of functional spaces within the city. Spatial Evolution and Underlying Factors of the Urban Financial Network in China Jie Zhang, Kerong Sheng, Chuanyang Wang 2022, 42 (6):  928-938.  doi: 10.13284/j.cnki.rddl.003489 Abstract ( )   HTML ( )   PDF (1263KB) ( )   Since the implementation of the reform and opening-up policy, China has experienced rapid development of the financial industry, with a large number of financial enterprise groups being established over the past 40 years. Meanwhile, the distribution of branches of financial enterprises has expanded rapidly, which has accelerated the integration of the financial market in China. Against this background, financial service relationships have played important roles in strengthening the linkages between cities, providing an important perspective for the study of city networks. This study aimed to analyze the spatial patterns, influencing factors, and mechanisms of the key factors in the financial network in China. First, data on the headquarter and branch locations of financial enterprises in China were subjected to the interlocking network model to approximate the financial network, resulting in a 285 × 285 valued urban network, and its spatial patterns were described from the three aspects of centrality, linkages, and core-periphery structure. Then, by using the Quadratic Assignment Procedure, an econometric analysis was conducted to identify the influencing factors, and the micro processes in the spatial growth of the urban network were examined. Finally, by combining theories of information hinterland and resource dependence, a conceptual framework for comprehensively understanding the mechanisms driving financial network growth in China was suggested for further discussion. This study has three main findings: First, the financial network presents a significant concentrated multi-dimensional core-periphery structure. The spatial distribution of centrality exhibits obvious spatial orientation and path dependence characteristics. The cities well-positioned in the network are mainly the core cities in China's major metropolises, such as Beijing and Tianjin in the Beijing-Tianjin area; Shanghai, Suzhou, and Hangzhou in the Yangtze Delta area; and Shenzhen, Guangzhou, and Foshan in the Pearl Delta area. 
The connectivity of city linkage exhibits enhanced relevance and hierarchical structure characteristics, which promotes the emergence of a "core-periphery" mode in financial network structure. Second, vital resources possessed by cities, such as market potential, political rank, knowledge base, and economic openness, are important factors underlying the formation of China's financial network. Links are more likely to occur between cities with large market potential, abundant political resources, intensive knowledge capital, and high economic openness. Geographical distance, location condition, and historical basis also have a profound influence on the spatial patterns of the financial network. Third, preferred linkage, geographical proximity, and spatial agglomeration are the dynamic mechanisms underlying the development of the financial network. Preferred linkage and geographical proximity can be interpreted as the observable results of sharing vital resources and reducing transportation costs in accessing valuable information flows. The spatial agglomeration mechanism, stemming from the agglomeration economy in the location selection of financial enterprises, tends to strengthen the financial network structure formed historically. In the network environment, the policy of urbanization in China needs to be adjusted accordingly. The Chinese government should support cities to choose differentiated development paths in the financial network, give full play to the supply and guidance function of the financial network to urban economic growth, and promote network cooperation between cities on a larger spatial scale. Research Hotspots, Connotation, and Significance of Emotional Geography at Home and Abroad: Based on Bibliometrics and Visualization Jinping Lin, Jiajia Feng, Bowen Zhang, Yujie Han, Hao Zhang, Man Luo, Fuying Deng 2022, 42 (6):  939-951.  doi: 10.13284/j.cnki.rddl.003493 Abstract ( )   HTML ( )   PDF (1256KB) ( )   Owing to the long-term acceptability principle of geography, emotion has always been in a relatively marginal position in the study of geography. With the emotional turn of western geography, the research of human-centered "emotional relationship" has gradually attracted great attention from domestic and foreign scholars. To grasp the research ideas, research methods, research hotspots, research progress, and key scientific issues of research in emotional geography of domestic and foreign scholars, this study used the bibliometric method to sort out, summarize, and condense 265 foreign language documents and 248 Chinese documents with the theme of "emotional geography" from 1992 to 2020. The study discussed the quantitative relationship, temporal and spatial distribution rules, distribution characteristics, and characteristics of emotional geographic literature at home and abroad. Gephi software was used to visually present research cases at home and abroad, and Citespace software was used to analyze the co-occurrence knowledge graph of keywords in the literature and to clear the research hotspots of emotional geography at home and abroad. The connotation and significance of the research have been explained, and the research prospects are put forward. 
The study concluded that the key scientific issues of emotional geography include five aspects: constructing a theoretical model of emotional geography with Chinese characteristics, establishing a sound system of emotional geography research methods, promoting the integration and interdisciplinary research of emotional geography and other disciplines, exploring the operating mechanism and practice path of emotional geography for stabling emotions, peace of mind, and understanding society, and studying the impact mechanism of human emotional needs on the construction of local space. There were many research topics at home and abroad, and foreign research hotspots mainly focused on politics, education, gender and climate, whereas domestic research focused on sense of place, tourism, and residents. After nearly 30 years of research on emotional geography at home and abroad, it can be divided into three stages: initiation, expansion, and volatility growth. Research methods have also gone through three stages: from qualitative descriptive analysis to the combination of qualitative and quantitative, innovative and diversified quantitative methods, and collaborative qualitative research. Academia has reached a consensus on the three-dimensionality of "person, emotion, place (space)," but only a few scholars have defined the concept of emotional geography. Thus far, cognition has not been unified, hindering the research process of emotional geography. Based on the complex patterns of emotional geography, time-complex processes, and time-space complex features, follow-up research is needed to collaboratively couple with multidisciplinary theories and methods to find a scientific, typical, representative, and practical theoretical system. There is also a need for qualitative and quantitative research methods combining with the analysis of cultural self-confidence, rural revitalization, homesickness, and uniquely Chinese characteristics of "emotional relationship" and other social hot topics, to provide valuable and scientifically based references for follow-up emotional geography research, expand scholars' research horizons, promote the theoretical and systematic research of emotional geography with Chinese characteristics, improve the academic influence and discourse power of emotional geography, and serve mankind better. Estimation Model and Spatial Pattern of Highway Carbon Emissions in Guangdong Province Yuanjun Li, Qitao Wu, Changjian Wang, Kangmin Wu, Hong'ou Zhang, Shuangquan Jin 2022, 42 (6):  952-964.  doi: 10.13284/j.cnki.rddl.003491 Abstract ( )   HTML ( )   PDF (3316KB) ( )   The transportation sector has become one of the largest industrial emissions source of greenhouses gases, such as CO2. What's worse, carbon emissions from this industry has continued to grow in recent years, posing serious challenges to human survival and global environmental security. Among the various transport modes, road transportation yields the highest levels of energy consumption and CO2 emissions. Therefore, scientifically measuring highway carbon emissions and analyzing their spatial differences are of great significance for energy conservation and emission reduction in the transportation sector. Taking Guangdong Province as an example, this study constructs a full-samples and high-precision carbon emissions model, which encompasses Class I~IV passenger cars and Class I~VI freight vehicles based on origin-destination traffic flow data recorded by the highway networking toll system. 
Finally, the study explores the spatial difference in carbon emissions of highways in Guangdong Province by using geospatial methods. The conclusions are as follows.Firstly, carbon emissions from highways in Guangdong Province mainly came from trucks, which accounted for 57.1% of the total carbon emissions; passenger cars accounted for 42.9%. To be specific, the carbon emissions mainly originated from small and medium-sized vehicles, including Class I passenger vehicles (i.e., cars) and Class I and III freight vehicles. Secondly, the high carbon emissions road sections were spatially auto-correlated, with peak aggregations on national highways, near economically developed and densely populated areas, and adjacent to airports and ports. Road sections with high carbon emissions in Guangdong Province were concentrated along national highways (9,477 t; 61.9%); the carbon emissions of provincial road sections were relatively low (5,834 t; 38.1%). The high-emission sections of passenger vehicles were mainly concentrated in the Pearl River Delta and radially distributed outwards along Guangzhou City. The high-emission sections of freight vehicles were mainly distributed in national highways. The smaller volume of trucks, the more concentrated the spatial distribution of carbon emissions. Furthermore, at the city scale, the cities with higher carbon emissions were mostly concentrated in the Pearl River Delta urban agglomerations, and Guangzhou had a evident primary city effect. The cities with lower carbon emissions were mainly concentrated in coastal areas, such as Zhuhai. At the county scale, the spatial non-equilibrium characteristics of the carbon emissions were significant. The counties with higher carbon emissions were located in the northern part of Guangdong Province and the center and east coast of the Pearl River Delta.Finally, different types of vehicles had differentiated carbon emission characteristics and emission reduction paths. For example, based on the large quantity and significant carbon emissions of Class I passenger vehicles, we must optimize the energy structure to increase the proportion of vehicles using renewable energy sources. Owing to the high unit fuel consumption of Class VI freight vehicles, improving their operation efficiencies is crucial to avoid empty carriages (i.e., no cargo) and we must optimize their driving routes. This research improves the scientificity and spatial analytical accuracy of measuring traffic carbon emissions, thus enriching the sustainable development theory of the transportation, practically promoting the precise emission reduction and green development of the transportation industry, and providing technical and strategic support for attaining dual carbon targets in China. Excess Commuting and Its Spatial Differentiation Pattern in Guangzhou Supported by Cell Phone Signaling Data Wangbao Liu, Jie Chen 2022, 42 (6):  965-972.  doi: 10.13284/j.cnki.rddl.003494 Abstract ( )   HTML ( )   PDF (1649KB) ( )   Excess commuting is an important indicator to understand the spatial organization and commuting efficiency of the urban job-housing space. With the advantages of full sample and high precision, cell phone signaling data help describe the urban job-housing relationship and commuting pattern at the micro scale. The job-housing relationship pattern at the micro scale is conducive to a more accurate assessment of urban excess commuting. 
Using cell phone signaling data to construct a job-housing Origin And Destination (OD) contact matrix of the Residents' Committee, this study analyzes excess commuting and its micro-structure in Guangzhou. This study finds the excess commuting in Guangzhou to be 76.01%, which is relatively high compared with that of other cities in China as well as cities in the West; this indicates that the overall efficiency in job-housing spatial organization is low. From the perspective of local excess commuter rate, there are obvious characteristics of high excess commuter rate at the edge of the region and in the old city, and the distribution of high excess commuter rate has a certain correlation with the direction of subway rail transit. The construction of rail transit tends to reduce residents' sensitivity to commuting distance, which leads to an increase in excess commuting. Simultaneously, large suburban real estate communities and industrial agglomeration areas have a high excess commuting rate because of the relatively single urban function. Although the function of the city center is relatively diversified, it is easy to form a high excess commuting rate due to the impact of high cost of living. In the suburbs of Guangzhou, areas with specific functions, such as University Towns and urban villages, have a lower excess commuting rate and better organizational efficiency in the job-housing space. Relevant public policies to improve the spatial organizational efficiency of job-housing in big cities require not only focusing on improving the balance of regional job-housing but also placing great emphasis on optimizing the urban functional structure and reducing regional differences in housing costs. Excessive single functional development in the suburbs will affect the job-housing balance in the region. Hence, it is necessary to avoid large-scale "sleeping cities" and industrial new towns. The choice of workplace and residential location is often a rational one made based on cost of living and commuting. Reducing regional differences in cost of living is the most important way to eliminate the regional job-housing imbalance. Strengthening the equalization of public service facilities between central urban areas and suburbs, improving the traffic convenience in suburbs, and reducing regional differences in housing prices are important measures to reduce regional differences in cost of living.
2023-03-21 17:45:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.180145263671875, "perplexity": 4406.740320711414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943704.21/warc/CC-MAIN-20230321162614-20230321192614-00313.warc.gz"}
https://codegolf.stackexchange.com/questions/123406/ccc-2016-circle-of-life
# Before I begin, this challenge was not mine originally. Credits to The University of Waterloo. This came from the Canadian Computing Competition 2016, Senior Problem 5. Here is a clickable link to the contest PDF: http://cemc.uwaterloo.ca/contests/computing/2016/stage%201/seniorEn.pdf Here is a link to the site: http://cemc.uwaterloo.ca/contests/past_contests.html # Challenge Given a wrapping array of two constant values, determine the configuration after n evolutions for positive integer input n. These two values represent a living cell and a dead cell. Evolutions work like this: # Evolution! After each iteration, a cell is alive if it had exactly one living neighbor in the previous iteration. Any less and it dies of loneliness; any more and it dies of overcrowding. The neighbourhood is exclusive: i.e. each cell has two neighbours, not three. For example, let's see how 1001011010 would evolve, where 1 is a living cell and 0 is a dead cell: (0) 1 0 0 1 0 1 1 0 1 0. A living cell with a dead cell on both sides of it (such as the fourth cell) dies of loneliness. A dead cell with a living cell on one side and a dead cell on the other (such as the third cell) becomes alive. A dead cell with a living cell on both sides of it (such as the fifth cell) stays dead from overcrowding. # Winning Criteria Shortest code wins. # I/O Input will be a list of the cell states as two consistent values, and an integer representing the number of iterations, in some reasonable format. Output is to be a list of the cell states after the specified number of iterations. # Test Cases start, iterations -> end 1001011010, 1000 -> 1100001100 100101011010000, 100 -> 000110101001010 0000000101011000010000010010001111110100110100000100011111111100111101011010100010110000100111111010, 1000 -> 1001111111100010010100000100100100111010010110001011001101010111011011011100110110100000100011011001 • I don't think this should be tagged as code golf if byte count is merely a tiebreaker. I'm also not sure if it is a good tiebreaker, as the contest will degenerate to a code golf competition if you can simply port answers to a more concise language to win. – Dennis May 30 '17 at 3:10 • @Dennis Right, I will remove the tag. What do you suggest for tiebreaking then; earliest submission is another one of my ideas. – HyperNeutrino May 30 '17 at 3:12 • I'm voting as unclear for the moment since it's unknowable what is meant by complexity when there are multiple parameters. – feersum May 30 '17 at 3:43 • @feersum, there is a tiny bit of play in fastest-algorithm. The naïve algorithm takes Theta(nt) where n is the length of the array and t is the number of evolutions; a faster algorithm takes Theta(n lg t). – Peter Taylor May 30 '17 at 7:23 • @Notts90 I hope my latest edit clarifies it more. – HyperNeutrino May 30 '17 at 12:56 # APL (Dyalog), 14 bytes Prompts for start state as Boolean list and then for number of iterations (1∘⌽≠¯1∘⌽)⍣⎕⊢⎕ Try it online! ⎕ numeric prompt (for Boolean list of start state) ⊢ on that, apply ()⍣⎕ the following tacit function, numeric-prompt times ¯1∘⌽ the argument rotated one step right ≠ different from (XOR) 1∘⌽ the argument rotated one step left # Jelly, 7 bytes ṙ2^ṙ-µ¡ Try it online! Explanation ṙ2^ṙ-µ¡ µ¡ - repeat a number of times equal to input 2: ṙ2 - previous iteration rotated 2 to the left ^ - XOR-ed with: - (implicit) previous iteration ṙ- - rotate back (by negative 1 to the left) # 05AB1E, 6 bytes FDÀÀ^Á Try it online!
Explanation

F  # input_1 times do
D  # duplicate last iteration (input_2 the first iteration)
ÀÀ # rotate left twice
^  # XOR
Á  # rotate right
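For readers less fluent in golfing languages, the rule all three answers implement (each cell's next state is the XOR of its two wrapping neighbours) can be sketched in a few lines of Python. This is only an illustrative reference implementation, not a competitive submission:

```python
def evolve(cells, iterations):
    """cells is a list of 0/1 values; the neighbourhood wraps around."""
    n = len(cells)
    for _ in range(iterations):
        # alive next step iff exactly one neighbour is alive,
        # i.e. the XOR of the left and right neighbours
        cells = [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]
    return cells

# first test case from the challenge
start = [int(c) for c in "1001011010"]
assert "".join(map(str, evolve(start, 1000))) == "1100001100"
```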
2020-07-13 18:10:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28573209047317505, "perplexity": 2615.589716568689}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657146247.90/warc/CC-MAIN-20200713162746-20200713192746-00248.warc.gz"}
https://www.riskconcile.com/wp-content/uploads/2020/03/cornish_fisher_2.html
### Risk Management Solutions

September 30, 2016

# Anatomy of Cornish-Fisher

### Introduction

The three European Supervisory Authorities (ESMA, EIOPA and EBA) demand that investors are to be informed in a standardised way on the embedded risk of financial products. In their new proposal, these regulators have embraced the concept of the VaR Equivalent Volatility (VEV). The VEV-number is mapped into a market risk category represented by an integer number in the range $[0, 7]$. The VaR Equivalent Volatility is derived from a Value at Risk number (VaR). For this VaR measure the Cornish-Fisher expansion is used. In our previous contribution we touched upon this approach. In this whitepaper we examine in detail the validity of the Cornish-Fisher expansion.

### Value at Risk

The Value at Risk is defined for a particular holding period $T$ and confidence level. A 97.5% VaR number for a particular financial instrument or portfolio is the amount such that the probability of a loss exceeding this amount equals 2.5%. For normally distributed log returns, the parametric $97.5\%$-VaR for horizon $T$ is given by the following expression: $$\label{eq:VaR} VaR_{0.975} = -\frac{1}{2} \sigma^2T +z^*\sigma \sqrt{T},$$ with $\sigma$ the volatility of the daily log returns and $T$ the time period for which we want to obtain the VaR. Given the fact that we are dealing with daily returns, $T$ is expressed as a number of business days. The value for $z^*$ is $-1.96$ for a 97.5% confidence level.

### Cornish-Fisher VaR

Since the log returns of financial assets are often skewed and as such not normally distributed, using the VaR formula above will lead to biased results. A possible solution is to use the Cornish-Fisher expansion to estimate quantiles of such a non-normal distribution. This is the proposal on which the risk calculations for packaged retail instruments (PRIIPs) were founded. The Cornish-Fisher expansion, based on four moments, transforms a standard Gaussian variable $z$ into a non-Gaussian random variable $Z$, according to the following formula: $$\label{eq:CF} Z=z+(z^2-1)\frac{S}{6} + (z^3-3z)\frac{K}{24} - (2z^3-5z)\frac{S^2}{36},$$ with $S$ a skewness parameter and $K$ an (excess) kurtosis parameter. It is important to realise that $S$ and $K$ are parameters. As such they can be very different from the actual skew and excess kurtosis of the obtained distribution following the Cornish-Fisher expansion. To obtain the $2.5\%$-quantile of the transformed distribution, we implement the $2.5\%$ quantile of the standard normal distribution, $z=-1.96$, in the above equation. This gives us \begin{align*} Z_{0.025}&= -1.96+((-1.96)^2-1)\frac{S}{6} + ((-1.96)^3-3( -1.96))\frac{K}{24} - (2(-1.96)^3-5(-1.96))\frac{S^2}{36}, \\ \text{or}& \\ Z_{0.025}&= -1.96+0.474\ S - 0.0687\ K + 0.146\ S^2. \end{align*} Using this obtained $Z_{0.025}$ as the $z^*$ quantile in our parametric $97.5\%$-VaR formula, we obtain the Cornish-Fisher formula for the VaR $$VaR_{0.975} = -\frac{1}{2} \sigma^2T + (-1.96+0.474\ S - 0.0687\ K + 0.146\ S^2)\sigma \sqrt{T}.$$ This is the formula for the VaR that is imposed by the European Supervisory Authorities in the PRIIPs regulations (March 2016).

### Skewness and Kurtosis Parameters

Using the Cornish-Fisher expansion we transform a standard normal distribution into a non-normal distribution. The skewness and kurtosis of this transformed distribution are called the actual (sample) skewness and the actual kurtosis.
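Before turning to the distinction between parameters and actual moments, here is a small numerical illustration of the Cornish-Fisher VaR formula from the previous section. This snippet is added purely for illustration and is not part of the original whitepaper or the regulation:

```python
import math

def cornish_fisher_quantile(z, S, K):
    """Cornish-Fisher transform of a standard normal quantile z,
    with skewness parameter S and excess kurtosis parameter K."""
    return (z
            + (z**2 - 1) * S / 6
            + (z**3 - 3*z) * K / 24
            - (2*z**3 - 5*z) * S**2 / 36)

def var_975(sigma, T, S=0.0, K=0.0):
    """97.5% VaR of the T-day log return, following the formula in the text."""
    z_star = cornish_fisher_quantile(-1.96, S, K)
    return -0.5 * sigma**2 * T + z_star * sigma * math.sqrt(T)

# example: 1% daily volatility, one year (256 business days), mild skew and kurtosis
print(var_975(sigma=0.01, T=256, S=-0.3, K=1.2))   # a negative log-return quantile
```

For S = K = 0 this reduces to the Gaussian formula of the previous section.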
The actual skewness and actual kurtosis do not correspond with the skewness and kurtosis parameters in the Cornish-Fisher expansion. This raises the question of what values to use for the skewness parameter $S$ and kurtosis parameter $K$. Denoting the actual skewness with $\gamma_1$ and the actual excess kurtosis with $\gamma_2$, we have the following relations between the actual and parameter skewness and kurtosis: \begin{align*} \gamma_1 & = \dfrac{S-\frac{76}{216}S^3 + \frac{85}{1296}S^5 + \frac{1}{4}KS - \frac{13}{144}KS^3 + \frac{1}{32}K^2S}{\left(1 + \frac{1}{96}K^2 + \frac{25}{1296}S^4 - \frac{1}{36}KS^2\right)^{1.5}} \\ \\ \gamma_2 & =\left[\dfrac{3 + K + \frac{7}{16}K^2 + \frac{3}{32}K^3 + \frac{31}{3072}K^4 - \frac{7}{216}S^4 - \frac{25}{486}S^6 + \frac{21665}{559872} S^8 - \frac{7}{12}KS^2 + \frac{113}{452}KS^4 - \frac{5155}{46656}KS^6 - \frac{7}{24}K^2S^2 + \frac{2455}{20736}K^2S^4 - \frac{65}{1152}K^3S^2}{\left(1 + \frac{1}{96}K^2 + \frac{25}{1296}S^4 - \frac{1}{36}KS^2\right)^2} \right]-3 \end{align*}

We would like to have the actual skewness $\gamma_1$ and the actual kurtosis $\gamma_2$ corresponding with the sample skewness and the sample kurtosis of the historical log returns. The above equations can be solved numerically to obtain the skewness parameter $S$ and the kurtosis parameter $K$ using the sample skewness and kurtosis for $\gamma_1$ and $\gamma_2$. For small values of the skewness and excess kurtosis, the actual and parameter skewness and kurtosis will coincide. The table below illustrates values of the actual skewness and kurtosis ($\gamma_1, \gamma_2$) with corresponding skewness and kurtosis parameters $(S,K)$. A problem with these equations is that they do not have a single solution. The values found for the skewness and kurtosis parameters depend on the starting values used in the numerical optimization.

ACTUAL                        PARAMETER
$\gamma_1$    $\gamma_2$      $S$          $K$
0             0               0            0
0.1           0.2             0.0958       0.1872
-0.2          0.5             -0.1821      0.4317
1             2               0.8833       3.5875
-1.5          8               -0.9320      3.5875

### Domain of validity

Although the Cornish-Fisher expansion gives a good method to transform a normal distribution into a non-normal distribution, there are limits on the validity of this expansion. The transformation must be increasing such that the order of the quantiles of the distributions is conserved. This is the case if the skewness parameter $S$ is, in absolute value, smaller than $6(\sqrt{2}-1) \approx 2.4853$ and the kurtosis parameter $K$ has values between $\dfrac{36 + 11 S^2 - 2\sqrt{324-54S^2+\frac{1}{4}S^4}}{9}$ and $\dfrac{36 + 11 S^2 + 2\sqrt{324-54S^2+\frac{1}{4}S^4}}{9}.$ For a skewness parameter $S$ equal to zero, this corresponds with a kurtosis parameter $K$ between $0$ and $8$. The maximum possible kurtosis parameter $K$ is $11.55$. The limits on the skewness and kurtosis parameters also imply limits on the actual skewness $\gamma_1$ and the actual excess kurtosis $\gamma_2$. Both domains of validity are visualised in the figure below. The actual skewness $\gamma_1$ is limited by $-4.36$ and $4.36$ while the actual excess kurtosis can take values in the range $[0, 43.28]$. This is where it gets interesting, since we can use the domains above to check whether financial instruments easily satisfy these conditions.

### Example 1

An analysis was done on a sample of 62 exchange traded funds (ETF) with a US listing (Calculation Date: Sep 30, 2016). The required minimum market cap of the ETFs was $10 bn. A total of 13 of the selected ETFs were bond-based while the remaining 49 instruments were equity ETFs.
We investigated the skewness $\gamma_1$ and excess kurtosis $\gamma_2$ of the log returns to see if the domain of validity is large enough to use the Cornish-Fisher expansion in practical VaR calculations. For all ETFs in our sample the actual skewness and kurtosis were calculated using a 5-year history of daily log returns. The results illustrate that all the ETFs are inside the validity domain. The corresponding skewness and kurtosis parameters ($S$ and $K$) are logically also inside their validity domain. So for all these ETFs the Cornish-Fisher expansion can be used.

### Example 2

The verification on all the ETFs in our previous sample was based on skew and kurtosis values computed from 5 years of daily returns. The outcome of the analysis changes when considering a different observation period. As an example, we retained the iShares China Large Cap ETF (FXI) and studied the validity of using Cornish-Fisher to determine VaR using a rolling 90-day window. The figure above now points out that, for some time periods, the observed values for the sample skew and kurtosis fell outside the validity domain. In this particular case, the VaR result and resulting VEV-number would have been flawed.

#### Matching up with Variance Gamma

In subsequent research we will investigate if other stochastic models could be a valid candidate to stand up against a VaR based on Cornish-Fisher. A possible choice is the Variance Gamma (VG) model. There is no diffusion component in a VG process and it is therefore a pure jump process. The jumps take place at random times. In the VG setting there is also a validity domain linking excess kurtosis and skew. The validity domain for the excess kurtosis ($\gamma_2$) for a particular skew ($\gamma_1$) is now given by the following equation: $\gamma_2 > \frac{1}{2}(6 + 3\gamma_1^2)-3$ The figure below compares the wider domain of Variance Gamma with the validity domain of Cornish-Fisher. Moreover, the VG process comes with the elegant property that its parameters can be calibrated from listed option prices.

#### References

Maillard, Didier. A User's Guide to the Cornish Fisher Expansion. 2012. Electronic copy available at http://ssrn.com/abstract=1997178
Guillaume, Florence and Schoutens, Wim. A Moment Matching Market Implied Calibration. Electronic copy available at http://ssrn.com/abstract=2021466
EIOPA. Final Draft Regulatory Technical Standards. 31 March 2016.
2022-12-01 13:41:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 3, "x-ck12": 0, "texerror": 0, "math_score": 0.8823127746582031, "perplexity": 958.312648336578}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710813.48/warc/CC-MAIN-20221201121601-20221201151601-00681.warc.gz"}
http://mathhelpforum.com/differential-equations/162511-differential-equations-intergating-factors.html
Math Help - Differential Equations Integrating Factors

1. Differential Equations Integrating Factors

Hi
Having trouble with the following problem:
Solve: $\frac{dy}{dx} - \frac{4y}{x} = x^4$
$\frac{dy}{dx} - 4y = x^5$
$e^{\int p(x) dx}$
$e^{\int-4y dx}$
$e^{-2y^2}$
$e^{-2y^2}(\frac{dy}{dx}-4y)=e^{-2y^2}x^5$
$e^{-2y^2}y=e^{-2y^2}x^5$
$e^{-2y^2}y=\int e^{-2y^2}x^5$
$udv = uv - \int vdu$
$u=e^{-2y^2}$
$du=-4ye^{-2y^2}$
$dv=x^5$
$v=\frac{x^6}{6}$
$e^{-2y^2}x^5 = \frac{e^{-2y^2}x^6}{6} - \int{\frac{x^6}{6} * -4ye^{-2y^2}}$
$e^{-2y^2}x^5 = \frac{e^{-2y^2}x^6}{6} - \int{\frac{x^6}{6} * -4ye^{-2y^2}}$
i continued doing the integration and i got nowhere, where is my mistake??
P.S

2. Originally Posted by Paymemoney
Hi
Having trouble with the following problem:
Solve: $\frac{dy}{dx} - \frac{4y}{x} = x^4$
$\frac{dy}{dx} - 4y = x^5$
$e^{\int p(x) dx}$
$e^{\int-4y dx}$
$e^{-2y^2}$
$e^{-2y^2}(\frac{dy}{dx}-4y)=e^{-2y^2}x^5$
$e^{-2y^2}y=e^{-2y^2}x^5$
$e^{-2y^2}y=\int e^{-2y^2}x^5$
$udv = uv - \int vdu$
$u=e^{-2y^2}$
$du=-4ye^{-2y^2}$
$dv=x^5$
$v=\frac{x^6}{6}$
$e^{-2y^2}x^5 = \frac{e^{-2y^2}x^6}{6} - \int{\frac{x^6}{6} * -4ye^{-2y^2}}$
$e^{-2y^2}x^5 = \frac{e^{-2y^2}x^6}{6} - \int{\frac{x^6}{6} * -4ye^{-2y^2}}$
i continued doing the integration and i got nowhere, where is my mistake??
P.S
p(x) = -4/x NOT -4y.

3. Also, you can't multiply both sides by $x$ without an $x$ ending up attached to $\displaystyle \frac{dy}{dx}$.

4. ok i have tried it again my answer does not match the book's answers.
$\frac{dy}{dx} - \frac{4y}{x} = x^4$
$e^\int{ln(x^4)} = x^4$
$x^4(\frac{dy}{dx} - \frac{4y}{x}) = x^8$
$(\frac{d(x^4y)}{dx} = x^8$
$x^4y = \int x^8$
$x^4y = \frac{x^9}{9} + C$
$y = \frac{x^5}{9} + \frac{C}{x^4}$

5. Originally Posted by Paymemoney
ok i have tried it again my answer does not match the book's answers.
$\frac{dy}{dx} - \frac{4y}{x} = x^4$
$e^{\int \ln(x^4) \, dx} = x^4$ Mr F says: I have made some formatting and notational changes to this line without changing its content. My request to you is this: Please explain where it has come from, noting that, as has previously been pointed out to you, p(x) = -4/x.
$x^4(\frac{dy}{dx} - \frac{4y}{x}) = x^8$
$(\frac{d(x^4y)}{dx} = x^8$
$x^4y = \int x^8$
$x^4y = \frac{x^9}{9} + C$
$y = \frac{x^5}{9} + \frac{C}{x^4}$
..

6. yep i know where my mistake was, i got the right answer now

7. As already pointed out you need $\displaystyle e^{\int \frac{-4}{x}~dx} = e^{-4\ln x} = e^{\ln x^{-4}} = x^{-4}$ Now multiply this guy through the entire equation.
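Carrying post 7's integrating factor through gives $\frac{d}{dx}(x^{-4}y)=1$, hence $y = x^5 + Cx^4$. A quick symbolic check of that conclusion (this snippet is an illustration added here, not part of the thread) could look like this in Python with SymPy:

```python
import sympy as sp

x, C1 = sp.symbols('x C1')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x) - 4*y(x)/x, x**4)
print(sp.dsolve(ode, y(x)))   # expect something equivalent to y(x) = C1*x**4 + x**5

# direct verification of the candidate solution
sol = x**5 + C1*x**4
assert sp.simplify(sol.diff(x) - 4*sol/x - x**4) == 0
```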
2014-08-27 12:14:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 48, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8897486329078674, "perplexity": 1129.7138644878155}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500829210.91/warc/CC-MAIN-20140820021349-00230-ip-10-180-136-8.ec2.internal.warc.gz"}
https://1201.info/ch.-1-descriptive-statistics.html
# Ch. 1 Descriptive Statistics Sections covered: all ## 1.2 Pictorial and Tabular Methods in Descriptive Statistics Skip: Example 1.7, p. 15 (double-digit leaves) Skip: “Dotplots,” pp. 15-16 • Stem-and-leaf display: prices <- c(379, 425, 450, 450, 499, 529, 535, 535, 545, 599, 665, 675, 699, 699, 725, 725, 745, 799) stem(prices) ## ## The decimal point is 2 digit(s) to the right of the | ## ## 3 | 8 ## 4 | 355 ## 5 | 03445 ## 6 | 078 ## 7 | 00335 ## 8 | 0 Note that histograms are drawn with unbinned data. R does the binning in the process of drawing the histogram. • Frequency histogram: prices <- c(379, 425, 450, 450, 499, 529, 535, 535, 545, 599, 665, 675, 699, 699, 725, 725, 745, 799) hist(prices) hist(prices, breaks = c(300, 400, 500, 600, 700, 800), col = "lightblue") • Density histogram: prices <- c(379, 425, 450, 450, 499, 529, 535, 535, 545, 599, 665, 675, 699, 699, 725, 725, 745, 799) hist(prices, freq = FALSE, breaks = c(300, 400, 500, 600, 700, 800), col = "lightblue", las = 1) Cumulative frequency histogram For this type of histogram, we need access to the bin counts, in order to calculate the cumulative frequencies. The hist() function returns these values, if assigned to a variable: x <- c(1, 1, 1, 1, 1, 5, 5, 5, 7, 7, 8) myhistdata <- hist(x) myhistdata ## $breaks ## [1] 1 2 3 4 5 6 7 8 ## ##$counts ## [1] 5 0 0 3 0 2 1 ## ## $density ## [1] 0.45454545 0.00000000 0.00000000 0.27272727 0.00000000 0.18181818 0.09090909 ## ##$mids ## [1] 1.5 2.5 3.5 4.5 5.5 6.5 7.5 ## ## $xname ## [1] "x" ## ##$equidist ## [1] TRUE ## ## attr(,"class") ## [1] "histogram" The particular information we want is $counts: myhistdata$counts ## [1] 5 0 0 3 0 2 1 The cumulative frequencies are: cumsum(myhistdata$counts) ## [1] 5 5 5 8 8 10 11 To plot them, we need to use a bar chart, not a histogram, since we already have the y-axis values: barplot(cumsum(myhistdata$counts)) Cleaned up: barplot(cumsum(myhistdata$counts), col = "lightblue", space = 0, # remove gaps between bars las = 1, # make all tick mark labels horizontal ylim = c(0, 12), # make the y-axis longer names.arg = myhistdata$mids ) ## 1.3 Measures of location Skip: Example 1.16, p. 33 (trimmed mean) Skip: “Categorical Data and Sample Proportions,” p. 34 (We’ll return to this topic later.) prices <- c(379, 425, 450, 450, 499, 529, 535, 535, 545, 599, 665, 675, 699, 699, 725, 725, 745, 799) mean(prices) ## [1] 593.2222 median(prices) ## [1] 572 ## quartiles quantile(prices) ## 0% 25% 50% 75% 100% ## 379.0 506.5 572.0 699.0 799.0 ## trimmed mean mean(prices, trim = .1) ## 10% trimmed mean ## [1] 593.75 ## 1.4 Measures of variability Skip: extreme outliers (p. 42) We will define outliers for boxplots to be observations that are more than 1.5 times the fourth spread from the closest fourth. They may be indicated with either a solid or open circle (in contrast to the book which uses one for mild outliers and the other for extreme outliers.) 
• Sample variance: prices <- c(379, 425, 450, 450, 499, 529, 535, 535, 545, 599, 665, 675, 699, 699, 725, 725, 745, 799) var(prices) ## [1] 15981.48 • Sample standard deviation: sqrt(var(prices)) ## [1] 126.4179 sd(prices) ## [1] 126.4179 • Five number summary (min, lower-hinge, median, upper-hinge, max) fivenum(prices) ## [1] 379 499 572 699 799 • Boxplots prices <- c(379, 425, 450, 450, 499, 529, 535, 535, 545, 599, 665, 675, 699, 699, 725, 725, 745, 799) boxplot(prices) boxplot(prices, horizontal = TRUE) PTSD <- c(10, 20, 25, 28, 31, 35, 37, 38, 38, 39, 39, 42, 46) Healthy <- c(23, 39, 40, 41, 43, 47, 51, 58, 63, 66, 67, 69, 72) df <- data.frame(Healthy, PTSD) boxplot(df, horizontal = TRUE) ## Practice Exercises 1. Using the built-in dataset ToothGrowth in R, visualize the data and comment on the effectiveness of different functions in the context. [Ans] # The first 5 rows of the data head(ToothGrowth, 5) ## len supp dose ## 1 4.2 VC 0.5 ## 2 11.5 VC 0.5 ## 3 7.3 VC 0.5 ## 4 5.8 VC 0.5 ## 5 6.4 VC 0.5 # Five number summary fivenum(ToothGrowth$len) ## [1] 4.20 12.55 19.25 25.35 33.90 # Boxplot # '$' extracts the column by name boxplot(ToothGrowth$len) # Stem-and-leaf Plot stem(ToothGrowth$len) ## ## The decimal point is 1 digit(s) to the right of the | ## ## 0 | 4 ## 0 | 5667789 ## 1 | 00001124 ## 1 | 55555677777899 ## 2 | 001222333344 ## 2 | 55566666667779 ## 3 | 0134 # Histogram h <- hist(ToothGrowth$len) # Cumulative Histogram h$counts <- cumsum(h\$counts) plot(h) head(): directly see how the dataset looks; useful when the dataset is large and it’s difficult to display all rows and columns together. fivenum(): returns the minimum value, lower fourth, median, upper fourth, and maximum value boxplot(): visualizes the five number summary plus outliers. (It’s clear that the ToothGrowth data is not skewed.) stem(): compares the number of data points that fall in different bins. (Here we can see that most values are between 20 and 29.) hist(): draws a histogram – values are grouped in bins cumsum(): takes a vector and returns the cumulative sums
2021-10-17 15:21:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1700086146593094, "perplexity": 1904.3094345811571}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585178.60/warc/CC-MAIN-20211017144318-20211017174318-00096.warc.gz"}
https://codereview.stackexchange.com/questions/268310/kruskals-and-prims-algorithm-minimum-spanning-tree
Kruskal's and Prim's algorithm (Minimum spanning tree)

I completed my class assignment, and I would like to have a critical code review of this implementation. How can I make this more concise, pythonic and readable? Also, while writing docstrings, is there a way to perfectly arrange the words systematically so that the docstring is consistent?

This code is about Kruskal's algorithm and Prim's algorithm to find the minimum spanning tree. They both fall under a class of algorithms called greedy algorithms as they find the local optimum in the hopes of finding a global optimum.

Summary of Kruskal's algorithm

We start from the edges with the lowest weight and keep adding edges until we reach our goal. The steps for implementing Kruskal's algorithm are as follows:

1. Sort all the edges from low weight to high.
2. Take the edge with the lowest weight and add it to the spanning tree.
3. If adding the edge created a cycle, then reject this edge.
4. Keep adding edges until we reach all vertices.

Summary of Prim's algorithm

The steps for implementing Prim's algorithm are as follows:

1. Initialize the minimum spanning tree with a vertex chosen at random.
2. Find all the edges that connect the tree to new vertices, find the minimum and add it to the tree.
3. Keep repeating step 2 until we get a minimum spanning tree.

from typing import List, Union


class UndirectedGraph:
    def __init__(self, num_vertices: int):
        """Initializing the member variables of the graph instance."""
        self.graph: List[List] = []
        self.num_vertices: int = num_vertices

    def add_edge(
        self,
        vertex_one: Union[int, float, str],
        vertex_two: Union[int, float, str],
        weight: int
    ):
        """
        The graph representation will be in the form of an edge list.
        """
        self.graph.append([vertex_one, vertex_two, weight])

    def find_parent(
        self,
        parent: List[int],
        vertex: int
    ):
        """This function will find the parent of a vertex."""
        if parent[vertex] == vertex:
            return vertex
        return self.find_parent(parent, parent[vertex])

    def union(
        self,
        parent: List,
        rank: List,
        node_one: int,
        node_two: int
    ):
        vertex_one = self.find_parent(parent, node_one)
        vertex_two = self.find_parent(parent, node_two)

        if rank[vertex_one] < rank[vertex_two]:
            parent[vertex_one] = vertex_two
        elif rank[vertex_one] > rank[vertex_two]:
            parent[vertex_two] = vertex_one
        else:
            parent[vertex_two] = vertex_one
            rank[vertex_one] += 1

    def print_min_spanning_tree(self, tree: List[List]):
        total_cost: int
        total_cost = 0
        for vertex_one, vertex_two, weight in tree:
            print(f"{vertex_one} - {vertex_two} | {weight}")
            total_cost += weight
        print(f"Total cost of minimum spanning tree: {total_cost}")

    def kruskal_min_spanning_tree(self) -> List[List]:
        minimum_spanning_tree: List[List]
        graph_sorted_by_weights: List[List]
        rank: List[int]

        minimum_spanning_tree = []
        graph_sorted_by_weights = sorted(self.graph, key=lambda element: element[2])
        rank = [0] * self.num_vertices
        parent = [num for num in range(self.num_vertices)]

        for vertex_one, vertex_two, weight in graph_sorted_by_weights:
            vertex_one_parent = self.find_parent(parent, vertex_one)
            vertex_two_parent = self.find_parent(parent, vertex_two)
            if vertex_one_parent != vertex_two_parent:
                minimum_spanning_tree.append([vertex_one, vertex_two, weight])
                self.union(parent, rank, vertex_one_parent, vertex_two_parent)
        return minimum_spanning_tree

    def prim_min_spanning_tree(self):
        visited: List = [0]
        minimum_spanning_tree: List = []
        while len(visited) != len({i[1] for i in self.graph}):
            valid_edges = self.get_valid_edges(visited)
            smallest_edge = min(valid_edges, key=lambda l: l[2])
            minimum_spanning_tree.append(smallest_edge)
            visited.append(smallest_edge[1])
        return minimum_spanning_tree

    def get_valid_edges(self, visited: List) -> List:
        return [
            edges
            for edges in self.graph
            if edges[0] in visited and edges[1] not in visited
        ]


graph = UndirectedGraph(6)
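The driver code is cut off right after graph = UndirectedGraph(6) in the excerpt above. Purely for illustration (the edges below are invented, not the assignment's data, and the name of the edge-adding method is assumed to be add_edge as reconstructed above), the class can be exercised like this:

```python
# hypothetical usage; edge data invented for illustration only
graph = UndirectedGraph(4)
graph.add_edge(0, 1, 10)
graph.add_edge(0, 2, 6)
graph.add_edge(0, 3, 5)
graph.add_edge(1, 3, 15)
graph.add_edge(2, 3, 4)

mst = graph.kruskal_min_spanning_tree()
graph.print_min_spanning_tree(mst)   # total cost for this example: 19
```

One observation relevant to the review itself: get_valid_edges only looks at edges stored as [u, v, w] with u already visited, so the Prim variant as written appears to assume either that each undirected edge is added in both orientations or that every vertex happens to be reachable through first endpoints starting from vertex 0.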
2021-10-27 11:07:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1824420541524887, "perplexity": 9916.180894884115}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588113.25/warc/CC-MAIN-20211027084718-20211027114718-00244.warc.gz"}
http://www.oalib.com/relative/3949618
Mathematics , 2011, Abstract: In this paper we construct a distinguished Riemannian geometrization on the dual 1-jet space J^{1*}(T,M) for the multi-time quadratic Hamiltonian functions. Our geometrization includes a nonlinear connection N, a generalized Cartan canonical N-linear connection (together with its local d-torsions and d-curvatures), naturally provided by a given quadratic Hamiltonian function depending on polymomenta. Physics , 1995, DOI: 10.1142/S0217732395001332 Abstract: We represent a classical Maxwell-Bloch equation and related to it positive part of the AKNS hierarchy in geometrical terms. The Maxwell-Bloch evolution is given by an infinitesimal action of a nilpotent subalgebra $n_+$ of affine Lie algebra $\hat {sl}_2$ on a Maxwell-Bloch phase space treated as a homogeneous space of $n_+$. A space of local integrals of motion is described using cohomology methods. We show that hamiltonian flows associated to the Maxwell-Bloch local integrals of motion (i.e. positive AKNS flows) are identified with an infinitesimal action of an abelian subalgebra of the nilpotent subalgebra $n_+$ on a Maxwell- Bloch phase space. Possibilities of quantization and latticization of Maxwell-Bloch equation are discussed. Kannan Nambiar Mathematics , 2002, Abstract: Five geometrical eqivalents of Goldbach conjecture are given, calling one of them Fermat Like Theorem. Physics , 2015, Abstract: Hubbard-like Hamiltonians are widely used to describe on-site Coulomb interactions in magnetic and strongly-correlated solids, but there is much confusion in the literature about the form these Hamiltonians should take for shells of p and d orbitals. This paper derives the most general s, p and d orbital Hubbard-like Hamiltonians consistent with the relevant symmetries, and presents them in ways convenient for practical calculations. We use the full configuration interaction method to study p and d orbital dimers and compare results obtained using the correct Hamiltonian and the collinear and vector Stoner Hamiltonians. The Stoner Hamiltonians can fail to describe properly the nature of the ground state, the time evolution of excited states, and the electronic heat capacity. Physics , 2003, DOI: 10.1088/0305-4470/36/31/311 Abstract: It is shown that the radial part of the Hydrogen Hamiltonian factorizes as the product of two not mutually adjoint first order differential operators plus a complex constant epsilon. The 1-susy approach is used to construct non-hermitian Hamiltonians with hydrogen spectra. Other non-hermitian Hamiltonians are shown to admit an extra complex energy' at epsilon. New self-adjoint hydrogen-like Hamiltonians are also derived by using a 2-susy transformation with complex conjugate pairs epsilon, (c.c) epsilon. Mathematics , 2009, DOI: 10.1088/1751-8113/42/42/425303 Abstract: The Bloch sphere is a familiar and useful geometrical picture of the dynamics of a single spin or two-level system's quantum evolution. The analogous geometrical picture for three-level systems is presented, with several applications. The relevant SU(3) group and su(3) algebra are eight-dimensional objects and are realized in our picture as two four-dimensional manifolds describing the time evolution operator. 
The first, called the base manifold, is the counterpart of the S^2 Bloch sphere, whereas the second, called the fiber, generalizes the single U(1) phase of a single spin. Now four-dimensional, it breaks down further into smaller objects depending on alternative representations that we discuss. Geometrical phases are also developed and presented for specific applications. Arbitrary time-dependent couplings between three levels or between two spins (qubits) with SU(3) Hamiltonians can be conveniently handled through these geometrical objects. Physics , 2008, DOI: 10.1103/PhysRevD.80.124014 Abstract: The equations of the linearized first post-Newtonian approximation to general relativity are often written in "gravitoelectromagnetic" Maxwell-like form, since that facilitates physical intuition. Damour, Soffel and Xu (DSX) (as a side issue in their complex but elegant papers on relativistic celestial mechanics) have expressed the first post-Newtonian approximation, including all nonlinearities, in Maxwell-like form. This paper summarizes that DSX Maxwell-like formalism (which is not easily extracted from their celestial mechanics papers), and then extends it to include the post-Newtonian (Landau-Lifshitz-based) gravitational momentum density, momentum flux (i.e. gravitational stress tensor) and law of momentum conservation in Maxwell-like form. The authors and their colleagues have found these Maxwell-like momentum tools useful for developing physical intuition into numerical-relativity simulations of compact binaries with spin. Yakov Itin Mathematics , 2005, DOI: 10.1088/0264-9381/23/10/008 Abstract: We study which geometric structure can be constructed from the vierbein (frame/coframe) variables and which field models can be related to this geometry. The coframe field models, alternative to GR, are known as viable models for gravity, since they have the Schwarzschild solution. Since the local Lorentz invariance is violated, a physical interpretation of additional six degrees of freedom is required. The geometry of such models is usually given by two different connections -- the Levi-Civita symmetric and metric-compatible connection and the Weitzenbock flat connection. We construct a general family of linear connections of the same type, which includes two connections above as special limiting cases. We show that for dynamical propagation of six additional degrees of freedom it is necessary for the gauge field of infinitesimal transformations (antisymmetric tensor) to satisfy the system of two first order differential equations. This system is similar to the vacuum Maxwell system and even coincides with it on a flat manifold. The corresponding `Maxwell-compatible connections'' are derived. Alternatively, we derive the same Maxwell-type system as a symmetry conditions of the viable models Lagrangian. Consequently we derive a nontrivial decomposition of the coframe field to the pure metric field plus a dynamical field of infinitesimal Lorentz rotations. Exact spherical symmetric solution for our dynamical field is derived. It is bounded near the Schwarzschild radius. Further off, the solution is close to the Coulomb field. Physics , 2001, DOI: 10.1142/S021773230100295X Abstract: Previous $\lambda$-deformed {\it non-Hermitian} Hamiltonians with respect to the usual scalar product of Hilbert spaces dealing with harmonic oscillator-like developments are (re)considered with respect to a new scalar product in order to take into account their property of self-adjointness. 
The corresponding deformed $\lambda$-states lead to new families of coherent states according to the DOCS, AOCS and MUCS points of view. Physics , 2010, Abstract: We construct one soliton solutions for the nonlinear Schroedinger equation with variable quadratic Hamiltonians in a unified form by taking advantage of a complete (super) integrability of generalized harmonic oscillators. The soliton wave evolution in external fields with variable quadratic potentials is totally determined by the linear problem, like motion of a classical particle with acceleration, and the (self-similar) soliton shape is due to a subtle balance between the linear Hamiltonian (dispersion and potential) and nonlinearity in the Schroedinger equation by the standards of soliton theory. Most linear (hypergeometric, Bessel) and a few nonlinear (Jacobian elliptic, second Painleve transcendental) classical special functions of mathematical physics are linked together through these solutions, thus providing a variety of nonlinear integrable cases. Examples include bright and dark solitons, and Jacobi elliptic and second Painleve transcendental solutions for several variable Hamiltonians that are important for current research in nonlinear optics and Bose-Einstein condensation. The Feshbach resonance matter wave soliton management is briefly discussed from this new perspective.
2020-01-26 12:00:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8031830191612244, "perplexity": 1090.9532095270922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251688806.91/warc/CC-MAIN-20200126104828-20200126134828-00160.warc.gz"}
https://stats.stackexchange.com/questions/117137/conditional-expectation-constant
# Conditional Expectation Constant If the conditional expectation E(Z|X) is a constant k, what can be inferred about Z? Since this means that whatever the value of x is given, Z is always k, does this imply that E(Z) is equal to k? • You might find the Wikipedia entry on the law of total expectation useful. There are similarly useful laws for total variance and total covariance. Sep 29 '14 at 7:00 • Please note that one key assertion in the question is incorrect: $Z$ is not necessarily a constant. For instance, when $Z$ and $X$ are independent, $E(Z|X)=E(Z)$ is a constant but $Z$ could have literally any distribution. On the other hand, a constant conditional expectation does not imply independence. One could start with any bivariate random variable $(X,Y)$, choose $k$, and by defining $Z=Y+k-E(Y|X)$ create $(X,Z)$ for which $E(Z|X)=k$. $X$ and $Z$ will not necessarily be independent. – whuber Sep 29 '14 at 14:34 • @whuber Can you give an example when $X$, $Z$ are not independent in the situation you described? Apr 7 '19 at 17:32 Regarding the second part, the answer is yes $$E(Z)=E(E(Z|X))=E(k)=k$$
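A small simulation (added here for illustration; it is not part of the thread) makes both points concrete. Following whuber's construction, take $Z = Y + k - E(Y|X)$ with $Y = X\varepsilon$ and $\varepsilon$ independent noise, so that $E(Y|X)=0$: then $E(Z|X)=k$, hence $E(Z)=k$ by the law of total expectation, yet $Z$ still depends on $X$ through its conditional variance $X^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 3.0, 1_000_000

X = rng.normal(size=n)
eps = rng.normal(size=n)
Z = X * eps + k                      # Z = Y + k - E(Y|X), with Y = X*eps and E(Y|X) = 0

print(Z.mean())                      # close to k = 3, as E(Z) = E(E(Z|X)) = k
print(Z[np.abs(X) < 0.1].mean(),     # conditional means are also close to k ...
      Z[np.abs(X) > 2.0].mean())
print(Z[np.abs(X) < 0.1].var(),      # ... but the conditional spread changes with X,
      Z[np.abs(X) > 2.0].var())      # so Z is not independent of X
```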
2021-10-16 16:02:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8553275465965271, "perplexity": 241.75542964974827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323584886.5/warc/CC-MAIN-20211016135542-20211016165542-00667.warc.gz"}
http://tex.stackexchange.com/questions?page=1&sort=active&pagesize=15
# All Questions 82 views ### Third grade spiral in TikZ - dimension too large I'm trying to draw a smooth third grade spiral (r^3 = a^3\cdot \varphi) over $60\times 60$mm grid and I'm getting the "Dimension too large"-error if I set $a > 320$ but I need to fill the grid area ... 11 views ### Help with Chapter Headers Left-Aligning I'm using the following code for the top of chapters which looks pretty cool: \usepackage{titlesec} \titleformat{\chapter}[display] {\normalfont\Large\raggedleft} {\MakeUppercase{\chaptertitlename}% ... 56 views ### Make \usepackage{polski} and \texttt work in the same document I would like to use Polish language in a part of my Acknowledgements. In order to do it I introduced in the preamble: \usepackage{polski} \usepackage[polish, english]{babel} And later on, to ... 33 views ### Biblatex Q: Single entry for multiple data fields? So I have a biblatex entry: @book{sterne, title={The Life and Opinions of Tristram Shandy, Gentleman}, author={Sterne, Laurence}, editor={Ross, Ian Campbell}, introduction={Ross, Ian ... 9 views ### Latex Chapter and section formatting My returned thesis has several formatting corrections notated. I am a fairly inexperienced in LaTeX, and can only use its basic functionality. I am having issues with formatting the chapter and ... 19 views ### How to define numbering for figures and tables? [duplicate] I am writing a document with several figures with subsections numbers in the 2.1, 2.2, etc. format. I would like to number the figures in each subsection with a 3 digit format, for example: Figure ... 42 views ### Drawing an arc of a circle using its center and endpoints I have a display illustrating the rule for calculating the sine and cosine for a sum of two angles. The measure of $\angle{AQP}$ is $x$. The last three lines in the code give the commands to name of ... 18 views ### How to center align a single plot within a row of groupplot I have an odd number of plots, 5, and would like to center align this fifth plot using groupplot as well as a group x-axis label. I have a 3x2 setup. I searched for answers and I've seen a case where ... 25 views ### Move tag next to equation in align environment It's my first time asking here but I have got so much help from this site :D My particular case is that I need order and tag Maxwell's equations but I need set the tag in a particular location, just ... 152 views ### \newcommand with options 46 views ### Why does \def fail inside \edef? An answer to one of my previous questions provided this code for referencing a range of lines in which a line labels were placed at the beginning and end of the line range. \makeatletter ... 160 views ### Issue installing Tex Live on Windows 10 I installed yesterday a clean version of Windows 10 after the update and the format. Today I was trying install Tex Live but there is an issue. The installer (exe) unzip the file but then simply shut ... 217 views ### Problem with MikTex and Hebrew when using 12pt font I'm using MikTex 2.9 (on Windows 7), together with the culmus-latex pack (I used the instructions I found here: http://www.ma.huji.ac.il/~sameti/tex/culmusmiktex.html). When working with the default ... 14 views ### Plot multiple histograms from csv file using pgfplot I have the following data.csv file: subject,f1,f2,f3 F11,0.019,0.04165,0.00016547 F14,0.03034,0.02161,0.000267 M22,0.05128,0.0648,0.000327 M22_1,0.052,0.0328,0.000206 M23,0.0364,0.06355,0.000379 ... 10 views ### How can I synchronize page-breaking text across columns? 
I'm trying to produce something like this: I've been using memoir. Here's an idea I had. My understanding is that \footnote creates a float that tries to place itself at the bottom of the nearest ... 15 views ### Prevent Hyphenation across lines + Enforce right margin My first question here, so it may sound rambling. So far I've found the answers about preventing hyphenation across lines and for setting margins, but I'm having trouble putting it together. This is ... 22 views ### Arabic references in Jabref I use Jabref as my referencing manager. In the picture, one reference showed up correctly, while the other did not. In Jabref I modified the encoding to utf-8. So I was expecting all Arabic ... 26 views ### Chess figurines for kindle, both eink and kindle fire I wrote 13 chess books, using images, well seen on every kind of kindle, old and new. However, they mantain the same size when you change the size of the text. I just saw an old post by Asim, who ... 76 views ### How to make a combined “List of Figures and Tables”? I am aware of Combined List of Figures and Tables?, but the asker there requires one numbering scheme for figures and tables, which I don't want. Unfortunately, the answer seems to depend on that ... 19 views ### Problems using the “crop” package and printing cam options using LyX I'm having a little difficulty having default variables (with a custom page size) to load properly on the pdf. I am using the following preamble: \usepackage{fontspec} \usepackage{microtype} ... 46 views ### Merge consecutive linebreaks 56 views ### The last 2 columns of the 9 column table not appearing in the output I have a table which has 9 columns and the code of which is \documentclass{article} \begin{document} \begin {table}[h] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline 1 & 2 ... 4k views ### “Broken” arrow symbol In his pretty awesome book “Undergraduate Algebraic Geometry” M. Reid uses (e.g. see page 4) symbol of “broken” arrow (which looks quite a like dash+space+short arrow : “- →”) for partially ... 162 views ### bibtex - endnote - new references I'm sure this has been answered somewhere else already, but still, I couldn't manage to find it out on the internet. In my group, I need to work with Endnote (on .doc files) to write shared papers. My ... 19 views ### Display shortcuts other than urls with href Big picture, I am trying to develop an approach that will allow me to store the latex commands and labels (pre-expansion) that I use to typeset dynamic text (bib entries with cite, cross referencing ... 26 views ### Invalid page tree PDF I have a generated LaTeX (python scripts, getting data from different sources) document which is then compiled into a pdf with pdfLaTeX. Generally the structure of the document is the following ... 70 views ### Generate multiple PDFs for different document versions in a single build I am writing a document for a software which has two versions. Most of the content in the document is the same except few specific details. Right now, I am using the conditionals to build the document ... 27 views ### How to remove only year from a reference? While I am citing references, I have some reference (web addresses) that I need to refer. They dont have year of publication. So, when I give year={} in bibtex, it is displaying () in the place of ... 
76 views ### Connecting 5 randomly selected points on the circumference of a circle I recently started using LaTeX/tikz and was wondering if anyone would be willing to help code (using tikz) a diagram similar to the attached. Note that the pentagon should not necessarily be regular ... 19 views ### How to remove Double spacing in Table “Cell Wrap”? Assume I have a text with double spacing. Now to visualize table cell wrap (automated table cell wrap is not possible in Latex as far as I know), I want to space this "cell wrap" with single spacing. ... 53 views ### How to split the toc in two? I want to split my thesis in two main parts: "Theory" and "Practice" in such a way that: 1) "Theory" and "Practice" appear as main bookmarks in the pdf; 2) the table of contests is splitted in two, ... 366 views ### Can't generate ligatures with LuaLaTeX under MacTeX2014 and MacTeX2015 when using certain fonts (Remark: This question was posted originally when the big news, TeX-wise, was the impending transition from TeXLive2013 to TeXLive2014. Since the issue identified in this posting persists in ... 17 views ### Why is this edge not straight when using subgraphs in layered layout? The MWE below produces a simple graph. When I add the two upper nodes A1 and B1 to a subgraph the horizontal alignment is lost which is understandable, since the nodes in the subgraph are positioned ... 29 views ### A problem with unicode-math.sty used in a .cls file I am having a problem with the unicode-math.sty (I'm using MacTeX2015, updated this morning). I am using the following MWE class file. testthis.cls \NeedsTeXFormat{LaTeX2e}[2001/06/01] ... 236 views ### How to install TexLive on dualboot with shared TEXDIR? Note: this question is very similar to mine, but the answer states I would recommend you to install the full TeXLive manually (i.e. not using the Ubuntu packages) with both Linux and Windows ... 20 views ### Forcing abbreviations with cleverref As an apposite situation of this question, I would like to use cleveref, but it constantly prints unabbreviated text (Chapter, Section, etc.). Is there a way to force the abbreviated form (Chap., ... 28 views ### Re-formatting (appending text to) \paragraph{} I would like to change the appearance of \paragraph{} (particularly the space between the end of the heading and the beginning of the text) to look more like \noindent\textbf{Paragraph heading} --- ... 7 views ### On-line citations for Physical Review B documents in REVTeX At some point the Physical Review B journal switched from superscript citations to on-line citations. However, when I prepare a document using \documentclass[aps,prb,reprint]{revtex4-1}, I still get ... 19 views ### 2nd page of TOC overlapping with header I am using R's Brew package to automate LaTeX reports, and I am having trouble formatting table of contents so that they don't overlap the header. The header works correctly, but I have a long Table ... 35 views ### Date/Time in both axes with PGFPLOT I am trying to plot some data that contains date/time values in both the x and y axis. I have been using the pgfplots and have managed to plot date/times on the x axis and on the y axis separately ... 66 views ### Definition of a macro with multiple arguments and usage of \csname Why does the following code not work ? \Requirepackage{amsthm} \renewcommand\newtheorem[2]{% \NewEnviron{\csname #1\endcsname}[1]{% ##1 : \BODY } } I have seen examples of the use ... 
28 views ### biblatex : avoid redundant information, the return I've been happily using the solution found by Paul Stanley to the question I asked last year : http://tex.stackexchange.com/a/172777/50288 However, there's one detail that bothers me, and I can't ... 76 views ### Placing the equation number in the left-hand margin I use \renewcommand{\theequation}{{\hspace*{-3.05cm}\thesection.\arabic{equation}}} to move my equation numbers into the margins (corporate design), it looks like this: The problem is that ... 12 views ### Left margins are different for longtable and tabular I would like to use a tabular and a longtable in my document, but the left margins are not aligned. \documentclass[letterpaper,11pt]{article} \usepackage{longtable} \begin{document} ... 68 views ### “And” between last and second last cite using the \footcites command and authoryear-icomp When using \footcites all cites are equally separated by \multicitedelim. However, I would like the last two cites to be separated by "and". With certain limitations, this is possible with \textcites ... 32 views ### Creating a Flipbook as overlay only on some pages I'm writing a thesis where some images with minor changes are used. This changes would be perfectly visible if it would be possible to overlay one picture over the other and toggle both. So doing it ... 9 views ### LyX: Two differently sized subfigures with a combined width of 100% columnwidth On LyX I'm trying to create 2 subfigures (one with a square shaped graphic and one that is tall and narrow) side-by-side, such that their combined width is 100% of the column (or page). How do I do ... 96 views ### Watercolor in tikz Is it possible to fill a shape with a coloring that looks like watercolor like in this image using tikz or pgfplots? Src: http://bartoszmilewski.com/2015/07/29/representable-functors/
2015-08-04 21:57:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9694809317588806, "perplexity": 3291.7986400432387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042992201.62/warc/CC-MAIN-20150728002312-00307-ip-10-236-191-2.ec2.internal.warc.gz"}
https://raweb.inria.fr/rapportsactivite/RA2020/realopt/index.html
2020 Activity report Project-Team REALOPT RNSR: 200919008B Research center In partnership with: CNRS, Institut Polytechnique de Bordeaux, Université de Bordeaux Team name: Reformulations based algorithms for Combinatorial Optimization In collaboration with: Institut de Mathématiques de Bordeaux (IMB) Domain Applied Mathematics, Computation and Simulation Theme Optimization, machine learning and statistical methods Creation of the Project-Team: 2009 January 01 # Keywords • A6.2.6. Optimization • A7.1.3. Graph algorithms • A8.1. Discrete mathematics, combinatorics • A8.2. Optimization • A8.2.1. Operations research • A8.7. Graph theory • A9.7. AI algorithmics • B3.1. Sustainable development • B3.1.1. Resource management • B4.2. Nuclear Energy Production • B4.4. Energy delivery • B6.5. Information systems • B7. Transport and logistics • B9.5.2. Mathematics # 1 Team members, visitors, external collaborators ## Research Scientists • Gael Guillot [Univ de Bordeaux, Researcher, from Nov 2020] • Ruslan Sadykov [Inria, Researcher, HDR] ## Faculty Members • François Clautiaux [Team leader, Univ de Bordeaux, Professor, HDR] • Boris Detienne [Univ de Bordeaux, Associate Professor] • Aurelien Froger [Univ de Bordeaux, Associate Professor] • Arnaud Pecher [Univ de Bordeaux, Professor, HDR] • Pierre Pesneau [Univ de Bordeaux, Associate Professor] ## Post-Doctoral Fellow • Siao Phouratsamay [Inria] ## PhD Students • Isaac Balster [Inria, from Nov 2020] • Xavier Blanchot [Réseau de transport d'électricité, CIFRE] • Gael Guillot [Univ de Bordeaux, until Oct 2020] • Mellila Kechir [Ecole de Commerce KEDGE Business School, from Sep 2020] • Daniiil Khachai [Ecole de Commerce KEDGE Business School, from Sep 2020] • Johan Leveque [La Poste, CIFRE] • Guillaume Marques [Univ de Bordeaux, until Aug 2020] ## Interns and Apprentices • Mellila Kechir [Inria, from Mar 2020 until Aug 2020] • Arthur Rouquan [Inria, from Mar 2020 until Aug 2020] • Joelle Rodrigues [Inria] ## Visiting Scientist • Vinicius Loti De Lima [Université fédérale de Goiás - Brésil, from Mar 2020 until Apr 2020] ## External Collaborators • Artur Alves Pessoa [Universidade Federal Fluminense - Niteroi Brazil] • Ayse Nur Arslan [INSA Rennes] • Imen Ben Mohamed [Ecole de Commerce KEDGE Business School] • Philippe Depouilly [CNRS] • Laurent Facq [CNRS] • Cédric Joncour [Univ du Havre] • Walid Klibi [Ecole de Commerce KEDGE Business School] • Philippe Meurdesoif [Univ de Bordeaux] • Gautier Stauffer [Ecole de Commerce KEDGE Business School, HDR] # 2 Overall objectives Reformulation techniques in Mixed Integer Programming (MIP), Polyhedral approaches (cut generation), Robust Optimization, Approximation Algorithms, Extended formulations, Lagrangian Relaxation (Column Generation) based algorithms, Dantzig and Benders Decomposition, Primal Heuristics, Graph Theory, Constraint Programming. Quantitative modeling is routinely used in both industry and administration to design and operate transportation, distribution, or production systems. Optimization concerns every stage of the decision-making process: long term investment budgeting and activity planning, tactical management of scarce resources, or the control of day-to-day operations. 
In many optimization problems that arise in decision support applications the most important decisions (control variables) are discrete in nature: such as on/off decisions to buy, to invest, to hire, to send a vehicle, to allocate resources, to decide on precedence in operation planning, or to install a connection in network design. Such combinatorial optimization problems can be modeled as linear or nonlinear programs with integer decision variables and extra variables to deal with continuous adjustments. The most widely used modeling tool consists in defining the feasible decision set using linear inequalities with a mix of integer and continuous variables, so-called Mixed Integer Programs (MIP), which already allow a fair description of reality and are also well-suited for global optimization. The solution of such models is essentially based on enumeration techniques and is notoriously difficult given the huge size of the solution space. Commercial solvers have made significant progress but remain quickly overwhelmed beyond a certain problem size. A key to further progress is the development of better problem formulations that provide strong continuous approximations and hence help to prune the enumerative solution scheme. Effective solution schemes are a complex blend of techniques: cutting planes to better approximate the convex hull of feasible (integer) solutions, extended reformulations (combinatorial relations can be formulated better with extra variables), constraint programming to actively reduce the solution domain through logical implications along with variable fixing based on reduced cost, Lagrangian decomposition methods to produce powerful relaxations, and Benders decomposition to project the formulation, reducing the problem to the important decision variables, and to implement multi-level programming that models a hierarchy of decision levels or recourse decisions in the case of data adjustment, primal heuristics and meta-heuristics (greedy, local improvement, or randomized partial search procedures) to produce good candidates at all stages of the solution process, and branch-and-bound or dynamic programming enumeration schemes to find a global optimum, with specific strong strategies for the selection of the sequence of fixings. The real challenge is to integrate the most efficient methods in one global system so as to prune what is essentially an enumeration-based solution technique. Progress is measured in terms of the large scale of input data that can now be solved, the integration of many decision levels into planning models, and, not least, the account taken of random (or dynamically adjusted) data by way of modeling expectation (stochastic approaches) or worst-case behavior (robust approaches).

Building on complementary expertise, our team's overall goals are threefold:

• (i) Methodologies: To design tight formulations for specific combinatorial optimization problems and generic models, relying on delayed cut and column generation, decomposition, extended formulations and projection tools for linear and nonlinear mixed integer programming models. To develop generic methods based on such strong formulations by handling their large scale dynamically. To generalize algorithmic features that have proven efficient in enhancing performance of exact optimization approaches. To develop approximation schemes with proven optimality gap and low computational complexity.
More broadly, to contribute to theoretical and methodological developments of exact and approximate approaches in combinatorial optimization, while extending the scope of applications and their scale. • (ii) Problem solving: To demonstrate the strength of cooperation between complementary exact mathematical optimization techniques, dynamic programming, robust and stochastic optimization, constraint programming, combinatorial algorithms and graph theory, by developing “efficient” algorithms for specific mathematical models. To tackle large-scale real-life applications, providing provably good approximate solutions by combining exact, approximate, and heuristic methods. • (iii) Software platform & Transfer: To provide prototypes of modelers and solvers based on generic software tools that build on our research developments, writing code that serves as the proof-of-concept of the genericity and efficiency of our approaches, while transferring our research findings to internal and external users. # 3 Research program ## 3.1 Introduction Keywords: integer programming, graph theory, decomposition approaches, polyhedral approaches, quadratic programming approaches, constraint programming. Combinatorial optimization is the field of discrete optimization problems. In many applications, the most important decisions (control variables) are binary (on/off decisions) or integer (indivisible quantities). Extra variables can represent continuous adjustments or amounts. This results in models known as mixed integer programs (MIP), where the relationships between variables and input parameters are expressed as linear constraints and the goal is defined as a linear objective function. MIPs are notoriously difficult to solve: good quality estimations of the optimal value (bounds) are required to prune enumeration-based global-optimization algorithms whose complexity is exponential. The standard approach to solving an MIP is the so-called branch-and-bound algorithm: (i) one solves the linear programming (LP) relaxation using the simplex method; (ii) if the LP solution is not integer, one adds a disjunctive constraint on a fractional component (rounding it up or down) that defines two sub-problems; (iii) one applies this procedure recursively, thus defining a binary enumeration tree that can be pruned by comparing the local LP bound to the best known integer solution. Commercial MIP solvers are essentially based on branch-and-bound (such as IBM-CPLEX, FICO-Xpress-mp, or GUROBI). They have made tremendous progress over the last decade (with a speedup by a factor of 60). But extending their capabilities remains a continuous challenge: given the combinatorial explosion inherent to enumerative solution techniques, they remain quickly overwhelmed beyond a certain problem size or complexity. Progress can be expected from the development of tighter formulations. Central to our field is the characterization of polyhedra defining or approximating the solution set and combinatorial algorithms to identify “efficiently” a minimum cost solution or separate an infeasible point. With properly chosen formulations, exact optimization tools can be competitive with other methods (such as meta-heuristics) in constructing good approximate solutions within limited computational time, and of course have the important advantage of being able to provide a performance guarantee through the relaxation bounds.
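To make the branch-and-bound principle recalled above concrete, the following minimal Python sketch enumerates 0-1 knapsack solutions, using the fractional (LP) relaxation of the knapsack as the bound that prunes the tree. This is an illustrative toy example with arbitrary data, not team code.

```python
# Minimal branch-and-bound sketch for a 0-1 knapsack (illustrative toy example).
# The bound at each node is the LP (fractional) relaxation of the remaining problem.

values = [10, 13, 7, 8]      # hypothetical toy data
weights = [4, 6, 3, 5]
capacity = 10

def lp_bound(fixed_value, fixed_weight, free_items):
    """Fractional-knapsack bound on the best completion of a partial solution."""
    remaining = capacity - fixed_weight
    bound = fixed_value
    # Greedy filling by value/weight ratio is optimal for the LP relaxation.
    for i in sorted(free_items, key=lambda i: values[i] / weights[i], reverse=True):
        take = min(1.0, remaining / weights[i])
        bound += take * values[i]
        remaining -= take * weights[i]
        if remaining <= 0:
            break
    return bound

best_value = 0
def branch(i, value, weight):
    """Branch on item i (take it or not), pruning with the LP bound."""
    global best_value
    if weight > capacity:
        return                                    # infeasible node
    if i == len(values):
        best_value = max(best_value, value)       # leaf: update incumbent
        return
    if lp_bound(value, weight, range(i, len(values))) <= best_value:
        return                                    # prune: bound no better than incumbent
    branch(i + 1, value + values[i], weight + weights[i])   # x_i = 1
    branch(i + 1, value, weight)                            # x_i = 0

branch(0, 0, 0)
print("best value found:", best_value)   # expected: 23 (items 0 and 1)
```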
Decomposition techniques also implicitly lead to better problem formulations, while constraint propagation provides tools from artificial intelligence to further improve formulations through intensive preprocessing. A new trend is robust optimization, where recent progress has been made: the aim is to produce optimized solutions that remain of good quality even if the problem data has stochastic variations. In all cases, the study of specific models and challenging industrial applications is quite relevant, because developments made in a specific context can become generic tools over time and see their way into commercial software. Our project brings together researchers with expertise in mathematical programming (polyhedral approaches, decomposition and reformulation techniques in mixed integer programming, robust and stochastic programming, and dynamic programming), graph theory (characterization of graph properties, combinatorial algorithms) and constraint programming, with the aim of producing better quality formulations and developing new methods to exploit these formulations. These new results are then applied to find high quality solutions for practical combinatorial problems such as routing, network design, planning, scheduling, cutting and packing problems, and High Performance and Cloud Computing. ## 3.2 Polyhedral approaches for MIP Adding valid inequalities to the polyhedral description of an MIP allows one to improve the resulting LP bound and hence to better prune the enumeration tree. In a cutting plane procedure, one attempts to identify valid inequalities that are violated by the LP solution of the current formulation and adds them to the formulation. This can be done at each node of the branch-and-bound tree, giving rise to a so-called branch-and-cut algorithm 47. The goal is to reduce the resolution of an integer program to that of a linear program by deriving a linear description of the convex hull of the feasible solutions. Polyhedral theory tells us that if $X$ is a mixed integer program, $X = P \cap (\mathbb{Z}^n \times \mathbb{R}^p)$ where $P = \{x \in \mathbb{R}^{n+p} : Ax \le b\}$ with matrix $(A,b) \in \mathbb{Q}^{m \times (n+p+1)}$, then $\mathrm{conv}(X)$ is a polyhedron that can be described in terms of linear constraints, i.e. it writes as $\mathrm{conv}(X) = \{x \in \mathbb{R}^{n+p} : Cx \le d\}$ for some matrix $(C,d) \in \mathbb{Q}^{m' \times (n+p+1)}$, although the dimension $m'$ is typically quite large. A fundamental result in this field is the equivalence of complexity between solving the combinatorial optimization problem $\min\{cx : x \in X\}$ and solving the separation problem over the associated polyhedron $\mathrm{conv}(X)$: given $\tilde{x} \notin \mathrm{conv}(X)$, find a linear inequality $\pi x \ge \pi_0$ satisfied by all points in $\mathrm{conv}(X)$ but violated by $\tilde{x}$. Hence, for NP-hard problems, one cannot hope to get a compact description of $\mathrm{conv}(X)$ nor a polynomial-time exact separation routine. Polyhedral studies focus on identifying some of the inequalities that are involved in the polyhedral description of $\mathrm{conv}(X)$ and on deriving efficient separation procedures (cutting plane generation). Often only a subset of the inequalities $Cx \le d$ is needed to provide a good approximation that, combined with branch-and-bound enumeration, permits the problem to be solved.
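As a small illustration of the separation problem discussed above, the sketch below separates violated cover inequalities for a single knapsack constraint $\sum_i a_i x_i \le b$ by brute-force enumeration. Exact separation is itself hard in general, so enumeration is only viable on tiny instances; all data here are hypothetical and the code is purely didactic.

```python
from itertools import combinations

# Separation of cover inequalities for a knapsack constraint sum(a_i x_i) <= b
# by brute-force enumeration of covers (toy data, illustrative only).

a = [5, 5, 6, 4]                 # hypothetical knapsack coefficients
b = 11
x_star = [1.0, 0.8, 0.7, 0.0]    # hypothetical fractional LP solution

def separate_cover(a, b, x_star):
    """Return the most violated cover inequality, or None if none is violated.

    A subset C with sum_{i in C} a_i > b is a cover; the valid inequality
    sum_{i in C} x_i <= |C| - 1 is violated by x_star exactly when
    sum_{i in C} (1 - x_star[i]) < 1.
    """
    best = None
    n = len(a)
    for size in range(1, n + 1):
        for C in combinations(range(n), size):
            if sum(a[i] for i in C) > b:                    # C is a cover
                slack = sum(1 - x_star[i] for i in C)
                if slack < 1 and (best is None or slack < best[1]):
                    best = (C, slack)
    if best is None:
        return None
    C, slack = best
    return C, len(C) - 1, slack   # cut: sum_{i in C} x_i <= |C| - 1

print(separate_cover(a, b, x_star))   # ((0, 1, 2), 2, 0.5): x_0 + x_1 + x_2 <= 2 cuts off x_star
```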
Using a cutting-plane algorithm at each node of the branch-and-bound tree gives rise to the branch-and-cut algorithm. ## 3.3 Decomposition and reformulation approaches A hierarchical approach to tackle complex combinatorial problems consists in considering separately different substructures (subproblems). If one is able to implement relatively efficient optimization on the substructures, this can be exploited to reformulate the global problem as a selection of specific subproblem solutions that together form a global solution. If the subproblems correspond to subsets of constraints in the MIP formulation, this leads to Dantzig-Wolfe decomposition. If they correspond to isolating a subset of decision variables, this leads to Benders decomposition. Both lead to extended formulations of the problem with either a huge number of variables or a huge number of constraints. The Dantzig-Wolfe approach requires specific algorithmic techniques to generate subproblem solutions and associated global decision variables dynamically in the course of the optimization. This procedure is known as column generation, while its combination with branch-and-bound enumeration is called branch-and-price. Alternatively, in the Benders approach, when dealing with exponentially many constraints in the reformulation, the cutting plane procedures that we defined in the previous section are well-suited tools. When optimization on a substructure is (relatively) easy, there often exists a tight reformulation of this substructure, typically in an extended variable space. This gives rise to a powerful reformulation of the global problem, although it might be impractical given its size (typically pseudo-polynomial). It can be possible to project (part of) the extended formulation into a smaller-dimensional space, if not the original variable space, to bring polyhedral insight (cuts derived through polyhedral studies can often be recovered through such projections). ## 3.4 Integration of Artificial Intelligence Techniques in Integer Programming When one deals with combinatorial problems with a large number of integer variables, or with tightly constrained problems, mixed integer programming (MIP) alone may not be able to find solutions in a reasonable amount of time. In this case, techniques from artificial intelligence can be used to improve these methods. In particular, we use variable fixing techniques, primal heuristics and constraint programming. Primal heuristics are useful to find feasible solutions in a small amount of time. We focus on heuristics that are either based on integer programming (rounding, diving, relaxation induced neighborhood search, feasibility pump), or that are used inside our exact methods (heuristics for separation or pricing subproblems, heuristic constraint propagation, ...). Such methods are likely to produce good quality solutions only if the integer programming formulation is of top quality, i.e., if its LP relaxation provides a good approximation of the IP solution. Along the same lines, variable fixing techniques, which are essential in reducing the size of large-scale problems, rely on good quality approximations: either tight formulations or tight relaxation solvers (such as a dynamic program combined with state space relaxation). Then, if the dual bound derived when a variable is fixed to one exceeds the incumbent solution value, the variable can be fixed to zero and hence removed from the problem. The process can be applied sequentially by refining the degree of relaxation.
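A hedged sketch of the reduced-cost fixing rule described just above, for a minimization problem: given an LP (or Lagrangian) dual bound, the reduced costs of variables currently at zero, and an incumbent value, any variable whose fixing to one would push the bound above the incumbent can be removed. The numbers below are arbitrary toy values.

```python
# Reduced-cost fixing sketch for a minimization MIP (toy values, illustrative only).
# For a variable x_j at 0 in the LP relaxation with reduced cost r_j >= 0, any
# solution with x_j = 1 costs at least dual_bound + r_j; if that exceeds the
# incumbent value, x_j can be fixed to 0 and removed from the problem.

dual_bound = 102.0                     # value of the LP (or Lagrangian) relaxation
incumbent = 110.0                      # best known integer solution (upper bound)
reduced_costs = {"x1": 3.5, "x2": 9.2, "x3": 0.0, "x4": 12.1}   # hypothetical

def fixable_to_zero(dual_bound, incumbent, reduced_costs):
    """Return the variables that cannot take value 1 in any improving solution."""
    return [j for j, r in reduced_costs.items() if dual_bound + r > incumbent]

print(fixable_to_zero(dual_bound, incumbent, reduced_costs))   # ['x2', 'x4']
```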
Constraint Programming (CP) focuses on iteratively reducing the variable domains (sets of feasible values) by applying logical and problem-specific operators. These operators propagate onto selected variables the restrictions implied by the domains of the other variables, through the relations between variables defined by the constraints of the problem. Combined with enumeration, this gives rise to exact optimization algorithms. A CP approach is particularly effective for tightly constrained problems, feasibility problems and min-max problems. Mixed Integer Programming (MIP), on the other hand, is known to be effective for loosely constrained problems and for problems with an objective function defined as a weighted sum of variables. Many problems belong to the intersection of these two classes. For such problems, it is reasonable to use algorithms that exploit the complementary strengths of Constraint Programming and Mixed Integer Programming. ## 3.5 Robust Optimization Decision makers usually face several sources of uncertainty, such as variability in time or estimation errors. A simplistic way to handle these uncertainties is to overestimate the unknown parameters. However, this results in over-conservatism and a significant waste in resource consumption. A better approach is to account for the uncertainty directly in the decision-aid model by considering mixed integer programs that involve uncertain parameters. Stochastic optimization accounts for the expected realization of random data and optimizes an expected value representing the average situation. Robust optimization, on the other hand, entails protecting against the worst-case behavior of unknown data. There is an analogy to game theory where one considers an oblivious adversary choosing the realization that harms the solution the most. A full worst-case protection against uncertainty is too conservative and induces a very high over-cost. Instead, the realizations of random data are bound to belong to a restricted feasibility set, the so-called uncertainty set. Stochastic and robust optimization rely on very large scale programs where probabilistic scenarios are enumerated. There is hope of a tractable solution for realistic-size problems, provided one develops very efficient ad-hoc algorithms. The techniques for dynamically handling variables and constraints (column-and-row generation and Benders projection tools) that are at the core of our team's methodological work are especially well suited to this context. ## 3.6 Polyhedral Combinatorics and Graph Theory Many fundamental combinatorial optimization problems can be modeled as the search for a specific structure in a graph. For example, ensuring connectivity in a network amounts to building a tree that spans all the nodes. Inquiring about its resistance to failure amounts to searching for a minimum cardinality cut that partitions the graph. Selecting disjoint pairs of objects is represented by a so-called matching. Disjunctive choices can be modeled by edges in a so-called conflict graph where one searches for stable sets – sets of pairwise non-adjacent nodes. Polyhedral combinatorics is the study of combinatorial algorithms involving polyhedral considerations. Not only does it lead to efficient algorithms, but also, conversely, efficient algorithms often imply polyhedral characterizations and related min-max relations.
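To illustrate the kind of structured graph problem mentioned above (building a spanning tree that ensures connectivity), here is a minimal Kruskal-style minimum spanning tree sketch with union-find on a hypothetical toy graph; it is didactic only and unrelated to any specific team implementation.

```python
# Kruskal's minimum spanning tree on a small toy graph (illustrative only):
# repeatedly take the cheapest edge that does not create a cycle, using union-find.

edges = [  # (weight, u, v) -- hypothetical network links
    (4, "a", "b"), (1, "a", "c"), (3, "b", "c"),
    (2, "c", "d"), (5, "b", "d"),
]

parent = {}
def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:               # simple path compression (halving)
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(x, y):
    rx, ry = find(x), find(y)
    if rx == ry:
        return False                    # edge would close a cycle
    parent[rx] = ry
    return True

tree, cost = [], 0
for w, u, v in sorted(edges):
    if union(u, v):
        tree.append((u, v, w))
        cost += w

print(tree, cost)   # spanning tree of cost 1 + 2 + 3 = 6
```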
Developments of polyhedral properties of a fundamental problem will typically provide us with more interesting inequalities well suited for a branch-and-cut algorithm to more general problems. Furthermore, one can use the fundamental problems as new building bricks to decompose the more general problem at hand. For problem that let themselves easily be formulated in a graph setting, the graph theory and in particular graph decomposition theorem might help. # 4 Application domains ## 4.1 Network Design and Routing Problems We are actively working on problems arising in network topology design, implementing a survivability condition of the form “at least two paths link each pair of terminals”. We have extended polyhedral approaches to problem variants with bounded length requirements and re-routing restrictions 40. Associated to network design is the question of traffic routing in the network: one needs to check that the network capacity suffices to carry the demand for traffic. The assignment of traffic also implies the installation of specific hardware at transient or terminal nodes. To accommodate the increase of traffic in telecommunication networks, today's optical networks use grooming and wavelength division multiplexing technologies. Packing multiple requests together in the same optical stream requires to convert the signal in the electrical domain at each aggregation of disaggregation of traffic at an origin, a destination or a bifurcation node. Traffic grooming and routing decisions along with wavelength assignments must be optimized to reduce opto-electronics system installation cost. We developed and compared several decomposition approaches 67, 66, 65 to deal with backbone optical network with relatively few nodes (around 20) but thousands of requests for which traditional multi-commodity network flow approaches are completely overwhelmed. We also studied the impact of imposing a restriction on the number of optical hops in any request route 64. We also developed a branch-and-cut approach to a problem that consists in placing sensors on the links of a network for a minimum cost 49, 48. The Dial-a-Ride Problem is a variant of the pickup and delivery problem with time windows, where the user inconvenience must be taken into account. In  56, ride time and customer waiting time are modeled through both constraints and an associated penalty in the objective function. We develop a column generation approach, dynamically generating feasible vehicle routes. Handling ride time constraints explicitly in the pricing problem solver requires specific developments. Our dynamic programming approach for pricing problem makes use of a heuristic dominance rule and a heuristic enumeration procedure, which in turns implies that our overall branch-and-price procedure is a heuristic. However, in practice our heuristic solutions are experimentally very close to exact solutions and our approach is numerically competitive in terms of computation times. In  53, 54, we consider the problem of covering an urban area with sectors under additional constraints. We adapt the aggregation method to our column generation algorithm and focus on the problem of disaggregating the dual solution returned by the aggregated master problem. We studied several time dependent formulations for the unit demand vehicle routing problem  31, 32. We gave new bounding flow inequalities for a single commodity flow formulation of the problem. 
We described their impact by projecting them on some other sets of variables, such as variables issued of the Picard and Queyranne formulation or the natural set of design variables. Some inequalities obtained by projection are facet defining for the polytope associated with the problem. We are now running more numerical experiments in order to validate in practice the efficiency of our theoretical results. We also worked on the p-median problem, applying the matching theory to develop an efficient algorithm in Y-free graphs and to provide a simple polyhedral characterization of the problem and therefore a simple linear formulation 62 simplifying results from Baiou and Barahona. We considered the multi-commodity transportation problem. Applications of this problem arise in, for example, rail freight service design, "less than truckload" trucking, where goods should be delivered between different locations in a transportation network using various kinds of vehicles of large capacity. A particularity here is that, to be profitable, transportation of goods should be consolidated. This means that goods are not delivered directly from the origin to the destination, but transferred from one vehicle to another in intermediate locations. We proposed an original Mixed Integer Programming formulation for this problem which is suitable for resolution by a Branch-and-Price algorithm and intelligent primal heuristics based on it. For the problem of routing freight railcars, we proposed two algorithmes based on the column generation approach. These algorithmes have been tested on a set of real-life instances coming from a real Russian freight transportation company. Our algorithms have been faster on these instances than the current solution approach being used by the company. ## 4.2 Packing and Covering Problems Realopt team has a strong experience on exact methods for cutting and packing problems. These problems occur in logistics (loading trucks), industry (wood or steel cutting), computer science (parallel processor scheduling). We developed a branch-and-price algorithm for the Bin Packing Problem with Conflicts which improves on other approaches available in the literature 61. The algorithm uses our methodological advances like the generic branching rule for the branch-and-price and the column based heuristic. One of the ingredients which contributes to the success of our method are fast algorithms we developed for solving the subproblem which is the Knapsack Problem with Conflicts. Two variants of the subproblem have been considered: with interval and arbitrary conflict graphs. We also developed a branch-and-price algorithm for a variant of the bin-packing problem where the items are fragile. In 23 we studied empirically different branching schemes and different algorithms for solving the subproblems. We studied a variant of the knapsack problem encountered in inventory routing problem 50: we faced a multiple-class integer knapsack problem with setups 51 (items are partitioned into classes whose use implies a setup cost and associated capacity consumption). We showed the extent to which classical results for the knapsack problem can be generalized to this variant with setups and we developed a specialized branch-and-bound algorithm. We studied the orthogonal knapsack problem, with the help of graph theory  41, 44, 43, 42. 
Fekete and Schepers proposed to model multi-dimensional orthogonal placement problems by using an efficient representation of all geometrically symmetric solutions by a so called packing class involving one interval graph for each dimension. Though Fekete & Schepers' framework is very efficient, we have however identified several weaknesses in their algorithms: the most obvious one is that they do not take advantage of the different possibilities to represent interval graphs. We propose to represent these graphs by matrices with consecutive ones on each row. We proposed a branch-and-bound algorithm for the 2D knapsack problem that uses our 2D packing feasibility check. We are currently developing exact optimization tools for glass-cutting problems in a collaboration with Saint-Gobain 26. This 2D-3stage-Guillotine cut problems are very hard to solve given the scale of the instance we have to deal with. Moreover one has to issue cutting patterns that avoid the defaults that are present in the glass sheet that are used as raw material. There are extra sequencing constraints regarding the production that make the problem even more complex. We have also organized a European challenge on packing with society Renault. This challenge was about loading trucks under practical constraints. ## 4.3 Planning, Scheduling, and Logistic Problems Inventory routing problems combine the optimization of product deliveries (or pickups) with inventory control at customer sites. We considered an industrial application where one must construct the planning of single product pickups over time; each site accumulates stock at a deterministic rate; the stock is emptied on each visit. We have developed a branch-and-price algorithm where periodic plans are generated for vehicles by solving a multiple choice knapsack subproblem, and the global planning of customer visits is coordinated by the master program  52. We previously developed approximate solutions to a related problem combining vehicle routing and planning over a fixed time horizon (solving instances involving up to 6000 pick-ups and deliveries to plan over a twenty day time horizon with specific requirements on the frequency of visits to customers 46. Together with our partner company GAPSO from the associate team SAMBA, we worked on the equipment routing task scheduling problem  55 arising during port operations. In this problem, a set of tasks needs to be performed using equipments of different types with the objective to maximize the weighted sum of performed tasks. We participated to the project on an airborne radar scheduling. For this problem, we developed fast heuristics  39 and exact algorithms  25. A substantial research has been done on machine scheduling problems. A new compact MIP formulation was proposed for a large class of these problems 24. An exact decomposition algorithm was developed for the NP-hard maximizing the weighted number of late jobs problem on a single machine  57. A dominant class of schedules for malleable parallel jobs was discovered in the NP-hard problem to minimize the total weighted completion time  59. We proved that a special case of the scheduling problem at cross docking terminals to minimize the storage cost is polynomially solvable  60, 58. Another application area in which we have successfully developed MIP approaches is in the area of tactical production and supply chain planning. In 22, we proposed a simple heuristic for challenging multi-echelon problems that makes effective use of a standard MIP solver. 
21 contains a detailed investigation of what makes solving the MIP formulations of such problems challenging; it provides a survey of the known methods for strengthening formulations for these applications, and it also pinpoints the specific substructure that seems to cause the bottleneck in solving these models. Finally, the results of 27 provide demonstrably stronger formulations for some problem classes than any previously proposed. We are now working on planning phytosanitary treatments in vineries. We have been developing robust optimization models and methods to deal with a number of applications like the above in which uncertainty is involved. In 37, 36, we analyzed fundamental MIP models that incorporate uncertainty and we have exploited the structure of the stochastic formulation of the problems in order to derive algorithms and strong formulations for these and related problems. These results appear to be the first of their kind for structured stochastic MIP models. In addition, we have engaged in successful research to apply concepts such as these to health care logistics 28. We considered train timetabling problems and their re-optimization after a perturbation in the network 68, 63. The question of formulation is central. Models of the literature are not satisfactory: continuous time formulations have poor quality due to the presence of discrete decision (re-sequencing or re-routing); arc flow in time-space graph blow-up in size (they can only handle a single line timetabling problem). We have developed a discrete time formulation that strikes a compromise between these two previous models. Based on various time and network aggregation strategies, we develop a 2-stage approach, solving the contiguous time model having fixed the precedence based on a solution to the discrete time model. Currently, we are conducting investigations on a real-world planning problem in the domain of energy production, in the context of a collaboration with EDF  34, 33, 35. The problem consists in scheduling maintenance periods of nuclear power plants as well as production levels of both nuclear and conventional power plants in order to meet a power demand, so as to minimize the total production cost. For this application, we used a Dantzig-Wolfe reformulation which allows us to solve realistic instances of the deterministic version of the problem 38. In practice, the input data comprises a number of uncertain parameters. We deal with a scenario-based stochastic demand with help of a Benders decomposition method. We are working on Multistage Robust Optimization approaches to take into account other uncertain parameters like the duration of each maintenance period, in a dynamic optimization framework. The main challenge addressed in this work is the joint management of different reformulations and solving techniques coming from the deterministic (Dantzig-Wolfe decomposition, due to the large scale nature of the problem), stochastic (Benders decomposition, due to the number of demand scenarios) and robust (reformulations based on duality and/or column and/or row generation due to maintenance extension scenarios) components of the problem 29. # 5 Social and environmental responsibility ## 5.1 Footprint of research activities Our research involves a large amount of computational experiments. 
## 5.2 Impact of research results The objective of our research is to reduce the quantity of energy/material used to realize some large projects, including energy production and distribution, chemical treatments, and distribution of goods. # 6 Highlights of the year 2020 was marked by the covid crisis and its impact on the overall society and its activity. The world of research has also been greatly affected: • faculty members have seen their teaching load increase significantly; • PhD students and post-docs have often had to deal with a worsening of their working conditions; • most scientific collaborations have been greatly affected, with several of international activities cancelled or postponed to dates still to be defined. On the bright side, a major publication proposing the first generic exact solver for vehicle routing and related problems has been published 9 in Mathematical Programming, one of the top journals in the area. # 7 New software and platforms ## 7.1 New software ### 7.1.1 BaPCod • Name: A generic Branch-And-Price-And-Cut Code • Keywords: Column Generation, Branch-and-Price, Branch-and-Cut, Mixed Integer Programming, Mathematical Optimization, Benders Decomposition, Dantzig-Wolfe Decomposition, Extended Formulation • Functional Description: BaPCod is a prototype code that solves Mixed Integer Programs (MIP) by application of reformulation and decomposition techniques. The reformulated problem is solved using a branch-and-price-and-cut (column generation) algorithms, Benders approaches, network flow and dynamic programming algorithms. These methods can be combined in several hybrid algorithms to produce exact or approximate solutions (primal solutions with a bound on the deviation to the optimum). • Release Contributions: Correction of numerous bugs. • URL: • Authors: Francois Vanderbeck, Ruslan Sadykov, Issam Tahiri, Boris Detienne, François Clautiaux, Artur Alves Pessoa, Eduardo Uchoa Barboza, Guillaume Marques, Romain Leguay, Halil Sen, Michael Poss, Pierre Pesneau • Participants: Artur Alves Pessoa, Boris Detienne, Eduardo Uchoa Barboza, Franck Labat, François Clautiaux, Francois Vanderbeck, Halil Sen, Issam Tahiri, Michael Poss, Pierre Pesneau, Romain Leguay, Ruslan Sadykov • Partners: Université de Bordeaux, CNRS, IPB, Universidade Federal Fluminense ### 7.1.2 VRPSolver • Name: VRPSolver • Keywords: Column Generation, Vehicle routing, Numerical solver • Scientific Description: Major advances were recently obtained in the exact solution of Vehicle Routing Problems (VRPs). Sophisticated Branch-Cut-and-Price (BCP) algorithms for some of the most classical VRP variants now solve many instances with up to a few hundreds of customers. However , adapting and reimplementing those successful algorithms for other variants can be a very demanding task. This work proposes a BCP solver for a generic model that encompasses a wide class of VRPs. It incorporates the key elements found in the best recent VRP algorithms: ng-path relaxation, rank-1 cuts with limited memory, and route enumeration, all generalized through the new concept of "packing set". This concept is also used to derive a new branch rule based on accumulated resource consumption and to generalize the Ryan and Foster branch rule. Extensive experiments on several variants show that the generic solver has an excellent overall performance, in many problems being better than the best existing specific algorithms. 
Even some non-VRPs, like bin packing, vector packing and generalized assignment, can be modeled and effectively solved. • Functional Description: This solver allows one to model and solve to optimality many combinatorial optimization problems belonging to the class of vehicle routing, scheduling, packing and network design problems. The problem is formulated using variables, a linear objective function, linear and integrality constraints, definitions of graphs and resources, and a mapping between graph arcs and variables. A complex Branch-Cut-and-Price algorithm is used to solve the model. A new concept of elementarity and packing sets is used to pass additional information to the solver, so that several state-of-the-art Branch-Cut-and-Price components can be used to radically improve the efficiency of the solver. The interface of the solver is implemented in Julia using the JuMP package. To simplify installation and usage, the solver is distributed as a Docker image. The solver can be used only for academic purposes. • Release Contributions: Version 0.4 introduces elementarity sets, corrects bugs, and updates dependencies. • News of the Year: 2020 - version 0.4 2019 - solver release, versions 0.1, 0.2, 0.3 • URL: • Publication: • Participants: Ruslan Sadykov, Eduardo Uchoa Barboza, Artur Alves Pessoa, Eduardo Queiroga, Teobaldo Bulhões, Laurent Facq # 8 New results ## 8.1 Algorithms for optimization under uncertainty We introduce a new exact algorithm based on Benders decomposition, called the Benders by batch algorithm, to solve two-stage stochastic linear programs. This algorithm relies on the multicut formulation of Benders decomposition and solves only a small number of subproblems at each iteration. We propose two primal stabilization methods for the algorithm and perform an extensive computational study on six large-scale benchmarks from the stochastic optimization literature. Results show the efficiency of the method compared to five classical alternative algorithms, as well as the significant time savings provided by its primal stabilization. We observe speed-ups of up to a factor of 10 with respect to the best method from the literature we compare to, and of up to a factor of 800 with respect to the built-in Benders decomposition of IBM ILOG CPLEX 12.10. We have studied a class of two-stage robust binary optimization problems with objective uncertainty where recourse decisions are restricted to be mixed-binary 3. For these problems, we present a deterministic equivalent formulation through the convexification of the recourse feasible region. We then explore this formulation under the lens of a relaxation, showing that the specific relaxation we propose can be solved using a branch-and-price algorithm. We present conditions under which this relaxation is exact, and describe alternative exact solution methods when this is not the case. Despite the two-stage nature of the problem, we provide NP-completeness results based on our reformulations. Finally, we present various applications in which the proposed methodology can be applied. We compare our exact methodology to the approximate methods recently proposed in the literature under the name of K-adaptability. Our computational results show that our methodology is able to produce better solutions in less computational time compared to the K-adaptability approach, as well as to solve bigger instances than those previously managed in the literature.
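As a small, hedged illustration of the kind of objective uncertainty handled in these works (and not of the algorithms of 3 themselves): under a budgeted uncertainty set in the style of Bertsimas and Sim, the worst-case cost of a fixed binary solution is its nominal cost plus the largest deviations of at most Γ of its selected items. The data below are hypothetical toy values.

```python
# Worst-case cost of a fixed binary solution under a budgeted uncertainty set
# (Bertsimas-Sim style): the adversary may raise at most gamma coefficients
# from c_i to c_i + d_i.  Toy data, illustrative only.

nominal = [10.0, 7.0, 12.0, 5.0]      # hypothetical nominal costs c_i
deviation = [3.0, 6.0, 1.0, 4.0]      # hypothetical maximal deviations d_i
x = [1, 1, 0, 1]                      # a fixed first-stage binary solution
gamma = 2                             # uncertainty budget

def worst_case_cost(nominal, deviation, x, gamma):
    """Nominal cost plus the gamma largest deviations among selected items."""
    base = sum(c for c, xi in zip(nominal, x) if xi == 1)
    hits = sorted((d for d, xi in zip(deviation, x) if xi == 1), reverse=True)
    return base + sum(hits[:gamma])

print(worst_case_cost(nominal, deviation, x, gamma))   # 22 + 6 + 4 = 32
```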
## 8.2 Machine scheduling problems Minimizing the weighted number of tardy jobs is a classical and intensively studied scheduling problem. In 15, we develop a two-stage robust approach, where exact weights are known after accepting to perform the jobs, and before sequencing them on the machine. This assumption allows diverse recourse decisions to be taken in order to better adapt one's tactical plan. The contribution of this paper is twofold: first, we introduce a new scheduling problem and model it as a min-max-min optimization problem with mixed-integer recourse by extending existing models proposed for the classical problem where all the costs are assumed to be known. Second, we take advantage of the special structure of the problem to propose two solution approaches based on results from the recent robust optimization literature, namely finite adaptability (Bertsimas and Caramanis, 2010) and a convexification-based approach (Arslan and Detienne, 2020). We also study the cost of finding anchored solutions, where the sequence of jobs has to be decided before the uncertainty is revealed. Computational experiments to analyze the effectiveness of our approaches are reported. Work 4 deals with a very generic class of scheduling problems with identical/uniform/unrelated parallel machine environment. It considers well-known attributes such as release dates or sequence-dependent setup times and accepts any objective function defined over job completion times. Non-regular objectives are also supported. We introduce a branch-cut-and-price algorithm for such problems that makes use of non-robust cuts, i.e., cuts which change the structure of the pricing problem. This is the first time that such cuts are employed for machine scheduling problems. The algorithm also embeds other important techniques such as strong branching, reduced cost fixing and dual stabilization. Computational experiments over literature benchmarks showed that the proposed algorithm is indeed effective and could solve many instances to optimality for the first time. ## 8.3 Generic solver for vehicle routing and similar problems Major advances were recently obtained in the exact solution of Vehicle Routing Problems (VRPs). Sophisticated Branch-Cut-and-Price (BCP) algorithms for some of the most classical VRP variants now solve many instances with up to a few hundreds of customers. However, adapting and reimplementing those successful algorithms for other variants can be a very demanding task. Work 9 proposes a BCP solver for a generic model that encompasses a wide class of VRPs. It incorporates the key elements found in the best recent VRP algorithms: ng-path relaxation, rank-1 cuts with limited memory, and route enumeration; all generalized through the new concept of "packing set". This concept is also used to derive a new branch rule based on accumulated resource consumption and to generalize the Ryan and Foster branch rule. Extensive experiments on several variants show that the generic solver has an excellent overall performance, in many problems being better than the best existing specific algorithms. Even some non-VRPs, like bin packing, vector packing and generalized assignment, can be modeled and effectively solved. The Shortest Path Problem with Resource Constraints (SPPRC) arises as a subproblem in state-of-the-art Branch-Cut-and-Price algorithms for vehicle routing problems, including the BCP solver described just above. 
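Before the bucket-graph refinements discussed next, the basic machinery of such labeling algorithms can be sketched as follows: labels of the form (cost, resource consumption) are extended along the arcs of a small acyclic toy graph, and dominated labels are discarded. This is only a didactic mono-directional sketch under hypothetical data, not the bi-directional bucket-graph algorithm developed by the team.

```python
# Didactic label-setting sketch for a resource-constrained shortest path on a DAG
# (toy data, illustrative only).  A label is (cost, resource); label A dominates B
# if it is no worse on both components.  Dominated labels are pruned.

arcs = {   # hypothetical arcs: node -> list of (successor, cost, resource use)
    "s": [("a", 2, 3), ("b", 1, 5)],
    "a": [("t", 2, 4), ("b", 1, 1)],
    "b": [("t", 3, 1)],
    "t": [],
}
capacity = 8                       # resource limit on a whole path
order = ["s", "a", "b", "t"]       # topological order of the DAG

def dominated(lab, labels):
    return any(c <= lab[0] and r <= lab[1] and (c, r) != lab for c, r in labels)

labels = {v: set() for v in order}
labels["s"].add((0, 0))
for v in order:
    # keep only non-dominated labels at v, then extend them along outgoing arcs
    labels[v] = {lab for lab in labels[v] if not dominated(lab, labels[v])}
    for (cost, res) in labels[v]:
        for (w, dc, dr) in arcs[v]:
            if res + dr <= capacity:
                labels[w].add((cost + dc, res + dr))

print(min(labels["t"]))   # cheapest feasible (cost, resource) label at the sink: (4, 6)
```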
In 11, we propose a variant of the bi-directional label correcting algorithm in which the labels are stored and extended according to the so-called bucket graph. Such organization of labels helps to decrease significantly the number of dominance checks and the running time of the algorithm. We also show how the forward/backward route symmetry can be exploited and how to eliminate arcs from the bucket graph using reduced costs. The proposed algorithm can be especially beneficial for vehicle routing instances with large vehicle capacity and/or with time window constraints. Computational experiments were performed on instances from the distance constrained vehicle routing problem, including multi-depot and site-dependent variants, on the vehicle routing problem with time windows, and on the "nightmare" instances of the heterogeneous fleet vehicle routing problem. Significant improvements over the best algorithms in the literature were achieved and many instances could be solved for the first time. ## 8.4 Vehicle routing applications ### 8.4.1 Classic vehicle routing problems In 7, we examine the robust counterpart of the classical Capacitated Vehicle Routing Problem (CVRP). We consider two types of uncertainty sets for the customer demands: the classical budget polytope introduced by Bertsimas and Sim (2003), and a partitioned budget polytope proposed by Gounaris et al. (2013). We show that using the set-partitioning formulation it is possible to reformulate our problem as a deterministic heterogeneous vehicle routing problem. Thus, many state-of-the-art techniques for exactly solving deterministic VRPs can be applied for the robust counterpart, and a modern branch-and-cut-and-price algorithm can be adapted to our setting by keeping the number of pricing subproblems strictly polynomial. More importantly, we introduce new techniques to significantly improve the efficiency of the algorithm. We present analytical conditions under which a pricing subproblem is infeasible. This result is general and can be applied to other combinatorial optimization problems with knapsack uncertainty. We also introduce robust capacity cuts which are provably stronger than the ones known in the literature. Finally, a fast iterated local search algorithm is proposed to obtain heuristic solutions for the problem. Using our branch-and-cut-and-price algorithm incorporating existing and new techniques, we are able to solve to optimality all but one open instances from the literature. In 10, we are interested in the exact solution of the vehicle routing problem with back-hauls (VRPB), a classical vehicle routing variant with two types of customers: linehaul (delivery) and backhaul (pickup) ones. We propose two branch-cut-and-price (BCP) algorithms for the VRPB. The first of them follows the traditional approach with one pricing subproblem, whereas the second one exploits the linehaul/back- haul customer partitioning and defines two pricing sub-problems. The methods incorporate elements of state-of-the-art BCP algorithms, such as rounded capacity cuts, limited-memory rank-1 cuts, strong branching, route enumeration, arc elimination using reduced costs and dual stabilization. Computational experiments show that the proposed algorithms are capable of obtaining optimal solutions for all existing benchmark instances with up to 200 customers, many of them for the first time. It is observed that the approach involving two pricing subproblems is more efficient computationally than the traditional one. 
Moreover, new instances are also proposed for which we provide tight bounds. Also, we provide results for benchmark instances of the heterogeneous fixed fleet VRPB and the VRPB with time windows. In 17, we propose a partial optimization metaheuristic under special intensification conditions (POPMUSIC) for the classical capacitated vehicle routing problem (CVRP). The proposed approach uses a branch-cut-and-price algorithm as a powerful heuristic to solve subproblems whose dimensions are typically between 25 and 200 customers. The whole algorithm can be seen as the application of local search over very large neighborhoods, starting from a single initial solution. The main computational experiments were carried out on instances having between 302 and 1000 customers. Using initial solutions generated by some of the best available metaheuristics for the problem, POPMUSIC was able to obtain consistently better solutions for long runs of up to 32 hours. In a final experiment, starting from the best known solutions available in CVRP library (CVRPLIB), POPMUSIC was able to find new best solutions for several instances, including some very large ones. ### 8.4.2 Fixed route vehicle charging problem Electric vehicles offer a pathway to more sustainable transportation, but their adoption entails new challenges not faced by their petroleum-based counterparts. One of the most challenging tasks in vehicle routing problems addressing these challenges is determining how to make good charging decisions for an electric vehicle traveling a given route. This is known as the fixed route vehicle charging problem. An exact and efficient algorithm for this task was introduced in a recent work 30. The algorithm has been used and extended by 45 to account for specific features (time windows, deterministic waiting times). Its implementation is sufficiently complex to deter researchers from adopting it. In 14, we introduce frvcpy, an open-source Python package implementing this algorithm. Our aim with the package is to make it easier for researchers to solve electric vehicle routing problems, facilitating the development of optimization tools that may ultimately enable the mass adoption of electric vehicles. ### 8.4.3 Two-echelon vehicle routing problems Guillaume Marques successefully defended his thesis 12 on solution approaches for two-echelon vehicle routing problems. This thesis includes the following two works. In 6, we propose a branch-cut-and-price algorithm for the two-echelon capacitated vehicle routing problem in which delivery of products from a depot to customers is performed using intermediate depots called satellites. Our algorithm incorporates significant improvements recently proposed in the literature for the standard capacitated vehicle routing problem such as bucket graph based labeling algorithm for the pricing problem, automatic stabilization, limited memory rank-1 cuts, and strong branching. In addition, we make some specific problem contributions. First, we introduce a new route based formulation for the problem which does not use variables to determine product flows in satellites. Second, we introduce a new branching strategy which significantly decreases the size of the branch-and-bound tree. Third, we introduce a new family of satellite supply inequalities, and we empirically show that it improves the quality of the dual bound at the root node of the branch-and-bound tree. 
Finally, extensive numerical experiments reveal that our algorithm can solve to optimality all literature instances with up to 200 customers and 10 satellites for the first time and thus double the size of instances which could be solved to optimality. The previous work has been to the case when delivery to each client should be performed within a specific time window. In 16, we consider the variant of the problem with precedence constraints for unloading and loading freight at satellites. This variant allows for storage and consolidation of freight at satellites. Thus, the total transportation cost may decrease in comparison with the alternative variant with exact freight synchronization at satellites. We suggest a mixed integer programming formulation for the problem with an exponential number of route variables and an exponential number of precedence constraints which link first-echelon and second-echelon routes. Routes at the second echelon connecting satellites and clients may consist of multiple trips and visit several satellites. A branch-cut-and-price algorithm is proposed to solve efficiently the problem. This is the first exact algorithm in the literature for the multi-trip variant of the problem. We also present a post-processing procedure to check whether the solution can be transformed to avoid freight consolidation and storage without increasing its transportation cost. Our algorithm significantly outperforms another recent one for the single-trip variant of the problem. We also show that all single-trip literature instances solved to optimality admit optimal solutions of the same cost for both variants of the problem either with precedence constraints or with exact synchronization constraints. Given the emergence of two-echelon distribution systems in several practical contexts, this paper tackles, at the strategic level, a distribution network design problem under uncertainty. This problem is characterized by the two-echelon stochastic multi-period capacitated location-routing problem (2E-SM-CLRP). In the first echelon, one has to decide the number and location of warehouse platforms as well as the intermediate distribution platforms for each period; while fixing the capacity of the links between them. In the second echelon, the goal is to construct vehicle routes that visit ship-to locations (SLs) from operating distribution platforms under a stochastic and time-varying demand and varying costs. This problem is modeled as a two-stage stochastic program with integer recourse, where the first-stage includes location and capacity decisions to be fixed at each period over the planning horizon, while routing decisions of the second echelon are determined in the recourse problem. In 13, we propose a logic-based Benders decomposition approach to solve this model. In the proposed approach, the location and capacity decisions are taken by solving the Benders master problem. After these first-stage decisions are fixed, the resulting sub-problem is a capacitated vehicle-routing problem with capacitated multiple depots (CVRP-CMD) that is solved by a branch-cut-and-price algorithm. Computational experiments show that instances of realistic size can be solved optimally within a reasonable time and provide relevant managerial insights on the design problem. ## 8.5 Cutting and packing problems In 18, we introduce and motivate a variant of the bin packing problem where bins are assigned to time slots, and minimum and maximum lags are required between some pairs of items. 
We suggest two integer programming formulations for the problem: a compact one, and a stronger formulation with an exponential number of variables and constraints. We propose a branch-cut-and-price approach which exploits the latter formulation. For this purpose, we devise separation algorithms based on a mathematical characterization of feasible assignments for two important special cases of the problem. Computational experiments are reported for instances inspired from a real-case application of chemical treatment planning in vineyards, as well as for literature instances for special cases of the problem. The experimental results show the efficiency of our branch-cut-and-price approach, as it outperforms the compact formulation of newly proposed instances, and is able to obtain improved lower and upper bounds for literature instances. In 8, we propose branch-cut-and-price algorithms for the classic bin packing problem and also for the following related problems: vector packing, variable sized bin packing and variable sized bin packing with optional items. The algorithms are defined as models for VRPSolver, a generic solver for vehicle routing problems. In that way, a simple parameterization enables the use of several branch-cut-and-price advanced elements: automatic stabilization by smoothing, limited-memory rank-1 cuts, enumeration, hierarchical strong branching and limited discrepancy search diving heuristics. As an original theoretical contribution, we prove that the branching over accumulated resource consumption, that does not increase the difficulty of the pricing subproblem, is sufficient for those bin packing models. Extensive computational results on instances from the literature show that the VRPSolver models have a performance that is very robust over all those problems, being often superior to the existing exact algorithms on the hardest instances. Several instances could be solved to optimality for the first time. We have developed an approach to solve the temporal knapsack problem (TKP) based on a very large size dynamic programming formulation 5. In this generalization of the classical knapsack problem, selected items enter and leave the knapsack at fixed dates. We solve the TKP with a dynamic program of exponential size, which is solved using a method called Successive Sublimation Dynamic Programming (SSDP). This method starts by relaxing a set of constraints from the initial problem, and iteratively reintroduces them when needed. We show that a direct application of SSDP to the temporal knapsack problem does not lead to an effective method, and that several improvements are needed to compete with the best results from the literature. # 9 Bilateral contracts and grants with industry ## 9.1 Bilateral contracts with industry We have a contract with RTE to develop strategies inspired from stochastic gradient methods to speed-up Benders' decomposition. The PhD thesis of Xavier Blanchot is part of this contract. We had a contract with Thales Avionique to study a robust scheduling problem. ## 9.2 Bilateral grants with industry Our joint project with Atoptima start-up "Solution methods for the inventory routing problem: application to waste collection in the urban environment" has been supported in 2020 by Nouvelle Aquitaine region (appel à projet "Recherche et Enseignement Supérieur"). The project is financing one half of a PhD thesis. 
# 10 Partnerships and cooperations ## 10.1 International initiatives ### 10.1.1 Participation in other international programs We have obtained an ANR PRCI grant in collaboration with the Sobolev Institute in Novosibirsk (Russia). ### 10.1.2 Visits of international scientists Two visits have been cancelled due to the pandemic (a six-month visit by a doctoral student from Brazil and a seven-month sabbatical visit by a professor from Canada). ## 10.2 Regional initiatives We have obtained a grant from Région Nouvelle Aquitaine to work on inventory-routing problems. # 11 Dissemination ## 11.1 Promoting scientific activities ### 11.1.1 Scientific events: organisation We were part of the organization team for Dataquitaine 2020, which gathered 500 participants from Nouvelle Aquitaine. #### Member of the conference program committees • Pierre Pesneau: member of the program committee (and reviewer) of ISCO 2020 (International Symposium on Combinatorial Optimization), Montreal, Canada (held online). • François Clautiaux is a member of the program committee of ROADEF, the French OR conference. ### 11.1.2 Journal #### Member of the editorial boards François Clautiaux is a member of the editorial board of OJMO (Open Journal of Mathematical Optimization). Ruslan Sadykov is an associate editor of EJCO (EURO Journal on Computational Optimization). #### Reviewer - reviewing activities • Aurélien Froger: European Journal of Operational Research, INFORMS Journal on Computing, Transportation Research Part B: Methodological, Transportation Science • Pierre Pesneau: European Journal of Operational Research, EURO Journal on Computational Optimization, Discrete Optimization • Ruslan Sadykov: SN Operations Research Forum, INFORMS Journal on Optimization, Transportation Science, Open Journal of Mathematical Optimization, INFORMS Journal on Computing, Omega, European Journal of Operational Research, Networks, RAIRO - Operations Research • François Clautiaux: Computers and Operations Research, European Journal of Operational Research, INFORMS Journal on Computing, Mathematical Programming C ### 11.1.3 Invited talks Boris Detienne: Invited talk at the 21st ROADEF conference, in Montpellier (19-21/02/2020) ### 11.1.4 Leadership within the scientific community François Clautiaux is president of the French Operations Research Society ROADEF (more than 500 members). ### 11.1.5 Scientific expertise Boris Detienne has been an expert for the European Science Foundation. ## 11.2 Teaching - Supervision - Juries ### 11.2.1 Teaching Boris Detienne is head of the Master Program in Operations Research of the University of Bordeaux. Pierre Pesneau is head of the Master of Engineering in Mathematical Optimization (CMI OPTIM) of the University of Bordeaux. François Clautiaux is head of the Master in Applied Mathematics (180 students) of the University of Bordeaux.
• Licence : François Clautiaux, Projet d'optimisation, L3, Université de Bordeaux, France • Licence : François Clautiaux, Grands domaines de l'optimisation, L1, Université de Bordeaux, France • Master : François Clautiaux, Introduction à la programmation en variables entières, M1, Université de Bordeaux, France • Master : François Clautiaux, Integer Programming, M2, Université de Bordeaux, France • Master : François Clautiaux, Algorithmes pour l'optimisation en nombres entiers, M1, Université de Bordeaux, France • Master : François Clautiaux, Programmation linéaire, M1, Université de Bordeaux, France • Master: Boris Detienne, Combinatoire et routage, ENSEIRB INPB • Licence : Boris Detienne, Optimisation, L2, Université de Bordeaux • Licence : Boris Detienne, Groupe de travail applicatif, L3, Université de Bordeaux • Master : Boris Detienne, Optimisation continue, M1, Université de Bordeaux • Master : Boris Detienne, Integer Programming, M2, Université de Bordeaux • Master : Boris Detienne, Optimisation dans l'incertain, M2, Université de Bordeaux • Licence : Aurélien Froger, Groupe de travail applicatif, L3, Université de Bordeaux, France • Master : Aurélien Froger, Optimisation dans les graphes, M1, Université de Bordeaux, France • Master : Aurélien Froger, Gestion des opérations et planification de la production, M2, Université de Bordeaux, France • Master : Ruslan Sadykov, Introduction to Constraint Programming, M2, Université de Bordeaux, France • Licence : Pierre Pesneau, Grands domaines de l'optimisation, L1, Université de Bordeaux, France • Licence : Pierre Pesneau, Programmation pour le calcul scientifique, L2, Université de Bordeaux, France • Licence : Pierre Pesneau, Optimisation, L2, Université de Bordeaux, France • Master : Pierre Pesneau, Introduction à la programmation en variables entières, M1, Université de Bordeaux, France • Master : Pierre Pesneau, Programmation linéaire, M1, Université de Bordeaux, France • Master : Pierre Pesneau, Projet Algorithmes de flot, M1, Université de Bordeaux, France • Master : Pierre Pesneau, Integer Programming, M2, Université de Bordeaux, France ### 11.2.2 Supervision • PhD: Guillaume Marques, Planification de tournées de véhicules avec transbordement en logistique urbaine : approches basées sur les méthodes exactes de l'optimisation mathématique, 2017-2020 Ruslan Sadykov (dir). • PhD: Mohamed Benkirane, "Optimisation des moyens dans la recomposition commerciale de dessertes TER" 2016-2020, François Clautiaux (dir), Boris Detienne (dir). • PhD in progress : Gaël Guillot, Aggregation and disaggregation methods for hard combinatorial problems, from November 2017, François Clautiaux (dir) and Boris Detienne (dir). • PhD in progress : Orlando Rivera Letelier, Bin Packing Problem with Generalized Time Lags, from May 2018, François Clautiaux (dir) and Ruslan Sadykov (co-dir), a co-tutelle with Universidad Adolfo Ibáñez, Peñalolén, Santiago, Chile. • PhD in progress: Xavier Blanchot, "Accélération de la Décomposition de Benders à l'aide du Machine Learning : Application à de grands problèmes d'optimisation stochastique two-stage pour les réseaux d'électricité" from September 2019, François Clautiaux (dir), Aurélien Froger (co-dir). • PhD in progress: Johan Levêque, "Conception de réseaux de distributions urbains mutualisées en mode doux", from September 2018, François Clautiaux (dir), Gautier Stauffer (co-dir). 
• PhD in progress: Mellila Kechir, "Optimization of supply-chain optimization using IoT concepts", from september 2020, François Clautiaux (dir), Walid Klibli (co-dir). • PhD in progress: Isaac Balster, "Solution methods for the inventory routing problem: application to waste collection in the urban environment", from November 2020, Ruslan Sadykov (dir). • PhD in progress Daniel Khachay, "Exact algorithms for vehicle routing problems", from September 2020, Ruslan Sadykov (dir). ### 11.2.3 Juries • François Clautiaux: Walid Klibli (Bordeaux, hdr, jury member), Marko Mladenovic (Valenciennes, PhD, reviewer), Matthieu Guillot (Grenoble, PhD, reviewer), Lucie Pansart (Grenoble, PhD, reviewer), Guillaume Marques (Bordeaux, PhD, jury member). • Aurélien Froger: Laura Catalina Echeverri Guzman (Tours, PhD, jury member). • Ruslan Sadykov: jury member for Young Reseacher (CRN and ISFP) positions at Inria Bordeaux Sud-Ouest, Guillaume Marques (Bordeaux, jury member). ## 11.3 Popularization ### 11.3.1 Articles and contents François Clautiaux was part of the content management team for the special issue "Operations Research" of Tangente (popularization of mathematics). François Clautiaux and Pierre Pesneau : popularization paper in Tangente (topic: Integer Linear Programming). # 12 Scientific production ## 12.1 Major publications • 1 articleA. Pessoa, R. Sadykov, E. Uchoa and F. Vanderbeck. 'A Generic Exact Solver for Vehicle Routing and Related Problems'.Mathematical Programming1832020, 483-523 • 2 article R. Sadykov, A. Pessoa and E. Uchoa. 'A Bucket Graph Based Labelling Algorithm for Vehicle Routing'. Transportation Science October 2020 ## 12.2 Publications of the year ### International journals • 3 article A. Arslan and B. Detienne. 'Decomposition-based approaches for a class of two-stage robust binary optimization problems'. INFORMS Journal on Computing 2021 • 4 articleT. Bulhoes, R. Sadykov, A. Subramanian and E. Uchoa. 'On the exact solution of a large class of parallel machine scheduling problems'.Journal of Scheduling232020, 411-429 • 5 article 'An iterative dynamic programming approach for the temporal knapsack problem'. European Journal of Operational Research 2021 • 6 articleG. Marques, R. Sadykov, J.-C. Deschamps and R. Dupas. 'An improved branch-cut-and-price algorithm for the two-echelon capacitated vehicle routing problem'.Computers and Operations Research1142020, 104833 • 7 article A. Pessoa, M. Poss, F. Vanderbeck, R. Sadykov and F. Vanderbeck. 'Branch-and-cut-and-price for the robust capacitated vehicle routing problem with knapsack uncertainty'. Operations Research 2020 • 8 article A. Pessoa, R. Sadykov and E. Uchoa. 'Solving Bin Packing Problems Using VRPSolver Models'. SN Operations Research Forum 2020 • 9 articleA. Pessoa, R. Sadykov, E. Uchoa and F. Vanderbeck. 'A Generic Exact Solver for Vehicle Routing and Related Problems'.Mathematical Programming1832020, 483-523 • 10 articleE. Queiroga, Y. Frota, R. Sadykov, A. Subramanian, E. Uchoa and T. Vidal. 'On the exact solution of vehicle routing problems with backhauls'.European Journal of Operational Research28712020, 76-89 • 11 article R. Sadykov, A. Pessoa and E. Uchoa. 'A Bucket Graph Based Labelling Algorithm for Vehicle Routing'. Transportation Science October 2020 ### Doctoral dissertations and habilitation theses • 12 thesis G. Marques. 'Two-echelon vehicle routing problems in city logistics : approaches based on exact methods of mathematical optimization'. 
Université de Bordeaux November 2020 ### Reports & preprints • 13 misc I. Ben Mohamed, W. Klibi, R. Sadykov, H. Şen and F. Vanderbeck. 'The Two-Echelon Stochastic Multi-period Capacitated Location-Routing Problem'. November 2020 • 14 misc N. Kullman, A. Froger, J. Mendoza and J. Goodson. 'frvcpy: An Open-Source Solver for the Fixed Route Vehicle Charging Problem'. October 2020 • 15 misc H. Lefebvre, F. Clautiaux and B. Detienne. 'A two-stage robust approach for the weighted number of tardy jobs with objective uncertainty'. July 2020 • 16 misc G. Marques, R. Sadykov, J.-C. Deschamps and R. Dupas. 'A branch-cut-and-price approach for the single-trip and multi-trip two-echelon vehicle routing problem with time windows'. November 2020 • 17 misc E. Queiroga, R. Sadykov and E. Uchoa. 'A modern POPMUSIC matheuristic for the capacitated vehicle routing problem'. November 2020 • 18 misc O. Rivera Letelier, F. Clautiaux and R. Sadykov. 'Bin Packing Problem with Time Lags'. November 2020 ## 12.3 Other ### Scientific popularization • 19 book 'La Recherche Opérationnelle, Tangente, HS 75'. Tangente (Paris) HS 75 La Recherche Opérationnelle 2020 ## 12.4 Cited publications • 21 unpublishedA. Akartunali. 'A Computational Analysis of Lower Bounds for Big Bucket Production Planning Problems'.2009, • 22 articleA. Akartunali. 'A heuristic approach for big bucket multi-level production planning problems'.European Journal of Operational Research2009, 396-411 • 23 articleM. Alba Martínez, F. Clautiaux, M. Dell'Amico and M. Iori. 'Exact algorithms for the bin packing problem with fragile objects'.Discrete Optimization103August 2013, 210-223 • 24 articleR. Baptiste. 'On Scheduling a Single Machine to Minimize a Piecewise Linear Objective Function : A Compact MIP Formulation'.Naval Research Logistics / Naval Research Logistics An International Journal5662009, 487--502 • 25 articleR. Baptiste. 'Time Indexed Formulations for Scheduling Chains on a Single Machine: An Application to Airborne Radars'.European Journal of Operational Research2009, • 26 techreportF. Clautiaux, R. Sadykov, F. Vanderbeck and Q. Viaud. 'Pattern based diving heuristics for a two-dimensional guillotine cutting-stock problem with leftovers'.Université de BordeauxDecember 2017, 1-30 • 27 articleA. Constantino. 'Mixing MIR Inequalities with Two Divisible Coefficients'.Mathematical Programming, Series A2009, 1--1 • 28 articleA. Denton. 'Optimal Allocation of Surgery Blocks to Operating Rooms Under Uncertainty'.Operations Research2009, 1--1 • 29 inproceedings B. Detienne. 'Extended formulations for robust maintenance planning at power plants'. Gaspard Monge Program for Optimization : Conference on Optimization and Practices in Industry PGMO-COPI14 Saclay, France October 2014 • 30 articleA. Froger, J. Mendoza, O. Jabali and G. Laporte. 'Improved formulations and algorithmic components for the electric vehicle routing problem with nonlinear charging functions'.Computers & Operations Research1042019, 256--294 • 31 inproceedings M. Godinho, L. Gouveia, T. Magnanti, P. Pesneau and J. Pires. 'On Time-Dependent Model for Unit Demand Vehicle Routing Problems'. International Conference on Network Optimization, INOC Spa, Belgium International Network Optimization Conference (INOC) 2007 • 32 techreport M. Godinho, L. Gouveia, T. Magnanti, P. Pesneau and J. Pires. 'On a Time-Dependent Model for the Unit Demand Vehicle Routing Problem'. 11-2007 Centro de Investigacao Operacional da Universidade de Lisboa 2007 • 33 inproceedings R. Griset, P. Bendotti, B. 
Detienne, H. Grevet, M. Porcheron, H. Şen and F. Vanderbeck. 'Efficient formulations for nuclear outages using price and cut, Snowcap project.'. PGMO Days 2017 Saclay, France November 2017 • 34 inproceedings R. Griset, P. Bendotti, B. Detienne, H. Grevet, M. Porcheron and F. Vanderbeck. 'Scheduling nuclear outage with cut and price (Snowcap)'. Mathematical Optimization in the Decision Support Systems for Efficient and Robust Energy Networks Final Conference Modena, Italy March 2017 • 35 inproceedings R. Griset. 'Optimisation des arrêts nucléaires : une amélioration des solutions développées par EDF suite au challenge ROADEF 2010'. 18ème conférence de la société française de recherche opérationnelle et d'aide à la décision ROADEF 2017 Metz, France February 2017 • 36 articleY. Guan, S. Ahmed, A. Miller and G. Nemhauser. 'On formulations of the stochastic uncapacitated lot-sizing problem'.Operations Research Letters342006, 241-250 • 37 articleY. Guan, S. Ahmed, G. Nemhauser and A. Miller. 'A branch-and-cut algorithm for the stochastic uncapacitated lot-sizing problem'.Mathematical Programming1052006, 55-84 • 38 inproceedings J. Han, P. Bendotti, B. Detienne, G. Petrou, M. Porcheron, R. Sadykov and F. Vanderbeck. 'Extended Formulation for Maintenance Planning at Power Plants'. ROADEF - 15ème congrès annuel de la Société française de recherche opérationnelle et d'aide à la décision Société française de recherche opérationnelle et d'aide à la décision Bordeaux, France February 2014 • 39 inproceedingsY. Hendel and R. Sadykov. 'Timing problem for scheduling an airborne radar'.Proceedings of the 11th International Workshop on Project Management and SchedulingIstanbul, TurkeyApril 2008, 132-135 • 40 articleD. Huygens, M. Labbé, A. Mahjoub and P. Pesneau. 'The two-edge connected hop-constrained network design problem: Valid inequalities and branch-and-cut'.Networks4912007, 116-133 • 41 inproceedingsA. Joncour. 'Mathematical programming formulations for the orthogonal 2d knapsack problem'.Livre des résumé du 9ème Congrès de la Société Française de Recherche Opérationnelle et d'Aide à la DécisionFebruary 2008, 255--256 • 42 articleC. Joncour and A. Pêcher. 'Consecutive ones matrices for multi-dimensional orthogonal packing problems'.Journal of Mathematical Modelling and Algorithms1112012, 23-44 • 43 articleC. Joncour, A. Pêcher and P. Valicov. 'MPQ-trees for the orthogonal packing problem'.Journal of Mathematical Modelling and Algorithms111March 2012, 3-22 • 44 phdthesis C. Joncour. 'Problèmes de placement 2D et application à l'ordonnancement : modélisation par la théorie des graphes et approches de programmation mathématique'. University Bordeaux I December 2010 • 45 article N. Kullman, J. Goodson and J. Mendoza. 'Electric Vehicle Routing with Public Charging Stations'. Transportation Science 2020 • 46 articleM.~Mourgaya and F. Vanderbeck. 'Column generation based heuristic for tactical planning in multi period vehicle routing'.European Journal of Operational Research18332007, 1028-1041 • 47 articleM.~Padberg and G.~Rinaldi. 'A branch-and-cut algorithm for the resolution of large-scale symmetric traveling salesman problems'.SIAM Review3311991, 60--100 • 48 inproceedings P. Meurdesoif. 'A Branch­-and­-Cut algorithm to optimize sensor installation in a network'. Graph and Optimization Meeting GOM2008 France Saint-Maximin 2008 • 49 inproceedings P. Meurdesoif, P. Pesneau and F. Vanderbeck. 'Meter installation for monitoring network traffic'. 
International Conference on Network Optimization, INOC Spa, Belgium International Network Optimization Conference (INOC) 2007 • 50 techreportF. Michel. 'A Column Generation based Tactical Planning Method for Inventory Routing'.INRIA2008, • 51 articleN. Michel. 'Knapsack Problems with Setups'.European Journal of Operational Research1962009, 909-918 • 52 articleS. Michel and F. Vanderbeck. 'A Column Generation based Tactical Planning Method for Inventory Routing'.Operations Research6022012, 382-397 • 53 inproceedings P. Pesneau, F. Clautiaux and J. Guillot. 'Aggregation technique applied to a clustering problem'. 4th International Symposium on Combinatorial Optimization (ISCO 2016) Vietri sul Mare, Italy May 2016 • 54 inproceedings P. Pesneau, F. Clautiaux and J. Guillot. 'Aggregation technique applied to a clustering problem for waste collection'. ROADEF 2016 Compiègne, France February 2016 • 55 inproceedings M. Poggi, D. Pecin, M. Reis, C. Ferreira, K. Neves, R. Sadykov and F. Vanderbeck. 'Equipment/Operator task scheduling with BAPCOD'. Column Generation 2012 Bromont, Canada June 2012 • 56 inproceedings N. Rahmani, B. Detienne, R. Sadykov and F. Vanderbeck. 'A Column Generation Based Heuristic for the Dial-A-Ride Problem'. International Conference on Information Systems, Logistics and Supply Chain (ILS) Bordeaux, France June 2016 • 57 articleR. Sadykov. 'A branch-and-check algorithm for minimizing the sum of the weights of the late jobs on a single machine with release dates'.European Journal of Operations Research18932008, 1284--1304 • 58 techreportR. Sadykov. 'A polynomial algorithm for a simple scheduling problem at cross docking terminals'.RR-7054INRIA2009, • 59 inproceedingsR. Sadykov. 'On scheduling malleable jobs to minimise the total weighted completion time'.13th IFAC Symposium on Information Control Problems in ManufacturingRussie Moscow2009, • 60 articleR. Sadykov. 'Scheduling incoming and outgoing trucks at cross docking terminals to minimize the storage cost'.Annals of Operations Research20112012, 423-440 • 61 articleR. Sadykov and F. Vanderbeck. 'Bin Packing with conflicts: a generic branch-and-price algorithm'.INFORMS Journal on Computing2522013, 244-255 • 62 article G. Stauffer. 'The p-median Polytope of Y-free Graphs: An Application of the Matching Theory'. Operations Research Letters 2008 • 63 inproceedings L. Vanderbeck. 'A multi scalable model based on a connexity graph representation'. 11th International Conference on Computer Design and Operation in the Railway and Other Transit Systems COMPRAIL'08 Toledo, Spain September 2008 • 64 techreportB. Vignac. 'Hierarchical Heuristic for the GRWA Problem in WDM Networks with Delay Constraints'.INRIA2009, 18 • 65 techreportF. Vignac. 'Nested Decomposition Approach to an Optical Network Design Problem'.INRIA2009, 18 • 66 techreportF. Vignac. 'Reformulation and Decomposition Approaches for Traffic Routing in Optical Networks'.INRIA2009, 36 • 67 phdthesis B. Vignac. 'Résolution d'un problème de groupage dans le réseaux optiques maillés'. Université de Montréal January 2010 • 68 inproceedings L. ~Gély. 'Real-time train scheduling at SNCF'. 1st Workshop on Robust Planning and Rescheduling in Railways Utrecht ARRIVAL meeting on Robust planning and Rescheduling in Railways April 2007
2021-06-16 13:56:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 23, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32711219787597656, "perplexity": 3061.2174252863315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487623942.48/warc/CC-MAIN-20210616124819-20210616154819-00163.warc.gz"}
https://www.gradesaver.com/textbooks/math/prealgebra/prealgebra-7th-edition/chapter-3-review-page-206/74
## Prealgebra (7th Edition) $-35x$ $7x$ is the same as $7\times x$. $-5(7x)$ can be written as $-5\times(7\times x)$ $\longrightarrow$ associative property =$(-5\times7)\times x$ $\longrightarrow$multiply =$-35\times x$ A variable multiplied by a number can also be written as a variable term with a coefficient, so $-35\times x=-35x$
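As a quick sanity check of this simplification (not part of the original solution), the same regrouping can be reproduced with SymPy; the snippet below is only an illustration and assumes a Python environment with sympy installed.

```python
import sympy as sp

x = sp.symbols('x')
expr = -5 * (7 * x)        # -5(7x): the same associative regrouping as above
print(expr)                # -35*x
print(sp.simplify(expr - (-35 * x)) == 0)  # True: -5(7x) and -35x are identical
```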
2018-07-18 22:41:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7252982258796692, "perplexity": 447.7060525876163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590329.62/warc/CC-MAIN-20180718213135-20180718233135-00486.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/divide-and-check-your-answer-by-the-corresponding-multiplication-in-the-following-1936-36-concept-whole-numbers_136950
# Divide and check your answer by the corresponding multiplication in the following: 1936 ÷ 36 - Mathematics

#### Solution

$\begin{array}{r} 53\\ 36\,|\,\overline{1936}\\ \underline{-180\phantom{6}}\\ 136\\ \underline{-108}\\ 28 \end{array}$

Dividend = 1936, Divisor = 36, Quotient = 53, Remainder = 28

Check: Divisor × Quotient + Remainder = 36 × 53 + 28 = 1908 + 28 = 1936 = Dividend

Hence, Dividend = Divisor × Quotient + Remainder

#### APPEARS IN

RS Aggarwal Class 6 Mathematics Chapter 3 Whole Numbers Exercise 3E | Q 1.1 | Page 56
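As a quick programmatic check of the division identity above (not part of the textbook solution), here is a short Python snippet; `divmod` returns the quotient and remainder in one call.

```python
dividend, divisor = 1936, 36
quotient, remainder = divmod(dividend, divisor)
print(quotient, remainder)                     # 53 28
# The "check by multiplication" used in the solution:
assert divisor * quotient + remainder == dividend
```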
2021-05-09 05:12:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6798499822616577, "perplexity": 4591.587062822954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988955.89/warc/CC-MAIN-20210509032519-20210509062519-00516.warc.gz"}
https://barneyshi.me/2021/12/17/Pacific-Atlantic-Water-Flow/
# Leetcode 417 - Pacific Atlantic Water Flow

Note:

• We have to use backtracking. Imagine you have reached one ocean by following a path; we still do not know whether another branch of that path reaches the other ocean, so we need to backtrack and keep exploring.
• Base case: visited[] for a cell returns [true, true]. Note that if visited[] returns something like [true, false], it does not necessarily mean that the cell [i, j] cannot flow to both oceans.
• Use visited[] to remember results we have already computed.
• Use path so that we do not revisit cells within the current search.

Question: There is an m x n rectangular island that borders both the Pacific Ocean and Atlantic Ocean. The Pacific Ocean touches the island's left and top edges, and the Atlantic Ocean touches the island's right and bottom edges. The island is partitioned into a grid of square cells. You are given an m x n integer matrix heights where heights[r][c] represents the height above sea level of the cell at coordinate (r, c). The island receives a lot of rain, and the rain water can flow to neighboring cells directly north, south, east, and west if the neighboring cell's height is less than or equal to the current cell's height. Water can flow from any cell adjacent to an ocean into the ocean. Return a 2D list of grid coordinates result where result[i] = [ri, ci] denotes that rain water can flow from cell (ri, ci) to both the Pacific and Atlantic oceans.

Example:

Code:
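The post's original code block did not survive here, so below is a minimal sketch of one standard way to solve the problem: rather than the memoized backtracking described in the notes above, it runs a DFS from the cells bordering each ocean, walking "uphill", and then intersects the two reachable sets. Function and variable names are illustrative, not the author's.

```python
from typing import List

def pacific_atlantic(heights: List[List[int]]) -> List[List[int]]:
    if not heights or not heights[0]:
        return []
    n, m = len(heights), len(heights[0])

    def dfs(r: int, c: int, seen: set) -> None:
        # Walk uphill: if heights[nr][nc] >= heights[r][c], water at (nr, nc)
        # can flow down to (r, c), so (nr, nc) also drains to this ocean.
        seen.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < n and 0 <= nc < m
                    and (nr, nc) not in seen
                    and heights[nr][nc] >= heights[r][c]):
                dfs(nr, nc, seen)

    pacific, atlantic = set(), set()
    for r in range(n):
        dfs(r, 0, pacific)          # left edge touches the Pacific
        dfs(r, m - 1, atlantic)     # right edge touches the Atlantic
    for c in range(m):
        dfs(0, c, pacific)          # top edge touches the Pacific
        dfs(n - 1, c, atlantic)     # bottom edge touches the Atlantic
    return [[r, c] for r, c in pacific & atlantic]
```

For large grids an iterative stack (or an explicit queue) would avoid hitting Python's recursion limit, but the logic stays the same.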
2022-10-06 11:17:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3168654143810272, "perplexity": 1872.0644335410036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337803.86/warc/CC-MAIN-20221006092601-20221006122601-00409.warc.gz"}
http://sarahmbrown.org/research/detect-simpsons-paradox
# Detecting Simpson's Paradox

Simpson's paradox is the phenomenon that the trend of an association in the whole population reverses within the subpopulations defined by a categorical variable. Detecting Simpson's paradox reveals surprising and interesting patterns of a data set to the user. It is generally discussed in terms of binary variables, and studies exploring it for continuous variables are relatively rare. This paper describes a method to discover Simpson's paradox for the trend of a pair of continuous variables.
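The described setting can be illustrated with a small, hedged sketch (not the paper's actual method): fit a simple linear trend on the whole data and inside each subgroup, and flag the case where every subgroup slope has the opposite sign of the overall slope. The function name and the toy data below are made up for the example.

```python
import numpy as np

def slope(x, y):
    # Least-squares slope of y against x.
    return np.polyfit(x, y, 1)[0]

def simpsons_reversal(x, y, groups):
    """True if the overall x-y trend has the opposite sign of the trend
    inside every subgroup (a Simpson's-paradox style reversal)."""
    overall = slope(x, y)
    per_group = [slope(x[groups == g], y[groups == g]) for g in np.unique(groups)]
    return all(np.sign(s) == -np.sign(overall) for s in per_group), overall, per_group

# Toy data: each group trends downward, but the groups are offset so the
# pooled data trends upward.
rng = np.random.default_rng(0)
x1 = rng.uniform(0, 1, 100); y1 = 1.0 - x1 + rng.normal(0, 0.05, 100)
x2 = rng.uniform(2, 3, 100); y2 = 4.0 - x2 + rng.normal(0, 0.05, 100)
x = np.concatenate([x1, x2]); y = np.concatenate([y1, y2])
groups = np.array(["A"] * 100 + ["B"] * 100)
print(simpsons_reversal(x, y, groups))
```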
2018-12-17 04:06:21
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.877567708492279, "perplexity": 539.9479639580901}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828056.99/warc/CC-MAIN-20181217020710-20181217042710-00102.warc.gz"}
http://www.reference.com/browse/wiki/Mean_curvature
Mean curvature

In mathematics, the mean curvature $H$ of a surface $S$ is an extrinsic measure of curvature that comes from differential geometry and that locally describes the curvature of an embedded surface in some ambient space such as Euclidean space. The concept was introduced by Sophie Germain in her work on elasticity theory.

Definition

Let $p$ be a point on the surface $S$. Consider all curves $C_i$ on $S$ passing through the point $p$ on the surface. Every such $C_i$ has an associated curvature $K_i$ given at $p$. Of those curvatures $K_i$, at least one is characterized as maximal $\kappa_1$ and one as minimal $\kappa_2$, and these two curvatures $\kappa_1,\kappa_2$ are known as the principal curvatures of $S$. The mean curvature at $p\in S$ is the average of the curvatures, hence the name:
$$H = \frac{1}{2}(\kappa_1 + \kappa_2).$$
More generally, for a hypersurface $T$ the mean curvature is given as
$$H=\frac{1}{n}\sum_{i=1}^{n} \kappa_{i}.$$
More abstractly, the mean curvature is ($\frac{1}{n}$ times) the trace of the second fundamental form (or equivalently, the shape operator). Additionally, the mean curvature $H$ may be written in terms of the covariant derivative $\nabla$ as
$$H\vec{n} = g^{ij}\nabla_i\nabla_j X,$$
using the Gauss-Weingarten relations, where $X(x,t)$ is a family of smoothly embedded hypersurfaces, $\vec{n}$ a unit normal vector, and $g_{ij}$ the metric tensor.

A surface is a minimal surface if and only if the mean curvature is zero. Furthermore, a surface which evolves under the mean curvature of the surface $S$ is said to obey a heat-type equation called the mean curvature flow equation. The sphere is the only surface of constant positive mean curvature without boundary or singularities.

Surfaces in 3D space

For a surface defined in 3D space, the mean curvature is related to a unit normal of the surface:
$$2H = \nabla \cdot \hat{n},$$
where the normal chosen affects the sign of the curvature: the curvature is positive if the surface curves "away" from the normal. The formula above holds for surfaces in 3D space defined in any manner, as long as the divergence of the unit normal may be calculated. For the special case of a surface defined as a function of two coordinates, e.g. $z = S(x, y)$, and using the downward-pointing normal, the (doubled) mean curvature expression is
$$2H = \nabla \cdot \left(\frac{\nabla(S - z)}{|\nabla(S - z)|}\right)
    = \nabla \cdot \left(\frac{\nabla S}{\sqrt{1 + (\nabla S)^2}}\right)
    = \frac{\left(1 + \left(\frac{\partial S}{\partial x}\right)^2\right)\frac{\partial^2 S}{\partial y^2}
      - 2\frac{\partial S}{\partial x}\frac{\partial S}{\partial y}\frac{\partial^2 S}{\partial x\,\partial y}
      + \left(1 + \left(\frac{\partial S}{\partial y}\right)^2\right)\frac{\partial^2 S}{\partial x^2}}
      {\left(1 + \left(\frac{\partial S}{\partial x}\right)^2 + \left(\frac{\partial S}{\partial y}\right)^2\right)^{3/2}}.$$
If the surface is additionally known to be axisymmetric with $z = S(r)$,
$$2H = \frac{\frac{\partial^2 S}{\partial r^2}}{\left(1 + \left(\frac{\partial S}{\partial r}\right)^2\right)^{3/2}}
     + \frac{\frac{\partial S}{\partial r}}{r\left(1 + \left(\frac{\partial S}{\partial r}\right)^2\right)^{1/2}}.$$

Mean curvature in fluid mechanics

An alternate definition is occasionally used in fluid mechanics to avoid factors of two:
$$H_f = \kappa_1 + \kappa_2.$$
This results in the pressure according to the Young-Laplace equation inside an equilibrium spherical droplet being surface tension times $H_f$; the two curvatures are equal to the reciprocal of the droplet's radius: $\kappa_1 = \kappa_2 = r^{-1}$.

Minimal surfaces

A minimal surface is a surface which has zero mean curvature at all points. Classic examples include the catenoid, helicoid and Enneper surface. Recent discoveries include Costa's minimal surface and the Gyroid. Surfaces of constant mean curvature are an extension of the idea of a minimal surface.
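As a quick symbolic check of the graph formula above (this snippet is only an illustration and is not part of the original article), SymPy can evaluate the mean curvature of a surface given as $z = S(x,y)$; with the downward-normal sign convention used here, the upper unit hemisphere should give the constant value $-1$, and a plane should give $0$.

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

def mean_curvature_graph(S):
    # H for a surface given as a graph z = S(x, y), using the formula above.
    Sx, Sy = sp.diff(S, x), sp.diff(S, y)
    Sxx, Syy, Sxy = sp.diff(S, x, 2), sp.diff(S, y, 2), sp.diff(S, x, y)
    twoH = ((1 + Sx**2) * Syy - 2 * Sx * Sy * Sxy + (1 + Sy**2) * Sxx) \
           / (1 + Sx**2 + Sy**2) ** sp.Rational(3, 2)
    return twoH / 2

# Upper hemisphere of radius 1, evaluated at an interior point: expected -1.
H_sphere = mean_curvature_graph(sp.sqrt(1 - x**2 - y**2))
print(sp.simplify(H_sphere.subs({x: sp.Rational(1, 3), y: sp.Rational(1, 4)})))

# A plane is a (trivial) minimal surface: expected 0.
print(sp.simplify(mean_curvature_graph(2*x + 3*y + 1)))
```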
2013-12-09 12:15:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 35, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9478501677513123, "perplexity": 651.7478733952728}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163968717/warc/CC-MAIN-20131204133248-00096-ip-10-33-133-15.ec2.internal.warc.gz"}
https://www.excellup.com/classnine/mathnine/quadriextwo.aspx
## Exercise 8.2

Question 1: ABCD is a quadrilateral in which P, Q, R and S are mid-points of the sides AB, BC, CD and DA. AC is a diagonal. Show that:

• SR || AC and SR = 1/2 AC
• PQ = SR
• PQRS is a parallelogram

Answer: In ΔACD, S and R are the mid-points of sides AD and DC, so by the mid-point theorem SR || AC and SR = 1/2 AC. Similarly, in ΔACB, P and Q are the mid-points of sides AB and BC, so PQ || AC and PQ = 1/2 AC. Hence PQ || SR and PQ = SR. A quadrilateral with one pair of opposite sides equal and parallel is a parallelogram, so PQRS is a parallelogram.

Question 2: ABCD is a rhombus and P, Q, R and S are the mid-points of the sides AB, BC, CD and DA respectively. Show that the quadrilateral PQRS is a rectangle.

Answer: As in Question 1, the mid-point theorem gives PQ || AC || SR (in ΔABC and ΔACD) and PS || BD || QR (in ΔABD and ΔBCD), so PQRS is a parallelogram. The diagonals of a rhombus are perpendicular, so AC ⊥ BD. Since PQ || AC and PS || BD, we get PQ ⊥ PS, i.e. ∠QPS = 90°. A parallelogram with one right angle is a rectangle, hence PQRS is a rectangle.

Question 3: ABCD is a trapezium in which AB || DC, BD is a diagonal and E is the mid-point of AD. A line is drawn through E parallel to AB intersecting BC at F. Show that F is the mid-point of BC.

Answer: Let the line through E meet the diagonal BD at G. In ΔABD, E is the mid-point of AD and EG || AB, so by the converse of the mid-point theorem G is the mid-point of BD, i.e. DG = GB. Since AB || DC and EF || AB, we also have EF || DC. Now in ΔBDC, G is the mid-point of BD and GF || DC, so F is the mid-point of BC.

Question 4: In a parallelogram ABCD, E and F are the mid-points of sides AB and CD respectively. Show that the line segments AF and EC trisect the diagonal BD.

Answer: Since ABCD is a parallelogram, AB || DC and AB = DC, so AE || FC and AE = 1/2 AB = 1/2 DC = FC. A quadrilateral with one pair of opposite sides equal and parallel is a parallelogram, so AECF is a parallelogram and therefore EC || AF. Let AF and EC meet the diagonal BD at P and Q respectively. In ΔDQC, F is the mid-point of DC and FP || CQ (both segments lie along AF and EC), so P is the mid-point of DQ, i.e. DP = PQ. In ΔAPB, E is the mid-point of AB and EQ || AP, so Q is the mid-point of BP, i.e. PQ = QB. Hence DP = PQ = QB, so AF and EC trisect BD.

Question 5: Show that the line segments joining the mid-points of the opposite sides of a quadrilateral bisect each other.

Answer: Let ABCD be a quadrilateral and let P, Q, R and S be the mid-points of AB, BC, CD and DA. By the mid-point theorem, SR || AC (in ΔACD), PQ || AC (in ΔABC), PS || BD (in ΔABD) and QR || BD (in ΔBCD). Hence PQ || SR and PS || QR, so PQRS is a parallelogram. The segments joining the mid-points of opposite sides of ABCD are PR and QS, which are the diagonals of the parallelogram PQRS, and the diagonals of a parallelogram bisect each other. (A quick numeric check of this statement is given after Question 6.)

Question 6: ABC is a triangle right angled at C. A line through the mid-point M of hypotenuse AB and parallel to BC intersects AC at D. Show that:

• D is the mid-point of AC
• MD ⊥ AC
• CM = MA = 1/2 AB

Answer: (i) In ΔABC, M is the mid-point of AB and MD || BC, so by the converse of the mid-point theorem D is the mid-point of AC. (ii) Since MD || BC and AC is a transversal, ∠ADM = ∠ACB = 90° (corresponding angles); hence ∠MDC = 180° − 90° = 90°, i.e. MD ⊥ AC. (iii) In ΔMDC and ΔMDA: CD = AD (D is the mid-point of AC), ∠MDC = ∠MDA = 90°, and MD = MD. So ΔMDC ≅ ΔMDA (SAS), which gives CM = MA. Since M is the mid-point of AB, MA = 1/2 AB, so CM = MA = 1/2 AB.
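As a quick numeric illustration of Question 5 (not part of the textbook solution), the snippet below builds an arbitrary quadrilateral, takes the four side mid-points, and verifies that the segments joining mid-points of opposite sides share the same mid-point, i.e. bisect each other.

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C, D = rng.uniform(-5, 5, size=(4, 2))    # an arbitrary quadrilateral ABCD

# Mid-points of the sides AB, BC, CD and DA.
P, Q, R, S = (A + B) / 2, (B + C) / 2, (C + D) / 2, (D + A) / 2

# PR joins mid-points of opposite sides AB and CD; QS joins those of BC and DA.
# Their mid-points coincide (both equal (A+B+C+D)/4), so the segments bisect each other.
print(np.allclose((P + R) / 2, (Q + S) / 2))    # True
```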
2022-07-03 15:27:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5596545934677124, "perplexity": 2675.599179784368}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104244535.68/warc/CC-MAIN-20220703134535-20220703164535-00280.warc.gz"}
http://math.stackexchange.com/questions/81312/exact-sequence-of-abelian-groups-restricted-to-torsion-subgroups
Exact sequence of abelian groups restricted to torsion subgroups

For any abelian group $G$, there is a torsion subgroup $TG=\{g\in G : ng=0 \text{ for some non-zero integer } n\}$. Now, let $A\to B\to C$ be an exact sequence of abelian groups. Is it true that $TA\to TB\to TC$ is exact? (All maps are the restrictions of the given maps.)

No. Consider the exact sequence $\mathbb Q \to \mathbb Q/\mathbb Z \to 0$. This restricts to $0 \to \mathbb Q/\mathbb Z \to 0,$ which is no longer exact. Another (closely related) example is given by $\mathbb Z \to \mathbb Z/n\mathbb Z \to 0,$ for any $n > 1$.
2016-06-25 17:42:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9542494416236877, "perplexity": 153.19356592468355}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393463.1/warc/CC-MAIN-20160624154953-00003-ip-10-164-35-72.ec2.internal.warc.gz"}
http://stackoverflow.com/questions/5871338/selecting-image-in-folder-using-monotouch?answertab=oldest
# Selecting image in folder using MonoTouch I'm using monotouch and am having a brainfreeze. I'm trying to by code use an image that's in a folder. The project structure: Solution - Project -Images -picture.jpeg The code: UIImage image = UIImage.FromFile("\\Images\\picture.jpeg"); And I've also tried: UIImage image = UIImage.FromFile("Images\\picture.jpeg"); The build action is set to content and I can use the picture without crashing if I just leave it in the root of the project. What's my problem? Thanks - Correct code: UIImage image = UIImage.FromFile("Images/picture.jpeg");
2014-08-29 12:47:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5591856241226196, "perplexity": 7296.230597669151}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500832155.37/warc/CC-MAIN-20140820021352-00190-ip-10-180-136-8.ec2.internal.warc.gz"}
http://icpc.njust.edu.cn/Problem/Zju/3773/
# Paint the Grid

Time Limit: 2 Seconds Memory Limit: 65536 KB

## Description

Leo has a grid with N × N cells. He wants to paint each cell either black or white. After he has finished painting, the grid will be divided into several parts. Any two connected cells should be in the same part, and any two unconnected cells should be in different parts. Two cells are connected if they share an edge and have the same color. If two cells are each connected to the same cell, the two cells are also connected. The size of a part is the number of cells in it. Leo wants to have at least ⌊N×4÷3⌋ different sizes (⌊x⌋ is the maximum integer which is less than or equal to x). Can you tell him how to paint the grid?

## Input

There are multiple test cases. The first line of input is an integer T indicating the number of test cases. For each test case: There is one integer N (4 <= N <= 100).

## Output

For each test case, output a solution to the painting. You should output exactly N lines, with each line containing N characters, either 'X' (black) or 'O' (white). See the sample output for details. This problem is special judged, so any correct answer will be accepted.

## Sample Input

1 5

## Sample Output

XOXXX OOOOO XXXXX OXXOO OXXOO
2020-02-22 17:16:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17919446527957916, "perplexity": 910.544046702399}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145708.59/warc/CC-MAIN-20200222150029-20200222180029-00477.warc.gz"}
https://www.gradesaver.com/to-build-a-fire/q-and-a/if-the-setting-were-the-present-what-things-would-the-man-have-had-at-his-disposal-that-would-have-improved-his-survival-chances-141701
# If the setting were the present, what things would the man have had at his disposal that would have improved his survival chances? To Build a Fire and yes I'm asking this again
2018-05-24 08:46:44
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8153192400932312, "perplexity": 2291.782042274254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866107.79/warc/CC-MAIN-20180524073324-20180524093324-00182.warc.gz"}
https://www.lessonplanet.com/teachers/finding-multiples-4th-6th
# Finding Multiples In this multiples learning exercise, students underline multiples of 2, 5, 10, 3, 4 and 6 with various given colors. Some numbers given are multiples of more than 1 number.
2017-07-23 09:16:33
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9326421022415161, "perplexity": 1690.363618485318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424296.90/warc/CC-MAIN-20170723082652-20170723102652-00430.warc.gz"}
https://www.bookofproofs.org/branches/minkowski-inequality/proof/
## Proof: (related to "Minkowski's Inequality")

• By hypothesis, let $p\in[1,\infty)$ and $x=(x_1,x_2,\ldots,x_n)$ and $y=(y_1,y_2,\ldots,y_n)$ be two vectors of a vector space $V$ over the field of real numbers $\mathbb R$ or the field of complex numbers $\mathbb C$.
• If $p=1$, the inequality follows from the triangle inequality.
• Let $p > 1$ and define $q$ by $\frac 1p+\frac 1q=1,$ i.e. $q=\frac{p}{p-1}.$
• Consider the vector $z\in\mathbb C^n$ (or, in the real case, $z\in\mathbb R^n$) with $z_\nu:=|x_\nu+y_\nu|^{p-1}$ for $\nu=1,\ldots,n.$
• Then we get $z_\nu^q=|x_\nu+y_\nu|^{q(p-1)}=|x_\nu+y_\nu|^{p}$ for $\nu=1,\ldots,n,$ and this yields for the q-norm of the vector $z$ $$\begin{array}{rcl}||z||_q&=&\left(\sum_{\nu=1}^n|z_\nu|^q\right)^{1/q}\\ &=&\left(\sum_{\nu=1}^n|x_\nu+y_\nu|^{p}\right)^{1/q}\\ &=&\left(\sum_{\nu=1}^n|x_\nu+y_\nu|^{p}\right)^{1/p\cdot p/q}\\ &=&||x+y||_p^{p/q}.\end{array}$$
• We can now estimate, using the triangle inequality and then Hölder's inequality twice: $$\sum_{\nu=1}^n|x_\nu+y_\nu||z_\nu|\le\sum_{\nu=1}^n|x_\nu||z_\nu|+\sum_{\nu=1}^n|y_\nu||z_\nu|\le ||x||_p||z||_q+||y||_p||z||_q=(||x||_p+||y||_p)||z||_q.$$
• This yields by the definition of the vector $z$ $$\begin{array}{rcl}||x+y||_p^p&=&\sum_{\nu=1}^n|x_\nu+y_\nu|^p\\ &=&\sum_{\nu=1}^n|x_\nu+y_\nu||x_\nu+y_\nu|^{p-1}\\ &=&\sum_{\nu=1}^n|x_\nu+y_\nu||z_\nu|\\ &\le&(||x||_p+||y||_p)||z||_q\\ &=&(||x||_p+||y||_p)||x+y||_p^{p/q}.\end{array}$$
• Since $p-\frac pq=1$, we get (dividing both sides of the last inequality by $||x+y||_p^{p/q}$) $$||x+y||_p\le ||x||_p+||y||_p.$$

q.e.d
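As a quick numerical illustration of the inequality just proved (my addition, not part of the original proof page), the snippet below checks $||x+y||_p \le ||x||_p + ||y||_p$ for random real vectors and several values of $p \ge 1$.

```python
import numpy as np

def p_norm(v, p):
    # The p-norm used throughout the proof.
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

rng = np.random.default_rng(1)
for p in (1, 1.5, 2, 3, 7):
    x = rng.normal(size=10)
    y = rng.normal(size=10)
    lhs = p_norm(x + y, p)
    rhs = p_norm(x, p) + p_norm(y, p)
    assert lhs <= rhs + 1e-12, (p, lhs, rhs)
    print(f"p={p}: ||x+y||_p = {lhs:.4f} <= {rhs:.4f} = ||x||_p + ||y||_p")
```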
2020-02-29 01:41:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9641035795211792, "perplexity": 737.3346500162954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875148163.71/warc/CC-MAIN-20200228231614-20200229021614-00290.warc.gz"}
http://typeocaml.com/2015/01/25/memoize-rec-untying-the-recursive-knot/
# Recursive Memoize & Untying the Recursive Knot

When I wrote the section When we need later substitution in Mutable, I struggled. I found out that I didn't fully understand the recursive memoize myself, so what I had to do was just copy the knowledge from Real World OCaml. Luckily, after the post was published, glacialthinker commented on reddit:

(I never thought before that a recursive function can be split like this, honestly. I don't know how to induct such a way and can't explain more. I guess we just learn it as it is and continue. More descriptions of it are in the book.)

This is "untying the recursive knot". And I thought I might find a nice wikipedia or similar entry... but I mostly find Harrop. :) He actually had a nice article on this many years back in his OCaml Journal. Anyway, if the author swings by, searching for that phrase may turn up more material on the technique.

It greatly enlightened me. Hence, in this post, I will share with you my further understanding of recursive memoize, together with the key cure, untying the recursive knot, that makes it possible.

# Simple Memoize revamped

We talked about the simple memoize before. It takes a non-recursive function and returns a new function which has exactly the same logic as the original function but with the new ability of caching (argument, result) pairs.

let memoize f =
  let table = Hashtbl.Poly.create () in
  let g x =
    match Hashtbl.find table x with
    | Some y -> y
    | None ->
      let y = f x in
      Hashtbl.add_exn table ~key:x ~data:y;
      y
  in
  g

The greatness of memoize is its flexibility: as long as f takes a single argument, memoize can make a memo version out of it without touching anything inside f. This means that while we create f, we don't need to worry about the ability of caching but can just focus on its own correct logic. After we finish f, we simply let memoize do its job. Memoization and functionality are perfectly separated.

Unfortunately, the simple memoize cannot handle recursive functions. If we try to do memoize f_rec, we will get this:

f_rec is a recursive function, so it will call itself inside its body. memoize f_rec will produce f_rec_memo, which is a little similar to the previous f_memo, yet with the difference that it is not a simple single call of f_rec arg like we had with f arg. Instead, f_rec arg may call f_rec again and again with new arguments. Let's look at it more closely with an example. Say arg in the recursive process is always decreased by 1 until it reaches 0.

1. Let's first do f_rec_memo 4.
2. f_rec_memo will check the 4 against Hashtbl and it is not in.
3. So f_rec 4 will be called for the first time.
4. Then f_rec 3, f_rec 2, f_rec 1 and f_rec 0.
5. After the 5 calls, the result is obtained. Then the (4, result) pair is stored in Hashtbl and returned.
6. Now let's do f_rec_memo 3. What will happen? Obviously, 3 won't be found in Hashtbl, as only 4 was stored in step 5.
7. But should the (3, result) pair have been found? Yes, of course, because we dealt with 3 in step 4, right?
8. Why has 3 been computed but not stored?
9. It is because we called f_rec 3, not f_rec_memo 3, while only the latter has the caching ability.

Thus, we can use memoize f_rec to produce a memoized version out of f_rec anyway, but it changes only the surface, not the f_rec inside, hence it is not that useful. How can we make it better then?

# Recursive Memoize revamped

What we really want for memoizing a recursive function is to blend the memo ability deep inside, like this:

Essentially we have to replace the f_rec inside with f_rec_memo:

And only in this way can f_rec be fully memoized.
However, we have one problem: it seems that we have to change the internals of f_rec. If we could modify f_rec directly, we could solve it easily. For instance, for fibonacci:

let rec fib_rec n =
  if n <= 1 then 1
  else fib_rec (n-1) + fib_rec (n-2)

we can make the memoized version:

let fib_rec_memo_trivial n =
  let table = Hashtbl.Poly.create () in
  let rec fib_rec_memo x =
    match Hashtbl.find table x with
    | Some y -> y
    | None ->
      let y =
        if x <= 1 then 1
        else fib_rec_memo (x-1) + fib_rec_memo (x-2)
      in
      Hashtbl.add_exn table ~key:x ~data:y;
      y
  in
  fib_rec_memo n

In the above solution, we replaced the original fib_rec inside with fib_rec_memo; however, we also changed the declaration to fib_rec_memo completely. In fact, now fib_rec is totally ditched and fib_rec_memo is a new function that blends the logic of memoize with the logic of fib_rec.

Well, yes, fib_rec_memo_trivial can achieve our goal, but only for fib_rec specifically. If we need to make a memoized version of another recursive function, then we need to change the body of that function again. This is not what we want. We wish for a memoize_rec that can turn f_rec directly into a memoized version, just like what the simple memoize can do for f. So we don't have a shortcut. Here is what we need to achieve:

1. We have to replace the f_rec inside the body of f_rec with f_rec_memo.
2. We have to keep the declaration of f_rec.
3. We must assume we can't know the specific logic inside f_rec.

It sounds a bit hard. It is like being given a compiled function without source code and being asked to modify its content. And more importantly, your solution must be generalised. Fortunately, we have a great solution to create our memoize_rec without any hacking or reverse-engineering, and untying the recursive knot is the key.

# Untying the Recursive Knot

To me, this term sounds quite fancy. In fact, I had never heard of it before 2015-01-21. After I dug into it a little, I found it actually quite simple but very useful (this recursive memoize case is an ideal demonstration). Let's have a look at what it is.

Every recursive function somehow follows a similar pattern where it calls itself inside its body:

Once a recursive function application starts, it is out of our hands, and we know it will continue and continue by calling itself until the STOP condition is satisfied. What if the users of our recursive function need some more control even after it gets started? For example, say we provide our users fib_rec without source code; what if the users want to print out a detailed trace of each iteration? They are not able to unless they ask us for the source code and make a new version with printing. That is not very convenient.

So if we don't want to give out our source code, somehow we need to reform our fib_rec a little bit and give our users the space to insert whatever they want for each iteration.

let rec fib_rec n =
  if n <= 1 then 1
  else fib_rec (n-1) + fib_rec (n-2)

Have a look at the above fib_rec carefully again: the logic of fib_rec is already determined at the binding; it is the fib_recs inside that control the iteration. What if we rename the fib_recs within the body to f and add it as an argument?

let fib_norec f n =
  if n <= 1 then 1
  else f (n-1) + f (n-2)

(* we actually should now change the name of fib_norec to something like fib_alike_norec, as it is not necessarily computing fibonacci anymore, depending on f *)

So now fib_norec won't automatically repeat unless f tells it to. Moreover, fib_norec becomes a pattern which returns 1 when n <= 1 and otherwise adds f (n-1) and f (n-2).
As long as you think this pattern is useful for you, you can inject your own logic into it by providing your own f. Going back to the printing requirement, a user can now build their own version of fib_rec_with_trace like this:

let rec fib_rec_with_trace n =
  Printf.printf "now fibbing %d\n" n;
  fib_norec fib_rec_with_trace n

Untying the recursive knot is a functional design pattern. It turns the recursive part inside the body into a new parameter f. In this way, it breaks the iteration and turns a recursive function into a pattern into which new or additional logic can be injected via f.

It is very easy to untie the knots of recursive functions. You just add an additional parameter f and replace f_rec everywhere inside with it. For example, for quicksort:

let quicksort_norec f = function
  | [] | _::[] as l -> l
  | pivot::rest ->
    let left, right = partition_fold pivot rest in
    f left @ (pivot::f right)

let rec quicksort l = quicksort_norec quicksort l

There are more examples in Martin's blog, though they are not in OCaml. A formalized description of this topic is in the article Tricks with recursion: knots, modules and polymorphism from The OCaml Journal.

Now let's come back to the recursive memoize problem with our new weapon.

# Solve Recursive Memoize

At first, we can require that every recursive function f_rec must be supplied to memoize_rec in the untied form f_norec. This is not a harsh requirement, since it is easy to transform f_rec into f_norec. Once we get f_norec, we of course cannot apply memoize (the non-rec version) on it directly, because f_norec now takes two parameters: f and arg. Although we can create f_rec in the way of let rec f_rec arg = f_norec f_rec arg, we won't do that here, as it makes no sense to recreate exactly the same recursive function. Instead, we can for now do something like let f_rec_tmp arg = f_norec f arg. We still do not know what f will be, but f_rec_tmp is non-recursive and we can apply memoize on it: let f_rec_tmp_memo = memoize f_rec_tmp. f_rec_tmp_memo now has the logic of f_norec and the ability of memoization. If f can be f_rec_tmp_memo, then our problem is solved. This is because f is inside f_norec controlling the iteration, and we wished it to be memoized.

The magic that can help us here is making f mutable. Because f needs to be known in advance and f_rec_tmp_memo is created afterwards, we can temporarily define f as a trivial function and later on, after we create f_rec_tmp_memo, change f to f_rec_tmp_memo. Let's use a group of bindings to demonstrate:

(* trivial initial function; it should never be applied in this state *)
let f = ref (fun _ -> assert false)

let f_rec_tmp arg = f_norec !f arg

(* memoize is the simple non-rec version *)
let f_rec_tmp_memo = memoize f_rec_tmp

(* the later substitution made possible by being mutable *)
f := f_rec_tmp_memo

The final code for memoize_rec is:

let memo_rec f_norec =
  let f = ref (fun _ -> assert false) in
  let f_rec_memo = memoize (fun x -> f_norec !f x) in
  f := f_rec_memo;
  f_rec_memo
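To close the loop, here is a small usage sketch (my addition, not from the original post) applying memo_rec to the untied fib_norec from earlier; it assumes the memoize and memo_rec definitions above, with Core opened so that Hashtbl.Poly is available.

```ocaml
(* Usage sketch: memoizing the untied fibonacci pattern. *)
let fib_norec f n =
  if n <= 1 then 1
  else f (n - 1) + f (n - 2)

let fib_memo = memo_rec fib_norec

let () =
  (* Every intermediate result is cached inside memo_rec's table, so this
     call runs in linear time rather than the exponential time of fib_rec. *)
  Printf.printf "fib 40 = %d\n" (fib_memo 40)
```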
2021-04-18 08:00:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.629262387752533, "perplexity": 2452.111486195887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038469494.59/warc/CC-MAIN-20210418073623-20210418103623-00263.warc.gz"}
https://projecteuclid.org/euclid.twjm/1499133671
## Taiwanese Journal of Mathematics ### GENERALIZED DERIVATIONS WITH ANNIHILATOR CONDITIONS IN PRIME RINGS #### Abstract Let $R$ be a noncommutative primering with its Utumi ring of quotients $U$, $C=Z(U)$ the extendedcentroid of $R$, $F$ a generalized derivation of $R$ and $I$ anonzero ideal of $R$. Suppose that there exists $0\neq a\in R$ such that $a(F([x,y])^n-[x,y])=0$ for all $x,y \in I$, where $n\geq 1$ is a fixedinteger. Then either $n=1$ and $F(x)=bx$ for all $x\in R$ with$a(b-1)=0$ or $n\geq 2$ and one of the following holds: 1. char $(R)\neq 2$, $R\subseteq M_2(C)$, $F(x)=bx$ for all$x\in R$ with $a(b-1)=0$ (In this case $n$ is an odd integer); 2. char $(R)= 2$, $R\subseteq M_2(C)$ and $F(x)=bx+[c,x]$ forall $x\in R$ with $a(b^n-1)=0$. #### Article information Source Taiwanese J. Math., Volume 19, Number 3 (2015), 943-952. Dates First available in Project Euclid: 4 July 2017 https://projecteuclid.org/euclid.twjm/1499133671 Digital Object Identifier doi:10.11650/tjm.19.2015.4043 Mathematical Reviews number (MathSciNet) MR3353262 Zentralblatt MATH identifier 1357.16057 #### Citation Dhara, Basudeb; De Filippis, Vincenzo; Pradhan, Krishna Gopal. GENERALIZED DERIVATIONS WITH ANNIHILATOR CONDITIONS IN PRIME RINGS. Taiwanese J. Math. 19 (2015), no. 3, 943--952. doi:10.11650/tjm.19.2015.4043. https://projecteuclid.org/euclid.twjm/1499133671 #### References • N. Argaç and Ç. Dem\.ir, Generalized derivations of prime rings on multilinear polynomials with annihilator conditions, Turk. J. Math., 37 (2013), 231-243. • K. I. Beidar, W. S. Martindale III and A. V. Mikhalev, Rings with generalized identities, Pure and Applied Math., 196, Marcel Dekker, New York, 1996. • C. L. Chuang, GPI's having coefficients in Utumi quotient rings, Proc. Amer. Math. Soc., 103(3) (1988), 723-728. • M. N. Daif and H. E. Bell, Remarks on derivations on semiprime rings, Internat. J. Math. Math. Sci., 15(1) (1992), 205-206. • V. De Filippis and S. Huang, Generalized derivations on semiprime rings, Bull. Korean Math. Soc., 48(6) (2011), 1253-1259. • V. De Filippis, Annihilators of power values of generalized derivations on multilinear polynomials, Bull. Austr. Math. Soc., 80 (2009), 217-232. • B. Dhara, V. De Filippis and G. Scudo, Power values of generalized derivations with annihilator conditions in prime rings, Mediterr. J. Math., 10 (2013), 123-135. • T. S. Erickson, W. S. Martindale III and J. M. Osborn, Prime nonassociative algebras, Pacific J. Math., 60 (1975), 49-63. • I. N. Herstein, Center-like elements in prime rings, J. Algebra, 60 (1979), 567-574. • I. N. Herstein, Topics in Ring Theory, Univ. of Chicago Press, Chicago, 1969. • N. Jacobson, Structure of Rings, Amer. Math. Soc. Colloq. Pub., 37, Amer. Math. Soc., Providence, RI, 1964. • V. K. Kharchenko, Differential identity of prime rings, Algebra and Logic, 17 (1978), 155-168. • T. K. Lee, Generalized derivations of left faithful rings, Comm. Algebra, 27(8) (1999), 4057-4073. • T. K. Lee, Semiprime rings with differential identities, Bull. Inst. Math. Acad. Sinica, 20(1) (1992), 27-38. • W. S. Martindale III, Prime rings satisfying a generalized polynomial identity, J. Algebra, 12 (1969), 576-584. • M. A. Quadri, M. S. Khan and N. Rehman, Generalized derivations and commutativity of prime rings, Indian J. Pure Appl. Math., 34(9) (2003), 1393-1396.
2019-09-15 19:58:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6512466669082642, "perplexity": 3052.097379075985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572289.5/warc/CC-MAIN-20190915195146-20190915221146-00557.warc.gz"}
https://labs.tib.eu/arxiv/?author=Aurelien%20Bideaud
• ### The NIKA2 large field-of-view millimeter continuum camera for the 30-m IRAM telescope(1707.00908) Nov. 25, 2017 astro-ph.IM Millimeter-wave continuum astronomy is today an indispensable tool for both general Astrophysics studies and Cosmology. General purpose, large field-of-view instruments are needed to map the sky at intermediate angular scales not accessible by the high-resolution interferometers and by the coarse angular resolution space-borne or ground-based surveys. These instruments have to be installed at the focal plane of the largest single-dish telescopes. In this context, we have constructed and deployed a multi-thousands pixels dual-band (150 and 260 GHz, respectively 2mm and 1.15mm wavelengths) camera to image an instantaneous field-of-view of 6.5arc-min and configurable to map the linear polarization at 260GHz. We are providing a detailed description of this instrument, named NIKA2 (New IRAM KID Arrays 2), in particular focusing on the cryogenics, the optics, the focal plane arrays based on Kinetic Inductance Detectors (KID) and the readout electronics. We are presenting the performance measured on the sky during the commissioning runs that took place between October 2015 and April 2017 at the 30-meter IRAM (Institut of Millimetric Radio Astronomy) telescope at Pico Veleta. NIKA2 has been successfully deployed and commissioned, performing in-line with the ambitious expectations. In particular, NIKA2 exhibits FWHM angular resolutions of around 11 and 17.5 arc-seconds at respectively 260 and 150GHz. The NEFD (Noise Equivalent Flux Densities) demonstrated on the maps are, at these two respective frequencies, 33 and 8 mJy*sqrt(s). A first successful science verification run has been achieved in April 2017. The instrument is currently offered to the astronomical community during the coming winter and will remain available for at least the next ten years. • ### Lumped Element Kinetic Inductance Detectors for space applications(1606.00719) June 2, 2016 astro-ph.IM Kinetic Inductance Detectors (KID) are now routinely used in ground-based telescopes. Large arrays, deployed in formats up to kilopixels, exhibit state-of-the-art performance at millimeter (e.g. 120-300 GHz, NIKA and NIKA2 on the IRAM 30-meters) and sub-millimeter (e.g. 350-850 GHz AMKID on APEX) wavelengths. In view of future utilizations above the atmosphere, we have studied in detail the interaction of ionizing particles with LEKID (Lumped Element KID) arrays. We have constructed a dedicated cryogenic setup that allows to reproduce the typical observing conditions of a space-borne observatory. We will report the details and conclusions from a number of measurements. We give a brief description of our short term project, consisting in flying LEKID on a stratospheric balloon named B-SIDE. • ### A passive THz video camera based on lumped element kinetic inductance detectors(1511.06011) Nov. 18, 2015 physics.ins-det, astro-ph.IM We have developed a passive 350 GHz (850 {\mu}m) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs) -- designed originally for far-infrared astronomy -- as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of $\sim$0.1 K, which is close to the background limit. 
The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.
2021-04-15 04:17:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3970329463481903, "perplexity": 5646.392470441815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038083007.51/warc/CC-MAIN-20210415035637-20210415065637-00518.warc.gz"}
http://math.stackexchange.com/questions/67749/a-stability-estimate-for-a-first-order-linear-pde
# A stability estimate for a first-order linear PDE

If we have $$u_t + u_x =f(x,t)$$ with initial and boundary conditions $u(0,t)=0$ for $t>0$ and $u(x,0)=0$ for $0<x<R$, can anyone tell me how to prove the stability estimate $$\int_0^R (u(x,t))^2 dx \leq e^t \int_0^t \int_0^R f^2(x,s)\,dx\, ds$$ where $t>0$?

The method of characteristics yields $$u(x,t)=\int_0 ^x f(s, s + t - x)\, ds$$ for $t\ge x$ and $$u(x,t)=\int_0 ^t f(s + x - t, s)\, ds$$ for $x \ge t$. Then you can use Hölder's inequality to estimate $u^2$ in terms of an integral over $f^2$, so the integral over $u^2$ can be estimated by an integral over a quadrilateral or triangle which is a subset of $[0,R]\times[0,t]$. This shows that the inequality is true with a factor of $t$ instead of $e^t$, so that you get an even stronger estimate.

Multiply your equation by $u$ and integrate over $[0,R]$; since $uu_t=\frac{1}{2}(u^2)_t$ we get $$\frac{1}{2}\frac{d}{dt} \int_0^R u^2(x,t)dx + \frac{1}{2}u^2(R,t) = \int_0^R f(x,t)u(x,t)dx \leq \left( \int_0^R f^2 (x,t)dx \right) ^{\frac{1}{2}}\left( \int_0^R u^2 (x,t)dx \right) ^{\frac{1}{2}}$$ but we have $$\frac{d}{dt} \int_0^R u^2(x,t)dx = 2\left( \int_0^R u^2 (x,t)dx \right) ^{\frac{1}{2}} \frac{d}{dt} \left( \int_0^R u^2 (x,t)dx \right) ^{\frac{1}{2}}$$ and so we get $$\frac{d}{dt}\left( \int_0^R u^2 (x,t)dx \right) ^{\frac{1}{2}} \leq \left( \int_0^R f^2 (x,t)dx \right) ^{\frac{1}{2}}$$ and integrate in time to obtain $$\left( \int_0^R u^2(x,t)dx \right)^{\frac{1}{2}} \leq \int_0^t \left( \int_0^R f^2(x,s)dx \right)^{\frac{1}{2}} ds \leq t^{\frac{1}{2}} \left( \int_0^t \int_0^R f^2(x,s)dx\, ds \right)^{\frac{1}{2}}$$ where we used Jensen's inequality for the last step. Squaring both sides and using $t \leq e^t$ gives your estimate (in fact the stronger one with $t$ in place of $e^t$).
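As a quick numerical sanity check of the bound (not a proof), the following short sketch integrates $u_t+u_x=f$ with a first-order upwind scheme for an assumed sample source $f(x,t)=\sin(x)e^{-t}$ and verifies $\int_0^R u^2\,dx \le e^t \int_0^t\int_0^R f^2\,dx\,ds$ at every step. The domain size, final time, grid, and source are illustrative assumptions, not part of the original question.

```python
import numpy as np

R, T = 2.0, 1.5                      # assumed domain length and final time
nx = 400
x = np.linspace(0.0, R, nx)
dx = x[1] - x[0]
dt = 0.5 * dx                        # CFL-stable step for the upwind scheme
f = lambda x_, t_: np.sin(x_) * np.exp(-t_)   # assumed sample source term

u = np.zeros(nx)                     # u(x,0) = 0
t = 0.0
rhs = 0.0                            # accumulates \int_0^t \int_0^R f^2 dx ds
while t < T:
    rhs += np.sum(f(x, t) ** 2) * dx * dt
    # first-order upwind step for u_t + u_x = f with u(0,t) = 0
    u[1:] = u[1:] - dt / dx * (u[1:] - u[:-1]) + dt * f(x[1:], t)
    u[0] = 0.0
    t += dt
    lhs = np.sum(u ** 2) * dx        # \int_0^R u(x,t)^2 dx
    assert lhs <= np.exp(t) * rhs, (t, lhs, rhs)
print("stability estimate holds numerically up to t =", round(t, 3))
```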
2015-07-31 01:46:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9797801971435547, "perplexity": 48.51688984421069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042987866.61/warc/CC-MAIN-20150728002307-00157-ip-10-236-191-2.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/two-people-in-spaceshift-shift-seats.302090/
# Two people in a spaceship shift seats

1. Mar 24, 2009

### diffusion

Two people in a spaceship shift seats....

1. The problem statement, all variables and given/known data

Simma (mass 60 kg) and Stan (mass 90 kg) are testing an ultralight space-pod. They swap seats, with the seats being 4.0 m apart, located at equal distances from the center of mass of the space-pod. The space-pod's mass is 50 kg. Why does the space-pod not move after they take their new seats? How far does it move and which way? All observations are in the frame in which the space-pod was initially stationary.

2. Relevant equations

$p_1 + p_2 + p_3 = \Sigma P$

center of mass: $x_{cm} = \dfrac{m_2}{m_1 + m_2}\, d$

3. The attempt at a solution

There are two questions here: how far does the space-pod move when Simma and Stan change seats, and why does it stop after they take their new seats. Unless I'm overlooking something, the answer to the second question is simply that the net momentum = 0, so the space-pod is at rest. In order to find out how far it moves, I figured I could try to figure out the change in position of the center of mass, and that is how far the space-pod moves? Not really sure how to approach it.

Last edited: Mar 24, 2009

2. Mar 24, 2009

### diffusion

Re: Two people in a spaceship shift seats....

Anyone? Just need a little steering in the right direction, the rest I should be able to do by myself.

3. Mar 24, 2009

### Redbelly98 Staff Emeritus

Re: Two people in a spaceship shift seats....

Yes, that is exactly how to approach it. Since the center of mass of pod + Simma + Stan does not move, Δcom of pod = -Δcom of (Simma & Stan).
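A small worked sketch of that hint, assuming the seats sit symmetrically at ±2.0 m from the pod's centre (which is what "4.0 m apart, located at equal distances from the center of mass of the space-pod" implies): keeping the total centre of mass fixed determines the pod's displacement.

```python
# Hedged worked example: the total centre of mass of (pod + Simma + Stan)
# cannot move, so solve for the pod displacement d after the swap.
m_simma, m_stan, m_pod = 60.0, 90.0, 50.0
seat = 2.0                       # assumed half seat separation (seats 4.0 m apart)
m_tot = m_simma + m_stan + m_pod

# before the swap (pod centre at x = 0): Simma at -seat, Stan at +seat
com_before = (m_simma * (-seat) + m_stan * (+seat)) / m_tot

# after the swap: Simma at +seat + d, Stan at -seat + d, pod centre at d,
# so com_after(d) = ((m_simma - m_stan) * seat + m_tot * d) / m_tot = com_before
d = (com_before * m_tot - (m_simma - m_stan) * seat) / m_tot
print(f"pod displacement: {d:+.2f} m (towards Stan's original seat)")  # +0.60 m
```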
2017-08-19 14:05:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2542285919189453, "perplexity": 2066.983591546055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105451.99/warc/CC-MAIN-20170819124333-20170819144333-00412.warc.gz"}
https://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=63
# IEEE Transactions on Power Electronics ## Issue 9 • Sept. 2018 The purchase and pricing options for this item are unavailable. Select items are only available as part of a subscription package. You may try again later or contact us for more information. ## Filter Results Displaying Results 1 - 25 of 80 Publication Year: 2018, Page(s):C1 - 7299 | PDF (61 KB) • ### IEEE Power Electronics Society Publication Year: 2018, Page(s): C2 | PDF (64 KB) • ### Mechanism Analysis of the Required Rotor Current and Voltage for DFIG-Based WTs to Ride-Through Severe Symmetrical Grid Faults Publication Year: 2018, Page(s):7300 - 7304 | | PDF (449 KB) | HTML Doubly-fed induction generator (DFIG)-based wind turbines is prone to suffering from overcurrent during severe grid faults, due to the high electromotive force. To overcome this problem, various fault ride-through (FRT) control strategies are proposed, but they do not theoretically elaborate how to coordinate the rotor current and voltage to ride-through severe grid faults under the limited capaci... View full abstract» • ### Design Constraints for Series-Stacked Energy Decoupling Buffers in Single-Phase Converters Publication Year: 2018, Page(s):7305 - 7308 | | PDF (1063 KB) | HTML In single-phase dc–ac and ac–dc conversion, energy decoupling is needed to compensate for the instantaneous power mismatch between the ac side and the dc side. The series-stacked buffer (SSB) is a type of active energy decoupling buffer that allows a large voltage ripple on the energy buffering capacitor to improve the energy utilization ratio while actively regulating the dc-bus vol... View full abstract» • ### Multifrequency Current Control Including Distortion-Free Saturation and Antiwindup With Enhanced Dynamics Publication Year: 2018, Page(s):7309 - 7313 | | PDF (2370 KB) | HTML Output-voltage saturation (OVS) in voltage source converters can produce current-control windup. Many OVS and antiwindup (AWU) methods exist for a single-frequency current controller, but they cause additional distortion when applied to multifrequency current control. The latter is important, e.g., for reducing losses in various applications or torque ripple in ac drives. In the existing strategie... View full abstract» • ### Measurement Methodology for Accurate Modeling of SiC MOSFET Switching Behavior Over Wide Voltage and Current Ranges Publication Year: 2018, Page(s):7314 - 7325 | | PDF (5953 KB) | HTML Media This paper presents two novel measurement methods to characterize silicon carbide (SiC) MOSFET devices. The resulting data are utilized to significantly improve the extraction of a custom device model that can now accurately reproduce device switching behavior. First, we consider the $I_{text{d}}- V_{text{ds}}$ output character... View full abstract» • ### An Ultralow Quiescent Current Power Management System With Maximum Power Point Tracking (MPPT) for Batteryless Wireless Sensor Applications Publication Year: 2018, Page(s):7326 - 7337 | | PDF (1322 KB) | HTML Media This paper presents a chip-scale ultralow quiescent current power management system that interfaces with electromechanical energy harvester for enabling self-powering, batteryless wireless sensors. A piezoelectric transducer scavenges and transforms mechanical vibration energy into electricity in ac form, which is then converted into dc power by a full bridge rectifier and collected into a small f... 
View full abstract» • ### A Variable Inductor Based LCL Filter for Large-Scale Microgrid Application Publication Year: 2018, Page(s):7338 - 7348 | | PDF (3949 KB) | HTML Three-phase LCL filters, compared to L or LC filters, are preferred in high-power converters in microgrid applications because of their better capability of harmonic attenuation. Fixed value inductors are mostly adopted in traditional designs to achieve low current harmonics at rated power. However, due to the intermittent nature of renewable ener... View full abstract» • ### Generation of High-Resolution 12-Sided Voltage Space Vector Structure Using Low-Voltage Stacked and Cascaded Basic Inverter Cells Publication Year: 2018, Page(s):7349 - 7358 | | PDF (1614 KB) | HTML This paper proposes generation of a 15-level (14 concentric) dodecagonal voltage space vector structure (DVSVS) for a star connected induction motor drive. The proposed multilevel DVSVS is obtained by cascading two inverters, namely a primary and secondary inverter. The primary inverter is a five-level (5L) structure formed by stacking two three-level flying capacitors with individual reduced dc s... View full abstract» • ### Improved Modulation Strategy Using Dual Phase Shift Modulation for Active Commutated Current-Fed Dual Active Bridge Publication Year: 2018, Page(s):7359 - 7375 | | PDF (2935 KB) | HTML This paper proposes dual phase shift modulation (DPSM) for active commutated current-fed dual active bridge for low-voltage (LV) high-power application to improve the performance of the converter at light loads. The proposed DPSM uses an additional control variable to actively control the peak current in the converter that helps to improve the performance as compared to simpler single variable but... View full abstract» • ### On the Concept of the Multi-Source Inverter for Hybrid Electric Vehicle Powertrains Publication Year: 2018, Page(s):7376 - 7386 | | PDF (2821 KB) | HTML This paper presents an inverter topology named the multi-source inverter that aims to connect several independent DC sources to the same AC output using a single stage of conversion. This power converter has been developed for applications, such as electrified powertrains. Compared to the conventional hybrid powertrains that use a DC/DC converter to provide an adaptable voltage to the load, the mu... View full abstract» • ### Dual-Purpose Nonoverlapping Coil Sets as Metal Object and Vehicle Position Detections for Wireless Stationary EV Chargers Publication Year: 2018, Page(s):7387 - 7397 | | PDF (1796 KB) | HTML For commercialization of wireless stationary electric vehicles (EV) chargers, metal object detection (MOD) on a power supply coil and detection of position (DoP) of EVs are needed. In this paper, dual-purpose nonoverlapping coil sets for both MOD and DoP, which detect a variation of magnetic flux on the power supply coil, are newly proposed, where the proposed MOD and DoP methods make no contribut... View full abstract» • ### Implementation of the Constant Current and Constant Voltage Charge of Inductive Power Transfer Systems With the Double-Sided LCC Compensation Topology for Electric Vehicle Battery Charge Applications Publication Year: 2018, Page(s):7398 - 7410 | | PDF (5441 KB) | HTML When compared to plugged-in chargers, inductive power transfer (IPT) methods for electric vehicle (EV) battery chargers have several benefits, such as greater convenience and higher safety. 
In an EV, the battery is an indispensable component, and lithium-ion batteries are identified as the most competitive candidate to be used in EVs due to their high power density, long cycle life, and better saf... View full abstract» • ### A Transformerless 6.6-kV STATCOM Based on a Hybrid Cascade Multilevel Converter Using SiC Devices Publication Year: 2018, Page(s):7411 - 7423 | | PDF (2589 KB) | HTML We have developed a full-scale prototype of transformerless static synchronous compensator (STATCOM), rated at 6.6 kV and 100 kVA, based on a hybrid cascade multilevel converter using SiC devices. The topology employs multivoltage converter cells, Si and SiC semiconductor devices, and hybrid modulations in the converters. One phase of the STATCOM has two Si insulated-gate bipolar transistor conver... View full abstract» • ### Sub- and Super-Synchronous Interactions Between STATCOMs and Weak AC/DC Transmissions With Series Compensations Publication Year: 2018, Page(s):7424 - 7437 | | PDF (2414 KB) | HTML With the increasing integration of power electronic converters into the power system, the interactions between converters and their adjacent transmissions bring emerging oscillation issues. A new type of sub- and super-synchronous interactions (S$^2$SI) between STATCOMs and the weak AC/DC grid were detected in China Southern Gr... View full abstract» • ### Analysis of a High-Power, Resonant DC–DC Converter for DC Wind Turbines Publication Year: 2018, Page(s):7438 - 7454 | | PDF (4693 KB) | HTML This paper is introducing a new method of operation for a series resonant converter, with intended application in megawatt high-voltage dc wind turbines. Compared to a frequency controlled series resonant converter operated in subresonant mode, the method (entitled pulse removal technique) allows the design of the medium frequency transformer for highest switching frequency, while being operated a... View full abstract» • ### A High-Power, Medium-Voltage, Series-Resonant Converter for DC Wind Turbines Publication Year: 2018, Page(s):7455 - 7465 | | PDF (2187 KB) | HTML A new modulation scheme is introduced for a single-phase series-resonant converter, which permits continuous regulation of power from nominal level to zero, in presence of variable input and output dc voltage levels. Rearranging the circuit to locate the resonant LC tank on the rectifier side of the high turns-ratio transformer combined with frequency control and phase-shifted inv... View full abstract» • ### Improved Modulation Mechanism of Parallel-Operated T-Type Three-Level PWM Rectifiers for Neutral-Point Potential Balancing and Circulating Current Suppression Publication Year: 2018, Page(s):7466 - 7479 | | PDF (2109 KB) | HTML In high-power applications, parallel-operated T-type three-level pulse width modulation (PWM) rectifiers (T3LPRs) are widely employed to improve the power capacity and system reliability. For parallel-operated T3LPRs, there are two important issues need to be properly addressed: 1) the neutral-point potential (NPP) balancing, and 2) the zero sequence circulating current (ZSCC) between the common a... 
View full abstract» • ### Quasi-Square-Wave Modulation of Modular Multilevel High-Frequency DC Converter for Medium-Voltage DC Distribution Application Publication Year: 2018, Page(s):7480 - 7495 | | PDF (2151 KB) | HTML In a direct current (dc) distribution network, the modular multilevel high-frequency dc converter (MDCC) can achieve electrical isolation, voltage conversion and power transmission between the low- and medium-voltage dc buses. In this paper, quasi-square-wave (QSW) modulation, which avoids the $dv/ dt$ stress problem and maint... View full abstract» • ### A Novel Seven-Level ANPC Converter Topology and Its Commutating Strategies Publication Year: 2018, Page(s):7496 - 7509 | | PDF (2463 KB) | HTML Traditional seven-level active neutral-point-clamped (ANPC-7L) converters suffer from the problems of dynamic voltage balancing in series switches and multilevel voltage jumping of phase output voltage during switching states transition. This paper presents a novel ANPC-7L topology with auxiliary commutating branches and elaborates on its commutating strategies. The presented topology and its comm... View full abstract» • ### Achieving Efficiencies Exceeding 99% in a Super-Junction 5-kW DC–DC Converter Power Stage Through the Use of an Energy Recovery Snubber and Dead-Time Optimization Publication Year: 2018, Page(s):7510 - 7520 | | PDF (1683 KB) | HTML A highly efficient 5-kW bidirectional dc–dc converter power stage operating from a 400-V supply implementing super-junction (SJ) metal–oxide–semiconductor field-effect transistor (MOSFETs) is presented. SJ MOSFETs have low on-state resistances and low switching losses. However, their application in voltage-source converters can be compromised by the reverse recovery behavior o... View full abstract» • ### Decentralized Control for Fully Modular Input-Series Output-Parallel (ISOP) Inverter System Based on the Active Power Inverse-Droop Method Publication Year: 2018, Page(s):7521 - 7530 | | PDF (2017 KB) | HTML Input-series-output-parallel (ISOP) inverter systems are suitable for the high-input-voltage and large-output-current applications. In this paper, an active power inverse-droop control method is presented for the ISOP inverter system, in which the polarity of active-power feedback is inversed to be positive compared with the traditional droop control while the reactive power droop is still adopted... View full abstract» • ### Unified Equivalent Steady-State Circuit Model and Comprehensive Design of the LCC Resonant Converter for HV Generation Architectures Publication Year: 2018, Page(s):7531 - 7544 | | PDF (2019 KB) | HTML In this paper, a unified equivalent circuit model which can simplify the design and analysis of a family of high-voltage (HV) generation architectures based on the series–parallel (LCC) resonant converter is proposed. First, four HV generation architectures are reviewed in terms of the modularization level of HV transformers and rectifiers. Next, the steady-state, unified e... View full abstract» • ### High-Frequency Transformer Design for Modular Power Conversion From Medium-Voltage AC to 400 VDC Publication Year: 2018, Page(s):7545 - 7557 | | PDF (4379 KB) | HTML This paper presents a high-frequency modular medium-voltage AC (4160 VAC or 13.8 kVAC) to low-voltage DC (400 VDC) system that is scalable in order to be used for different scale microgrids. 
A 15 kW, 500 kHz DC/DC converter is demonstrated as the most important stage of the system overall, which can be scalable to a 225 kW 4160 VAC to 400 VDC system. Motivation o... View full abstract» • ### Operation of Three-Level Inverter-Based Shunt Active Power Filter Under Nonideal Grid Voltage Conditions With Dual Fundamental Component Extraction Publication Year: 2018, Page(s):7558 - 7570 | | PDF (1984 KB) | HTML In this paper, a new reference current generation method is proposed for effective harmonics mitigation and reactive power compensation of three-level neutral-point diode clamped inverter-based shunt active power filter (SAPF) under nonideal grid voltage conditions. The proposed method is named as dual fundamental component extraction algorithm. In operation, the proposed algorithm extracts at the... View full abstract» ## Aims & Scope IEEE Transactions on Power Electronics covers fundamental technologies used in the control and conversion of electric power. Full Aims & Scope
2018-07-16 12:48:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2018473744392395, "perplexity": 10739.530703556691}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589270.3/warc/CC-MAIN-20180716115452-20180716135452-00605.warc.gz"}
https://byjus.com/question-answer/the-equation-9x-3-9x-2-y-45-x-2-4y-3-4xy-2-20y/
Question

# The equation $$9x^3 + 9x^2 y - 45 x^2 = 4y^3 + 4xy^2 - 20y^2$$ represents $$3$$ straight lines, two of which pass through the origin. Find the area of the triangle formed by these lines.

Solution

## $$9x^3+9x^2y-45x^2=4y^3+4xy^2-20y^2$$
$$9x^2(x+y-5)=4y^2(x+y-5)$$
$$9x^2(x+y-5)-4y^2(x+y-5)=0$$
$$(9x^2-4y^2)(x+y-5)=0$$
$$(3x+2y)(3x-2y)(x+y-5)=0$$
So the three lines are $$x=\dfrac{2}{3}y$$, $$x=\dfrac{-2}{3}y$$ and $$x+y-5=0$$.
From the equations of the lines, we get the vertices of the triangle formed by them: $$A(0,0)$$, $$B(-10,15)$$ and $$C(2,3)$$.
Area of the triangle $$=\dfrac{1}{2}\left|(x_b-x_a)(y_c-y_a)-(x_c-x_a)(y_b-y_a)\right|$$
$$=\dfrac{1}{2}\left|x_ay_b+x_by_c+x_cy_a-x_ay_c-x_cy_b-x_by_a\right|$$
$$=\dfrac{1}{2}\left|0\times15+(-10)\times3+2\times0-0\times3-2\times15-(-10)\times0\right|$$
$$=\dfrac{1}{2}\times|-60|$$
Since the area is non-negative, the area of the triangle $$=30\ \text{unit}^2$$.
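A quick symbolic cross-check of the factorisation and the area, assuming sympy is available (the printed factorisation may appear in a different but equivalent order):

```python
import sympy as sp

x, y = sp.symbols('x y')
expr = 9*x**3 + 9*x**2*y - 45*x**2 - (4*y**3 + 4*x*y**2 - 20*y**2)
print(sp.factor(expr))            # a product equivalent to (3x - 2y)(3x + 2y)(x + y - 5)

# vertices: origin, plus the intersections of 3x+2y=0 and 3x-2y=0 with x+y=5
A, B, C = (0, 0), (-10, 15), (2, 3)
area = sp.Rational(1, 2) * abs(
    A[0]*(B[1] - C[1]) + B[0]*(C[1] - A[1]) + C[0]*(A[1] - B[1])
)
print(area)                       # 30
```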
2022-01-21 22:36:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8872276544570923, "perplexity": 2335.4096077534196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303717.35/warc/CC-MAIN-20220121222643-20220122012643-00466.warc.gz"}
https://codegolf.stackexchange.com/questions/1109/shortest-port-scanner
# Shortest Port Scanner

Write the shortest program that will attempt to connect to ports on a remote computer and check if they are open. (It's called a Port Scanner)

Take input from command line arguments.

your-port-scanner host_ip startPort endPort

Assume startPort < endPort (and endPort - startPort < 1000).

Output: all the open ports in that range, space- or comma-separated.

• Are there any legal reasons to go about using port scanners? Except to stop other people hacking into your network by closing unnecessary ports? – Alexander Craggs Aug 23 '14 at 18:50

    nc -vz $1 $2-$3 2>&1|cut -f3 -d\ |xargs

Netcat does the scanning and returns results in this form on standard error:

    localhost [127.0.0.1] 22 (ssh) open
    localhost [127.0.0.1] 25 (smtp) open

cut and xargs extract the port number and make a single line out of it. Remind me to shut SMTP down on that node.

## Perl, 92

    $_='use I;$h=shift;grep I->new("$h:$_"),shift..shift';
    s/I/IO::Socket::INET/g;@_=eval;say"@_"

Perl 5.10 or later, run with perl -E 'code here'. Uses regexes to compress that long IO::Socket::INET, then eval; final formatting done with array interpolation.

By request, a more detailed explanation. To ungolf, let's first respace:

    $_ = << 'EOC';
    use I;
    $h = shift;
    grep I->new("$h:$_"), shift..shift;
    EOC
    s/I/IO::Socket::INET/g;
    @_ = eval;
    say "@_";

The line before the eval replaces all (two) occurrences of 'I' with 'IO::Socket::INET'; that's a standard Perl golfing trick to reduce the impact of unavoidable long identifiers. Naming a few of the temporaries, the code is then equivalent to this:

    use IO::Socket::INET;
    $h = shift;
    $p1 = shift;
    $p2 = shift;
    @_ = grep IO::Socket::INET->new("$h:$_"), ($p1 .. $p2);
    say "@_";

In a nutshell: read host and port range arguments from the command line; attempt a connection to all of them in sequence (IO::Socket::INET->new()); keep a list of those which succeeded (grep); display the result nicely (say).

• cant understand somehow :( looks like deepest black magic for me :P but still a +1 for weirdness – masterX244 Mar 4 '14 at 19:36
• @masterX244 I've detailed the innards a bit. HTH – J B Mar 5 '14 at 9:53

# sh/nmap/GNU grep/xargs - 36

    nmap -p$2-$3 $1|grep -Po '^\d+'|xargs

Follows input and output specs:

    $ sh 1109.sh 127.0.0.1 1 80
    22 25 80

• I don't know if nmap counts as a valid answer here, but it is definitely not sh :) – Eelvex Feb 21 '11 at 15:15
• The output format could also use a little polishing. – J B Feb 21 '11 at 16:26
• @Eelvex that's a sh script, with a your-port-scanner host_ip startPort endPort interface, calling nmap ;-) – Arnaud Le Blanc Feb 21 '11 at 16:41
• @J B, done :-) – Arnaud Le Blanc Feb 21 '11 at 16:50
• does nmap come bundled with linux? :-\ i didn't know that... :( – st0le Feb 22 '11 at 5:10

# Ruby - 85

    require"socket"
    h,p,e=$*
    p.upto(e){|p|begin
    TCPSocket.new h,p
    $><<"#{p} "
    rescue end}

# BASH - 105

In pure BASH (i.e. no nmap or netcat).

    exec 2>&- && exec 2<> /dev/null
    for p in $(seq $2 $3); do
    > /dev/tcp/$1/$p && echo -n "$p "
    done

When used with an address other than localhost the timeout is quite long (in the order of minutes) when encountering a closed port, so some sort of timeout/alarm function would be required in all likelihood.

# PHP - 70

    <?for(list(,$h,$p,$e)=$argv;$p<=$e;++$p)@fsockopen($h,$p)&&print"$p ";

• i'm sure you can squeeze the $p++ in other mention of $p, will save one char... – st0le Feb 22 '11 at 5:16

## Perl, 178

I'm new to Perl, any advice on shortening is appreciated!

    use IO::Socket::INET;for($x=$ARGV[1];$x<$ARGV[2]+1;$x++){if(fork()){if($sock=new IO::Socket::INET(PeerAddr=>$ARGV[0],PeerPort=>$x,Proto=>'tcp')){print"$x ";}close($sock);exit;}}
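For comparison only (not a golfed entry), here is an ungolfed Python sketch of the same task using the standard library; the one-second timeout is an assumption so that closed or filtered ports don't hang the scan.

```python
#!/usr/bin/env python3
# usage: python3 your-port-scanner.py host_ip startPort endPort
import socket
import sys

host, start, end = sys.argv[1], int(sys.argv[2]), int(sys.argv[3])

open_ports = []
for port in range(start, end + 1):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)                      # assumed timeout per port
        if s.connect_ex((host, port)) == 0:    # 0 means the TCP connect succeeded
            open_ports.append(str(port))

print(" ".join(open_ports))
```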
2020-02-17 14:08:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17457956075668335, "perplexity": 13726.268113241458}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875142323.84/warc/CC-MAIN-20200217115308-20200217145308-00449.warc.gz"}
http://forums.nesdev.com/viewtopic.php?f=22&t=17111&view=print
nesdev.com http://forums.nesdev.com/

Someone to finish the Universal PPU project (FPGA for real NES)
http://forums.nesdev.com/viewtopic.php?f=22&t=17111

Page 1 of 1

Author: nonosto [ Sun Mar 04, 2018 4:49 am ]
Post subject: Someone to finish the Universal PPU project (FPGA for real NES)

Hello World. I'm sorry to bother you with my first post, and I assume it's not the first time you've had this request. I am the proud owner of a NES PAL Hi-Def HDMI, up to date with the last firmware (an '80s Christmas gift). However, to have the ultimate NES one element is missing: a new FPGA PPU that improves sprite display (no flickering/lag with lots of sprites on the same screen) and runs at the correct speed for any region. One guy began a project called Universal PPU but never finished it (see the YouTube link below). All the code/schematics are available; I can send them. I contacted kevtris (designer of the Hi-Def HDMI and Analogue NT) but he has no interest. Please, can someone take on this project to help the NES community? Thanks, all.
https://www.youtube.com/user/UniversalPPU

Author: lidnariq [ Sun Mar 04, 2018 12:03 pm ]
Post subject: Re: Someone to finish the Universal PPU project (FPGA for real NES)

Ccovell has this wonderful image for your situation:

Author: nonosto [ Sun Mar 04, 2018 12:40 pm ]
Post subject: Re: Someone to finish the Universal PPU project (FPGA for real NES)

You mean that:
Unfortunately I am better with LaTeX or Mathematica than with FPGA programming... If nobody can do it, that's so sad... Universal PPU is such a nice idea....

Author: lidnariq [ Sun Mar 04, 2018 1:05 pm ]
Post subject: Re: Someone to finish the Universal PPU project (FPGA for real NES)

If you're smart enough to understand how to write LaTeX, you're smart enough to teach yourself Verilog.

Author: nonosto [ Sun Mar 04, 2018 1:21 pm ]
Post subject: Re: Someone to finish the Universal PPU project (FPGA for real NES)

Really? It's impossible. This instruction from my picture:

Code:
\begin{document}
\beginpgfgraphicnamed{intersection}%
\small
   \begin{tikzpicture}
   \begin{scope}
   \clip (2,0) ellipse (1.5cm and 1cm);
   \end{scope}
       \draw (0,0) ellipse (1.5cm and 1cm);
    \draw (2.5,0) node {$A\cap B = \emptyset$};
    \draw (5,0) ellipse (1.5cm and 1cm);
        \draw (-1,1) node [left]{$A$};
    \draw (4.5,1) node [left]{$B$};
    \end{tikzpicture}

is much easier than programming an FPGA.... I have no skills, just basic knowledge of C++... no reverse engineering... I don't know what tools to use... I only have the latest source code version... here:
http://uptobox.com/54twxg2u50hg

Author: infiniteneslives [ Sun Mar 04, 2018 2:28 pm ]
Post subject: Re: Someone to finish the Universal PPU project (FPGA for real NES)

nonosto wrote:
Unfortunately I am better with LaTeX or Mathematica than with FPGA programming... If nobody can do it, that's so sad... Universal PPU is such a nice idea....

It's typically not fortune that you're better at one programming/descriptor language than another. It's just that you happen to have more experience with some than with others due to your past choices & environment. Regardless of how related it is, the programming experience you have is a great benefit to learning any new language. If you have the motivation, dedication, and access to the internet, learning any language is entirely possible. I think Henry Ford said it best with the following quotes:

“Whether you think you can, or you think you can't--you're right.”

Sounds like you're saying you can't, and you're right so long as that's what you say. We're trying to convince you you can, which is also true if you decide to agree with us. Both your and lidnariq's Venn diagrams are correct; you just get to choose which applies to you.

One more Ford quote for good measure: “Anyone who stops learning is old, whether at twenty or eighty. Anyone who keeps learning stays young.”

All that said, this is honestly not the best first project if you'd like to learn Verilog/VHDL.

Author: nonosto [ Sun Mar 04, 2018 2:43 pm ]
Post subject: Re: Someone to finish the Universal PPU project (FPGA for real NES)

You are so right, I will see... I'll continue to ask for help elsewhere too. Thanks for the warm welcome.
2018-03-17 11:04:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6283380389213562, "perplexity": 9287.388613633215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257644877.27/warc/CC-MAIN-20180317100705-20180317120705-00677.warc.gz"}
https://tex.stackexchange.com/questions/18462/tikz-decoration-doesnt-work
TikZ decoration doesn't work

I am trying to get TikZ decorations to work, but so far without success. In this minimal example, I am trying to compile an example found in pgfmanual.pdf in section 20.2:

    \documentclass{article}
    \usepackage{tikz}
    \usetikzlibrary{decorations}
    \begin{document}
    \begin{tikzpicture}
    \draw decorate [decoration=zigzag] {(0,0) -- (2,2)};
    \end{tikzpicture}
    \end{document}

I always get this error: "! Package pgfkeys Error: I do not know the key '/pgf/decoration/crosses' and I am going to ignore it. Perhaps you misspelled it."

What is the problem?

• The section 20.2 is called Specifying a Uniform Opacity in the current version of PGF (2.10) and has nothing to do with decorations but transparency. I assume you have an older version. Some LaTeX distributions (e.g. Ubuntu TeXLive) don't update the packages often. It would be much better to state the section title instead. – Martin Scharrer May 17 '11 at 17:09
• Found it. In v2.10 it is section 21.2, named Decorating a Subpath Using the Decorate Path Command. – Martin Scharrer May 17 '11 at 17:17
• @Jake: There was no reason to delete your answer. I wrote mine in my editor to test it and didn't reload the page before posting, so I didn't see yours. It's OK to have similar answers around. Just let the people decide which one to up-vote. I hope I didn't bully you or something ;-) – Martin Scharrer May 17 '11 at 17:26
• General advice on the TikZ/PGF manual section numbering problem: include the name of the subsection and the version of TikZ/PGF that you're using. – Andrew Stacey May 17 '11 at 18:20
• I think the wrong error message was quoted. Shouldn't it be /pgf/decoration/zigzag? Anyway: /pgf/decoration/crosses belongs to \usetikzlibrary{decorations.shapes}, just in case someone else comes from Google. – someonr Jan 2 '14 at 15:35

You need to also load the (sub-)library for the particular decoration (decorations.pathmorphing for zigzag). See also section 51 Decoration Library for more information (using PGF 3.1.5b; the section number can be different for other versions). One issue with the otherwise great pgfmanual is that it is hard to know which libraries are required for the examples.

    \documentclass{article}
    \usepackage{tikz}
    %\usetikzlibrary{decorations}
    \usetikzlibrary{decorations.pathmorphing}
    \begin{document}
    \begin{tikzpicture}
    \draw decorate [decoration={zigzag}] {(0,0) -- (2,2)};
    \end{tikzpicture}
    \end{document}

• Thank you! I agree with you upon required libraries being not very well documented. I will keep the changing section numbers in mind. – Turion May 17 '11 at 17:20
2020-08-10 17:35:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8730619549751282, "perplexity": 2273.246061491612}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439736057.87/warc/CC-MAIN-20200810145103-20200810175103-00411.warc.gz"}
https://hifogedujyqonykan.stichtingdoel.com/heavy-ion-collisions-at-intermediate-and-relativistic-energies-book-22169yk.php
4 editions of Heavy ion collisions at intermediate and relativistic energies found in the catalog.

# Heavy ion collisions at intermediate and relativistic energies

## by International School of Nuclear Physics (1992 Erice, Italy)

Written in English

Subjects:
• Heavy ion collisions -- Congresses.

Edition Notes: Includes bibliographical references.

The Physical Object
Statement: edited by Amand Faessler.
Series: Progress in particle and nuclear physics -- v. 30.
Contributions: Faessler, Amand.
Pagination: xii, 428 p.
Number of Pages: 428
ID Numbers: Open Library OL18007903M; ISBN 10 0080421946

Relativistic heavy-ion collisions at energies of the order of 1 GeV are analysed in terms of a covariant transport model that is based on nucleon and Δ degrees of freedom and mesonic mean fields. The questions of possible classical meson-field radiation and of the thermal properties of the system during the collision are addressed. Heavy flavor hadrons serve as valuable probes of the transport properties of the quark-gluon plasma (QGP) created in relativistic heavy-ion collisions. In this dissertation, we introduce a comprehensive framework that describes the full-time evolution of heavy flavor in heavy-ion collisions, including its initial production, in-. The main task of the book is to collect the available information and establish a uniform picture of ultra-relativistic heavy-ion collisions. The properties of hot and dense matter implied by this Author: Wojciech Florkowski. This general introduction to the field of high energy heavy ion physics covers a range of subjects, from intermediate to ultra-relativistic energies, in order to enable advanced undergraduates and . The main goal of the experiments on heavy-ion collisions at relativistic energies is to study the properties of strongly interacting matter under extreme conditions, especially those of the quark. CiteSeerX - Document Details (Isaac Councill, Lee Giles, Pradeep Teregowda): We perform a systematic study of the fragmentation path of excited nuclear matter in central heavy ion collisions at the intermediate energy of AGeV. The theoretical calculations are based on a Relativistic Boltzmann-Uehling-Uhlenbeck (RBUU) transport equation including stochastic effects.

### Heavy ion collisions at intermediate and relativistic energies by International School of Nuclear Physics (1992 Erice, Italy)

Introduction to Relativistic Heavy Ion Collisions László P.
Csernai University of Bergen, Norway Written for postgraduates and advanced undergraduates in physics, this clear and concise work covers a wide range of subjects from intermediate to ultra-relativistic energies, thus providing an introductory overview of heavy ion by: Introduction to Heavy Ion Collisions Hardcover – Ma this clear and concise work covers a wide range of subjects from intermediate to ultra-relativistic energies, thus providing an introductory overview of heavy ion : Laszlo P. Csernai. Relativistic heavy ion collision is a fascinating field of research. In recent years, the field has seen an unprecedented level of progress. A new state of matter, deconfined quark–gluon plasma (QGP), was predicted. An accelerator was built to detect this new state of matter. After an introduction and an overview on other models we explain in detail the so called “Quantum” Molecular Dynamics (QMD), which is a successful model to describe heavy ion collisions at intermediate energies in a many-body approach. The results obtained using the QMD prove the success of this : E. Lehmann. HEAVY ION COLLISIONS AT INTERMEDIATE ENERGY (o) I.O 0-l.O I G.5 l.O b/b FIG. Angular distribution coefficient e for low- energy particles (E& ~~ E.) emerging from heavy ion collisions, as a function of impact parimeter. Soli.d and dashed lines are as in caption to Fig. 1 ~ The coef- ficient becomes ill determined at the maximum impact parameter, since the number of low. Abstract. Collisions between heavy nuclei at “relativistic” energies are tremendously complicated processes, evolving from a simple initial state (two nuclei in their ground states) to highly complex final states involving hundreds of free : Berndt Müller. We show that the phenomenology of isospin effects on heavy ion reactions at intermediate energies (few AGeV range) is extremely rich and can allow a “direct” study of the covariant structure of the isovector interaction in the hadron medium. We work within a relativistic transport frame, beyond a cascade picture, consistently derived from effective Lagrangians, where isospin effects are accounted for in the mean field and collision Cited by: 5. involving relativistic heavy-ion collisions. This resulted in several generations of experiments at CERN and BNL to search the formation of QGP at ultra-relativistic energies. The experimental searches were focused on isolating signatures of two types of phase transitions which might occur in extremely hot and/or dense nuclear matter. The QCD. Dileptons are considered as one of the cleanest signals of the quark-gluon plasma (QGP), however, the QGP radiation is masked by many 'background' sources from either hadronic decays or semileptonic decays from correlated charm pairs. In this study we investigate the relative contribution of these channels in heavy-ion collisions from $\\sqrt{s_{\\rm NN}}=$ 8 GeV to 5 TeV with a focus on the Cited by: 5. Elastic charge-exchange in relativistic heavy ion collisions is responsible for the nondisruptive change of the charge state of the nuclei. We show that it can be reliably calculated within the eikonal approximation for the reaction part. The formalism is applied to the charge-pickup cross sections of GeV=nucleon Pb projectiles on several. Intermediate-energy heavy ion collisions explore the nuclei far from stability valley, the incompressibility of nuclear matter, the liquid–gas phase transition in nuclear environment, the symmetry energy far from the normal density, and other phenomena. 
This has been an active field of research for last four decades. Isospin effects on two-nucleon correlation functions in heavy-ion collisions at intermediate energies Lie-Wen Chen, V. Greco, C. Ko and Bao-An Li 22 July | Physical Review C, Vol. 68, No. 1Cited by: The kinetic model developed previously for soft hadron-nucleus and nucleus-nucleus collisions has been extended to describe low mass dilepton production. A fairly good description of available experimental data on dileptons created in both p+Be collisions from 12 to 1 GeV and Ca+Ca collisions at energies of 2 and 1 GeV/nucleon, is attained taking into account a variety of e +e. We review the progress achieved in extracting the properties of hot and dense matter from relativistic heavy ion collisions at the relativistic heavy ion collider (RHIC) at Brookhaven National Laboratory and the large hadron collider (LHC) at CERN. Using this integrated model, we first simulate relativistic heavy ion collisions at the RHIC and LHC energies starting from conventional smooth initial conditions. We next utilise each Monte Carlo sample of initial conditions on an event-by-event basis and perform event-by-event dynamical simulations to accumulate a large number of minimum bias Cited by: Get this from a library. Heavy ion collisions at intermediate and relativistic energies: proceedings of the International School of Nuclear Physics, Erice, September, [Amand Faessler;]. Koch et a!., Strangeness in relativistic heavy ion collisions 1. Introduction Overview Nearly all matter around us is built of up (u) and down (d) quarks. However, as soon as adequate excitation energy becomes available in hadronic interactions, it becomes apparent that further quark flavours exist and are easily accessible. the mechanism behind the light and intermediate mass cluster production is largely studied due to the possible signature of liquid–gas phase transition and bimodality in the nuclear system [5,9]. The multifragment pro-duction following the collisions of Au on Au and other heavy nuclei at relativistic bombarding energies has been studied. The purpose of this text is to give a general introduction to all beginners in the field of high energy heavy ion physics. It tries to cover a wide range of subjects from intermediate to ultra-relativistic energies, so that it provides an overview of heavy ion physics, in order to enable the reader to understand and communicate with researchers in neighbouring or related fields. Abstract Using a hybrid (viscous hydrodynamics + hadronic cascade) framework, we model the bulk dynamical evolution of relativistic heavy-ion collisions at Relativistic Heavy Ion Collider (RHIC) Beam Energy Scan (BES) collision energies, including the effects from non-zero net baryon current and its dissipative diffusion. Title: Fragment Formation in Central Heavy Ion Collisions at Relativistic Energies Authors: i, os, a, Toro (Submitted on 10 Jan ( Cited by: 5. Anisotropic flows (ν 2 and ν 4) of hadrons and light nuclear clusters are studied by a partonic transport model and nucleonic transport model, respectively, in ultra‐relativistic and intermediate energy heavy ion collisions. 
Both number‐of‐constituent‐quark scaling of hadrons, especially for φ meson which is composed of strange quarks, and number‐of‐nucleon scaling of light Cited by: 1.The rapid thermalization of quarks and gluons in the initial stages of relativistic heavy-ion collisions is treated using analytic solutions of a nonlinear diffusion equation with schematic initial conditions, and for gluons with boundary conditions at the singularity. On a similarly short time scale of t ≤ 1 fm/c, the stopping of baryons is accounted for through a QCD-inspired approach.
2021-01-19 22:02:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6377307772636414, "perplexity": 2614.700510888757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519784.35/warc/CC-MAIN-20210119201033-20210119231033-00028.warc.gz"}
https://gmatclub.com/forum/in-the-rectangular-coordinate-system-above-the-area-of-tria-167216.html
# In the rectangular coordinate system above, the area of triangle RST is

Math Expert (07 Feb 2014): The Official Guide For GMAT® Quantitative Review, 2ND Edition. Attachment: Untitled.png (figure showing triangle RST in the coordinate plane). In the rectangular coordinate system above, the area of triangle RST is (A) bc/2 (B) b(c-1)/2 (C) c(b-1)/2 (D) a(c-1)/2 (E) c(a-1)/2. Problem Solving Question: 85; Category: Geometry, simple coordinate geometry; Page: 72; Difficulty: 600.

Math Expert (07 Feb 2014): The area of a triangle is 1/2*(base)(height): (base) = c - 1 (the difference between the x-coordinates of points T and R); (height) = b (the "height", the y-coordinate, of point S). Therefore, the area is 1/2*(base)(height) = 1/2*(c - 1)b.

Director (07 Feb 2014): Area of triangle = 1/2 * base * height; base = c - 1; height = y-coordinate of S = b; therefore area = $$((c-1)b)/2$$. Hence the OA should be B.

Senior Manager (07 Feb 2014): IMO B. base = c - 1, height = b, therefore area = ((c-1)b)/2.

Manager (07 Feb 2014): Area of a triangle is given by (Base * Height)/2. The base is measured along the x-axis, thus c - 1. The height is measured along the vertical y-axis, so coordinate b from point S. Hence b(c-1)/2, which is (B).

Director (21 Jun 2016): Area of triangle = $$\frac{1}{2}$$ * base * height. The base is (c-1) and the height is b for the given triangle, so Area = $$\frac{1}{2}$$ * b * (c-1).
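A quick symbolic check of the answer (not from the original thread): assuming, from the description in the solution, that the figure places R = (1, 0), T = (c, 0) and S = (a, b), the shoelace formula reproduces choice (B).

```python
from sympy import symbols, Rational, factor

a, b, c = symbols('a b c', positive=True)

# Assumed vertex coordinates (the original figure is not reproduced here):
R, T, S = (1, 0), (c, 0), (a, b)

def shoelace(p1, p2, p3):
    """Signed area of a triangle from its vertex coordinates."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return Rational(1, 2) * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))

print(factor(shoelace(R, T, S)))  # b*(c - 1)/2, i.e. answer (B), up to orientation sign
```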
2019-02-18 14:25:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8051367998123169, "perplexity": 10578.563332451793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247486936.35/warc/CC-MAIN-20190218135032-20190218161032-00016.warc.gz"}
https://answerbun.com/physics/superconductivity-in-ceramics/
# Superconductivity in ceramics

Physics. Asked by Thirsty for concepts on September 5, 2020

I understand that superconductivity mainly occurs due to the formation of Cooper pairs, in which two electrons, instead of repelling each other, effectively attract: one electron attracts the nearby positive charges, which in turn attract the other electron, establishing a Cooper pair. And since the temperature is very low, the Cooper pair doesn't get enough energy to bump into anything and scatter its energy. But isn't 150 K too high for the stability and existence of Cooper pairs? Then why do ceramic materials behave as superconductors?

(Note: a similar question was asked before, but it was about the maximum temperature possible for a superconductor and whether ceramic superconductors could be explained by BCS theory, so it did not address this specific point. Please help!)
2022-07-06 14:15:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.171963632106781, "perplexity": 2125.043704390306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104672585.89/warc/CC-MAIN-20220706121103-20220706151103-00790.warc.gz"}
https://www.physicsforums.com/threads/particular-integral-of-exponential.634810/
Particular Integral of Exponential — First-Order Linear System Transient Response

Hey there, I am trying to solve a first-order equation problem: A temperature sensor has a first-order response with τ = 18 seconds. The calibration curve of the sensor is presented in Figure 1. Graph the sensor response when it is exposed to the temperature profile shown in Figure 2. Present the sensor voltage as a function of time. The sensor has been kept at 15 °C for a long time before the operation.

Homework Statement: $\tau$ dy(t)/dt + y(t) = K. Figures 1 & 2 are attached.

Homework Equations: where $\tau$ = 18 and K is given by the equation 5 + t/3 (deduced from Figure 1).

The Attempt at a Solution: I know that a first-order equation's solution is the sum of two parts: the homogeneous solution (the natural response of the equation) and the particular integral (generated by the forcing function). So T(total) = T(homogeneous) + T(particular integral), with T(homogeneous) = Ae^-t/5. T(particular integral) = ? I am not sure how to find the particular integral... Any ideas? If someone can help me with T(particular integral), then I would be able to find the value of A by applying the initial condition T(0) = 5 V.

Attachments: Figure 1 & 2.jpg
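(Not part of the original thread, but a minimal worked sketch of the standard approach, assuming the ramp input really is K(t) = 5 + t/3 as read off Figure 1: for $\tau \frac{dy}{dt} + y = a + bt$, trying a particular solution of the same form, $y_p = \alpha + \beta t$, gives $\tau \beta + \alpha + \beta t = a + bt$, so $\beta = b$ and $\alpha = a - \tau b$, i.e. $y_p(t) = a + b(t - \tau)$. The full response is $y(t) = Ae^{-t/\tau} + a + b(t - \tau)$, and the constant $A$ follows from the initial condition, e.g. $y(0) = y_0$ gives $A = y_0 - a + b\tau$. With the values quoted above ($a = 5$, $b = 1/3$, $\tau = 18~\mathrm{s}$) the particular integral would be $y_p(t) = -1 + t/3$.)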
2022-10-07 19:01:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7397477626800537, "perplexity": 844.1840468968277}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338244.64/warc/CC-MAIN-20221007175237-20221007205237-00041.warc.gz"}
https://www.learncram.com/maharashtra-board/class-9-maths-solutions-part-2-chapter-4-problem-set-4/
# Maharashtra Board Class 9 Maths Solutions Chapter 4 Constructions of Triangles Problem Set 4

## Maharashtra State Board Class 9 Maths Solutions Chapter 4 Constructions of Triangles Problem Set 4

Question 1. Construct ∆XYZ, such that XY + XZ = 10.3 cm, YZ = 4.9 cm, ∠XYZ = 45°.
Solution: As shown in the rough figure, draw seg YZ = 4.9 cm. Draw a ray YT making an angle of 45° with YZ. Take a point W on ray YT, such that YW = 10.3 cm. Now, YX + XW = YW [Y-X-W] ∴ YX + XW = 10.3 cm …(i) Also, XY + XZ = 10.3 cm …(ii) [Given] ∴ YX + XW = XY + XZ [From (i) and (ii)] ∴ XW = XZ ∴ Point X is on the perpendicular bisector of seg WZ. ∴ The point of intersection of ray YT and the perpendicular bisector of seg WZ is point X.
Steps of construction:
i. Draw seg YZ of length 4.9 cm.
ii. Draw ray YT, such that ∠ZYT = 45°.
iii. Mark point W on ray YT such that l(YW) = 10.3 cm.
iv. Join points W and Z.
v. Draw the perpendicular bisector of seg WZ intersecting ray YT. Name the point as X.
vi. Join the points X and Z.
Hence, ∆XYZ is the required triangle.

Question 2. Construct ∆ABC, in which ∠B = 70°, ∠C = 60°, AB + BC + AC = 11.2 cm.
Solution:
i. As shown in the figure, take points D and E on line BC, such that BD = AB and CE = AC …(i) BD + BC + CE = DE [D-B-C, B-C-E] ∴ AB + BC + AC = DE …(ii) Also, AB + BC + AC = 11.2 cm …(iii) [Given] ∴ DE = 11.2 cm [From (ii) and (iii)]
ii. AB = BD [From (i)] ∴ ∠BAD = ∠BDA = x° …(iv) [Isosceles triangle theorem] In ∆ABD, ∠ABC is the exterior angle. ∴ ∠BAD + ∠BDA = ∠ABC [Remote interior angles theorem] ∴ x + x = 70° [From (iv)] ∴ 2x = 70° ∴ x = 35° ∴ ∠D = 35°
iii. Similarly, ∠E = 30°. So in ∆ADE, ∠D = 35°, ∠E = 30° and DE = 11.2 cm.
iv. Since AB = BD, point B lies on the perpendicular bisector of seg AD. Also AC = CE, so point C lies on the perpendicular bisector of seg AE. ∴ Points B and C can be located by drawing the perpendicular bisectors of AD and AE respectively. ∴ ∆ABC can be drawn.
Steps of construction:
i. Draw seg DE of length 11.2 cm.
ii. From point D draw a ray making an angle of 35°.
iii. From point E draw a ray making an angle of 30°.
iv. Name the point of intersection of the two rays as A.
v. Draw the perpendicular bisectors of seg DA and seg EA intersecting seg DE in B and C respectively.
vi. Join AB and AC.
Hence, ∆ABC is the required triangle.

Question 3. The perimeter of a triangle is 14.4 cm and the ratio of the lengths of its sides is 2 : 3 : 4. Construct the triangle.
Solution: Let the common multiple be x. ∴ In ∆ABC, AB = 2x cm, AC = 3x cm, BC = 4x cm. Perimeter of triangle = 14.4 cm ∴ AB + BC + AC = 14.4 ∴ 9x = 14.4 ∴ x = $$\frac{14.4}{9}$$ ∴ x = 1.6 ∴ AB = 2x = 2 × 1.6 = 3.2 cm, AC = 3x = 3 × 1.6 = 4.8 cm, BC = 4x = 4 × 1.6 = 6.4 cm.

Question 4. Construct ∆PQR, in which PQ – PR = 2.4 cm, QR = 6.4 cm and ∠PQR = 55°.
Solution: Here, PQ – PR = 2.4 cm ∴ PQ > PR. As shown in the rough figure, draw seg QR = 6.4 cm. Draw a ray QT making an angle of 55° with QR. Take a point S on ray QT, such that QS = 2.4 cm. Now, PQ – PS = QS [Q-S-P] ∴ PQ – PS = 2.4 cm …(i) Also, PQ – PR = 2.4 cm …(ii) [Given] ∴ PQ – PS = PQ – PR [From (i) and (ii)] ∴ PS = PR ∴ Point P is on the perpendicular bisector of seg RS. ∴ Point P is the intersection of ray QT and the perpendicular bisector of seg RS.
Steps of construction:
i. Draw seg QR of length 6.4 cm.
ii. Draw ray QT, such that ∠RQT = 55°.
iii. Take point S on ray QT such that l(QS) = 2.4 cm.
iv. Join the points S and R.
v. Draw the perpendicular bisector of seg SR intersecting ray QT. Name that point as P.
vi. Join the points P and R.
Hence, ∆PQR is the required triangle.
2020-11-29 07:43:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6044554710388184, "perplexity": 6188.110258938038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141197278.54/warc/CC-MAIN-20201129063812-20201129093812-00060.warc.gz"}
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-6-section-6-3-polar-coordinates-exercise-set-page-742/26
## Precalculus (6th Edition) Blitzer (a)$$(6, \frac{5\pi }{2})$$ (b)$$(-6, \frac{3\pi }{2})$$ (c)$$(6, -\frac{3\pi }{2})$$ To plot the point $(r, \theta )=(6, \frac{\pi }{2})$, begin with the $\frac{\pi }{2}$ angle. Because $\frac{\pi }{2}$ is a positive angle, draw $\theta = \frac{\pi }{2}$ counterclockwise from the polar axis. Now consider $r=6$. Because $r \gt 0$, plot the point by going out six units on the terminal side of $\theta$. Please note that if $n$ is any integer, the point $(r, \theta )$ can be represented as$$(r, \theta ) = (r, \theta +2n\pi ) \\ \text{or} \\ (r, \theta )= (-r, \theta +\pi + 2n\pi ).$$So we have (a)$$(6, \frac{\pi }{2})= (6, \frac{\pi }{2}+2(1)\pi)=(6, \frac{5\pi }{2}),$$(b)$$(6, \frac{\pi }{2})= (-6, \frac{\pi }{2}+\pi +2(0)\pi )=(-6, \frac{3\pi }{2}),$$(c)$$(6, \frac{\pi }{2})= (6, \frac{\pi }{2}+2(-1)\pi)=(6, -\frac{3\pi }{2}).$$
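A quick numerical check of parts (a)–(c), not part of the original answer, confirming in Python that all three pairs name the same Cartesian point:

```python
import numpy as np

def to_cartesian(r, theta):
    """Convert a polar pair (r, theta) to Cartesian (x, y)."""
    return np.array([r * np.cos(theta), r * np.sin(theta)])

reference = to_cartesian(6, np.pi / 2)
for r, theta in [(6, 5 * np.pi / 2), (-6, 3 * np.pi / 2), (6, -3 * np.pi / 2)]:
    assert np.allclose(to_cartesian(r, theta), reference)
print("all representations give the point", reference)  # [0. 6.]
```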
2023-03-22 06:06:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8651108145713806, "perplexity": 469.80314667892037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943750.71/warc/CC-MAIN-20230322051607-20230322081607-00124.warc.gz"}
https://mathematica.stackexchange.com/questions/118560/how-to-sum-over-half-integers
# How to sum over half integers?

I have an expression of the form Sum[1 + x^n + x^(n^2/2), {n, 0, 10}] but I want to sum over half integers, that is, I require that $n \in \mathbb{Z}+\frac{1}{2}$ (and later I also want to consider other fractions). How can I make Mathematica sum over $\mathbb{Z}+\frac{1}{2}$?

• So, Sum[1 + x^n + x^(n^2/2), {n, Range[0, 10] + 1/2}]? – J. M. is away Jun 16 '16 at 10:35
• Oh, is it that simple? OK, thanks! Can I use a set instead of Range[..], like $\mathbb{Z}+1/2$? – Marion Jun 16 '16 at 10:37

Answer: Sum[1 + x^n + x^(n^2/2), {n, 0.5, 10.5}]
• Also Sum[1 + x^n + x^(n^2/2), {n, 1/2, 10 + 1/2}] – Mr.Wizard Jun 16 '16 at 10:41
2019-06-27 09:31:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8001818656921387, "perplexity": 3509.7819581400563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628001014.85/warc/CC-MAIN-20190627075525-20190627101525-00034.warc.gz"}
https://www.tubesandmore.com/products/diodes-rectifiers
# Diodes & Rectifiers Diode - General purpose rectifier, 1A, 1N400X A semiconductor device which passes current in the forward direction (from anode to cathode), and blocks current in the opposite direction. Starting at $0.15 Diode - standard recovery rectifier, 3A, 1000V, 1N5408 1N5408 Diode. Lead mounted standard recovery rectifiers are designed for use in power supplies and other applications having need of a device with the following features: • High Current to Small Size • High Surge Current Capability • Low Forward Voltage Drop$0.38 Diode - 1N34A Germanium, low leakage The 1N34A Germanium diode is an old standby in electronics. Widely used for detecting the rectifying efficiency or for switching on a radio, TV or stereo etc. $1.75 Diode - 1N5817, Schottky, High Switching Speed Schottky diode with low forward voltage drop and high switching speeds.$0.15 Diode - 1N914, Small Signal The 1N914 is a small signal diode similar to the 1N4148 and is common in many older electronics. The 1N914 is general purpose and can be used in a multitude of different applications. $0.18 Diode - small signal fast switching, 1N4148A A semiconductor device which passes current in the forward direction(from anode to cathode), and blocks current in the opposite direction. For extreme fast switches.$0.20 Diode - Zener, 1W, 5% Tolerance A semiconductor device that, above a certain reverse voltage(the zener value) has a sudden rise in current. If forward biased, the diode is an ordinary rectifier. When reverse biased, it exhibits a sharp break in its current-voltage graph. Voltage across the diode remains essentially constant for any further increase of reverse current. Starting at $0.20 Diode - Zener, 5 Watt, ±5% tolerance 5W zener diodes that are great for use in tube amps. These are commonly used to drop B+ voltages for tube modification compatibility or to create low power modes for the amp. A zener is a semiconductor device that, above a certain reverse voltage(the zener value) has a sudden rise in current. If forward biased, the diode is an ordinary rectifier. When reverse biased, it exhibits a sharp break in its current-voltage graph. Voltage across the diode remains essentially constant for any further increase of reverse current.$0.55 Diode - 1N60, Germanium, DO-7 1N60 point-contact germanium diodes in a DO-7 package. 1N60 diodes were used in many vintage electronics and also make for great clipping diodes in stompbox effects pedals. $1.25 Diode - BAT41, Schottky, Small Signal The BAT41 is a schottky diode used in many DIY projects. General purpose metal to silicon diode featuring very low turn-on voltage and fast switching. This device has integrated protection against excessive voltage such as electrostatic discharges.$0.28 Bridge Rectifier - Single-phase, 3A 400V A full wave rectifier with 4 elements in a bridge circuit so that DC voltage is obtained from one pair of junctions when an AC voltage is applied to another pair of junctions. $0.95 Diode - 1N270, Germanium, DO-7 1N60 point-contact germanium diodes in a DO-7 package. 1N60 diodes were used in many vintage electronics and also make for great clipping diodes in stompbox effects pedals. The MXR® Distortion+ uses the 1N270$1.75 Diode - Hexfred, ultrafast soft recovery, 8A, 600V High performance, ultrafast diodes. Improve efficiency, reduce noise, allow higher frequency operation. 
$2.75 Diode - Small Signal Fast Switching, 1N4448 A semiconductor device which passes current in the forward direction(from anode to cathode), and blocks current in the opposite direction. Features • Silicon epitaxial planar diodes • Low power loss, high efficiency • Low leakage • Low forward voltage • High speed switching • High current capability • High reliability Used in a variety of Fender amps. Replacement for Fender part number 0006260049$0.15 Diode - 1N5818, Schottky, High Switching Speed, 30V, 10A 1N5818 schottky diode with low forward voltage drop and high switching speeds. • Very small conduction losses • Negligible switching losses • Extremely fast switching • Low forward voltage drop • Avalanche capability specified $0.15 Diode - Fast Recovery, 1A, 600V, 1N4937 A semiconductor device that, above a certain reverse voltage(the zener value) has a sudden rise in current. If forward biased, the diode is an ordinary rectifier. When reverse biased, it exhibits a sharp break in its current-voltage graph. Voltage across the diode remains essentially constant for any further increase of reverse current.$0.39 Diode - Silicon Rectifier, 3A, 400V, 1N5404 A semiconductor device which passes current in the forward direction(from anode to cathode), and blocks current in the opposite direction. $0.35 Bridge Rectifier - Single-phase, 2A, In-line 2A bridge rectifier in an inline package ideal for PCB use. Features: • Low reverse leakage • Low forward voltage • High forward surge current capability Starting at$0.85 Diode - Hexfred, 16A, 600V, ultrafast High performance, ultrafast diodes. Improve efficiency, reduce noise, allow higher frequency operation. $2.92 Save 27% Originally:$4.00 Transistor - 2N5135, Silicon, TO-105 case, NPN The 2N5135 is a silicon NPN transistor in a TO-105 case. This transistor is a close match to the rare 2N5133 but with higher voltage ratings and is in a larger package. The hFE range is less wide than the 2N5133 with more values in the range found in the batches used on vintage Big Muff pedals. The lower hFE range also makes these a great option for silicon fuzz pedals. \$1.75
2022-05-17 04:56:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4004339575767517, "perplexity": 10757.187147144257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662515501.4/warc/CC-MAIN-20220517031843-20220517061843-00400.warc.gz"}
https://lib5c.readthedocs.io/en/0.6.0/lib5c.util.primers/
# lib5c.util.primers module¶ Module containing utilities for manipulating 5C primer information. lib5c.util.primers.aggregate_primermap(primermap, region_order=None)[source] Aggregates a primermap into a single list. Parameters • primermap (Dict[str, List[Dict[str, Any]]]) – Primermap to aggregate. See lib5c.parsers.primers.get_primermap(). • region_order (Optional[List[str]]) – Order in which regions should be concatenated. If None, the regions will be concatenated in order of increasing genomic coordinate. See lib5c.util.primers.determine_region_order(). Returns The dicts represent primers in the same format as the inner dicts of the passed primermap; however, they exist as a single flat list instead of within an outer dict structure. The regions are arranged within this list in contiguous blocks, arranged in the order specified by the region_order kwarg. Return type List[Dict[str, Any]] Notes This function returns a list of references to the original primermap, under the assumption that primer dicts are rarely modified. To avoid this, pass a copy of the primermap instead of the original primermap. lib5c.util.primers.determine_region_order(primermap)[source] Orders regions in a primermap by genomic coordinate. Parameters primermap (Dict[str, List[Dict[str, Any]]]) – Primermap containing information about the regions to be ordered. See lib5c.parsers.primers.get_primermap(). Returns List of ordered region names. Return type List[str] lib5c.util.primers.guess_bin_step(regional_pixelmap)[source] Guesses the bin step from a regional pixelmap. Parameters regional_pixelmap (List[Dict[str, Any]]) – Ordered list of bins for a single region. Returns The guessed bin step for this pixelmap. Return type int lib5c.util.primers.natural_sort_key(s)[source] Function to enable natural sorting of alphanumeric strings. Parameters s (str) – String being sorted. Returns This list is an alternative represenation of the input string that will sort in natural order. Return type List[Union[int, str]] Notes Function written by SO user http://stackoverflow.com/users/15055/claudiu and provided in answer http://stackoverflow.com/a/16090640.
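A hypothetical usage sketch (not from the lib5c documentation itself) based only on the signatures above; the exact keys required in each primer dict are defined by lib5c.parsers.primers.get_primermap(), so the dicts below are purely illustrative.

```python
from lib5c.util.primers import (aggregate_primermap, determine_region_order,
                                natural_sort_key)

# Illustrative primermap: region name -> list of primer dicts (keys made up here)
primermap = {
    'regionB': [{'chrom': 'chr1', 'start': 50000, 'end': 51000}],
    'regionA': [{'chrom': 'chr1', 'start': 1000, 'end': 2000}],
}

order = determine_region_order(primermap)                  # regions ordered by genomic coordinate
flat = aggregate_primermap(primermap, region_order=order)  # single flat list of primer dicts

# natural_sort_key makes alphanumeric names sort in human order
names = ['primer_10', 'primer_2', 'primer_1']
print(sorted(names, key=natural_sort_key))                 # ['primer_1', 'primer_2', 'primer_10']
```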
2020-10-30 12:59:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26253727078437805, "perplexity": 8982.047032175451}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107910815.89/warc/CC-MAIN-20201030122851-20201030152851-00139.warc.gz"}
https://cplberry.com/2016/12/10/gw150914-the-rise-of-the-revenge/
# GW150914—The papers II GW150914, The Event to its friends, was our first direct observation of gravitational waves. To accompany the detection announcement, the LIGO Scientific & Virgo Collaboration put together a suite of companion papers, each looking at a different aspect of the detection and its implications. Some of the work we wanted to do was not finished at the time of the announcement; in this post I’ll go through the papers we have produced since the announcement. ### The papers I’ve listed the papers below in an order that makes sense to me when considering them together. Each started off as an investigation to check that we really understood the signal and were confident that the inferences made about the source were correct. We had preliminary results for each at the time of the announcement. Since then, the papers have evolved to fill different niches [bonus points note]. #### 13. The Basic Physics Paper Title: The basic physics of the binary black hole merger GW150914 arXiv: 1608.01940 [gr-qc] Journal: Annalen der Physik529(1–2):1600209(17); 2017 The Event was loud enough to spot by eye after some simple filtering (provided that you knew where to look). You can therefore figure out some things about the source with back-of-the-envelope calculations. In particular, you can convince yourself that the source must be two black holes. This paper explains these calculations at a level suitable for a keen high-school or undergraduate physics student. More details: The Basic Physics Paper summary #### 14. The Precession Paper Title: Improved analysis of GW150914 using a fully spin-precessing waveform model arXiv: 1606.01210 [gr-qc] Journal: Physical Review X; 6(4):041014(19); 2016 To properly measure the properties of GW150914’s source, you need to compare the data to predicted gravitational-wave signals. In the Parameter Estimation Paper, we did this using two different waveform models. These models include lots of features binary black hole mergers, but not quite everything. In particular, they don’t include all the effects of precession (the wibbling of the orbit because of the black holes spins). In this paper, we analyse the signal using a model that includes all the precession effects. We find results which are consistent with our initial ones. More details: The Precession Paper summary #### 15. The Systematics Paper Title: Effects of waveform model systematics on the interpretation of GW150914 arXiv: 1611.07531 [gr-qc] Journal: Classical & Quantum Gravity; 34(10):104002(48); 2017 LIGO science summary: Checking the accuracy of models of gravitational waves for the first measurement of a black hole merger To check how well our waveform models can measure the properties of the source, we repeat the parameter-estimation analysis on some synthetic signals. These fake signals are calculated using numerical relativity, and so should include all the relevant pieces of physics (even those missing from our models). This paper checks to see if there are any systematic errors in results for a signal like GW150914. It looks like we’re OK, but this won’t always be the case. More details: The Systematics Paper summary #### 16. 
The Numerical Relativity Comparison Paper Title: Directly comparing GW150914 with numerical solutions of Einstein’s equations for binary black hole coalescence arXiv: 1606.01262 [gr-qc] Journal: Physical Review D; 94(6):064035(30); 2016 LIGO science summary: Directly comparing the first observed gravitational waves to supercomputer solutions of Einstein’s theory Since GW150914 was so short, we can actually compare the data directly to waveforms calculated using numerical relativity. We only have a handful of numerical relativity simulations, but these are enough to give an estimate of the properties of the source. This paper reports the results of this investigation. Unsurprisingly, given all the other checks we’ve done, we find that the results are consistent with our earlier analysis. If you’re interested in numerical relativity, this paper also gives a nice brief introduction to the field. More details: The Numerical Relativity Comparison Paper summary ### The Basic Physics Paper Synopsis: Basic Physics Paper Read this if: You are teaching a class on gravitational waves Favourite part: This is published in Annalen der Physik, the same journal that Einstein published some of his monumental work on both special and general relativity It’s fun to play with LIGO data. The LIGO Open Science Center (LOSC), has put together a selection of tutorials to show you some of the basics of analysing signals. I wouldn’t blame you if you went of to try them now, instead of reading the rest of this post. Even though it would mean that no-one read this sentence. Purple monkey dishwasher. The LOSC tutorials show you how to make your own version of some of the famous plots from the detection announcement. This paper explains how to go from these, using the minimum of theory, to some inferences about the signal’s source: most significantly that it must be the merger of two black holes. GW150914 is a chirp. It sweeps up from low frequency to high. This is what you would expect of a binary system emitting gravitational waves. The gravitational waves carry away energy and angular momentum, causing the binary’s orbit to shrink. This means that the orbital period gets shorter, and the orbital frequency higher. The gravitational wave frequency is twice the orbital frequency (for circular orbits), so this goes up too. The rate of change of the frequency depends upon the system’s mass. To first approximation, it is determined by the chirp mass, $\displaystyle \mathcal{M} = \frac{(m_1 m_2)^{3/5}}{(m_1 + m_2)^{1/5}}$, where $m_1$ and $m_2$ are the masses of the two components of the binary. By looking at the signal (go on, try the LOSC tutorials), we can estimate the gravitational wave frequency $f_\mathrm{GW}$ at different times, and so track how it changes. You can rewrite the equation for the rate of change of the gravitational wave frequency $\dot{f}_\mathrm{GW}$, to give an expression for the chirp mass $\displaystyle \mathcal{M} = \frac{c^3}{G}\left(\frac{5}{96} \pi^{-8/3} f_\mathrm{GW}^{-11/3} \dot{f}_\mathrm{GW}\right)^{3/5}$. Here $c$ and $G$ are the speed of light and the gravitational constant, which usually pop up in general relativity equations. If you use this formula (perhaps fitting for the trend $f_\mathrm{GW}$) you can get an estimate for the chirp mass. By fiddling with your fit, you’ll see there is some uncertainty, but you should end up with a value around $30 M_\odot$ [bonus note]. Next, let’s look at the peak gravitational wave frequency (where the signal is loudest). 
This should be when the binary finally merges. The peak is at about $150~\mathrm{Hz}$. The orbital frequency is half this, so $f_\mathrm{orb} \approx 75~\mathrm{Hz}$. The orbital separation $R$ is related to the frequency by $\displaystyle R = \left[\frac{GM}{(2\pi f_\mathrm{orb})^2}\right]^{1/3}$, where $M = m_1 + m_2$ is the binary’s total mass. This formula is only strictly true in Newtonian gravity, and not in full general relativity, but it’s still a reasonable approximation. We can estimate a value for the total mass from our chirp mass; if we assume the two components are about the same mass, then $M = 2^{6/5} \mathcal{M} \approx 70 M_\odot$. We now want to compare the binary’s separation to the size of black hole with the same mass. A typical size for a black hole is given by the Schwarzschild radius $\displaystyle R_\mathrm{S} = \frac{2GM}{c^2}$. If we divide the binary separation by the Schwarzschild radius we get the compactness $\mathcal{R} = R/R_\mathrm{S} \approx 1.7$. A compactness of $\sim 1$ could only happen for black holes. We could maybe get a binary made of two neutron stars to have a compactness of $\sim2$, but the system is too heavy to contain two neutron stars (which have a maximum mass of about $3 M_\odot$). The system is so compact, it must contain black holes! What I especially like about the compactness is that it is unaffected by cosmological redshifting. The expansion of the Universe will stretch the gravitational wave, such that the frequency gets lower. This impacts our estimates for the true orbital frequency and the masses, but these cancel out in the compactness. There’s no arguing that we have a highly relativistic system. You might now be wondering what if we don’t assume the binary is equal mass (you’ll find it becomes even more compact), or if we factor in black hole spin, or orbital eccentricity, or that the binary will lose mass as the gravitational waves carry away energy? The paper looks at these and shows that there is some wiggle room, but the signal really constrains you to have black holes. This conclusion is almost as inescapable as a black hole itself. There are a few things which annoy me about this paper—I think it could have been more polished; “Virgo” is improperly capitalised on the author line, and some of the figures are needlessly shabby. However, I think it is a fantastic idea to put together an introductory paper like this which can be used to show students how you can deduce some properties of GW150914’s source with some simple data analysis. I’m happy to be part of a Collaboration that values communicating our science to all levels of expertise, not just writing papers for specialists! During my undergraduate degree, there was only a single lecture on gravitational waves [bonus note]. I expect the topic will become more popular now. If you’re putting together such a course and are looking for some simple exercises, this paper might come in handy! Or if you’re a student looking for some project work this might be a good starting reference—bonus points if you put together some better looking graphs for your write-up. If this paper has whetted your appetite for understanding how different properties of the source system leave an imprint in the gravitational wave signal, I’d recommend looking at the Parameter Estimation Paper for more. 
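For a concrete feel for these numbers, here is a minimal numerical sketch of the back-of-the-envelope calculation (my own illustration, not from the paper); the frequency and its rate of change below are assumed example values of roughly the right size for GW150914 in the detector frame, not measured ones.

```python
import numpy as np

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8        # speed of light [m/s]
M_sun = 1.989e30   # solar mass [kg]

# Assumed example values read off a time-frequency track (detector frame)
f_gw = 75.0        # gravitational-wave frequency [Hz]
fdot = 1.25e3      # its rate of change at that frequency [Hz/s]

# Chirp mass from the formula quoted above
chirp_mass = (c**3 / G) * ((5 / 96) * np.pi**(-8 / 3) * f_gw**(-11 / 3) * fdot)**(3 / 5)
print(f"chirp mass ~ {chirp_mass / M_sun:.0f} M_sun")    # ~30 M_sun

# Compactness at the loudest point (f_GW ~ 150 Hz), assuming equal masses
M_total = 2**(6 / 5) * chirp_mass
f_orb = 150.0 / 2                                        # orbital frequency [Hz]
R = (G * M_total / (2 * np.pi * f_orb)**2)**(1 / 3)      # Newtonian orbital separation
R_s = 2 * G * M_total / c**2                             # Schwarzschild radius for the total mass
print(f"compactness R/R_s ~ {R / R_s:.1f}")              # ~1.7
```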
### The Precession Paper Synopsis: Precession Paper Read this if: You want our most detailed analysis of the spins of GW150914's black holes Favourite part: We might have previously over-estimated our systematic error The Basic Physics Paper explained how you could work out some properties of GW150914's source with simple calculations. These calculations are rather rough, and lead to estimates with large uncertainties. To do things properly, you need templates for the gravitational wave signal. This is what we did in the Parameter Estimation Paper. In our original analysis, we used two different waveforms: • The first we referred to as EOBNR, short for the lengthy technical name SEOBNRv2_ROM_DoubleSpin. In short: This includes the spins of the two black holes, but assumes they are aligned such that there's no precession. In detail: The waveform is calculated by using effective-one-body dynamics (EOB), an approximation for the binary's motion calculated by transforming the relevant equations into those for a single object. The S at the start stands for spin: the waveform includes the effects of both black holes having spins which are aligned (or antialigned) with the orbital angular momentum. Since the spins are aligned, there's no precession. The EOB waveforms are tweaked (or calibrated, if you prefer) by comparing them to numerical relativity (NR) waveforms, in particular to get the merger and ringdown portions of the waveform right. While it is easier to solve the EOB equations than full NR simulations, they still take a while. To speed things up, we use a reduced-order model (ROM), a surrogate model constructed to match the waveforms, so we can go straight from system parameters to the waveform, skipping calculating the dynamics of the binary. • The second we refer to as IMRPhenom, short for the technical IMRPhenomPv2. In short: This waveform includes the effects of precession using a simple approximation that captures the most important effects. In detail: The IMR stands for inspiral–merger–ringdown, the three phases of the waveform (which are included in the EOBNR model too). Phenom is short for phenomenological: the waveform model is constructed by tuning some (arbitrary, but cunningly chosen) functions to match waveforms calculated using a mix of EOB, NR and post-Newtonian theory. This is done for black holes with (anti)aligned spins to first produce the IMRPhenomD model. This is then twisted up, to include the dominant effects of precession to make IMRPhenomPv2. This bit is done by combining the two spins together to create a single parameter, which we call $\chi_\mathrm{p}$, which determines the amount of precession. Since we are combining the two spins into one number, we lose a bit of the richness of the full dynamics, but we get the main part. The EOBNR and IMRPhenom models are created by different groups using different methods, so they are useful checks of each other. If there is an error in our waveforms, it would lead to systematic errors in our estimated parameters. In this paper, we use another waveform model, a precessing EOBNR waveform, technically known as SEOBNRv3. This model includes all the effects of precession, not just the simple model of the IMRPhenom model. However, it is also computationally expensive, meaning that the analysis takes a long time (we don't have a ROM to speed things up, as we do for the other EOBNR waveform)—each waveform takes over 20 times as long to calculate as the IMRPhenom model [bonus note].
Our results show that all three waveforms give similar results. The precessing EOBNR results are generally more like the IMRPhenom results than the non-precessing EOBNR results are. The plot below compares results from the different waveforms [bonus note]. Comparison of parameter estimates for GW150914 using different waveform models. The bars show the 90% credible intervals, the dark bars show the uncertainty on the 5%, 50% and 95% quantiles from the finite number of posterior samples. The top bar is for the non-precessing EOBNR model, the middle is for the precessing IMRPhenom model, and the bottom is for the fully precessing EOBNR model. Figure 1 of the Precession Paper; see Figure 9 for a comparison of averaged EOBNR and IMRPhenom results, which we have used for our overall results. We had used the difference between the EOBNR and IMRPhenom results to estimate potential systematic error from waveform modelling. Since the two precessing models are generally in better agreement, we may have been too pessimistic here. The main difference in results is that our new refined analysis gives tighter constraints on the spins. From the plot above you can see that the uncertainties for the spin magnitudes of the heavier black hole $a_1$, the lighter black hole $a_2$ and the final black hole (resulting from the coalescence) $a_\mathrm{f}$ are slightly narrower. This makes sense, as including the extra imprint from the full effects of precession gives us a bit more information about the spins. The plots below show the constraints on the spins from the two precessing waveforms: the distributions are more condensed with the new results. Comparison of orientations and magnitudes of the two component spins. The spin is perfectly aligned with the orbital angular momentum if the angle is 0. The left disk shows results using the precessing IMRPhenom model, the right using the precessing EOBNR model. In each, the distribution for the more massive black hole is on the left, and for the smaller black hole on the right. Adapted from Figure 5 of the Parameter Estimation Paper and Figure 4 of the Precession Paper. In conclusion, this analysis has shown that including the full effects of precession does give slightly better estimates of the black hole spins. However, it is safe to trust the IMRPhenom results. If you are looking for the best parameter estimates for GW150914, these results are better than the original results in the Parameter Estimation Paper. However, I would prefer the results in the O1 Binary Black Hole Paper, even though this doesn't use the fully precessing EOBNR waveform, because we do use an updated calibration of the detector data. Neither the choice of waveform nor the calibration makes much of an impact on the results, so for most uses it shouldn't matter too much. ### The Systematics Paper Synopsis: Systematics Paper Read this if: You want to know how parameter estimation could fare for future detections Favourite part: There's no need to panic yet The Precession Paper highlighted how important it is to have good waveform templates. If there is an error in our templates, either because of modelling or because we are missing some physics, then our estimated parameters could be wrong—we would have a source of systematic error. We know our waveform models aren't perfect, so there must be some systematic error; the question is how much?
From our analysis so far (such as the good agreement between different waveforms in the Precession Paper), we think that systematic error is less significant than the statistical uncertainty which is a consequence of noise in the detectors. In this paper, we try to quantify systematic error for GW150914-like systems. To assess systematic errors, we analyse waveforms calculated by numerical relativity simulations injected into data around the time of GW150914. Numerical relativity exactly solves Einstein's field equations (which govern general relativity), so results of these simulations give the most accurate predictions for the form of gravitational waves. As we know the true parameters for the injected waveforms, we can compare these to the results of our parameter estimation analysis to check for biases. We use waveforms computed by two different codes: the Spectral Einstein Code (SpEC) and the Bifunctional Adaptive Mesh (BAM) code. (Don't the names make them sound like such fun?) Most waveforms are injected into noise-free data, so that we know that any offset in estimated parameters is due to the waveforms and not detector noise; however, we also tried a few injections into real data from around the time of GW150914. The signals are analysed using our standard set-up as used in the Parameter Estimation Paper (a couple of injections are also included in the Precession Paper, where they are analysed with the fully precessing EOBNR waveform to illustrate its accuracy). The results show that in most cases, systematic errors from our waveform models are small. However, systematic errors can be significant for some orientations of precessing binaries. If we are looking at the orbital plane edge on, then there can be errors in the distance, the mass ratio and the spins, as illustrated below [bonus note]. Thankfully, edge-on binaries are quieter than face-on binaries, and so should make up only a small fraction of detected sources (GW150914 is most probably face off). Furthermore, biases are only significant for some polarization angles (an angle which describes the orientation of the detectors relative to the stretch/squash of the gravitational wave polarizations). Factoring this in, a rough estimate is that about 0.3% of detected signals would fall into the unlucky region where waveform biases are important. Parameter estimation results for two different GW150914-like numerical relativity waveforms for different inclinations and polarization angles. An inclination of $0^\circ$ means the binary is face on, $180^\circ$ means it is face off, and an inclination around $90^\circ$ is edge on. The bands show the recovered 90% credible interval; the dark lines the median values, and the dotted lines show the true values. The (grey) polarization angle $\psi = 82^\circ$ was chosen so that the detectors are approximately insensitive to the $h_+$ polarization. Figure 4 of the Systematics Paper. While it seems that we don't have to worry about waveform error for GW150914, this doesn't mean we can relax. Other systems may show up different aspects of waveform models. For example, our approximants only include the dominant modes (spherical harmonic decompositions of the gravitational waves). Higher-order modes have more of an impact in systems where the two black holes have unequal masses, or where the binary has a higher total mass, so that the merger and ringdown parts of the waveform are more important.
We need to continue work on developing improved waveform models (or at least, including our uncertainty about them in our analysis), and remember to check for biases in our results! ### The Numerical Relativity Comparison Paper Synopsis: Numerical Relativity Comparison Paper Read this if: You are really suspicious of our waveform models, or really like long tables of numerical data Favourite part: We might one day have enough numerical relativity waveforms to do full parameter estimation with them In the Precession Paper we discussed how important it was to have accurate waveforms; in the Systematics Paper we analysed numerical relativity waveforms to check the accuracy of our results. Since we do have numerical relativity waveforms, you might be wondering why we don't just use these in our analysis? In this paper, we give it a go. Our standard parameter-estimation code (LALInference) randomly hops around parameter space; for each set of parameters we generate a new waveform and see how this matches the data. This is an efficient way of exploring the parameter space. Numerical relativity waveforms are too computationally expensive to generate one each time we hop. We need a different approach. The alternative is to use existing waveforms, and see how each of them matches. Each simulation gives the gravitational waves for a particular mass ratio and combination of spins; we can scale the waves to examine different total masses, and it is easy to consider what the waves would look like if measured at a different position (distance, inclination or sky location). Therefore, we can actually cover a fair range of possible parameters with a given set of simulations. To keep things quick, the code averages over positions; this means we don't currently get an estimate on the redshift, and so all the masses are given as measured in the detector frame and not as the intrinsic masses of the source. The number of numerical relativity simulations is still quite sparse, so to get nice credible regions, a simple Gaussian fit is used for the likelihood. I'm not convinced that this captures all the detail of the true likelihood, but it should suffice for a broad estimate of the width of the distributions. The results of this analysis generally agree with those from our standard analysis. This is a relief, but not surprising given all the other checks that we have done! It hints that we might be able to get slightly better measurements of the spins and mass ratios if we used more accurate waveforms in our standard analysis, but the overall conclusions are sound. I've been asked whether, since these results use numerical relativity waveforms, they are the best to use. My answer is no. As well as potential error from the sparse sampling of simulations, there are several small things to be wary of. • We only have short numerical relativity waveforms. This means that the analysis only goes down to a frequency of $30~\mathrm{Hz}$ and ignores earlier cycles. The standard analysis includes data down to $20~\mathrm{Hz}$, and this extra data does give you a little information about precession. (The limit of the simulation length also means you shouldn't expect this type of analysis for the longer LVT151012 or GW151226 any time soon.) • This analysis doesn't include the effects of calibration uncertainty. There is some uncertainty in how to convert from the measured signal at the detectors' output to the physical strain of the gravitational wave. Our standard analysis folds this in, but that isn't done here.
The estimates of the spin can be affected by miscalibration. (This paper also uses the earlier calibration, rather than the improved calibration of the O1 Binary Black Hole Paper.) • Despite numerical relativity simulations producing waveforms which include all higher modes, not all of them are actually used in the analysis. More are included than in the standard analysis, so this will probably make negligible difference. Finally, I wanted to mention one more detail, as I think it is not widely appreciated. The gravitational wave likelihood is given by an inner product $\displaystyle L \propto \exp \left[- \int_{-\infty}^{\infty} \mathrm{d}f \frac{|s(f) - h(f)|^2}{S_n(f)} \right]$, where $s(f)$ is the signal, $h(f)$ is our waveform template and $S_n(f)$ is the noise power spectral density (PSD). These are the three things we need to know to get the right answer. This paper, together with the Precession Paper and the Systematics Paper, has been looking at error from our waveform models $h(f)$. Uncertainty from the calibration of $s(f)$ is included in the standard analysis, so we know how to factor this in (and people are currently working on more sophisticated models for calibration error). This leaves the noise PSD $S_n(f)$. The noise PSD varies all the time, so it needs to be estimated from the data. If you use a different stretch of data, you'll get a different estimate, and this will impact your results. Ideally, you would want to estimate it from the time span that includes the signal itself, but that's tricky as there's a signal in the way. The analysis in this paper calculates the noise power spectral density using a different time span and a different method than our standard analysis; therefore, we expect some small difference in the estimated parameters. This might be comparable to (or even bigger than) the difference from switching waveforms! We see from the similarity of results that this cannot be a big effect, but it means that you shouldn't obsess over small differences, thinking that they could be due to waveform differences, when they could just come from estimation of the noise PSD. Lots of work is currently going into making sure that the numerator term $|s(f) - h(f)|^2$ is accurate. I think that the denominator $S_n(f)$ needs attention too. Since we have been kept rather busy, including uncertainty in PSD estimation will have to wait for a future set of papers. ### Bonus notes #### Finches 100 bonus points to anyone who folds up the papers to make beaks suitable for eating different foods. Our current best estimate for the chirp mass (from the O1 Binary Black Hole Paper) would be $30.6^{+1.9}_{-1.6} M_\odot$. You need proper templates for the gravitational wave signal to calculate this. If you factor in that the gravitational wave gets redshifted (shifted to lower frequency by the expansion of the Universe), then the true chirp mass of the source system is $28.1^{+1.8}_{-1.5} M_\odot$. #### Formative experiences My one undergraduate lecture on gravitational waves was the penultimate lecture of the fourth-year general relativity course. I missed this lecture, as I had a PhD interview (at the University of Birmingham). Perhaps if I had sat through it, my research career would have been different? #### Good things come… The computational expense of a waveform is important, as when we are doing parameter estimation, we calculate lots (tens of millions) of waveforms for different parameters to see how they match the data.
Before O1, the task of using SEOBNRv3 for parameter estimation seemed quixotic. The first detection, however, was enticing enough to give it a try. It was a truly heroic effort by Vivien Raymond and team that produced these results—I am slightly suspicious that Vivien might actually be a wizard.

GW150914 is a short signal, meaning it is relatively quick to analyse. Still, it required using all the tricks at our disposal to get results in a reasonable time. When it came time to submit final results for the Discovery Paper, we had just about 1,000 samples from the posterior probability distribution for the precessing EOBNR waveform. For comparison, we had over 45,000 samples for the non-precessing EOBNR waveform. 1,000 samples isn’t enough to accurately map out the probability distributions, so we decided to wait and collect more samples. The preliminary results showed that things looked similar, so there wouldn’t be a big difference in the science we could do. For the Precession Paper, we finally collected 2,700 samples. This is still a relatively small number, so we carefully checked the uncertainty in our results due to the finite number of samples.

The Precession Paper has shown that it is possible to use the precessing EOBNR for parameter estimation, but don’t expect it to become the norm, at least until we have a faster implementation of it. Vivien is only human, and I’m sure his family would like to see him occasionally.

#### Parameter key

In case you are wondering what all the symbols in the results plots stand for, here are their usual definitions. First up, the various masses

• $m_1$—the mass of the heavier black hole, sometimes called the primary black hole;
• $m_2$—the mass of the lighter black hole, sometimes called the secondary black hole;
• $M$—the total mass of the binary, $M = m_1 + m_2$;
• $M_\mathrm{f}$—the mass of the final black hole (after merger);
• $\mathcal{M}$—the chirp mass, the combination of the two component masses which sets how the binary inspirals together;
• $q$—the mass ratio, $q = m_2/m_1 \leq 1$. Confusingly, numerical relativists often use the opposite convention $q = m_1/m_2 \geq 1$ (which is why the Numerical Relativity Comparison Paper discusses results in terms of $1/q$: we can keep the standard definition, but all the numbers are numerical relativist friendly).

A superscript “source” is sometimes used to distinguish the actual physical masses of the source from those measured by the detector, which have been affected by cosmological redshift. The measured detector-frame mass is $m = (1 + z) m^\mathrm{source}$, where $m^\mathrm{source}$ is the true, redshift-corrected source-frame mass and $z$ is the redshift. The mass ratio $q$ is independent of the redshift. On the topic of redshift, we have

• $z$—the cosmological redshift ($z = 0$ would be now);
• $D_\mathrm{L}$—the luminosity distance.

The luminosity distance sets the amplitude of the signal, as does the orientation, which we often describe using

• $\iota$—the inclination, the angle between the line of sight and the orbital angular momentum ($\boldsymbol{L}$). This is zero for a face-on binary.
• $\theta_{JN}$—the angle between the line of sight ($\boldsymbol{N}$) and the total angular momentum of the binary ($\boldsymbol{J}$); this is approximately equal to the inclination, but is easier to use for precessing binaries.
As well as masses, black holes have spins

• $a_1$—the (dimensionless) spin magnitude of the heavier black hole, which is between $0$ (no spin) and $1$ (maximum spin);
• $a_2$—the (dimensionless) spin magnitude of the lighter black hole;
• $a_\mathrm{f}$—the (dimensionless) spin magnitude of the final black hole;
• $\chi_\mathrm{eff}$—the effective inspiral spin parameter, a combination of the two component spins which has the largest impact on the rate of inspiral (think of it as the spin equivalent of the chirp mass);
• $\chi_\mathrm{p}$—the effective precession spin parameter, a combination of spins which indicates the dominant effects of precession; it is $0$ for no precession and $1$ for maximal precession;
• $\theta_{LS_1}$—the primary tilt angle, the angle between the orbital angular momentum and the heavier black hole’s spin ($\boldsymbol{S_1}$). This is zero for aligned spin.
• $\theta_{LS_2}$—the secondary tilt angle, the angle between the orbital angular momentum and the lighter black hole’s spin ($\boldsymbol{S_2}$).
• $\phi_{12}$—the angle between the projections of the two spins on the orbital plane.

The orientation angles change in precessing binaries (when the spins are not perfectly aligned or antialigned with the orbital angular momentum), so we quote values at a reference time corresponding to when the gravitational wave frequency is $20~\mathrm{Hz}$.

Finally (for the plots shown here)

• $\psi$—the polarization angle; this is zero when the detector arms are parallel to the $h_+$ polarization’s stretch/squash axis.

For more detailed definitions, check out the Parameter Estimation Paper or the LALInference Paper.
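To make the parameter key above a little more concrete, here is a minimal sketch of how the main derived quantities follow from the component masses and aligned spin components. This is not LALInference code; the function names and the example numbers (spins, redshift) are purely illustrative assumptions.

```python
# Minimal sketch (not LALInference): derived parameters from component masses and spins.
# The example numbers below are illustrative, not measured values.

def chirp_mass(m1, m2):
    """Chirp mass, in the same units as m1 and m2."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def mass_ratio(m1, m2):
    """Standard convention: q = m2/m1 <= 1, with m1 the heavier component."""
    return m2 / m1

def chi_eff(m1, m2, chi1z, chi2z):
    """Effective inspiral spin: mass-weighted sum of the aligned spin components."""
    return (m1 * chi1z + m2 * chi2z) / (m1 + m2)

def source_frame_mass(m_det, z):
    """Undo the cosmological redshift of a detector-frame mass."""
    return m_det / (1.0 + z)

if __name__ == "__main__":
    m1, m2 = 36.0, 29.0        # hypothetical detector-frame masses (solar masses)
    chi1z, chi2z = 0.3, -0.1   # hypothetical aligned spin components
    z = 0.09                   # hypothetical redshift
    print("M       =", m1 + m2)
    print("Mchirp  =", chirp_mass(m1, m2))
    print("q       =", mass_ratio(m1, m2))
    print("chi_eff =", chi_eff(m1, m2, chi1z, chi2z))
    print("source-frame chirp mass =", source_frame_mass(chirp_mass(m1, m2), z))
```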
https://www.broadinstitute.org/gatk/guide/tagged?tag=indel-realignment
# Tagged with #indel-realignment

1 documentation article | 0 announcements | 3 forum discussions

Created 2016-03-08 16:49:04 | Updated 2016-03-23 22:16:45 | Tags: indelrealigner realignertargetcreator indel-realignment

This tutorial replaces Tutorial#2800 and applies to data types within the scope of the GATK Best Practices variant discovery workflow. We provide example data and example commands for performing local realignment around small insertions and deletions (indels) against a reference. The resulting BAM reduces false positive SNPs and represents indels parsimoniously. First we use RealignerTargetCreator to identify and create a target intervals list (step 1). Then we perform local realignment for the target intervals using IndelRealigner (step 2).

## 1. Introduction and tutorial materials

#### Why do indel realignment?

Local realignment around indels allows us to correct mapping errors made by genome aligners and make read alignments more consistent in regions that contain indels. Genome aligners can only consider each read independently, and the scoring strategies they use to align reads relative to the reference limit their ability to align reads well in the presence of indels. Depending on the variant event and its relative location within a read, the aligner may favor alignments with mismatches or soft-clips instead of opening a gap in either the read or the reference sequence. In addition, the aligner's scoring scheme may use arbitrary tie-breaking, leading to different, non-parsimonious representations of the event in different reads.

In contrast, local realignment considers all reads spanning a given position. This makes it possible to achieve a high-scoring consensus that supports the presence of an indel event. It also produces a more parsimonious representation of the data in the region. This two-step indel realignment process first identifies such regions where alignments may potentially be improved, then realigns the reads in these regions using a consensus model that takes all reads in the alignment context together.

#### Prerequisites

• Installed GATK tools
• Coordinate-sorted and indexed BAM alignment data
• Reference sequence, index and dictionary
• An optional VCF file representing population variants, subset for indels
• To download the reference, open ftp://gsapubftp-anonymous@ftp.broadinstitute.org/bundle/2.8/b37/ in your browser. Leave the password field blank. Download the following three files (~860 MB) to the same folder: human_g1k_v37_decoy.fasta.gz, .fasta.fai.gz, and .dict.gz. This same reference is available to load in IGV.
• Click tutorial_7156.tar.gz to download the tutorial data. The data is human paired 2x150 whole genome sequence reads originally aligning at ~30x depth of coverage. The sample is a PCR-free preparation of the NA12878 individual run on the HiSeq X platform. I took the reads aligning to a one Mbp genomic interval (10:96,000,000-97,000,000) and sanitized and realigned the reads (BWA-MEM -M) to the entire genome according to the workflow presented in Tutorial#6483 and marked duplicates using MarkDuplicates according to Tutorial#6747.

We expect the alignment to reveal a good proportion of indels given its long reads (~150 bp per read), high complexity (PCR-free whole genome data) and deep coverage depth (30x). The tutorial download also contains a known indels VCF from Phase 3 of the 1000 Genomes Project, subset for indel-only records in the interval 10:96,000,000-97,000,000.
These represent consensus common and low-frequency indels in the studied populations from multiple approaches. The individual represented by our snippet, NA12878, is part of the 1000 Genomes Project data. Because of the differences in technology and methods used by the Project versus our sample library, our library has potential to reveal additional variants.

## 2. Create target intervals list using RealignerTargetCreator

For simplicity, we use a single known indels VCF, included in the tutorial data. For recommended resources, see Article#1247. In the command, RealignerTargetCreator takes a coordinate-sorted and indexed BAM and a VCF of known indels and creates a target intervals file.

    java -jar GenomeAnalysisTK.jar \
        -T RealignerTargetCreator \
        -R human_g1k_v37_decoy.fasta \
        -L 10:96000000-97000000 \
        -known INDEL_chr10_1Mb_b37_1000G_phase3_v4_20130502.vcf \
        -I 7156_snippet.bam \
        -o 7156_realignertargetcreator.intervals

In the resulting file, 7156_realignertargetcreator.intervals, intervals represent sites of extant and potential indels. If sites are proximal, the tool represents them as a larger interval spanning the sites.

• We specify the BAM alignment file with -I.
• We specify the known indels VCF file with -known. The known indels VCF contains indel records only.
• Three input choices are technically feasible in creating a target intervals list: you may provide RealignerTargetCreator (i) one or more VCFs of known indels, each passed in via -known, (ii) one or more alignment BAMs, each passed in via -I, or (iii) both. We recommend the last mode, and we use it in the example command. We use these same input files again in the realignment step. The tool adds indel sites present in the known indels file and indel sites in the alignment CIGAR strings to the targets. Additionally, the tool considers the presence of mismatches and soft-clips, and adds regions that pass a concentration threshold to the target intervals. If you create an intervals list using only the VCF, RealignerTargetCreator will add sites of indel-only records even if SNPs are present in the file. If you create an intervals list using both alignment and known indels, the known indels VCF should contain only indels. See Related resources.
• We include -L 10:96000000-97000000 in the command to limit processing time. Otherwise, the tool traverses the entire reference genome, and intervals outside these coordinates may be added given our example 7156_snippet.bam contains a small number of alignments outside this region.
• The tool downsamples to a target coverage of 1,000 for regions with greater coverage.

#### The target intervals file

The first ten rows of 7156_realignertargetcreator.intervals are as follows. The file is a text-based one-column list with one interval per row in 1-based coordinates. Header and column label are absent. For an interval derived from a known indel, the start position refers to the corresponding known variant. For example, for the first interval, we can zgrep -w 96000399 INDEL_chr10_1Mb_b37_1000G_phase3_v4_20130502.vcf for details on the 22bp deletion annotated at position 96000399.

    10:96000399-96000421
    10:96002035-96002036
    10:96002573-96002577
    10:96003556-96003558
    10:96004176-96004177
    10:96005264-96005304
    10:96006455-96006461
    10:96006871-96006872
    10:96007627-96007628
    10:96008204

To view intervals on IGV, convert the list to 0-based BED format using the following AWK command.
The command saves a new text-based file with .bed extension where chromosome, start and end are tab-separated, and the start position is one less than that in the intervals list.

    awk -F '[:-]' 'BEGIN { OFS = "\t" } { if( $3 == "" ) { print $1, $2-1, $2 } else { print $1, $2-1, $3 } }' 7156_realignertargetcreator.intervals > 7156_realignertargetcreator.bed

## 3. Realign reads using IndelRealigner

In the following command, IndelRealigner takes a coordinate-sorted and indexed BAM and a target intervals file generated by RealignerTargetCreator. IndelRealigner then performs local realignment on reads coincident with the target intervals, using consensuses from indels present in the original alignment.

    java -Xmx8G -Djava.io.tmpdir=/tmp -jar GenomeAnalysisTK.jar \
        -T IndelRealigner \
        -R human_g1k_v37_decoy.fasta \
        -targetIntervals 7156_realignertargetcreator.intervals \
        -known INDEL_chr10_1Mb_b37_1000G_phase3_v4_20130502.vcf \
        -I 7156_snippet.bam \
        -o 7156_snippet_indelrealigner.bam

The resulting coordinate-sorted and indexed BAM contains the same records as the original BAM but with changes to realigned records and their mates. Our tutorial's two IGV screenshots show realigned reads in two different loci. For simplicity, the screenshots show the subset of reads that realigned. For screenshots of full alignments for the same loci, see here and here.

#### Comments on specific parameters

• The -targetIntervals file from RealignerTargetCreator, with extension .intervals or .list, is required. See section 1 for a description.
• Specify each BAM alignment file with -I. IndelRealigner operates on all reads simultaneously in files you provide it jointly.
• Specify each optional known indels VCF file with -known.
• For joint processing, e.g. for tumor-normal pairs, generate one output file for each input by specifying -nWayOut instead of -o.
• By default, and in this command, IndelRealigner applies the USE_READS consensus model. This is the consensus model we recommend because it balances accuracy and performance. To specify a different model, use the -model argument. The KNOWNS_ONLY consensus model constructs alternative alignments from the reference sequence by incorporating any known indels at the site, the USE_READS model from indels in reads spanning the site, and the USE_SW model additionally from Smith-Waterman alignment of reads that do not perfectly match the reference sequence. The KNOWNS_ONLY model can be sufficient for preparing data for base quality score recalibration, and can maximize performance at the expense of some accuracy, but only if the known indels file represents the common variants in your data. If you specify -model KNOWNS_ONLY but forget to provide a VCF, the command runs but the tool does not realign any reads.
• If you encounter out of memory errors, try these options. First, increase the max Java heap size from -Xmx8G. To find a system's default maximum heap size, type java -XX:+PrintFlagsFinal -version, and look for MaxHeapSize. If this does not help, and you are jointly processing data, then try running indel realignment iteratively on smaller subsets of data before processing them jointly.
• IndelRealigner performs local realignment without downsampling. If the number of reads in an interval exceeds the 20,000 default threshold set by the -maxReads parameter, then the tool skips the region.
• The tool has two read filters, BadCigarFilter and MalformedReadFilter. The tool processes reads flagged as duplicate.
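As an aside before looking at the changes to alignment records, here is a small Python equivalent of the AWK one-liner shown above for converting the 1-based intervals list to 0-based BED. This is just an illustrative alternative; the function name is ours and is not part of GATK.

```python
# Hypothetical helper: a pure-Python equivalent of the AWK one-liner above,
# converting a 1-based .intervals list (e.g. 10:96000399-96000421 or 10:96008204)
# into 0-based, tab-separated BED records.

def intervals_to_bed(in_path, out_path):
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            line = line.strip()
            if not line:
                continue
            chrom, _, coords = line.partition(":")
            start, _, end = coords.partition("-")
            if not end:            # single-position interval, e.g. 10:96008204
                end = start
            # BED is 0-based, half-open: shift the start down by one.
            dst.write(f"{chrom}\t{int(start) - 1}\t{end}\n")

if __name__ == "__main__":
    intervals_to_bed("7156_realignertargetcreator.intervals",
                     "7156_realignertargetcreator.bed")
```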
#### Changes to alignment records

For our example data, 194 alignment records realign for ~89 sites. These records now have the OC tag to mark the original CIGAR string. We can use the OC tag to pull out realigned reads; instructions for this are in section 5. The following screenshot shows an example pair of records before and after indel realignment. We note seven changes with asterisks, blue for before and red for after, for both the realigned read and for its mate.

Changes to the example realigned record:

• MAPQ increases from 60 to 70. The tool increases each realigned record's MAPQ by ten.
• The CIGAR string, now 72M20I55M4S, reflects the realignment containing a 20bp insertion.
• The OC tag retains the original CIGAR string (OC:Z:110M2I22M1D13M4S) and replaces the MD tag that stored the string for mismatching positions.
• The NM tag counts the realigned record's mismatches, and changes from 8 to 24.

Changes to the realigned read's mate record:

• The MC tag updates the mate CIGAR string (to MC:Z:72M20I55M4S).
• The MQ tag updates to the new mapping quality of the mate (to MQ:i:70).
• The UQ tag updates to reflect the new Phred likelihood of the segment, from UQ:i:100 to UQ:i:68.

## 4. Some additional considerations

RealignerTargetCreator documentation has a -maxInterval cutoff to drop intervals from the list if they are too large. This is because increases in the number of reads per interval quadratically increase the compute required to realign a region, and larger intervals tend to include more reads. By the same reasoning, increasing read depth, e.g. with additional alignment files, increases the required compute.

Our tutorial's INDEL_chr10_1Mb_b37_1000G_phase3_v4_20130502.vcf contains 1168 indel-only records. The following are metrics on intervals created using the three available options.

|          | #intervals | avg length | basepair coverage |
|----------|------------|------------|-------------------|
| VCF only | 1161       | 3.33       | 3,864             |
| BAM only | 487        | 15.22      | 7,412             |
| VCF+BAM  | 1151       | 23.07      | 26,558            |

You can project the genomic coverage of the intervals as a function of the interval density (number of intervals per basepair) derived from varying the known indel density (number of indel records in the VCF). This in turn allows you to anticipate compute for indel realignment. The density of indel sites increases the interval length following a power law (y = ax^b). The constant (a) and the power (b) are different for intervals created with VCF only and with VCF+BAM. For our example data, these average interval lengths are well within the length of a read and minimally vary the reads per interval, and thus the memory needed for indel realignment.

## 5. Related resources

• See the Best Practice Workflow and click on the flowchart's Realign Indels icon for best practice recommendations and links, including to a 14-minute video overview.
• See Article#1247 for guidance on using VCF(s) of known variant sites.
• To subset realigned reads only into a valid BAM, as shown in the screenshots, use samtools view 7088_snippet_indelrealigner.bam | grep 'OC' | cut -f1 | sort > 7088_OC.txt to create a list of readnames. Then, follow directions in the blogpost SAM flags down a boat on how to create a valid BAM using FilterSamReads.
• See the discussion on multithreading for options on speeding up these processes. The document titled How can I use parallelism to make GATK tools run faster? gives two charts: (i) the first table relates the three parallelism options to the major GATK tools and (ii) the second table provides recommended configurations for the tools.
Briefly, RealignerTargetCreator runs faster with increasing -nt threads, while IndelRealigner shows diminishing returns for increases in scatter-gather threads provided by Queue. See the blog How long does it take to run the GATK Best Practices? for a breakdown of the impact of threading and CPU utilization for Best Practice Workflow tools.
• See DePristo et al.'s 2011 Nature Genetics technical report for benchmarked effects of indel realignment as well as for the mathematics behind the algorithms.
• See Tutorial#6517 for instructions on creating a snippet of reads corresponding to a genomic interval. For your research aims, you may find that testing a small interval of your alignment with your choice of VCF, while adjusting parameters, before committing to processing your full dataset, is time well invested.
• The tutorial's PCR-free 2x150 bp reads give enough depth of coverage (34.67 mean and 99.6% above 15) and library complexity to allow us the confidence to use aligner-generated indels in realignment. Check alignment coverage with DepthOfCoverage for WGS or DiagnoseTargets for WES.
• See SelectVariants to subset out indel calls using the -selectType INDEL option. Note this excludes indels that are part of mixed variant sites (see FAQ). Current solutions to including indels from mixed sites involve the use of JEXL expressions, as discussed here. Current solutions to selecting variants based on population allelic frequency (AF), as we may desire to limit our known indels to those that are more common than rare for more efficient processing, are discussed in two forum posts (1, 2).
• See Tutorial#6491 for basic instructions on using the Integrative Genomics Viewer (IGV).

No articles to display.

Created 2016-05-21 01:11:18 | Updated | Tags: indel-realignment

I came across the following error and it was suggested to be a potential bug:

    ##### ERROR ------------------------------------------------------------------------------------------
    ##### ERROR stack trace
    java.lang.ExceptionInInitializerError
        at org.broadinstitute.sting.gatk.GenomeAnalysisEngine.<init>(GenomeAnalysisEngine.java:167)
        at org.broadinstitute.sting.gatk.CommandLineExecutable.<init>(CommandLineExecutable.java:57)
        at org.broadinstitute.sting.gatk.CommandLineGATK.<init>(CommandLineGATK.java:66)
        at org.broadinstitute.sting.gatk.CommandLineGATK.main(CommandLineGATK.java:106)
    Caused by: java.lang.NullPointerException
        at org.reflections.Reflections.scan(Reflections.java:220)
        at org.reflections.Reflections.scan(Reflections.java:166)
        at org.reflections.Reflections.<init>(Reflections.java:94)
        at org.broadinstitute.sting.utils.classloader.PluginManager.<init>(PluginManager.java:79)
        ... 4 more
    ##### ERROR ------------------------------------------------------------------------------------------
    ##### ERROR A GATK RUNTIME ERROR has occurred (version 3.1-1-g07a4bf8):
    ##### ERROR
    ##### ERROR This might be a bug. Please check the documentation guide to see if this is a known problem.
    ##### ERROR If not, please post the error message, with stack trace, to the GATK forum.
    ##### ERROR Visit our website and forum for extensive documentation and answers to
    ##### ERROR commonly asked questions http://www.broadinstitute.org/gatk
    ##### ERROR
    ##### ERROR MESSAGE: Code exception (see stack trace for error itself)

My commands were as follows:

    java -Xmx240g -Djava.io.tmpdir=${TMPDIR} -jar /data004/software/GIF/packages/gatk/3.1-1/GenomeAnalysisTK.jar -I 2A_MD.bam -R /home/lwang/lwang/Zea_mays.AGPv3/Zea_mays.AGPv3.22.dna.genome.fa -T RealignerTargetCreator -o 2AforIndelRealigner.intervals

    java -Xmx240g -Djava.io.tmpdir=${TMPDIR} -jar /data004/software/GIF/packages/gatk/3.1-1/GenomeAnalysisTK.jar -I 2A_MD.bam -R /home/lwang/lwang/Zea_mays.AGPv3/Zea_mays.AGPv3.22.dna.genome.fa -T IndelRealigner -targetIntervals 2AforIndelRealigner.intervals -o 2A.IndelRealigned.bam

Does anyone have any idea? Any suggestion is appreciated.

Created 2014-04-10 16:24:48 | Updated | Tags: indelrealigner realignertargetcreator bqsr knownsites mouse indel-realignment

Hello, I was wondering about the format of the known-sites VCFs used by the RealignerTargetCreator and BaseRecalibrator walkers. I'm working with mouse whole genome sequence data, so I've been using the Sanger Mouse Genomes Project known sites from the Keane et al. 2011 Nature paper. From the output, it seems that the RealignerTargetCreator walker is able to recognise and use the gzipped VCF fine:

    INFO 15:12:09,747 HelpFormatter - --------------------------------------------------------------------------------
    INFO 15:12:09,751 HelpFormatter - The Genome Analysis Toolkit (GATK) v2.5-2-gf57256b, Compiled 2013/05/01 09:27:02
    INFO 15:12:09,751 HelpFormatter - Copyright (c) 2010 The Broad Institute
    INFO 15:12:09,752 HelpFormatter - For support and documentation go to http://www.broadinstitute.org/gatk
    INFO 15:12:09,758 HelpFormatter - Program Args: -T RealignerTargetCreator -R mm10.fa -I DUK01M.sorted.dedup.bam -known /tmp/mgp.v3.SNPs.indels/ftp-mouse.sanger.ac.uk/REL-1303-SNPs_Indels-GRCm38/mgp.v3.indels.rsIDdbSNPv137.vcf.gz -o DUK01M.indel.intervals.list
    INFO 15:12:09,758 HelpFormatter - Date/Time: 2014/03/25 15:12:09
    INFO 15:12:09,758 HelpFormatter - --------------------------------------------------------------------------------
    INFO 15:12:09,759 HelpFormatter - --------------------------------------------------------------------------------
    INFO 15:12:09,918 ArgumentTypeDescriptor - Dynamically determined type of /fml/chones/tmp/mgp.v3.SNPs.indels/ftp-mouse.sanger.ac.uk/REL-1303-SNPs_Indels-GRCm38/mgp.v3.indels.rsIDdbSNPv137.vcf.gz to be VCF
    INFO 15:12:10,010 GenomeAnalysisEngine - Strictness is SILENT
    INFO 15:12:10,367 GenomeAnalysisEngine - Downsampling Settings: Method: BY_SAMPLE, Target Coverage: 1000
    INFO 15:12:10,377 SAMDataSource$SAMReaders - Initializing SAMRecords in serial
    INFO 15:12:10,439 SAMDataSource$SAMReaders - Done initializing BAM readers: total time 0.06
    INFO 15:12:10,468 RMDTrackBuilder - Attempting to blindly load /fml/chones/tmp/mgp.v3.SNPs.indels/ftp-mouse.sanger.ac.uk/REL-1303-SNPs_Indels-GRCm38/mgp.v3.indels.rsIDdbSNPv137.vcf.gz as a tabix indexed file
    INFO 15:12:11,066 IndexDictionaryUtils - Track known doesn't have a sequence dictionary built in, skipping dictionary validation
    INFO 15:12:11,201 GenomeAnalysisEngine - Creating shard strategy for 1 BAM files
    INFO 15:12:12,333 GenomeAnalysisEngine - Done creating shard strategy
    INFO 15:12:12,334 ProgressMeter - [INITIALIZATION COMPLETE; STARTING PROCESSING]

I've checked the indel interval lists for my samples and they do all appear to contain different intervals.
However, when I use the equivalent SNP VCF in the following BQSR step, GATK errors as follows:

    ##### ERROR ------------------------------------------------------------------------------------------
    ##### ERROR ------------------------------------------------------------------------------------------

Which means that the SNP VCF (which has the same format as the indel VCF) is not used by BQSR. My question is: given that the BQSR step failed, should I be worried that there are no errors from the indel realignment step? As the known SNP/indel VCFs are in the same format, I don't know whether I can trust the realigned .bams. Thanks very much!

Created 2014-01-31 21:16:16 | Updated | Tags: knownsites dbsnp indel-realignment

Dear GATK team,

Would you please clarify, based on your experience or the logic used in the realignment algorithm, which option between using dbSNP, the 1K gold standard (Mills...), or "no known database" might result in a more accurate set of indels in the indel-based realignment stage (speed and efficiency is not my concern)?

Based on the documentation I found on your site, the "known" variants are used to identify "intervals" of interest to then perform re-alignment around indels. So, it makes sense to me to use as large a number of indels as possible (even if they are unreliable garbage, such as many of those found in dbSNP) in addition to those more accurate calls found in the 1K gold-standard datasets for choosing the intervals. After all, that increases the number of indel regions to be investigated and therefore potentially increases the accuracy. Depending on your algorithm's logic, it also seems that providing no known database would increase the chance of investigating more candidates of mis-alignment and therefore improve the accuracy. But if your logic uses the "known" indel sets to just "not" perform the realignment and ignore those candidates around known sites, it makes sense to use the more accurate set such as the 1K gold standard.

Please let me know what you suggest. Thank you.

Regards
Amin Zia
https://www.baryonbib.org/bib/284a4c51-9ff5-423c-9be8-48e9474b4c9e
PREPRINT # Euclid: Modelling massive neutrinos in cosmology -- a code comparison J. Adamek, R. E. Angulo, C. Arnold, M. Baldi, M. Biagetti, B. Bose, C. Carbone, T. Castro, J. Dakin, K. Dolag, W. Elbers, C. Fidler, C. Giocoli, S. Hannestad, F. Hassani, C. Hernández-Aguayo, K. Koyama, B. Li, R. Mauland, P. Monaco, C. Moretti, D. F. Mota, C. Partmann, G. Parimbelli, D. Potter, A. Schneider, S. Schulz, R. E. Smith, V. Springel, J. Stadel, T. Tram, M. Viel, F. Villaescusa-Navarro, H. A. Winther, B. S. Wright, M. Zennaro, N. Aghanim, L. Amendola, N. Auricchio, D. Bonino, E. Branchini, M. Brescia, S. Camera, V. Capobianco, V. F. Cardone, J. Carretero, F. J. Castander, M. Castellano, S. Cavuoti et al. Submitted on 22 November 2022 ## Abstract The measurement of the absolute neutrino mass scale from cosmological large-scale clustering data is one of the key science goals of the Euclid mission. Such a measurement relies on precise modelling of the impact of neutrinos on structure formation, which can be studied with $N$-body simulations. Here we present the results from a major code comparison effort to establish the maturity and reliability of numerical methods for treating massive neutrinos. The comparison includes eleven full $N$-body implementations (not all of them independent), two $N$-body schemes with approximate time integration, and four additional codes that directly predict or emulate the matter power spectrum. Using a common set of initial data we quantify the relative agreement on the nonlinear power spectrum of cold dark matter and baryons and, for the $N$-body codes, also the relative agreement on the bispectrum, halo mass function, and halo bias. We find that the different numerical implementations produce fully consistent results. We can therefore be confident that we can model the impact of massive neutrinos at the sub-percent level in the most common summary statistics. We also provide a code validation pipeline for future reference. ## Preprint Comment: 43 pages, 17 figures, 2 tables; published on behalf of the Euclid Consortium; data available at https://doi.org/10.5281/zenodo.7297976 Subject: Astrophysics - Cosmology and Nongalactic Astrophysics
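The abstract describes quantifying the relative agreement between codes on the nonlinear power spectrum. As a rough illustration of what such a comparison metric looks like (this is not the consortium's validation pipeline; the file names, column layout and the 1% threshold are hypothetical), a short sketch:

```python
# Illustrative sketch (not the Euclid comparison pipeline): quantify the relative
# agreement between two matter power spectra tabulated on the same wavenumbers.
# File names, column layout and the sub-percent threshold are hypothetical.

import numpy as np

def relative_difference(p_test, p_ref):
    """Fractional deviation P_test/P_ref - 1 at each wavenumber."""
    return p_test / p_ref - 1.0

if __name__ == "__main__":
    # Two-column files: k [h/Mpc], P(k) [(Mpc/h)^3], same k grid in both.
    k, p_ref = np.loadtxt("pk_reference.txt", unpack=True)
    _, p_test = np.loadtxt("pk_code_A.txt", unpack=True)

    diff = relative_difference(p_test, p_ref)
    print("max |P/P_ref - 1| = %.4f" % np.max(np.abs(diff)))
    print("sub-percent agreement:", np.max(np.abs(diff)) < 0.01)
```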
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=CCSHBU_2015_v28n4_525
HYPERBOLIC SPINOR DARBOUX EQUATIONS OF SPACELIKE CURVES IN MINKOWSKI 3-SPACE

Balci, Yakup; Erisir, Tulay; Gungor, Mehmet Ali

Abstract: In this paper, we study spinors with two hyperbolic components. Firstly, we express the hyperbolic spinor representation of a spacelike curve defined on an oriented (spacelike or timelike) surface in Minkowski space $\small{{\mathbb{R}}^3_1}$. Then, we obtain the relation between the hyperbolic spinor representation of the Frenet frame of the spacelike curve on the oriented surface and the Darboux frame of the surface at the same points. Finally, we give an example of these hyperbolic spinors.

Keywords: hyperbolic space; hyperbolic spinors; Frenet formula

Language: English
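As background to the abstract above (this is our own illustration, not material from the paper), the following sketch shows the arithmetic of hyperbolic, i.e. split-complex, numbers, the number system in which the two spinor components considered by the authors take their values; the class name and example values are hypothetical.

```python
# Background sketch (not from the paper): arithmetic of hyperbolic (split-complex)
# numbers z = x + y*j with j*j = +1.

from dataclasses import dataclass

@dataclass(frozen=True)
class Hyperbolic:
    x: float  # real part
    y: float  # coefficient of j, with j*j = +1

    def __add__(self, other):
        return Hyperbolic(self.x + other.x, self.y + other.y)

    def __mul__(self, other):
        # (x1 + y1 j)(x2 + y2 j) = (x1 x2 + y1 y2) + (x1 y2 + x2 y1) j
        return Hyperbolic(self.x * other.x + self.y * other.y,
                          self.x * other.y + self.y * other.x)

    def conjugate(self):
        return Hyperbolic(self.x, -self.y)

    def modulus_squared(self):
        # z * conj(z) = x^2 - y^2; may be zero or negative, unlike complex numbers
        return self.x ** 2 - self.y ** 2

if __name__ == "__main__":
    a = Hyperbolic(2.0, 1.0)
    b = Hyperbolic(1.0, 3.0)
    print(a * b)                 # Hyperbolic(x=5.0, y=7.0)
    print(a.modulus_squared())   # 3.0
```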
https://deepai.org/publication/square-free-graphs-with-no-six-vertex-induced-path
# Square-free graphs with no six-vertex induced path

We elucidate the structure of (P_6,C_4)-free graphs by showing that every such graph either has a clique cutset, or a universal vertex, or belongs to several special classes whose structure is completely characterized. Using this result, we show that for any (P_6,C_4)-free graph G, the following hold: (i) ⌈5ω(G)/4⌉ and ⌈(Δ(G) + ω(G) + 1)/2⌉ are upper bounds for the chromatic number of G. Moreover, these bounds are tight. (ii) There is a polynomial-time algorithm that computes the chromatic number of G.

## 1 Introduction

All our graphs are finite and have no loops or multiple edges. For any integer k, a k-coloring of a graph G is a mapping c : V(G) → {1, …, k} such that any two adjacent vertices u, v in G satisfy c(u) ≠ c(v). A graph is k-colorable if it admits a k-coloring. The chromatic number χ(G) of a graph G is the smallest integer k such that G is k-colorable. In general, determining whether a graph is k-colorable or not is well-known to be NP-complete for every fixed k ≥ 3. Thus designing algorithms for computing the chromatic number by putting restrictions on the input graph and obtaining bounds for the chromatic number are of interest.

A clique in a graph is a set of pairwise adjacent vertices. Let ω(G) denote the maximum clique size in a graph G. Clearly χ(H) ≥ ω(H) for every induced subgraph H of G. A graph G is perfect if every induced subgraph H of G satisfies χ(H) = ω(H). The existence of triangle-free graphs with arbitrarily large chromatic number shows that for general graphs the chromatic number cannot be upper bounded by a function of the clique number. However, for restricted classes of graphs such a function may exist. Gyárfás [19] called such classes of graphs χ-bounded classes. A family 𝒢 of graphs is χ-bounded with χ-bounding function f if every induced subgraph H of every graph G in 𝒢 satisfies χ(H) ≤ f(ω(H)). For instance, the class of perfect graphs is χ-bounded with f(x) = x. Given a family of graphs ℱ, a graph G is ℱ-free if no induced subgraph of G is isomorphic to a member of ℱ; when ℱ has only one element F we say that G is F-free.
Several classes of graphs defined by forbidding certain families of graphs were shown to be χ-bounded: even-hole-free graphs [1]; odd-hole-free graphs [34]; quasi-line graphs [10]; claw-free graphs with stability number at least 3 [13]; see also [6, 8, 12, 22, 24] for more instances.

For any integer t we let P_t denote the path on t vertices and C_t denote the cycle on t vertices. A cycle on four vertices is referred to as a square. It is well known that every P_4-free graph is perfect. Gyárfás [19] showed that the class of P_t-free graphs is χ-bounded. Gravier et al. [18] improved Gyárfás's bound slightly by showing that every P_t-free graph G satisfies χ(G) ≤ (t−2)^(ω(G)−1). In particular every P_6-free graph G satisfies χ(G) ≤ 4^(ω(G)−1). Improving this exponential bound seems to be a difficult open problem. In fact the problem of determining whether the class of P_5-free graphs admits a polynomial χ-bounding function remains open, and the known χ-bounding function for such class of graphs satisfies [23]. So the recent focus is on obtaining (linear) χ-bounding functions for some classes of -free graphs, where . It is shown in [8] that every -free graph satisfies , and in [7] that every -free graph satisfies . Gaspers and Huang [14] studied the class of (P_6,C_4)-free graphs (which generalizes the class of -free graphs and the class of -free graphs) and showed that every such graph G satisfies χ(G) ≤ (3/2)ω(G). We improve their result and establish the best possible bound, as follows.

###### Theorem 1.1

Let G be any (P_6,C_4)-free graph. Then χ(G) ≤ ⌈5ω(G)/4⌉. Moreover, this bound is tight.

The degree of a vertex in G is the number of vertices adjacent to it. The maximum degree over all vertices in G is denoted by Δ(G). For any graph G, we have χ(G) ≤ Δ(G) + 1. Brooks [5] showed that if G is a graph with Δ(G) ≥ 3 and ω(G) ≤ Δ(G), then χ(G) ≤ Δ(G). Reed [33] conjectured that every graph G satisfies χ(G) ≤ ⌈(Δ(G) + ω(G) + 1)/2⌉. Despite several partial results [25, 31, 33], Reed's conjecture is still open in general, even for triangle-free graphs. Using Theorem 1.1, we will show that Reed's conjecture holds for the class of (P_6,C_4)-free graphs:

###### Theorem 1.2

If G is a (P_6,C_4)-free graph, then χ(G) ≤ ⌈(Δ(G) + ω(G) + 1)/2⌉.

One can readily see that the bounds in Theorem 1.1 and in Theorem 1.2 are tight on the following example. Let H be a graph whose vertex-set is partitioned into five cliques Q_1, …, Q_5 such that for each i (indices mod 5), every vertex in Q_i is adjacent to every vertex in Q_{i+1} and to no vertex in Q_{i+2}, and |Q_i| = q for all i (q ≥ 1). Clearly ω(H) = 2q and Δ(H) = 3q − 1. Since H has no stable set of size 3, H is P_6-free and χ(H) ≥ ⌈5q/2⌉. Moreover, since no two non-adjacent vertices in H have a pair of non-adjacent common neighbors, we also see that H is C_4-free. Finally, we also have the following result.

###### Theorem 1.3

There is a polynomial-time algorithm which computes the chromatic number of any (P_6,C_4)-free graph.

The proof of Theorem 1.3 is based on the concept of clique-width of a graph G, which was defined in [9] as the minimum number of labels which are necessary to generate G using a certain type of operations. (We omit the details.) It is known from [26, 32] that if a class of graphs has bounded clique-width, then there is a polynomial-time algorithm that computes the chromatic number of every graph in this class. We are able to prove that every (P_6,C_4)-free graph that has no clique cutset has bounded clique-width, which implies the validity of Theorem 1.3. However a similar result, using similar techniques, was proved by Gaspers, Huang and Paulusma [15]. Hence we refer to [15], or to the extended version of our manuscript [21], for the detailed proof of Theorem 1.3.
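Complementing the algorithmic statement of Theorem 1.3, here is a small brute-force sketch (our own illustration, not the paper's polynomial-time algorithm) that tests membership in the class by exhaustively searching for an induced P_6 or C_4; the example graph is a blowup of the five-cycle, similar to the tight example discussed above.

```python
# Illustrative brute-force check (not from the paper): decide whether a small
# graph is (P6, C4)-free by testing every vertex subset of size 4 and 6 for an
# induced C4 or P6. Exponential in |V(G)|, so only suitable for tiny examples.

from itertools import combinations
import networkx as nx

def is_p6_c4_free(G):
    P6, C4 = nx.path_graph(6), nx.cycle_graph(4)
    for S in combinations(G.nodes, 4):
        if nx.is_isomorphic(G.subgraph(S), C4):   # induced square
            return False
    for S in combinations(G.nodes, 6):
        if nx.is_isomorphic(G.subgraph(S), P6):   # induced six-vertex path
            return False
    return True

if __name__ == "__main__":
    # A blowup of C5 with parts of size 2: vertices (i, j) with i the position
    # on the cycle; each part is a clique and consecutive parts are complete.
    H = nx.Graph()
    parts = {i: [(i, 0), (i, 1)] for i in range(5)}
    for i in range(5):
        H.add_edge(*parts[i])
        for u in parts[i]:
            for v in parts[(i + 1) % 5]:
                H.add_edge(u, v)
    print(is_p6_c4_free(H))   # expected: True
```

For anything beyond toy examples one would of course rely on the structural results of the paper rather than on this exhaustive search.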
We finish on this theme by noting that the class of -free graph itself does not have bounded clique-width, since the class of split graphs (which are all -free) does not have bounded clique-width [2, 29]. The clique-width argument might also be used for solving other optimization problems in (-free graphs, in particular the stability number. However this problem was solved earlier by Mosca [30], and the weighted version was solved in [4], and both algorithms have reasonably low complexity. Theorems 1.1 and 1.2 will be derived from the structural theorem below (Theorem 1.4). Before stating it we recall some definitions. In a graph , the neighborhood of a vertex is the set ; we drop the subscript when there is no ambiguity. The closed neighborhood is the set . Two vertices are clones if . For any and , we let . For any two subsets and of , we denote by , the set of edges that has one end in and other end in . We say that is complete to or is complete if every vertex in is adjacent to every vertex in ; and is anticomplete to if . If is singleton, say , we simply write is complete (anticomplete) to instead of writing is complete (anticomplete) to . If , then denote the subgraph induced by in . A vertex is universal if it is adjacent to all other vertices. A stable set is a set of pairwise non-adjacent vertices. A clique-cutset of a graph is a clique in such that has more connected components than . A matching is a set of pairwise non-adjacent edges. The union of two vertex-disjoint graphs and is the graph with vertex-set and edge-set . The union of copies of the same graph will be denoted by ; for example denotes the graph that consists in two disjoint copies of . A vertex is simplicial if its neighborhood is a clique. It is easy to see that in any graph that has a simplicial vertex, letting denote the set of simplicial vertices, every component of is a clique, and any two adjacent simplicial vertices are clones. A hole is an induced cycle of length at least . A graph is chordal if it contains no hole as an induced subgraph. Chordal graphs have many interesting properties (see e.g. [17]), in particular: every chordal graph has a simplicial vertex; every chordal graph that is not a clique has a clique-cutset; and every chordal graph that is not a clique has two non-adjacent simplicial vertices. In a graph , let be disjoint subsets of . It is easy to see that the following two conditions (i) and (ii) are equivalent: (i) any two vertices satisfy either or ; (ii) any two vertices satisfy either or . If this condition holds we say that the pair is graded. Clearly in a -free graph any two disjoint cliques form a graded pair. See also Lemma 2.3 below. #### Some special graphs Let be three graphs (as in [14]), as shown in Figure 1. Let be five graphs, as shown in Figure 2, where is the Petersen graph. #### Graphs Fk,ℓ For integers let be the graph whose vertex-set can be partitioned into sets and such that: • is a clique of size , and is a stable set of size , and the edges between and form a matching of size , namely, ; • is a clique of size , and is a stable set of size , and the edges between and form a matching of size , namely, ; • The neighborhood of is ; • The neighborhood of is ; • The neighborhood of is . See Figure 3 for the schematic representation of the graph and for the graph . #### Blowups A blowup of a graph is any graph such that can be partitioned into (not necessarily non-empty) cliques , , such that is complete if , and if . See Figure 4:(a) for a blowup of a . 
#### Bands A band is any graph (see Figure 4:(b)) whose vertex-set can be partitioned into seven sets such that: • Each of is a clique. • The sets , , and are complete. • The sets , and are empty. • The pairs , and are graded. #### Belts A belt is any -free graph (see Figure 4:(c)) whose vertex-set can be partitioned into seven sets such that: • Each of is a clique. • The sets and are complete. • The sets , , are empty. • For each , is complete, every vertex in has a neighbor in , and no vertex of is universal in . #### Boilers A boiler is a -free graph whose vertex-set can be partitioned into five sets such that: • The sets , , and are non-empty, and , and are cliques. • The sets , , and are complete. • The sets , and are empty. • and are -free. • Every vertex in has a neighbor in . • For some integer , is partitioned into non-empty sets , pairwise anticomplete, and is partitioned into non-empty sets , such that for each every vertex in has a neighbor in and no neighbor in ; and every vertex in has a neighbor in . • is complete, and for each every vertex in is either complete or anticomplete to , and no vertex in is complete to . See Figure 5 for the partial structure of a boiler. We consider that the definition of blowups (of certain fixed graphs) and of bands (using Lemma 2.3) is also a complete description of the structure of such graphs. However this is not so for belts and boilers. Such graphs have additional properties, and a description of their structure is given in Section 4. Now we can state our main structural result. The existence of such a decomposition theorem was inspired to us by the results from [14] which go a long way in that direction. ###### Theorem 1.4 If is any -free graph, then one of the following holds: • has a clique cutset. • has a universal vertex. • is a blowup of either , or (for some ). • is either a band, a belt, or a boiler. Theorem 1.4 is derived from Theorem 1.5. ###### Theorem 1.5 Let be a -free graph that has no clique-cutset and no universal vertex. Then the following hold: 1. If contains an , then is a blowup of . 2. If contains an and no , then is a band. 3. If is -free, and contains an induced , then is a blowup of one of the graphs . 4. If is -free, and contains an , then is a blowup of either or for some integers . 5. If contains no and no , and contains a , then is either a belt or a boiler. Proof. The proof of each of these items is given below in Theorems 3.4, 3.5, 3.6, 3.7 and 3.8 respectively. Proof of Theorem 1.4, assuming Theorem 1.5. Let be any -free graph. If is chordal, then either is a complete graph (so it has a universal vertex) or has a clique cutset. Now suppose that is not chordal. Then it contains an induced cycle of length either or . So it satisfies the hypothesis of one of the items of Theorem 1.5 and consequently it satisfies the conclusion of this item. This established Theorem 1.4. ## 2 Classes of square-free graphs In this section, we study some classes of square-free graphs and prove some useful lemmas and theorems that are needed for the later sections. We first note that any blowup of a -free chordal graph is -free chordal. ###### Lemma 2.1 In a chordal graph , every non-simplicial vertex lies on a chordless path between two simplicial vertices. Proof. Let be a non-simplicial vertex in , so it has two non-adjacent neighbors . If both are simplicial, then -- is the desired path. Hence assume that is non-simplicial. Since is not a clique, it has two simplicial vertices, so it has a simplicial vertex different from . So . 
In , the vertex is non-simplicial, so, by induction, there is a chordless path --- in , with , such that and are simplicial in and for some . If and are simplicial in , then is the desired path. So suppose that is not simplicial in , so . Since is simplicial in we have . Then we see that either ---- or --- is the desired path. ###### Lemma 2.2 In a chordal graph , let and be disjoint subsets of such that is a clique and every simplicial vertex of has a neighbor in . Then every vertex in has a neighbor in . Proof. Consider any non-simplicial vertex of . By Lemma 2.1 there is a chordless path --- in , with , such that and are simplicial in and for some . By the hypothesis has neighbor and has a neighbor in . Suppose that has no neighbor in . Let be the largest integer in such that has a neighbor in , and let be the smallest integer in such that has a neighbor in . Then contains a hole, a contradiction. So has a neighbor in . ###### Lemma 2.3 In a -free graph , let be two disjoint cliques. Then: • There is a labeling of the vertices of such that . Similarly, there is a labeling of the vertices of such that . • If every vertex in has a neighbor in , then some vertex in is complete to . • If every vertex in has a non-neighbor in , then some vertex in is anticomplete to . • If is not complete, there are indices and such , and for all , and for all . Moreover, every maximal clique of contains one of . Proof. Consider any two vertices . If there are vertices and , then induces a . Hence we have either or . This inclusion relation for all implies the existence of a total ordering on , which corresponds to a labeling as desired, and the same holds for . This proves the first item of the lemma. The second and third item are immediate consequences of the first. Now suppose that is not complete to . Consider any vertex that has a non-neighbor in , and let be the smallest index such that . Let be the smallest index such that . So . We have for all by the choice of . We also have for all , for otherwise, since we also have , contradicting the definition of . This proves the first part of the fourth item. Finally, consider any maximal clique of . Let be the largest index such that and let be the largest index such that . By the properties of the labelings and the maximality of we have . If both and , then the properties of imply that (and also ) is a clique of , contradicting the maximality of . Hence we have either or , and so contains one of . ###### Lemma 2.4 In a -free graph , let , and be disjoint subsets of such that: • is a clique, and every vertex in has a neighbor in , • is complete to and anticomplete to ; • Either is not connected, or there are vertices such that is complete to and anticomplete to , and is anticomplete to , and . Then is -free. Proof. First suppose that there is a --- in . By the hypothesis has a neighbor . Then , for otherwise induces a ; and similarly . If is connected, then either ----- or ----- is a . Now suppose that is not connected. So contains a vertex that is anticomplete to . By the hypothesis has a neighbor . As above we have and for all for otherwise there is a . But then either ----- or ----- is a . Now suppose that there is a in , with vertices and edges , . We know that has a neighbor , and as above we have for each , for otherwise there is a . Likewise, has a neighbor , and for each . Then ----- is an induced for some and . 
#### (P4,c4)-free graphs We want to understand the structure of -free graphs as they play a major role in the structure of belts and boilers. Recall that -free graphs were studied by Golumbic [16], who called them trivially perfect graphs. Clearly any such graph is chordal. It was proved in [16] that every connected )-free graph has a universal vertex. It follows that trivially perfect graphs are exactly the class of graphs that can be built recursively as follows, starting from complete graphs: – The disjoint union of any number of trivially perfect graphs is trivially perfect; – If is any trivially perfect graph, then the graph obtained from by adding a universal vertex is trivially perfect. As a consequence, any connected member of can be represented by a rooted directed tree defined as follows. If is a clique, let have one node, which is the set . If is not a clique, then by Golumbic’s result the set of universal vertices of is not empty, and has a number of components . Let then be the tree whose root is and the children (out-neighbors) of are the roots of . The following properties of appear immediately. Every node of is a non-empty clique of , and every vertex of is in exactly one such clique, which we call ; moreover, is a homogeneous set (all member of are pairwise clones). For every vertex of , the closed neighborhood of consists of and all the vertices in the cliques that are descendants and ancestors of in . Every maximal clique of is the union of the nodes of a directed path in . All vertices in any leaf of are simplicial vertices of , and every simplicial vertex of is in some leaf of . We say that a member of is basic if every node of is a clique of size . (We can view as a directed tree, where every edge is directed away from the root; and then is the underlying undirected graph of the transitive closure of .). It follows that every member of is a blowup of a basic member of . In a basic member of , two vertices are adjacent if and only if one of them is an ancestor of the other in , and every clique of consists of the set of vertices of any directed path in . A dart is the graph with vertex-set and edge-set . Let be the tree obtained from by subdividing one edge. Next we give the following useful lemma. ###### Lemma 2.5 Let be a -free graph. (a) If does not have three pairwise non-adjacent simplicial vertices, then is a blowup of . (b) If does not have four pairwise non-adjacent simplicial vertices, then is a blowup of a dart. Proof. The hypothesis of (a) or (b) means that, if is a connected component of , then is a tree with at most three leaves. Since each internal vertex of has at least two leaves, is either , , (rooted at its vertex of degree ), (rooted at its vertex of degree ), or (rooted at its vertex of degree ). Then the conclusion follows directly from our assumption on and the preceding arguments. #### (P4,c4,2p3)-free graphs Let be the class of -free graphs. So . If is any member of , and is connected and not a clique, then since is -free all components of , except possibly one, are cliques. So all children of in , except possibly one, are leaves. Applying this argument recursively we see that the tree consists of a rooted directed path plus a positive number of leaves adjacent to every node of this path, with at least two leaves adjacent to the last node of this path. We call such a tree a bamboo. By the same argument as above, every member of is a blowup of a basic member of . 
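To make the recursive characterization of trivially perfect graphs above concrete, here is a short sketch (our own illustration, not code from the paper) of the two generating operations, starting from complete graphs; the helper names are hypothetical.

```python
# Illustrative sketch: building trivially perfect ((P4, C4)-free) graphs with
# the two recursive operations described above, starting from complete graphs.

import networkx as nx

def disjoint_union(*graphs):
    """Disjoint union of trivially perfect graphs is trivially perfect."""
    out = nx.Graph()
    for i, G in enumerate(graphs):
        # relabel nodes so the node sets are disjoint before taking the union
        out = nx.union(out, nx.relabel_nodes(G, lambda v, i=i: f"{i}.{v}"))
    return out

def add_universal_vertex(G, label="u"):
    """Adding a vertex adjacent to everything preserves trivial perfection."""
    H = G.copy()
    H.add_node(label)
    H.add_edges_from((label, v) for v in G.nodes)
    return H

if __name__ == "__main__":
    # (K2 union K1) plus a universal vertex: a small trivially perfect graph.
    G = add_universal_vertex(disjoint_union(nx.complete_graph(2),
                                            nx.complete_graph(1)))
    print(sorted(G.edges()))
```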
#### C-pairs A graph is a -pair if is -free, chordal, and can be partitioned into two sets and such that is a clique, , every vertex in has a neighbor in , and any two non-adjacent vertices in have no common neighbor in . Depending on the context we may also write that is a -pair. We say that is a basic -pair if the subgraph is a basic member of , with vertices for some integer , and a clique ; and for each , if is simplicial in then , else consists of plus the union of over all descendants of in . Before describing how all -pairs can be obtained from basic -pairs we need to introduce another definition. Let be any graph and be a matching in . An augmentation of along is any graph whose vertex-set can be partitioned into cliques , , such that is complete if , and if , and is a graded pair if . (See [28] for a similar definition.) In a basic -pair , with the same notation as above, we say that a matching is acceptable if there is a clique in such that . ###### Theorem 2.1 A graph is a -pair then it is an augmentation of a basic -pair along an acceptable matching. Proof. Let be any -pair, with the same notation as above. Since is -free it admits a representative tree which is a bamboo. We claim that: If Y,Z are two nodes of T(G[X]) such that Z is a descendant of Y, then Y is complete to NA(Z). (1) Proof: Consider any and ; so there is a vertex with . Since is not a leaf of , there is a child of in such that is not on the directed path from to , and so and are not adjacent (they are anticomplete to each other). Pick any . Then and . We know that has a neighbor . We have by the definition of a -pair ( and have no common neighbor in ). Then , for otherwise contains an induced hole of length or , contradicting the fact that is chordal. So (1) holds. Let be the nodes of . For each , let be the union of over all descendants of in , and let . Let (so ). Let be the nodes of that are not homogeneous in (if any). Note that for each the pair is graded since is -free. We claim that: Xi1∪⋯∪Xih is a clique. (2) Proof: Suppose, on the contrary, and up to symmetry, that is not complete, and so . For each , since is not homogeneous in , there are vertices and a vertex that is adjacent to and not to . Since non-adjacent vertices in have no common neighbor in , we have and . Then ----- is a . So (2) holds. Let be the basic member of of which is a blowup. Let have vertices , where corresponds to the node of for all . Let be the graph obtained from by adding a set , disjoint from , and edges so that is a clique in and, for all and , vertices and are adjacent in if and only if in . By this construction and by (1) is a basic -pair. In let . It follows from (2) that is an acceptable matching of and from all the points above that is an augmentation of along . ## 3 Structure of (P6, C4)-free graphs In this section, we give the proof of Theorem 1.5. We say that a subgraph of is dominating if every vertex in is a adjacent to a vertex in . We will use the following theorem of Brandstädt and Hoàng [4]. ###### Theorem 3.1 ([4]) Let be a ()-free graph that has no clique cutset. Then the following statements hold. (i) Every induced is dominating. (ii) If contains an induced which is not dominating, then is the join of a complete graph and a blowup of the Petersen graph. In the next two theorems we make some general observations about the situation when a -free graph contains a hole (which must have length either or ). Observe that in a -free graph , if -- is a , then any which is adjacent to and is also adjacent to . 
###### Theorem 3.2 Let be any -free graph that contains a with vertex-set and . Let: A = {x∈V(G)∖C∣NC(x)=C}. Ti = {x∈V(G)∖C∣NC(x)={vi−1,vi,vi+1}. Wi = {x∈V(G)∖C∣NC(x)={vi}. Xi,i+1 = {x∈V(G)∖C∣NC(x)={vi,vi+1}. Moreover, let , , and . Then the following properties hold for all : 1. is a clique. 2. , , , , and are empty. 3. ,
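The lemmas above repeatedly reason about simplicial vertices (vertices whose neighborhood induces a clique). As a small side illustration only, and not part of the paper, here is a sketch of how one might test that property with the networkx library on a made-up example graph:

```python
import networkx as nx
from itertools import combinations

def is_simplicial(G, v):
    """A vertex is simplicial if its neighbors are pairwise adjacent (form a clique)."""
    return all(G.has_edge(a, b) for a, b in combinations(G.neighbors(v), 2))

# Example: in the path a-b-c, the endpoints are simplicial, the middle vertex is not.
G = nx.path_graph(["a", "b", "c"])
print([u for u in G if is_simplicial(G, u)])  # ['a', 'c']
```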
2021-10-19 00:45:34
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9055578708648682, "perplexity": 567.3218203649479}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585215.14/warc/CC-MAIN-20211018221501-20211019011501-00480.warc.gz"}
https://socratic.org/questions/an-75-pair-of-shoes-is-on-sale-for-66-how-do-you-find-the-percent-discount
# A $75 pair of shoes is on sale for $66. How do you find the percent discount? Mar 11, 2018 12% #### Explanation: First, find the discount: $75 − $66 = $9. Then divide the discount by the original price: $9 / $75 = 0.12. Lastly, multiply by 100% to express it as a percentage: 0.12 × 100% = 12%. Therefore, the percent discount is 12%.
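The same steps can be written as a tiny Python sketch (purely illustrative; the function name is made up and the prices are the ones from the question):

```python
def percent_discount(original_price, sale_price):
    """Percent discount = (original - sale) / original * 100."""
    discount = original_price - sale_price       # 75 - 66 = 9
    return discount / original_price * 100       # 9 / 75 * 100 = 12.0

print(percent_discount(75, 66))  # prints 12.0
```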
2019-12-15 18:05:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7146777510643005, "perplexity": 5883.021824447344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541309137.92/warc/CC-MAIN-20191215173718-20191215201718-00094.warc.gz"}
https://aplwiki.com/index.php?title=Simple_examples&oldid=7185
# Simple examples This page contains examples that show APL's strengths. The examples require minimal background and have no special dependencies. If these examples are too simple for you, have a look at our advanced examples. ## Arithmetic mean Here is an APL program to calculate the average (arithmetic mean) of a list of numbers, written as a dfn: ``` {(+⌿⍵)÷≢⍵} ``` It is unnamed: the enclosing braces mark it as a function definition. It can be assigned a name for use later, or used anonymously in a more complex expression. The `⍵` refers to the argument of the function, a list (or 1-dimensional array) of numbers. The `≢` denotes the tally function, which here returns the length of (number of elements in) the argument `⍵`. The divide symbol `÷` has its usual meaning. The parenthesised `+⌿⍵` denotes the sum of all the elements of `⍵`. The `⌿` operator combines with the `+` function: the `⌿` fixes the `+` function between each element of `⍵`, so that ``` +⌿ 1 2 3 4 5 6 21 ``` is the same as ``` 1+2+3+4+5+6 21 ``` ### Operators Operators like `⌿` can be used to derive new functions not only from primitive functions like `+`, but also from defined functions. For example ``` {⍺,', ',⍵}⌿ ``` will transform a list of strings representing words into a comma-separated list: ``` {⍺,', ',⍵}⌿'cow' 'sheep' 'cat' 'dog' ┌────────────────────┐ │cow, sheep, cat, dog│ └────────────────────┘ ``` So back to our mean example. `(+⌿⍵)` gives the sum of the list, which is then divided by `≢⍵`, the number of elements in it. ``` {(+⌿⍵)÷≢⍵} 3 4.5 7 21 8.875 ``` ### Tacit programming Main article: Tacit programming In APL’s tacit definition, no braces are needed to mark the definition of a function: primitive functions just combine in a way that enables us to omit any reference to the function arguments — hence tacit. Here is the same calculation written tacitly: ``` (+⌿÷≢) 3 4.5 7 21 8.875 ``` This is a so-called 3-train, also known as a fork. It is evaluated like this: ```(+⌿ ÷ ≢) 3 4.5 7 21 ``` ⇔ ```(+⌿ 3 4.5 7 21) ÷ (≢ 3 4.5 7 21) ``` Note that `+⌿` is evaluated as a single derived function. The general scheme for monadic 3-trains is the following: ```(f g h) ⍵ ``` ⇔ ```(f ⍵) g (h ⍵) ``` But other types of trains are also possible. ## Text processing APL represents text as character lists (vectors), making many text operations trivial. ### Split text by delimiter `≠` gives 1 for true and 0 for false. It pairs up a single element argument with all the elements of the other argument: ``` ','≠'comma,delimited,text' 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 0 1 1 1 1 ``` `⊢` returns its right argument: ``` ','⊢'comma,delimited,text' comma,delimited,text ``` `⊆` returns a list of runs as indicated by runs of 1s, leaving out elements indicated by 0s: ``` 1 1 0 1 1 1⊆'Hello!' ┌──┬───┐ │He│lo!│ └──┴───┘ ``` We use the comparison vector to partition the right argument: ``` ','(≠⊆⊢)'comma,delimited,text' ┌─────┬─────────┬────┐ │comma│delimited│text│ └─────┴─────────┴────┘ ``` Works in: Dyalog APL Notice that you can read the tacit function `≠⊆⊢` like an English sentence: The inequality partitions the right argument. Many dialects do not support the above tacit syntax, and use the glyph `⊂` for the partition primitive function.
In such dialects, the following formulation can be used: ``` (','≠s)⊂s←'comma,delimited,text' ``` Works in: APL2, APLX, GNU APL This assigns the text to the variable `s`, then separately computes the partitioning vector and applies it. ### Indices of multiple elements `∊` gives us a mask for elements (characters) in the left argument that are members of the right argument: ``` 'mississippi'∊'sp' 0 0 1 1 0 1 1 0 1 1 0 ``` `⍸` gives us the indices where true (1): ``` ⍸'mississippi'∊'sp' 3 4 6 7 9 10 ``` We can combine this into an anonymous infix (dyadic) function: ``` 'mississippi' (⍸∊) 'sp' 3 4 6 7 9 10 ``` ### Frequency of characters in a string The Outer Product allows for an intuitive way to compute the occurrence of characters at a given location in a string: ``` 'abcd' ∘.= 'cabbage' 0 1 0 0 1 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 ``` Then it is simply a matter of performing a sum-reduce `+/` to calculate the total frequency of each character:[1] ``` +/ 'abcd' ∘.= 'cabbage' 2 2 1 0 ``` ### Parenthesis nesting level "Ken was showing some slides — and one of his slides had something on it that I was later to learn was an APL one-liner. And he tossed this off as an example of the expressiveness of the APL notation. I believe the one-liner was one of the standard ones for indicating the nesting level of the parentheses in an algebraic expression. But the one-liner was very short — ten characters, something like that — and having been involved with programming things like that for a long time and realizing that it took a reasonable amount of code to do, I looked at it and said, “My God, there must be something in this language.”" Alan Perlis. Almost Perfect Artifacts Improve only in Small Ways: APL is more French than English at APL78. What was the one-liner for the nesting level of parentheses? It would take a bit of work to figure out, because at the time of the meeting Perlis described, no APL implementation existed. Two possibilities are explained here. #### Method A For this more complex computation, we can expand on the previous example's use of `∘.=`. First we compare all characters to the opening and closing characters; ``` '()'∘.='plus(square(a),plus(square(b),times(2,plus(a,b)))' 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 ``` An opening increases the current level, while a closing decreases, so we convert this to changes (or deltas) by subtracting the bottom row from the top row: ``` -⌿'()'∘.='plus(square(a),plus(square(b),times(2,plus(a,b)))' 0 0 0 0 1 0 0 0 0 0 0 1 0 ¯1 0 0 0 0 0 1 0 0 0 0 0 0 1 0 ¯1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 ¯1 ¯1 ¯1 ``` The running sum is what we're looking for: ``` +\-⌿'()'∘.='plus(square(a),plus(square(b),times(2,plus(a,b)))' 0 0 0 0 1 1 1 1 1 1 1 2 2 1 1 1 1 1 1 2 2 2 2 2 2 2 3 3 2 2 2 2 2 2 2 3 3 3 3 3 3 3 4 4 4 4 3 2 1 ``` Works in: all APLs #### Method B Alternatively, we can utilise that if the Index Of function `⍳` doesn't find what it is looking for, it returns the next index after the last element in the the lookup array: ``` 'ABBA'⍳'ABC' 1 2 5 '()'⍳'plus(square(a),plus(square(b),times(2,plus(a,b)))' 3 3 3 3 1 3 3 3 3 3 3 1 3 2 3 3 3 3 3 1 3 3 3 3 3 3 1 3 2 3 3 3 3 3 3 1 3 3 3 3 3 3 1 3 3 3 2 2 2 ``` Whenever we have a 1 the parenthesis level increases, and when we have a 2 it decreases. If we have a 3, it remains as-is. 
We can do this mapping by indexing into these values: ``` 1 ¯1 0['()'⍳'plus(square(a),plus(square(b),times(2,plus(a,b)))'] 0 0 0 0 1 0 0 0 0 0 0 1 0 ¯1 0 0 0 0 0 1 0 0 0 0 0 0 1 0 ¯1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 ¯1 ¯1 ¯1 ``` The running sum is what we're looking for: ``` +\1 ¯1 0['()'⍳'plus(square(a),plus(square(b),times(2,plus(a,b)))'] 0 0 0 0 1 1 1 1 1 1 1 2 2 1 1 1 1 1 1 2 2 2 2 2 2 2 3 3 2 2 2 2 2 2 2 3 3 3 3 3 3 3 4 4 4 4 3 2 1 ``` Works in: all APLs ### Grille cypher A grille is a 500 year old method for encrypting messages. Represent both the grid of letters and the grille as character matrices. ``` ⎕←(grid grille)←5 5∘⍴¨'VRYIALCLQIFKNEVPLARKMPLFF' '⌺⌺⌺ ⌺ ⌺⌺⌺ ⌺ ⌺ ⌺⌺⌺ ⌺⌺⌺ ⌺⌺' ┌─────┬─────┐ │VRYIA│⌺⌺⌺ ⌺│ │LCLQI│ ⌺⌺⌺ │ │FKNEV│⌺ ⌺ ⌺│ │PLARK│⌺⌺ ⌺⌺│ │MPLFF│⌺ ⌺⌺│ └─────┴─────┘ ``` Retrieve elements of the grid where there are spaces in the grille. ``` grid[⍸grille=' '] ILIKEAPL ``` An alternative method using ravel. ``` (' '=,grille)/,grid ILIKEAPL ``` ### References 1. Marshall Lochbaum used this example as part of his talk on Outer Product at LambdaConf 2019.
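For readers who do not read APL, here is a rough Python rendering of the parenthesis-nesting computation from Methods A and B above. It is only an illustration of the same delta/running-sum idea and is not part of the original wiki page:

```python
def nesting_levels(expr):
    """Running parenthesis depth: +1 after '(', -1 after ')', unchanged otherwise."""
    deltas = [1 if c == '(' else -1 if c == ')' else 0 for c in expr]
    levels, total = [], 0
    for d in deltas:
        total += d          # cumulative sum, like APL's +\
        levels.append(total)
    return levels

# Reproduces the running-sum output shown above for the same expression.
print(nesting_levels('plus(square(a),plus(square(b),times(2,plus(a,b)))'))
```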
2022-01-19 07:50:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7465096712112427, "perplexity": 2234.1188165444596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301264.36/warc/CC-MAIN-20220119064554-20220119094554-00665.warc.gz"}
http://www.mathworks.com/help/physmod/sps/ref/gto.html?requestedDomain=true&nocookie=true
# GTO Gate Turn-Off Thyristor ## Library Semiconductors / Fundamental Components ## Description The GTO block models a gate turn-off thyristor (GTO). The I-V characteristic of a GTO is such that if the gate-cathode voltage exceeds the specified gate trigger voltage, the GTO turns on. If the gate-cathode voltage falls below the specified gate turn-off voltage value, or if the load current falls below the specified holding-current value, the device turns off. In the on state, the anode-cathode path behaves like a linear diode with forward-voltage drop, Vf, and on-resistance, Ron. In the off state, the anode-cathode path behaves like a linear resistor with a low off-state conductance value, Goff. The defining Simscape™ equations for the block are:
```
if ((v > Vf)&&((G>Vgt)||(i>Ih)))&&(G>Vgt_off)
    i == (v - Vf*(1-Ron*Goff))/Ron;
else
    i == v*Goff;
end
```
where: • v is the anode-cathode voltage. • Vf is the forward voltage. • G is the gate voltage. • Vgt is the gate trigger voltage. • i is the anode-cathode current. • Ih is the holding current. • Vgt_off is the gate turn-off voltage. • Ron is the on-state resistance. • Goff is the off-state conductance. Using the Integral Diode tab of the block dialog box, you can include an integral cathode-anode diode. A GTO that includes an integral cathode-anode diode is known as an asymmetrical GTO (A-GTO) or reverse-conducting GTO (RCGTO). An integral diode protects the semiconductor device by providing a conduction path for reverse current. An inductive load can produce a high reverse-voltage spike when the semiconductor device suddenly switches off the voltage supply to the load. The table shows you how to set the Integral protection diode parameter based on your goals.
Goal: Prioritize simulation speed. Value to select: `Protection diode with no dynamics`. Block behavior: The block includes an integral copy of the Diode block. The block dialog box shows parameters relating to the Diode block.
Goal: Precisely specify reverse-mode charge dynamics. Value to select: `Protection diode with charge dynamics`. Block behavior: The block includes an integral copy of the Commutation Diode block. The block dialog box shows parameters relating to the Commutation Diode block.
### Modeling Variants The block provides four modeling variants. To select the desired variant, right-click the block in your model. From the context menu, select Simscape > Block choices, and then one of these variants: • PS Control Port — Contains a physical signal port that is associated with the gate terminal. This variant is the default. • Electrical Control Port — Contains an electrical conserving port that is associated with the gate terminal. • PS Control Port | Thermal Port — Contains a thermal port and a physical signal port that is associated with the gate terminal. • Electrical Control Port | Thermal Port — Contains a thermal port and an electrical conserving port that is associated with the gate terminal. The variants of this block without the thermal port do not simulate heat generation in the device. The variants with the thermal port allow you to model the heat that switching events and conduction losses generate. For numerical efficiency, the thermal state does not affect the electrical behavior of the block. The thermal port is hidden by default. To enable the thermal port, select a thermal block variant. ### Thermal Loss Equations The figure shows an idealized representation of the output voltage, Vout, and the output current, Iout, of the semiconductor device.
The interval shown includes the entire nth switching cycle, during which the block turns off and then on. #### Heat Loss Due to a Switch-On Event When the semiconductor turns on during the nth switching cycle, the amount of thermal energy that the device dissipates increments by a discrete amount. If you select ```Voltage, current, and temperature``` for the Thermal loss dependent on parameter, the equation for the incremental change is `${E}_{on\left(n\right)}=\frac{{V}_{off\left(n\right)}}{{V}_{off_data}}fcn\left(T,{I}_{on\left(n-1\right)}\right),$` where: • Eon(n) is the switch-on loss at the nth switch-on event. • Voff(n) is the off-state output voltage,Vout, just before the device switches on during the nth switching cycle. • Voff_data is the Off-state voltage for losses data parameter value. • T is the device temperature. • Ion(n-1) is the on-state output current, Iout, just before the device switches off during the cycle that precedes the nth switching cycle. The function fcn is a 2-D lookup table with linear interpolation and linear extrapolation: `$E=tablelookup\left({T}_{j_data},{I}_{out_data},{E}_{on_data},T,{I}_{on\left(n-1\right)}\right),$` where: • Tj_data is the Temperature vector, Tj parameter value. • Iout_data is the Output current vector, Iout parameter value. • Eon_data is the Switch-on loss, Eon=fcn(Tj,Iout) parameter value. If you select `Voltage and current` for the Thermal loss dependent on parameter, when the semiconductor turns on during the nth switching cycle, the equation that the block uses to calculate the incremental change in the discrete amount of thermal energy that the device dissipates is `${E}_{on\left(n\right)}=\left(\frac{{V}_{off\left(n\right)}}{{V}_{off_data}}\right)\left(\frac{{I}_{on\left(n-1\right)}}{{I}_{out_scalar}}\right)\left({E}_{on_scalar}\right)$` where: • Iout_scalar is the Output current, Iout parameter value. • Eon_scalar is the Switch-on loss parameter value. #### Heat Loss Due to a Switch-Off Event When the semiconductor turns off during the nth switching cycle, the amount of thermal energy that the device dissipates increments by a discrete amount. If you select ```Voltage, current, and temperature``` for the Thermal loss dependent on parameter, the equation for the incremental change is `${E}_{off\left(n\right)}=\frac{{V}_{off\left(n\right)}}{{V}_{off_data}}fcn\left(T,{I}_{on\left(n\right)}\right),$` where: • Eoff(n) is the switch-off loss at the nth switch-off event. • Voff(n) is the off-state output voltage, Vout, just before the device switches on during the nth switching cycle. • Voff_data is the Off-state voltage for losses data parameter value. • T is the device temperature. • Ion(n) is the on-state output current, Iout, just before the device switches off during the nth switching cycle. The function fcn is a 2-D lookup table with linear interpolation and linear extrapolation: `$E=tablelookup\left({T}_{j_data},{I}_{out_data},{E}_{off_data},T,{I}_{on\left(n\right)}\right),$` where: • Tj_data is the Temperature vector, Tj parameter value. • Iout_data is the Output current vector, Iout parameter value. • Eoff_data is the Switch-off loss, Eoff=fcn(Tj,Iout) parameter value. 
If you select `Voltage and current` for the Thermal loss dependent on parameter, when the semiconductor turns off during the nth switching cycle, the equation that the block uses to calculate the incremental change in the discrete amount of thermal energy that the device dissipates is `${E}_{off\left(n\right)}=\left(\frac{{V}_{off\left(n\right)}}{{V}_{off_data}}\right)\left(\frac{{I}_{on\left(n-1\right)}}{{I}_{out_scalar}}\right)\left({E}_{off_scalar}\right)$` where: • Iout_scalar is the Output current, Iout parameter value. • Eoff_scalar is the Switch-off loss parameter value. #### Heat Loss Due to Electrical Conduction If you select `Voltage, current, and temperature` for the Thermal loss dependent on parameter, then, for both the on state and the off state, the heat loss due to electrical conduction is `${E}_{conduction}=\int fcn\left(T,{I}_{out}\right)\text{\hspace{0.17em}}dt,$` where: • Econduction is the heat loss due to electrical conduction. • T is the device temperature. • Iout is the device output current. The function fcn is a 2-D lookup table: `${Q}_{conduction}=tablelookup\left({T}_{j_data},{I}_{out_data},{I}_{out_data_repmat}\text{\hspace{0.17em}}.*\text{\hspace{0.17em}}{V}_{on_data},T,{I}_{out}\right),$` where: • Tj_data is the Temperature vector, Tj parameter value. • Iout_data is the Output current vector, Iout parameter value. • Iout_data_repmat is a matrix that contains length, Tj_data, copies of Iout_data. • Von_data is the On-state voltage, Von=fcn(Tj,Iout) parameter value. If you select `Voltage and current` for the Thermal loss dependent on parameter, then, for both the on state and the off state, the heat loss due to electrical conduction is `${E}_{conduction}=\int \left({I}_{out}*{V}_{on_scalar}\right)dt,$` where Von_scalar is the On-state voltage parameter value. #### Heat Flow The block uses the Energy dissipation time constant parameter to filter the amount of heat flow that the block outputs. The filtering allows the block to: • Avoid discrete increments for the heat flow output • Handle a variable switching frequency The filtered heat flow is `$Q=\frac{1}{\tau }\left(\sum _{i=1}^{n}{E}_{on\left(i\right)}+\sum _{i=1}^{n}{E}_{off\left(i\right)}+{E}_{conduction}-\int Q\text{\hspace{0.17em}}dt\right),$` where: • Q is the heat flow from the component. • τ is the Energy dissipation time constant parameter value. • n is the number of switching cycles. • Eon(i) is the switch-on loss at the ith switch-on event. • Eoff(i) is the switch-off loss at the ith switch-off event. • Econduction is the heat loss due to electrical conduction. • ∫Qdt is the total heat previously dissipated from the component. ## Ports The figure shows the block port names. `G` Port associated with the gate terminal. You can set the port to either a physical signal or electrical port. `A` Electrical conserving port associated with the anode terminal. `K` Electrical conserving port associated with the cathode terminal. `H` Thermal conserving port. The thermal port is optional and is hidden by default. To enable this port, select a variant that includes a thermal port. ## Parameters ### Main Tab Forward voltage, Vf Minimum voltage required across the anode and cathode block ports for the gradient of the device I-V characteristic to be 1/Ron, where Ron is the value of On-state resistance. The default value is `0.8` `V`. On-state resistance Rate of change of voltage versus current above the forward voltage. The default value is `0.001` `Ohm`. 
Off-state conductance Anode-cathode conductance when the device is off. The value must be less than 1/R, where R is the value of On-state resistance. The default value is `1e-5` `1/Ohm`. Gate trigger voltage, Vgt Gate-cathode voltage threshold. The device turns on when the gate-cathode voltage is above this value. The default value is `1` `V`. Gate turn-off voltage, Vgt_off Gate-cathode voltage threshold. The device turns off when the gate-cathode voltage is below this value. The default value is `-1` `V`. Holding current Current threshold. The device stays on when the current is above this value, even when the gate-cathode voltage falls below the gate trigger voltage. The default value is `1` `A`. ### Integral Diode Tab Integral protection diode Block integral protection diode. The default value is `None`. The diodes you can select are: • `Protection diode with no dynamics` • `Protection diode with charge dynamics` #### Parameters for Protection diode with no dynamics When you select `Protection diode with no dynamics`, additional parameters appear. #### Parameters for Protection diode with charge dynamics When you select `Protection diode with charge dynamics`, additional parameters appear. ### Thermal Model Tab The Thermal Model tab is enabled only when you select a block variant that includes a thermal port. Thermal loss dependent on Select a parameterization method. The option that you select determines which other parameters are enabled. Options are: • `Voltage and current` — Use scalar values to specify the output current, switch-on loss, switch-off loss, and on-state voltage data. • `Voltage, current, and temperature` — Use vectors to specify the output current, switch-on loss, switch-off loss, on-state voltage, and temperature data. This is the default parameterization method. Off-state voltage for losses data The output voltage of the device during the off state. This is the blocking voltage at which the switch-on loss and switch-off loss data are defined. The default value is `300` `V`. Energy dissipation time constant Time constant used to average the switch-on losses, switch-off losses, and conduction losses. This value is equal to the period of the minimum switching frequency. The default value is `1e-4` `s`.
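As a rough reading of the defining `if`/`else` equations quoted in the Description above, here is a short Python sketch of the GTO's piecewise I-V characteristic. It is not the Simscape implementation: the implicit current is approximated by the previous-step current `prev_i`, and the defaults are simply the block's default parameter values listed above.

```python
def gto_current(v, gate_v, prev_i, Vf=0.8, Ron=0.001, Goff=1e-5,
                Vgt=1.0, Vgt_off=-1.0, Ih=1.0):
    """Anode-cathode current following the block's piecewise equations."""
    # In the block, i appears on both sides of the condition; prev_i stands in for it here.
    triggered = (gate_v > Vgt) or (prev_i > Ih)    # gate fired, or current above holding level
    if v > Vf and triggered and gate_v > Vgt_off:
        return (v - Vf * (1 - Ron * Goff)) / Ron   # on state: linear diode branch
    return v * Goff                                 # off state: small leakage branch

print(gto_current(v=2.0, gate_v=2.0, prev_i=0.0))  # device on: large forward current
print(gto_current(v=2.0, gate_v=0.0, prev_i=0.0))  # device off: tiny leakage current
```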
2018-02-22 19:03:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 11, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6568220853805542, "perplexity": 5451.1022166679}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814249.56/warc/CC-MAIN-20180222180516-20180222200516-00457.warc.gz"}
https://thecollegepanda.com/dont-let-these-4-sat-math-concepts-confuse-you/
# Don't Let These 4 SAT Math Concepts Confuse You All the questions in this post are official ones sourced from The College Board's question of the day app. The dates referred to in this post will be different if you have a newer version of the app. However, the app just cycles through the same questions, so everything in this post will still be relevant to you. You just won't be able to track down questions according to their date. #### 1. Remainder Theorem (December 4th, 2015) A polynomial function $$f$$ has $$x + 8$$ as a factor. Which of the following must be true about the function $$f$$? I. $$f(0) = 8$$ II. $$f(8) = 0$$ III. $$f(-8) = 0$$ A) I only B) II only C) III only D) II and III only Since $$x + 8$$ is a factor of $$f$$, the remainder must be 0 when $$f$$ is divided by $$x + 8$$. Now the remainder theorem states that when a polynomial (e.g. $$f(x) = x^3 + 2x^2 - 3$$) is divided by a linear binomial $$x - a$$, the remainder is equal to $$f(a)$$; dividing by $$x + 8$$ thus leaves a remainder of $$f(-8)$$. Therefore, $$f(-8) = 0$$. The other options aren't necessarily true and can't be proven with just the information given. If this is completely new to you, I highly recommend that you read up on synthetic division and the remainder theorem. I cover both topics extensively in my math guide. #### 2. Scatterplots (December 6th, 2015) Each year from 2007 to 2015, a group of people were selected at random and surveyed about their use of online radio. The survey asked respondents whether or not they listened to online radio in the last month. The scatterplot shown gives the results of the survey, where $$P$$ represents the percent of respondents who reported listening to online radio and $$t$$ represents years since 2007. Which of the following equations best models the relationship between $$t$$ and $$P$$? A) $$P = 0.23t + 18$$ B) $$P = t + 20$$ C) $$P = 2.3t + 20$$ D) $$P = 4.3t + 18$$ If you haven't worked with scatterplots before, it's simple. Each dot represents a data point. For example, in 2015 (8 years since 2007), the percent of respondents who listened to online radio was 53 percent. The question is asking for the equation of the line of best fit, which is the line that most closely follows the points as a whole. The SAT will never ask you to find the exact line of best fit. We can make a pretty good guess at it ourselves by drawing in our own line of best fit. Let's find the equation of our line. We can do so by using two points on our line. The points $$(5, 39)$$ and $$(8,51)$$ look like easy points to work with. The slope between these points is $\dfrac{y_2 - y_1}{x_2 - x_1} = \dfrac{51 - 39}{8 - 5} = 4$ Now we can use point-slope form to find the equation of the line: $y - y_1 = m(x - x_1)$ $y - 39 = 4(x - 5)$ $y - 39 = 4x - 20$ $y = 4x + 19$ In the context of this problem, this equation should be expressed as $$P = 4t + 19$$. The closest answer choice to our equation is D. #### 3. Experimental Design (January 19th, 2016) A restaurant chain with 8 locations wants to introduce healthier options to its menu. In order to determine customer preferences, the chain will offer three new healthy options for a two-week period at all of its locations and analyze the percent of total orders that include at least one of the new healthy options. Which of the following research designs is most likely to produce valid results?
A) Recording how many customers ordered each of the new options at all of the restaurants over the two-week period B) Asking a random sample of customers at all of the restaurants which new item they might order in the future C) Recording how many customers ordered each of the new options at one of the restaurants over the two-week period D) Recording how many customers ordered each of the new options for breakfast at all of the restaurants over the two-week period In statistical experiments or studies, you want to keep the goal in mind. It's important to perform the study in a way that gives you the most accurate data possible as it relates to the goal. The goal in this question is to see which of the new healthy options the restaurant's customers prefer. Now if you think about it, the most accurate data would come from a study that • samples all the customers, not just a segment • runs for a long time • is conducted at all the restaurants Answer B is wrong because what customers say is very often not reflective of what they do. Answer C is wrong because it focuses only on one restaurant. Answer D is wrong because it restricts the study to the restaurant's breakfast customers, who are not necessarily representative of all the restaurant's customers. #### 4. Interpreting the Vertex (February 10th, 2016) The scatterplot above relates a certain household's daily electricity usage, $$W$$, in kilowatt-hours (kWh), to the average temperature for that day (24-hour period). A quadratic function that best fits the data is modeled in the graph above. Given that $$T$$ represents the average temperature for a specific day, in degrees Fahrenheit ($$^\circ$$F), which of the following is the best interpretation of the vertex of the best fit curve in this situation? A) The household uses approximately 14kWh of electricity on a day when the average temperature is 0$$^\circ$$F. B) The household uses about 22kWh of electricity on a day when the average temperature is 0$$^\circ$$F. C) The least amount of electricity used by the household on a specific day is 55kWh. D) The least amount of electricity used on a specific day when the average temperature is 55$$^\circ$$F. Think of the vertex of a quadratic as the "midpoint"—the graph is symmetrical on either side of it. The vertex also designates the maximum or the minimum of a quadratic. Because the graph is U-shaped in this question, the vertex designates the minimum. From the graph, we can estimate the vertex to be $$(55,8)$$. This means the household uses the least amount of electricity (8 kWh) when the average temperature for the day is 55$$^\circ$$F. Note that answers (A) and (B) are wrong because they deal with the $$y$$-intercept, not the vertex.
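A short Python sketch (illustrative only, using NumPy) can confirm the two calculations worked through above: the remainder-theorem value f(−8) for the example polynomial, and the two-point line used for the best-fit estimate.

```python
import numpy as np

# Remainder theorem check: dividing f by (x + 8) leaves remainder f(-8).
f = np.poly1d([1, 2, 0, -3])             # the example f(x) = x^3 + 2x^2 - 3
quotient, remainder = np.polydiv(f.coeffs, [1, 8])
print(remainder[-1], f(-8))              # both print -387.0

# Line through two scatterplot points, as in the best-fit estimate above.
(x1, y1), (x2, y2) = (5, 39), (8, 51)
m = (y2 - y1) / (x2 - x1)                # slope = 4.0
b = y1 - m * x1                          # intercept = 19.0
print(f"P = {m}t + {b}")                 # P = 4.0t + 19.0
```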
2019-07-23 12:17:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5063908100128174, "perplexity": 530.7081733585364}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529276.65/warc/CC-MAIN-20190723105707-20190723131707-00279.warc.gz"}
http://drake.mit.edu/doxygen_cxx/classdrake_1_1multibody_1_1_revolute_joint.html
Drake RevoluteJoint< T > Class Template Referencefinal This Joint allows two bodies to rotate relatively to one another around a common axis. More... #include <drake/multibody/multibody_tree/joints/revolute_joint.h> Inheritance diagram for RevoluteJoint< T >: [legend] Collaboration diagram for RevoluteJoint< T >: [legend] ## Public Types template<typename Scalar > using Context = systems::Context< Scalar > ## Public Member Functions RevoluteJoint (const std::string &name, const Frame< T > &frame_on_parent, const Frame< T > &frame_on_child, const Vector3< double > &axis) Constructor to create a revolute joint between two bodies so that frame F attached to the parent body P and frame M attached to the child body B, rotate relatively to one another about a common axis. More... const Vector3< double > & get_revolute_axis () const Returns the axis of revolution of this joint as a unit vector. More... Does not allow copy, move, or assignment RevoluteJoint (const RevoluteJoint &)=delete RevoluteJointoperator= (const RevoluteJoint &)=delete RevoluteJoint (RevoluteJoint &&)=delete RevoluteJointoperator= (RevoluteJoint &&)=delete Context-dependent value access These methods require the provided context to be an instance of MultibodyTreeContext. Failure to do so leads to a std::logic_error. const T & get_angle (const Context< T > &context) const Gets the rotation angle of this mobilizer from context. More... const RevoluteJoint< T > & set_angle (Context< T > *context, const T &angle) const Sets the context so that the generalized coordinate corresponding to the rotation angle of this joint equals angle. More... const T & get_angular_rate (const Context< T > &context) const Gets the rate of change, in radians per second, of this joint's angle (see get_angle()) from context. More... const RevoluteJoint< T > & set_angular_rate (Context< T > *context, const T &angle) const Sets the rate of change, in radians per second, of this this joint's angle to theta_dot. More... Public Member Functions inherited from Joint< T > Joint (const std::string &name, const Frame< T > &frame_on_parent, const Frame< T > &frame_on_child) Creates a joint between two Frame objects which imposes a given kinematic relation between frame F attached on the parent body P and frame M attached on the child body B. More... virtual ~Joint () const std::string & get_name () const Returns the name of this joint. More... const Body< T > & get_parent_body () const Returns a const reference to the parent body P. More... const Body< T > & get_child_body () const Returns a const reference to the child body B. More... const Frame< T > & get_frame_on_parent () const Returns a const reference to the frame F attached on the parent body P. More... const Frame< T > & get_frame_on_child () const Returns a const reference to the frame M attached on the child body B. More... Joint (const Joint &)=delete Jointoperator= (const Joint &)=delete Joint (Joint &&)=delete Jointoperator= (Joint &&)=delete ## Friends template<typename > class RevoluteJoint class JointTester Protected Member Functions inherited from Joint< T > void DoSetTopology (const MultibodyTreeTopology &) const JointImplementationget_implementation () const Returns a const reference to the internal implementation of this joint. More... ## Detailed Description ### template<typename T> class drake::multibody::RevoluteJoint< T > This Joint allows two bodies to rotate relatively to one another around a common axis. 
That is, given a frame F attached to the parent body P and a frame M attached to the child body B (see the Joint class's documentation), this Joint allows frames F and M to rotate with respect to each other about an axis â. The rotation angle's sign is defined such that child body B rotates about axis â according to the right hand rule, with thumb aligned in the axis direction. Axis â is constant and has the same measures in both frames F and M, that is, â_F = â_M. Template Parameters T The scalar type. Must be a valid Eigen scalar. Instantiated templates for the following kinds of T's are provided: • double • AutoDiffXd They are already available to link against in the containing library. No other values for T are currently supported. ## Member Typedef Documentation using Context = systems::Context ## Constructor & Destructor Documentation RevoluteJoint ( const RevoluteJoint< T > & ) delete RevoluteJoint ( RevoluteJoint< T > && ) delete RevoluteJoint ( const std::string & name, const Frame< T > & frame_on_parent, const Frame< T > & frame_on_child, const Vector3< double > & axis ) inline Constructor to create a revolute joint between two bodies so that frame F attached to the parent body P and frame M attached to the child body B, rotate relatively to one another about a common axis. See this class's documentation for further details on the definition of these frames and rotation angle. The first three arguments to this constructor are those of the Joint class constructor. See the Joint class's documentation for details. The additional parameter axis is: Parameters [in] axis A vector in ℝ³ specifying the axis of revolution for this joint. Given that frame M only rotates with respect to F and their origins are coincident at all times, the measures of axis in either frame F or M are exactly the same, that is, axis_F = axis_M. In other words, axis_F (or axis_M) is the eigenvector of R_FM with eigenvalue equal to one. This vector can have any length, only the direction is used. This method aborts if axis is the zero vector. ## Member Function Documentation const T& get_angle ( const Context< T > & context ) const inline Gets the rotation angle of this mobilizer from context. Parameters [in] context The context of the MultibodyTree this joint belongs to. Returns The angle coordinate of this joint stored in the context. const T& get_angular_rate ( const Context< T > & context ) const inline Gets the rate of change, in radians per second, of this joint's angle (see get_angle()) from context. Parameters [in] context The context of the MultibodyTree this joint belongs to. Returns The rate of change of this joint's angle as stored in the context. const Vector3& get_revolute_axis ( ) const inline Returns the axis of revolution of this joint as a unit vector. Since the measures of this axis in either frame F or M are the same (see this class's documentation for frames's definitions) then, axis = axis_F = axis_M. RevoluteJoint& operator= ( RevoluteJoint< T > && ) delete RevoluteJoint& operator= ( const RevoluteJoint< T > & ) delete const RevoluteJoint& set_angle ( Context< T > * context, const T & angle ) const inline Sets the context so that the generalized coordinate corresponding to the rotation angle of this joint equals angle. Parameters [in] context The context of the MultibodyTree this joint belongs to. [in] angle The desired angle in radians to be stored in context. Returns a constant reference to this joint. 
const RevoluteJoint& set_angular_rate ( Context< T > * context, const T & angle ) const inline Sets the rate of change, in radians per second, of this joint's angle to theta_dot. The new rate of change theta_dot gets stored in context. Parameters [in] context The context of the MultibodyTree this joint belongs to. [in] theta_dot The desired rate of change of this joint's angle in radians per second. Returns a constant reference to this joint. ## Friends And Related Function Documentation friend class JointTester friend class RevoluteJoint The documentation for this class was generated from the following files:
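As a side note on the geometry described above (the joint axis is the eigenvector of R_FM with eigenvalue one, and the rotation follows the right-hand rule about that axis), here is a small NumPy sketch using Rodrigues' formula. It is plain math for illustration, not Drake API.

```python
import numpy as np

def rotation_about_axis(axis, angle):
    """Rodrigues' formula: rotation matrix for a right-handed rotation
    of `angle` radians about the unit vector `axis`."""
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])               # cross-product (skew) matrix
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

R_FM = rotation_about_axis([0, 0, 1], 0.3)
# The joint axis is left unchanged by the rotation (eigenvalue-1 eigenvector):
print(np.allclose(R_FM @ np.array([0, 0, 1]), [0, 0, 1]))  # True
```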
2017-10-21 02:47:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2967727482318878, "perplexity": 7290.012493031209}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824543.20/warc/CC-MAIN-20171021024136-20171021044136-00059.warc.gz"}
https://www.queryhome.com/puzzle/39108/raina-makes-score-inning-career-thus-find-average-after-inning
# Raina makes a score of 110 runs in 22nd inning of his career and thus................Find his average after 22nd inning. 20 views Raina makes a score of 110 runs in 22nd inning of his career and thus increases his average by 4. Find his average after 22nd inning. posted Mar 11 Looking for solution? Promote on: Similar Puzzles Ajit has played 50 Test innings and his average is 50. How many runs should Ajit score in his 51st test inning, so that his average moves up to 51? Note: He got dismissed every time. In a cricket match the average number of runs per over in the first 20 overs was three. After a further 30 overs the average number of runs rose to seven. What was the average number of runs in the last 30 overs only?
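For what it's worth, the first puzzle reduces to a single linear equation; the following Python sketch is only an illustrative check of that arithmetic (the page itself leaves the answer open):

```python
# Let x be Raina's average after 21 innings. Scoring 110 in the 22nd inning
# raises the average by 4, so: 21*x + 110 = 22*(x + 4).
x = 110 - 22 * 4          # solving gives x = 22
print(x + 4)              # average after the 22nd inning: 26
```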
2021-04-15 05:20:54
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8864685893058777, "perplexity": 4683.136238817869}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038083007.51/warc/CC-MAIN-20210415035637-20210415065637-00130.warc.gz"}