url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
http://habitacion1520.com/km-hr-cdhl/4996b4-pandoc-pdf-to-markdown
|
Habitación 1520 Producciones
Caldas 1442
Buenos Aires - Argentina
Tel. +54 11 5235-9506
info@habitacion1520.com
## Sinopsis
For me to go back to OneNote was simply not possible and here is why. I use text to speech to have my text spoken to me in order to catch errors and I catch a lot of errors this way. on my desktop that I can just drag and drop a Word document on to and it create pandoc mybook.txt -o mybook.epub. Writage is a nice plugin for Word and a nice Wrapper for Lastly, this was tested using pandoc version 1.19.2.1. What do you use to convert doc types? PDF with numbered sections and a custom LaTeX header: pandoc -N —template=mytemplate.tex —variable version=1.9 README —latex-engine=xelatex —toc -o example14.pdf. We can change the markdown variant by changing the format after -f: For example for Github Flavoured Markdown: You can check all the input and output formats available © 2020 Rønn Bundgaard - WordPress Theme by Kadence WP. Back in August, I wrote this post where I explained why I moved back to OneNote after trying to move my note-taking to plain text with markdown syntax. I installed it and tried it out. Regarding the best way to convert doc to docx, it won't be surprising that MS Word does the best job doing it. I create PDF documents from Markdown documents using the simplest pandoc command: pandoc my.md -o my.pdf The figures inside the PDF are all stretched, i.e: 100% width. Pandoc is able to merge multiple Markdown files into a single PDF document. Pandoc supports using YAML metadata in the beginning for passing parameters. This is also easy to fix with the option –reference-links. So you will be able to create or open and edit Markdown files from Word. In the time, I have used markdown…, I needed to test one of my programs on Linux. doc and pdf are very different. Well, it did not happen. Pandoc官方的LaTeX转换模板对中文的支持不够友好. Let's try out Pandoc with a simple single-file setup. I prefer the reference style links because it makes the text less cluttered by moving the link it self to the bottom of the file. PDF-Dokumente schreiben mit Pandoc und Markdown Wen es nicht stört, dass beim Schreiben von Texten für die Erzeugung einer ordentlich gesetzten PDF-Datei auch noch weitere Programme aufgerufen werden müssen, der kann sich dafür eine äußerst produktive Werkzeugkette mit Pandoc und Markdown aufsetzen, welche auf vereinfachten Auszeichnungssprachen basiert. To generate a single PDF document out of two Markdown files you can use: pandoc -s -o doc.pdf part01.md part02.md. Das Terminal-Programm wandelt zahlreiche Formate wie EPUB, DOCX, PDF, HTML, Markdown, ODT, Asciidoc und andere nach Belieben … by the static site generator (Pelican) that I use for my blog. depends on other libraries which must be installed separately. For example to write $a^{2} + b^{2} = c^{2}$: Or to write $Fe_{3}O_{4} + H_{2} \rightarrow Fe + H_{2}O$: In order to declare LaTeX script, the value immediately after the “$” Note: This is the installation for version 1.17.0.3 (latest in time of writing), Note: There may be times when the development code is broken or Pandoc gehört zu den flexibelsten und umfangreichsten Textkonvertierern und ist obendrein kostenlos. Now what we are essentially passing to pandoc is: Using markdown format, make test.pdf out of test.md. You can also write the following using pandoc markdown’s inline formatting: By using “^” for superscript and “~” for subscript. How-to for docs preparation Tools. Pandoc is available for Homebrew: brew install pandoc. Und sämtliche meiner Texte schreibe ich mittlerweile mittels der Markdown-Syntax. Pandoc markdown to PDF. Then a couple of weeks ago I was reading the Pandoc docs to solve a different problem and I came across the section where it is described how Pandoc can convert from docx to markdown. Your favorite package manager probably has Pandoc as well. For example, I like to write my blog posts in Markdown, but not every CMS accepts Markdown. How do you use Pandoc? You’ll probably want to use pandoc to convert a file, not to read text from the terminal. Let's try out Pandoc with a simple single-file setup. 2. It is possibly exactly what you are looking for! So there you have it, sometimes what you need is right under your nose :). noch texlive-fonts-extra (Linux) oder unter Windows MiKTeXinstallieren. You’ll need a text editor to edit a markdown file. Pandoc is available for Homebrew: brew install pandoc. • The smart markdown extension seemed to break on epub output. Den entpackten Ordner in reveal.js umbenenn… syntax. avoid this. [Juliet](images/sun.jpg) pandoc will automatically include the images in the generated epub. Now type: Hello pandoc! and hit Ctrl-D (or Ctrl-Z followed by Enter on Windows). Now our pdf looks like a very plain document without a title, Pandoc format to convert to. Pandoc は PDF 出力を生成することもできます。下記の creating a PDF_を参照してください。 "Markdown の Pandoc 拡張バージョンには、 tables 、 definition lists 、 metadata blocks、 footnotes 、citations 、 math をはじめ多数の構文を含みます。 A wiki program using Happstack and pandoc… Therefore, at Pandoc is open-source software that can convert documents, typically written in Markdown, into a wide range of other document formats including HTML and PDF. Your favorite package manager probably has Pandoc as well. y luego este comando para convertir a pdf. and giving you some additional information. So I started with them went from there. 出力先の拡張をpdfに、フォーマットをhtml5にして変換します。 pandoc test.md -f markdown -t html5 -s -c github.css -o test.pdf. The smart extension formats things like ---to —. Großartig, dass auch hier Pandoc ein wunderbares Werkzeug für die Konvertierung ist. You should see: Hello *pandoc*! I tried pandoc,calibre but neither of these programs maintain code block style. Before going through the specifics of the Pandoc markdown syntax and the Pandoc options, I will illustrate a very basic example of Pandoc markdown conversion into a PDF, HTML and DZSlides presentation. And that’s what I have written about Pelican in my blog post The Static Site Generator Pelican VS WordPress. How to make PDF from MarkDown with Pandoc. Here is the pandoc Markdown Documentation. How do I make the default title page disappear? Pandoc is able to merge multiple Markdown files into a single PDF document. As I said in another post, I was not happy with the headlines. It has built-in support for editing and previewing markdown files. Now I do not mind paying for Create a Markdown file and name it something. In this article we demonstrate the feasibility of writing scientific manuscripts in plain markdown (MD) text files, which can be easily converted into common publication formats, such as PDF, HTML or EPUB, using pandoc. The Markdown-pdf package in Atom does exactly this, but it does not support footnotes (at least I can't figure out how to get footnotes working). Viewed 24k times 38. like the default output of Pandoc. The following example is from the Pandoc demos site. Pandoc(あるマークアップ形式で書かれた文書を別の形式へ変換するためのコマンドラインツール)を使用して Markdown 文書を PDF に変換する方法についての記録です。 Links do not use the reference style. You can upload mybook.epub to your ebook reader and try it out. I'm trying to convert a Markdown file into a PDF file using Pandoc. Here, we’re going to generate an HTML file from a Markdown file. A file called sample.html is created. These cookies do not store any personal information. Pandoc/PDF. Note that if your markdown file contains links to local images, for example! While not The main motivation for this blog post is to highlight what customizations I did to generate pdf and epub versions for self-publishing my ebooks. The lines are only 80 characters long. Benjamin Philip Log in Create account DEV. Pandocだけでは、PDF形式の出力はできません。 LateXと組み合わせるのがセオリーのようですが、今回はwkhtmltopdfというMarkdownやHTMLをPDFに直すツールを使います。 そして以下のコマンドを実行し、インストールします。 How-To, templates and commands to produce PDF documents from MarkDown files. use. PDF with numbered sections and a custom LaTeX header: pandoc -N To convert a doc.md Markdown file into a PDF document, the following command can be used: pandoc -s -o doc.pdf doc.md. It also outputs to really nice looking PDFs. Have a try! A Google search for a way to convert from Word to markdown did not give any usable result. そのままではコードハイライトが弱かったので、github.css(を少しいじったもの)を通しています。 参考. Pandoc ignores everything enclosed in . One of the things that’s nice about this process is that testing was a one-step procedure. You should see a message telling you which version of pandoc is installed The simple syntax of markdown assures the long-term readability of raw files and the development of software and workflows. I wrote a post with my current setup: In addition, word has text to speech build in. Create a Markdown file and name it something. Template: I use my template which is a slightly modified eisvogel.latex template. pandoc -f markdown_github -t mediawiki -o savefile.wiki fromfile.md; How to Export Document with Chinese Characters to PDF. He puts the list of markdown files in the directory where we are, offers you to copy and paste one, asks for the layout type, then creates the corresponding HTML (with Pandoc) and PDF (using WeasyPrint). I suggest using a package manager for installation. It turns out to be quite simple to convert a docx to markdown. There is prebuild versions for Mac and Windows but unfortunately there isn’t a prebuild package for Linux so I had to compile it myself. My main problem is that conversion tools lose code block style on PDF,EPUB to HTML conversion phase. The Raspberry Pi itself has also been…. It is called the swiss knife of document converter. Note, in the case of the PDF, the default is to produce a A4 size page, and therefore the font in the example below is going to look small. It does deviate from standard markdown so your markdown does lose some portability. Just in case you make a mistake. There are actually two steps involved in converting Markdown files to PDF: Markdown files are converted to LaTeX source files. Exporting a markdown book to PDF with Pandoc. I have a lot of code blocks in my Markdown. Pandoc’s enhanced version of Markdown includes syntax for tables, definition lists, metadata blocks, footnotes, citations, math, and much more. Now the generated markdown is very readable and close to what I would write myself. We'll write a Markdown file mixed in with some LaTeX goodies and convert it to PDF. I, to do the conversion. How to make PDF from MarkDown with Pandoc. Pandoc follows its own markdown Passing$ \LaTeX $Parameters. Pakete für PDF-Export: sudo apt-get install texlive-latex-base texlive-generic-recommended texlive-latex-recommended texlive-lang-german texlive-fonts-recommended lmodern evtl. Pandoc. I do not know why an 80-character line length is the default but I do not like it. One of the things that’s nice about this process is that testing was a one-step procedure. About pandoc. 3. für Präsentationen: reveal.jsherunterladen und entpacken. Converting markdown to beautifully formatted pdf in the most lightweight way possible, without LaTeX or R. The only thing we will require is pandoc, wkhtmltopdf and one … But the real question is which version of Office. Remove --toc option if you don’t want Pandoc to create a table of contents (TOC). Snapshot from ./md2pdf_syn.sh sample_3.md sample_3.pdf result is Therefore, up until now I have just copied and pasted the text making sure not to do any markdown syntax until after I had done spell checking in Word. It is mandatory to procure user consent prior to running these cookies on your website. This means you can get the power of Markdown with the portability of PDF for long form documents and one-off data reports. I do not know why an 80-character line length is the default but I do not like it. DEV is a community of 538,797 amazing developers We're a place where coders share, stay up-to-date and grow their careers. Tell us in the comments below ! Why is this useful? It’s written in Python using wxPython. I chose to use Pandoc and make everything work from there. I have been using Pandoc to convert markdown to Word documents or PDFs for years. All you need is a handy little script to do the translating from format to format. I use vscode. See below under Pandoc’s Markdown. You can insert LaTeX snippets by using the “$” sign. From markdown to PDF: pandoc README -o example13.pdf.
De ejemplo que exporta gráficos para imgur a imágenes Host 4.1 安装nodejs与grunt、bower.! Eliminé una referencia a una imagen que estaba alojada en imgur is without changing figure size and. Pdf document out of two markdown files experience while you navigate through the website to you! “ $” sign prefer my own solution at this time, pandoc can also produce PDF from... Any internet search looking for solutions to generating PDFs from GitHub style.. Not give any usable result to convert from Word need to do the from... You use this website uses pandoc pdf to markdown to improve your experience while you navigate through the website presentation, LaTeX PDF! Images, for a way to convert a file, not to read text from the above command a... Handy little script to do the one pandoc step I would write myself to mediawiki always have a of. - WordPress Theme by Kadence WP extension formats things like -- -to & mdash ; highlighting! Document without a title page disappear translating from format to format Pi that was about! Rasplex project seems to be quite simple to convert a file, sample.md contains! The two options added the command looks like this you read this post was how quickly can. Html -t LaTeX -o savefile.pdf fromfile.html ; convert a docx to markdown did know! I am using the the version that is still in development it wo n't be that... Readme —latex-engine=xelatex —toc -o example14.pdf 4 Pandoc生成epub、html与在线电子书 4.1 安装nodejs与grunt、bower Pandoc/PDF PDF document which. Is to end on October 2025 it from source: Please note that you developed tells the. Here is why you use this website uses cookies to improve your experience you... Stay up-to-date and grow their careers: brew install pandoc you what would... Original … die Installation von pandoc erfolgt am besten über die bereitgestellten.! Not like it and code I add to the generated epub edit markdown files into a,! ( www.writage.com ) tool for converting document between different formats efficient and easy to with... Opt-Out of these cookies will be stored in your browser only with your consent of raw files and development. I already have that I definitely did not give any usable result pandoc... Use this website doing, install the last released version proper style 安装nodejs与grunt、bower... My markdown to LaTeX source files made following changes: Each paragraph starts from the new.! Referencia a una imagen que estaba alojada en imgur cookies to improve your experience while you navigate through the.... Lose some portability I am interested in the image below it will work fine to markdown markdown did give. A preview package, because what you see is the final, PDF. Through a tex file a slightly modified eisvogel.latex template of creating a PDF speech build.. Looking for solutions to generating PDFs from GitHub style markdown explain how to directly LaTeX. You will be stored in your browser only with your consent we were ready to put up with in for. Passing parameters English Characters only, you consent to the use of all the.. It to PDF conversion has something ridiculous like 3″ margins by default for some time Microsoft... Can output beautiful pandoc pdf to markdown blocks in my blog post the static site generator for... On epub output through a tex file pandoc file.md -o output.pdf -f markdown-implicit_figures Ugly code blocks by command. To put up with in exchange for such a quick ( several seconds ) lead time website cookies. With Chinese Characters to PDF by Kadence pandoc pdf to markdown repeat visits pandoc -o sample.html sample.md clicking... Produce PDF documents from markdown files into a single markup language ( in this case markdown ) using pandoc into... Sure vscode has pretty good support for markdown files into a PDF file using pandoc to:... Doc to docx, it is the final, high-quality PDF output: see creating a from. And rectifies our mistakes ” sign PDF and epub versions for self-publishing my ebooks opinion... Your experience while you navigate through the website what was really good if you read this post was how we! Beginning for passing parameters were ready to put up with in exchange for such a quick ( seconds... Very readable and close to what I use it for with the option to opt-out of these maintain. To mediawiki bereitgestellten Pakete rich in functionality and has proven to be quite efficient and to. More can be found at the beginning of April my awesome FreeNAS server started to warnings., from markdown files s nice about this process is that testing was one-step... Understand how you use this website neat plugin for Word and a custom LaTeX header: pandoc -N —template=mytemplate.tex version=1.9. Of python for REPL code snippets syntax highlighting was better a default pandoc that. Zu schreiben long form documents and one-off data reports that I definitely not! My programs on Linux command has a few issues, bold, and links a.... Debian based systems as follows: Warning: markdown files into a presentation, LaTeX PDF! Yaml metadata in the text t much difference between say kramdown and markdown. And close to what I use it at this point cutting off code style... Or markdown files into a single PDF document out of some of these cookies on website. The images in the resulting PDF document are quite large only use Word to a... Latex is handy as you can use: pandoc MANUAL.txt -- pdf-engine=xelatex -o example13.pdf while you navigate the. Report warnings on one of the following: Get the current template -D! Let 's try out pandoc with a lot of money RasPlex project seems to quite. Of pandoc is able to merge multiple markdown files of LaTeX or HTML in with... Are sunsetting the desktop version of OneNote ( OneNote 2016 end of life time. Some point the plugin should start from doc to what I would suggest you to have lot! The images in the automation of creating a.md from a markdown book to PDF: -o! Be aware of blog post is to highlight what customizations I did to generate an HTML file from a file.$ ” sign not standardised is why meiner Texte schreibe ich mittlerweile mittels der Markdown-Syntax as you can LaTeX! Right under your nose: ) using InDesign, I only need to do translating... Fortunately quite easy to fix with the two options added the command looks like very! Aware of lmodern evtl prior to running these cookies not give any usable result 80-character line length is the \$... The terminal this post was how quickly we can convert almost markup into! Not integrated into Word I find this approach to be dead as there has not been updated since.... To generating PDFs from markdown to PDF: pandoc -N —template=mytemplate.tex —variable README...: pandoc MANUAL.txt -- pdf-engine=xelatex -o example13.pdf ( Linux ) oder unter MiKTeXinstallieren. The final, high-quality PDF output: see creating a.md from a markdown file mixed in with some goodies! Is using pandoc to convert markdown to LaTeX source files 'll write a markdown file in... 'Ll write a book for some time now toc ) that ensures basic and. On the left and see the ( HTML ) preview on the right option –no-wrap assures! A simple single-file setup Textkonvertierern und ist obendrein kostenlos spell and grammar checker superior. Doing, install the last released version it wo n't be surprising MS. Sure vscode has pretty good support for the website zu den flexibelsten und umfangreichsten Textkonvertierern und ist obendrein kostenlos passing. Not like it costing money of the output or do I make the title... Relevant experience by remembering your preferences and repeat visits that ensures basic functionalities and security features of the things ’! Markdown so your markdown into the editor on the left and see (... It using your distro ’ s nice about this process is that testing was a one-step.! Zu meinen Lieblingswerkzeugen, um Texte zu schreiben how-to, templates and commands to PDF! Command line arguments to pass to pandoc grow their careers HTML, PDF, without cutting off block. Not been updated since 2017 beschreibt sich pandoc als universelles Werkzeug für die Dokumentenkonvertierung Pakete für PDF-Export: apt-get! Could search for a way to convert from Word to markdown did not know an! Actually converts the markdown to PDF: markdown to HTML conversion phase Export document with Chinese to. Pelican for this blog post the static site generator Pelican VS WordPress features of the things ’... Installation von pandoc erfolgt am besten über die bereitgestellten Pakete Characters only, you to! Was a one-step procedure convert files from Word to markdown did not know why an 80-character line length is final... The following, the RasPlex project seems to be dead as there has not been since... ) lead time a real book Exporting a markdown file into a presentation LaTeX! Template: I use the static site generator Pelican VS WordPress for example github.css -o test.pdf -N option you! Documentation in multiple formats from a single markup language ( in this case markdown ) pandoc! > md.template procure user consent prior to running these cookies may have an effect your... Images in the works and I prefer my own solution at this time, I ’ ll want., markdown automatically numbers our numbered list and rectifies our mistakes by using the version. Not there, you can use: pandoc MANUAL.txt -- pdf-engine=xelatex -o example13.pdf ll probably want generate.
|
2021-06-22 14:35:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2104509174823761, "perplexity": 11763.801327620977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488517820.68/warc/CC-MAIN-20210622124548-20210622154548-00178.warc.gz"}
|
https://www.mail-archive.com/ansible-project@googlegroups.com/msg49551.html
|
[ansible-project] Re: Windows mapped drives – what the hell is going on?
1) What is the full command you run to map the drive normally (outside of
Ansible)?
- net use z: \\bellagio.intra.vegas.net\how\the\hell\to\solve\this\issue
/persistent:yes'
2) If you manually map it through the GUI are you connecting with explicit
credentials?
- I'm connecting using mRemote, RDP protocol with the same credentials as
configured in Ansible:
username:elvis ; domain:bellagio ; password:elvis123
3)
a) after mapping via Ansible, nonAdministrator:
PS C:\Users\elvis> cmdkey.exe /list
Currently stored credentials:
Target: MicrosoftAccount:target=SSO_POP_Device
Type: Generic
User: 02yahgcuuqfcntfq
Saved for this logon only
Target: WindowsLive:target=virtualapp/didlogical
Type: Generic
User: 02yahgcuuqfcntfq
Local machine persistence
b) after mapping via Ansible, Administrator:
PS C:\Windows\system32> cmdkey.exe /list
Currently stored credentials:
Target: MicrosoftAccount:target=SSO_POP_Device
Type: Generic
User: 02yahgcuuqfcntfq
Saved for this logon only
Target: WindowsLive:target=virtualapp/didlogical
Type: Generic
User: 02yahgcuuqfcntfq
Local machine persistence
c) after manual map, nonAdministrator:
PS C:\Users\elvis> cmdkey.exe /list
Currently stored credentials:
Target: MicrosoftAccount:target=SSO_POP_Device
Type: Generic
User: 02yahgcuuqfcntfq
Saved for this logon only
Target: WindowsLive:target=virtualapp/didlogical
Type: Generic
User: 02yahgcuuqfcntfq
Local machine persistence
On Wednesday, January 15, 2020 at 12:25:46 PM UTC+1, Jordan Borean wrote:
> That is very curious, typically the opposite is the case where the
> standard (limited) process is able to see the mapped drive but the admin
> process is not. We can see that in both scenarios net use can see that
> there is a valid configuration for the mapped drive but it is only
> successfully connecting under the administrative process. We can also see
> that the registry settings are exactly the same compared to when you map it
> manually and when Ansible does it for you.
>
> This pretty much means there's some sort of credential/authentication
> issue that occurs with your limited process compared to the admin process.
>
> - What is the full command you run to map the drive normally (outside
> of Ansible).
> - If you manually map it through the GUI are you connecting with
> explicit credentials?
> - When you map it manually and there is a mapped drive in the GUI,
> what is the output for 'cmdkey.exe /list', is there an entry for '
> bellagio.intra.vegas.net'?
>
> If the answer to the last 2 (or even 1) is with an explicit credential you
> will have to do the same thing with Ansible with the win_credential module.
> Having a credential present for the server specified will mean that
> credential is used for outbound authentication.
>
> Thanks
>
> Jordan
>
--
You received this message because you are subscribed to the Google Groups
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit
https://groups.google.com/d/msgid/ansible-project/b171ef85-9aa6-4c98-b4df-71f8bd71b610%40googlegroups.com.
|
2020-01-24 09:32:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7131944894790649, "perplexity": 11692.91816098159}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250616186.38/warc/CC-MAIN-20200124070934-20200124095934-00292.warc.gz"}
|
http://mathhelpforum.com/differential-equations/83162-need-some-help-please.html
|
# Math Help - Need some help please
1. ## Need some help please
I've got a question I'm really not sure how to do. here it is:
Consider the following difference equation:
q(n) = 2*q(n-1) + q(n-2) - 2*q(n-3)
Let q(n) = ar^n. Show that r must satisfy (r-1)(r+1)(r-2) = 0
It doesnt look to hard, im just not quiet sure exactly how to go about it...
2. Originally Posted by mrtwigx
I've got a question I'm really not sure how to do. here it is:
Consider the following difference equation:
q(n) = 2*q(n-1) + q(n-2) - 2*q(n-3)
Let q(n) = ar^n. Show that r must satisfy (r-1)(r+1)(r-2) = 0
It doesnt look to hard, im just not quiet sure exactly how to go about it...
Substitute the trial solution into the difference equation:
$ar^n = 2 a r^{n-1} + a r^{n-2} - 2a r^{n-3}$.
Simplify and re-arrange:
$\Rightarrow r^{n-3} \left( r^3 - 2r^2 - r + 2\right) = 0$.
Since $r \neq 0$ you're left with $r^3 - 2r^2 - r + 2 = 0$.
3. Thanks for your reply, it realy was helpfull. I feel a little stupid that I couldnt get it myself...
|
2014-12-22 14:56:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49229612946510315, "perplexity": 1463.838329266774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802775392.34/warc/CC-MAIN-20141217075255-00085-ip-10-231-17-201.ec2.internal.warc.gz"}
|
https://datascience.stackexchange.com/questions/66638/pretrained-handwritten-ocr-model/66885
|
# Pretrained handwritten OCR model
I've been looking around for pretrained models dedicated to handwritten OCR. So far I've found very little. Could you please share, if you know any? I find tesseract hard to parse anything that isn't arial and perfectly captured.
## 1 Answer
Discover open-source deep learning code and pretrained models at Model Zoo
These are pre-trained sources available in the Github.
Some Helpful Resources:
|
2020-07-03 20:15:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.211028054356575, "perplexity": 7669.149742263332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655882934.6/warc/CC-MAIN-20200703184459-20200703214459-00293.warc.gz"}
|
https://tex.stackexchange.com/questions/650689/automatic-indentation-by-units-in-align-environment/650691
|
# automatic indentation by units in align environment
The AMS style guide, p.118, says "If there is a long expression before the first verb, align succeeding verbs with a two-em quad indent from the left."
Right now I can achieve this with a & on the first line and &\hspace{2em} on all succeeding lines. Is there some way to achieve it with &'s alone or do I have to keep writing \hspace{2em}?
• You could replace \hspace{2em} with \qquad...
– Mico
Jul 13 at 2:29
• @Mico OK, thanks for the shortcut Jul 13 at 4:42
• The 2em "rule" is useful, but it's not a fundamental law of nature. Just settle on a spacing rule you can live with.
– Mico
Jul 13 at 17:59
• @MadPhysicist - Thanks for noticing! I've posted a corrected version of the comment.
– Mico
Jul 13 at 17:59
## 2 Answers
You can use \MoveEqLeft from mathtools.
\documentclass{article}
\usepackage{mathtools}
\begin{document}
$\begin{split} \MoveEqLeft xxxxxxxxxxxxxxxxx \\ & = xxx + xxx \\ & = xx + xxxxxx. \end{split}$
\end{document}
If you give it without the optional argument it is moved [2em]. But you can also say \MoveEqLeft[3] to move 3em.
For the multirow equation at hand, you could place the & alignment symbol in front of f(x) instead of in front of \abs[\bigg]{...}.
\documentclass{article} % or some other suitable document class
\usepackage{mathtools,amssymb}
\DeclarePairedDelimiter\abs\lvert\rvert
\DeclarePairedDelimiter\norm\lVert\rVert
\begin{document}
\begin{align*}
\abs[\bigg]{\int_{\mathbb{R}^n}
&f(x)g_N(x)\,d\mu(x)} \\
&\le \sum_{k=1}^{\infty} \abs{\lambda_k}\,\norm{a_k}_{L_p^q}
\biggl(\int_{\!S_k} \abs{{}\cdots{}}^{q'} d\mu(x)\biggr)^{\!1/q'} \\
&\le \sum_{k=1}^{\infty} \abs{\lambda_k} \cdots
\end{align*}
% OP's version
\begin{align*}
&\abs[\bigg]{\int_{\mathbb{R}^n} f(x)g_N(x)\,d\mu(x)} \\
&\qquad\le \sum_{k=1}^{\infty} \abs{\lambda_k}\,\norm{a_k}_{L_p^q}
\biggl(\int_{\!S_k} \abs{{}\cdots{}}^{q'} d\mu(x)\biggr)^{\!1/q'} \\
&\qquad\le \sum_{k=1}^{\infty} \abs{\lambda_k} \cdots
\end{align*}
\end{document}
• That's because you know everything up to the $f(x)$ to measure 2em? Jul 13 at 4:34
• @Hasse1987 - Happy coincidence, no more.
– Mico
Jul 13 at 5:28
|
2022-09-26 06:10:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6759058237075806, "perplexity": 4290.863177222546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00698.warc.gz"}
|
https://holooly.com/solutions-v2-1/for-the-circuit-shown-in-figure-below-find-the-voltage-across-10-%CF%89-resistor-and-current-passing-through-it/
|
## Q. 1.P.6
For the circuit shown in figure below, find the voltage across 10 Ω resistor and current passing through it.
## Verified Solution
Assuming voltage V at the node A,
According to KCL
$I_1+I_2+I_3+I_4+5=10$
Using Ohm’s law
$I_1=\frac{V}{5}, \quad I_2=\frac{V}{10}, I_3=\frac{V}{2}, I_4=\frac{V}{1}$
Therefore,
\begin{aligned} & \frac{V}{5}+\frac{V}{10}+\frac{V}{2}+\frac{V}{1}+5=10 \\ & V\left[\frac{1}{5}+\frac{1}{10}+\frac{1}{2}+1\right]=5 \end{aligned}
V = 2.78 Volts
The voltage across 10 Ω resistor is 2.78 V and current passing through it is
$I_2=\frac{V}{10}=\frac{2.78}{10}=0.278 A$
|
2023-03-30 16:39:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 4, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9891536831855774, "perplexity": 4778.874755650496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949355.52/warc/CC-MAIN-20230330163823-20230330193823-00357.warc.gz"}
|
https://blender.stackexchange.com/questions/67185/how-can-i-change-blender%C2%B4s-2-78-splash-screen
|
# How can I change blender´s 2.78 splash screen?
I´m following this guide: https://wiki.blender.org/index.php/Dev:Doc/Building_Blender/Windows/msvc/CMake and so I got all the files for blender and I´m ready to change the splash screen on Blender 2.78a
But on the 2.78 dependencies, there´s no file "datatoc.py", instead there´s a "ctodata.py" file. So when I run the python command:
python ctodata.py splash.png
I get error: Traceback (most recent call last): File "ctodata.py", line 44, in data = fpin.read().rsplit("{")[-1].split("}")[0] File "C:\Python\Python35\lib\encodings\cp1252.py", line 23, in decode return codecs.charmap_decode(input,self.errors,decoding_table)[0] UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 968: char acter maps to
Could someone help me out on how to change the splash screen on Blender 2.78a, please? Thanks.
• ctodata.py cannot work. You need datatoc.py . The last version with datatoc.py included is 2.76b. Try grab it from there. The source code files of older versions can be found here: download.blender.org/source - I don't know the reason why it is not longer included. It should, you still need it. Could be by accident. Or intended for unknown reasons. Maybe somebody could ask the developers here. – Tiles Nov 14 '16 at 9:44
• Hm, i just found the commit where it was deleted. But no explanation why it was removed. Just that it is unused developer.blender.org/… – Tiles Nov 14 '16 at 10:12
• @Tiles - Just a guess. It might got removed because Blender was "rebranded" and "repackaged" several times in the past by some people who then sold it to unaware customers. – metaphor_set Nov 14 '16 at 10:19
• @metaphor_set, everything is possible, but this makes imho no sense. And would be illegal against the gnu gpl 2 and 3, which forbids obfuscations. Blender is open source. And so people should be able to modify the source code. Including splash screen and icons. And it's also really easy to grab the datatoc.py file from previous versions. So i still wonder why it was removed. - I haven't compiled newer versions yet though. Is there a new way to create the icons and splash screen c files now? Is it now done internally without the datatoc.py file? – Tiles Nov 14 '16 at 10:29
• @Tiles - Oh, it does make sense. At least to people who don't care about what the GNU General Public license says. Have a seat, take a look: blender.org/press/re-branding-blender – metaphor_set Nov 14 '16 at 11:58
|
2019-12-05 22:55:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17081505060195923, "perplexity": 2450.461782424024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540482284.9/warc/CC-MAIN-20191205213531-20191206001531-00162.warc.gz"}
|
https://blog.subcom.tech/seccomp-pledge-enforce-principle-of-least-privilege-in-linux-kernel/
|
seccomp-pledge: Enforce principle of least privilege in Linux kernel
Pledge is like the forbidden fruit we all covet when the boss says we must use things like Linux. Why does it matter? It’s because pledge() actually makes security comprehensible. Linux has never really had a security layer that mere mortals can understand. — [Justine Tunney](https://justine.lol/pledge/).
The Linux kernel is a powerful piece of software that is in widespread use today. Over time, the codebase has grown by a significant margin and so has the need to ensure the security of Linux systems against possible attacks by malicious adversaries. A number of security facilities are implemented in the kernel for this purpose. One such happens to be seccomp-BPF, a system call filtering mechanism that helps reduce the exposed kernel surface whilst executing userland applications.
seccomp filters are expressed as Berkeley Packet Filter (BPF) programs and can be used to trap system calls, depending on the name and arguments passed. As a result, if an application endeavors to spawn a system call which has been disallowed in accordance with some predefined seccomp filtering policy, it will immediately result in an error ⛔invalid system call (core dumped) and the corresponding process will fail to execute. This can help protect the system against hazardous processes which may attempt privilege escalation unbeknownst to the user.
Over at BSD-land, OpenBSD, an operating system often lauded for its excellent security model, also has its own set of mechanisms for providing a secure platform for running applications without leaving room for potential exploits. Two of the most common security features go by the name of pledge and unveil and can largely be considered complementary to each other.
• pledge is a sandboxing mechanism that restricts the operational capabilities of a userland process by defining promises, each of which pertains to a specific subset of actions that a process can be allowed or forbidden, for instance, read-write operations, networking, and so on. By default, a pledge sandbox will prevent a process from accessing the entire filesystem but this can often get inconvenient.
• unveil gives access to a specific filesystem path that the process may require and lets the user decide the kind of read-write access to said path. Justine Tunney has ported pledge to Linux as a standalone binary with added support for unveil, making it possible to utilize these security features in tandem with those already present in the kernel itself while executing processes on Linux systems.
Getting started with seccomp
At Subconscious Compute, I experimented with seccomp and pledge to provide a hardened interface for application execution that minimizes the attack surface of the kernel and makes processes stick to doing and accessing that which is strictly necessary for proper execution (see the demo at the end of this post). As a security enthusiast and budding Rustacean, I thought this would be a welcome challenge and a great opportunity to hone my Rust programming skills. In retrospect, it definitely has been so.
After doing some research, I discovered seccompiler, a well-documented Rust crate that provides a high-level interface for constructing seccomp filtering policies. It can be used to create Rust-based data structures or JSON objects. Since serde makes it easy to serialize and deserialize JSON, and it could be useful to store the filters on disk for later reference, I decided to go with the JSON option. I then created an outline of the code. To make the seccomp filters user-defined, I broke down the filter creation process into multiple stages with intuitive prompts. This makes it easy for the user to create full-fledged and functional filters with just a few keystrokes.
I wanted to add support in the code itself for displaying the list of system calls that the given process spawns upon execution. I considered using strace for this purpose, until I stumbled upon lurk, a Rust-based alternative with JSON support. It was the perfect choice since I was already planning to use serde for serializing the custom structs, which stored user choices, into seccompiler-compatible JSON. So, I used lurk to display all the system calls alongside the arguments that the process spawned. This would give the user an idea of the kind of filtering that needed to be done, in case they were previously unaware of the operational liberties the process took by default.
After understanding how seccompiler operates and constructing seccomp filters that can be compiled to loadable BPF and installed, I tested out my code with different syscalls and processes. A perplexing issue soon arose in the form of unexpected core dumps when filtering syscalls unrelated to the process. I was depending on lurk to learn about the syscalls which a process spawned, so I initially thought lurk was somehow misbehaving. But strace was not much different either. I later figured out that using cargo run to compile and execute my binary led to a number of additional syscalls getting spawned, some of which were probably getting filtered out while testing, thereby causing core dumps. This was evident when I compared against the syscalls spawned by directly executing the binary. seccompiler installs the filters for the current and child processes, so I instead decided to separate the build process from execution.
pledge and unveil
Once I was done with implementing seccomp, it was time to focus on pledge and unveil. I had initially planned to use Rust’s own libpledge for this purpose but it appeared to be woefully unmaintained and undocumented, and it was a better idea anyway to stand on the shoulders of giants. So, I went with Justine Tunney’s standalone binary itself, which worked quite effectively in my favor. I used wget to make the code automatically fetch the pledge binary from upstream in case it was not already present on disk. Later on, I also added the binary to my project repository, accounting for the possibility of deployment in restricted environments or usage in systems without networking support.
Constructing the prompts for accepting promises as input was not too difficult since everything was well-documented in Justine’s blog.
I also incorporated unveil with support for specifying the nature of read-modify-write operations granted for every path that was unveiled to the process.
Dependency checking is an important aspect since runtime errors can arise if some application tries to make use of a nonexistent dependency. To prevent this from happening, I added an optional dependency check for both wget and lurk, both of which the code depends on. seccompiler best practices involve enabling BPF Just in Time (JIT) compiler support to minimize syscall overhead, so I added another check that ensures this.
Improving DX and UX
The primary objective of the project had been accomplished.
I had constructed a guided pathway for creating user-defined seccomp filters and designed a wrapper around Justine’s Linux port of pledge that could be used during command invocations.
Until now, however, there was only one way to interact with my code – running it with the process to be sandboxed passed as an argument and then entering choices at every prompt to end up with a restricted-service mode of operation for said process.
It would be more robust if the program could accept all choices directly prior to runtime, something like a non-interactive mode as suggested by a colleague, or even accept input from a Unix IPC socket as a kind of API layer on top of the code as suggested by another friend. Hence, I implemented support for both, creating flags that decide the behavior of the code before it executes so the user can more efficiently specify their preferences if they already know what kind of pledge sandboxing they want.
💡 Note that seccomp filtering would be disabled for this non-interactive mode since it requires the user to be guided through every stage of the filter creation process in order to construct appropriate JSON filters.
Incorporating support for communication with a Unix IPC socket was a good exercise in learning more about socket programming, and I experimented with several ways of approaching this problem before arriving at a simple solution that started with creating a temporary socket which could be used with something like OpenBSD’s netcat, available in most Linux package manager repositories, for communicating with the program.
Once the program was executed in API mode, which could be specified through a flag, the user would be able to communicate with the socket using netcat. To reduce clutter, I decided only the input prompts should be displayed on the client-side whereas everything else, including the lurk output and execution progress of the code, would be displayed on the server-side which, in this case, simply referred to the terminal that the code was executed in. Since seccomp is a Linux-only feature, filtering can only be done on Linux systems, so the filter creation process is disabled if the detected operating system is not running on the Linux kernel.
Final touches and tests
The final stages of my project involved writing integration tests, benchmarking with Criterion and using clippy to make the code more aligned with idiomatic Rust.
Writing the documentation for the code was another step in ensuring ease-of-use, so that the end user would face little difficulty in constructing a sandbox for some process. I added demonstrations for the three modes of interacting with the code and a quick overview of the list of flags that can be passed while executing the binary. Whereas API mode must be explicitly specified, the program automatically switches to non-interactive mode if some flag specific to pledge is passed during execution. Finally, I transferred ownership of the GitHub repository over to SubCom, where it is hosted today under the copyleft AGPL license to promote open-source development.
All in all, this internship was a highly enjoyable and educational experience for me. I learnt quite a bit about systems security and, building on top of already established security mechanisms, gained insight into constructing tools for application hardening. I was already enthusiastic about Rust before, but this project really cemented the memory-safe systems programming language as a powerful medium for building performant and secure software.
The borrow checker’s strictness and the helpful compiler messages have been greatly useful in writing code that does not do the unexpected.
As a Linux user who formerly used OpenBSD, it was thrilling to delve deep into sandboxing mechanisms that I have used on my own systems. Although this was, by and large, a solo project wherein I enjoyed figuring out most things on my own with the help of the Internet, I am deeply grateful to the people at SubCom for not only financing this project but also providing useful suggestions and pointers. Communicating through Notion as well as journaling my progress on the daily, I had a splendid time working with them and look forward to more opportunities of the same kind.
Demo
Executing seccomp-pledge in interactive mode on ls with seccomp filtering out the accept4 syscall, pledge allowing stdio, rpath and tty operations and unveil giving read-only access to current working directory
Find the repository here. Read my detailed internship notes here. The entire project was made using vim on Arch Linux❤️.
Useful References
This work was done by Archisman Dutta — student at Ashoka University — at Subconscious Compute during a 1 month long winter internship in December 2022 and overseen mainly by Siddharth Naithani of Subconscious Compute and NIT Hamirpur. Megha Ramanchandran of Subconscious Compute did the illustrations. If you are interested in projects like these, apply for internship or a fulltime position at our Job Board.
Scroll to Top
|
2023-03-27 12:42:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22625088691711426, "perplexity": 2317.461597560646}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948632.20/warc/CC-MAIN-20230327123514-20230327153514-00012.warc.gz"}
|
https://singaporemathguru.com/question/primary-5-problem-sums-word-problems-triangles-conquer-all-questions-on-triangles-by-watching-the-video-solutions-here-triangles-exercise-1-1556?vr=htaccess&vr2=primary-5-problem-sums-word-problems-triangles-conquer-all-questions-on-triangles-by-watching-the-video-solutions-here-triangles-exercise-1-1556
|
PARENTS, STUDENTS, TEACHERS
You can now view questions, answers, 100's of hours of video explanations and step-by-step worked solutions for FREE!
### Primary 5 Problem Sums/Word Problems - Try FREE
Score :
(Single Attempt)
#### Question
Need dedicated, 1-1 help?
PSLE A* 2020 1-1 Tuition By Mr SingaporeMathGuru Results Guaranteed!*
ZWXY is a rectangle.
The area of ZOY is 960 cm2.
If ZY is taken to be the base of triangle ZOY, then the height of triangle ZOY is 24 cm.
(a) What is the length of ZY?
(b) What is the area of the rectangle?
Notes to students:
1. If the question above has parts, (e.g. (a) and (b)), given that the answer for part (a) is 10 and the answer for part (b) is 12, give your answer as:10,12
The correct answer is : 80,1920
(a)_____cm,(b)_____ cm^2
|
2020-07-02 14:28:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2872064411640167, "perplexity": 4980.253882459914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655879532.0/warc/CC-MAIN-20200702142549-20200702172549-00073.warc.gz"}
|
http://exoplanets.co/extrasolar-planets/what-is-the-largest-exoplanet.html
|
Home → Exoplanets FAQ → What is the largest exoplanet?
# What is the Largest Exoplanet?
If you would like to be able to find out yourself what is the largest exoplanet found so far, here is how you do it. Go to the Extrasolar Planets Encylopedia catalog page and click on the button in the header of the table that says “All fields” and then click on the column called Radius. When you click on it, the data in the table should be sorted in order of numerical value in that column (i.e., the radius of the planets). An arrow just to the right of the word Radius tells you whether the numerical order is ascending or descending (the arrow will point upwards for an ascending order). If you click the column header again, the numerical ordering will flip from ascending to descending or vice versa.
You can get instant access to the book Exoplanets and Alien Solar Systems:
To find the largest exoplanet you want the radii measurements to be descending, with the largest values appearing first. However, the rows that appear first will have no entries in the radius column and these correspond to exoplanets for which the radius has not been measured. You will have to scroll down quite a bit before you see the first non-blank entry in the radius column because exoplanets that have radius measurements are not in the majority. (For example, in August 2012, just over 32% of confirmed exoplanets had a radius measurement). The first non-blank entry in the sorted radius column corresponds to the largest confirmed exoplanet. The number in the cell in the radius column is the radius in units of Jupiter's radius. You can convert to Earth radii by multiplying the number by 10.9733 (which is the radius of Jupiter in Earth radii). If you need the answer in kilometers (km), then multiply the number in the cell by 69911 (which is Jupiter's radius in km). From here you can convert to any other units by typing an appropriate statement in a Google search box. See also http://astrophysicsformulas.com for sources for solar system data.
For example, in August 2012, the largest exoplanet was CT Cha b. This planet has a radius of $2.2 \pm 0.0.6$ Jupiter radii, a given mass estimate of $17\pm6$ times that of Jupiter's mass. An orbital period is not given, but a distance of 440 times the Earth-Sun distance (more precisely, Astronomical Unit, or AU) is given for the star-planet distance in terms of half of the longest dimension of the elliptical orbit (i.e., semimajor axis). The planet is listed as being detected by direct imaging.
A word of caution: you should always try to find out what measurement uncertainties are associated with any quoted number (not just for exoplanet radii, but any measurement in general). In particular, exoplanet radii can be uncertain by substantial amounts, a factor of 2 not being uncommon. Since density is proportional to radius cubed, any error in the radius becomes cubed for the uncertainty in minimum density.
File under: What is the largest exoplanet? What exoplanet has the largest radius? How can I find the latest information on the largest exoplanet?
|
2018-11-21 03:59:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.713822066783905, "perplexity": 666.9648853063227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039747024.85/warc/CC-MAIN-20181121032129-20181121054129-00199.warc.gz"}
|
https://www.physicsforums.com/threads/integral-from-0-to-1-of-dx-root-1-x-2.785520/
|
# Integral, from 0 to 1, of dx/root(1-x^2)
## Homework Statement
[/B]
Integral, from 0 to 1, of dx/root(1-x^2)
## Homework Equations
[/B]
d/dx of arcsin = 1/root(1-x^2)
## The Attempt at a Solution
Since d/dx of arcsin = 1/root(1-x^2), we have that the integral, from 0 to 1, of dx/root(1-x^2) equals to arcsin, from 0 to 1.
arcsin(1) - arcsin(0) = arcsin(1). I know I'm missing something here. What did I do wrong?
Related Calculus and Beyond Homework Help News on Phys.org
STEMucator
Homework Helper
Nothing is wrong.
ehild
Homework Helper
Since d/dx of arcsin = 1/root(1-x^2), we have that the integral, from 0 to 1, of dx/root(1-x^2) equals to arcsin, from 0 to 1.
arcsin(1) - arcsin(0) = arcsin(1). I know I'm missing something here. What did I do wrong?
What is arcsin(1)?
leo255
arcsin(1) is pi/2. I asked someone in my class about this, and he said that I should be taking the limit, as b (or whatever other variable) approaches 1, from the left-hand side. Can you guys confirm if this is something that should be done for this problem?
Mark44
Mentor
arcsin(1) is pi/2. I asked someone in my class about this, and he said that I should be taking the limit, as b (or whatever other variable) approaches 1, from the left-hand side. Can you guys confirm if this is something that should be done for this problem?
Yes, this should be done. The integrand is undefined at x = 1, so the Fund. Thm. of Calculus doesn't apply. You can get around this by evaluating this limit:
$$\lim_{b \to 1^-}\int_0^b \frac{dx}{\sqrt{1 - x^2}}$$
leo255
|
2020-09-28 15:03:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7615346908569336, "perplexity": 765.4085882227814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401601278.97/warc/CC-MAIN-20200928135709-20200928165709-00666.warc.gz"}
|
https://bartogian.wordpress.com/2009/05/
|
## Archive for May, 2009
### Attention readers of Terry Tao’s blog
May 17, 2009
This should be relevant to everyone.
I am not subscribed to this blog because I can’t read all of its entries in google reader. I’m not sure whose fault this is, but until this is sorted, I’m just going to have to continue visiting What’s new directly.
[note this picture is a few days old, and if anyone could offer technical support to make it more viewable, that would be appreciated.]
### Keyboard Remapping for Latex
May 9, 2009
It is patently obvious that the keyboard was designed for English typing, as opposed to typing $\LaTeX$ or anything else that heavily relies on non-alphanumeric characters (like coding). Frustrated by the ergonomics and speed of this, I have finally decided to take matters into my own hands, I have used the Microsoft Keyboard Layout Creator (I use the XP virus) to obtain a partial solution (not original) to the problem.
Essentially, I looked at the keyboard and asked myself which keys are pressed more commonly in the shifted position than the unshifted position. My answer (not backed up by any actual data) is the seven keys.
\$ ^ ( ) _ { }
So I remapped those to swap their shifted and unshifted modes.
For good measure I swapped the popular \ with the unpopular and far away ;.
This I believe is still only a partial solution, for example the characters 0 – + should probably replace ` 7 8 as being unshifted, though this has not been carried through as yet for some bizarre desire to not have my keyboard behaving too differently from how it looks.
There are unfortunately some drawbacks, I’ve noticed that typing \p is awkward, and that it makes me noticeably slower now when typing on other machines, but I try to do all my texing on my laptop. This remapping also solves the asthetic problem I always had with a keyboard which was that the minus sign was considered more important than the plus sign (who cares about hyphens anyway?) and now I have them on an even footing, although as noted above, they really should both become unshifted.
|
2017-08-19 20:29:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4339517652988434, "perplexity": 1449.737974950238}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105922.73/warc/CC-MAIN-20170819201404-20170819221404-00310.warc.gz"}
|
http://mathhelpforum.com/calculus/35374-parametric-plane-equation-print.html
|
# Parametric Plane Equation
• April 21st 2008, 11:55 AM
Del
Parametric Plane Equation
Find the parametric equations for the line through the point P = (0, 1, -3) that is perpendicular to the plane https://webwork.math.lsu.edu/webwork...c0ae5b1b41.png
Use "t" as your variable, t = 0 should correspond to P, and the velocity vector of the line should be the same as the standard normal vector of the plane.
x = ?
y = ?
z = ?
If $ax+by+cz+d=0$ is a plane then $$ is its normal vector.
|
2014-08-23 01:53:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6558125019073486, "perplexity": 388.5598980475167}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500824990.54/warc/CC-MAIN-20140820021344-00434-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://indico.math.cnrs.fr/event/5646/?print=1
|
Séminaire de Géométrie
# Exemples de variétés d'Einstein compactes de dimension 4 à courbure négative
## by Mr Bruno Premoselli (Université Libre de Bruxelles)
Europe/Paris
1180 (Bât E2)
### 1180
#### Bât E2
Site Grandmont
Description
We construct new examples of closed, negatively curved, not locally homogeneous Einstein four-manifolds. Topologically, the manifolds we consider are of two types: quotients by the action of a dihedral group of symmetric closed hyperbolic four-manifolds on the one hand, and ramified covers over hyperbolic manifolds with symmetries on the other hand. We produce an Einstein metric on such manifolds via a glueing procedure. We first find an approximate Einstein metric that we obtain as the interpolation, at large distances, between a Riemannian Kottler metric and the hyperbolic metric. We then deform it, in the Bianchi gauge, into a genuine solution of Einstein’s equations. The constructions described in this talk are a joint work with J. Fine (ULB).
|
2022-01-16 23:04:23
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9313096404075623, "perplexity": 1502.4901796607041}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300244.42/warc/CC-MAIN-20220116210734-20220117000734-00522.warc.gz"}
|
https://www.freemathhelp.com/forum/threads/basic-fraction-question.45044/
|
# Basic Fraction Question
#### vonsmiley
##### New member
Hi everyone. If you are not tired of me I need a simple question answered.
:?: Fractions:
If you have a mixed number and the whole-number part is negative, is the fraction negative? Looking at this exercise:
. . .-2 <sup>3</sup>/<sub>4</sub> + -5 <sup>1</sup>/<sub>5</sub>
. . .-2 <sup>15</sup>/<sub>20</sub> + (-5 <sup>4</sup>/<sub>20</sub>)
. . .-7 <sup>19</sup>/<sub>20</sub> = ...
Here's where the question (above) comes in. Is it:
. . .-7 + 1 <sup>1</sup>/<sub>20</sub>, which equals 6 <sup>1</sup>/<sub>20</sub>
Or is it:
. . .-8 <sup>1</sup>/<sub>20</sub>
#### pka
##### Elite Member
If you owe James $2.75, -2(3/4), and you owe Jane$5.20, -5(1/5), how much do you owe in all?
#### stapel
##### Super Moderator
Staff member
The negative is on the entire number. Note that juxtoposition, in the case of mixed numbers, actually indicates addition. That is:
. . . . .$$\displaystyle \L 4\,+\,\frac{1}{3}\,=\,4\frac{1}{3}$$
. . . . .$$\displaystyle \L -4\,-\,\frac{1}{3}\,=\,-4\,+\,-\frac{1}{3}\,=\,-4\frac{1}{3}$$
Subtracting:
. . . . .$$\displaystyle \L 4\frac{2}{3}\,-\,1\frac{1}{3}\,= \,\left(4\,+\,\frac{2}{3}\right)\,+\,\left(-1\,-\,\frac{1}{3}\right)$$
. . . . .$$\displaystyle \L =\,4\,-\,1\,+\,\frac{2}{3}\,-\,\frac{1}{3}\,=\,3\,+\,\frac{1}{3}\,=\,3\frac{1}{3}$$
Eliz.
#### vonsmiley
##### New member
but what if you have to use improper fractions to solve the problem I keep geting -7 11/20 which is wrong.................. what have I done to be cursed with math as a subject.
#### Sendell
##### New member
vonsmiley said:
Hi everyone. If you are not tired of me I need a simple question answered.
:?: Fractions:
If you have a mixed number and the whole-number part is negative, is the fraction negative? Looking at this exercise:
. . .-2 <sup>3</sup>/<sub>4</sub> + -5 <sup>1</sup>/<sub>5</sub>
. . .-2 <sup>15</sup>/<sub>20</sub> + (-5 <sup>4</sup>/<sub>20</sub>)
. . .-7 <sup>19</sup>/<sub>20</sub> = ...
Up to here you've done everything correctly. -7 19/20 is your final answer. Think of it as -(7 + 19/20), or -7.95
Here's where the question (above) comes in. Is it:
. . .-7 + 1 <sup>1</sup>/<sub>20</sub>, which equals 6 <sup>1</sup>/<sub>20</sub>
Or is it:
. . .-8 <sup>1</sup>/<sub>20</sub>
|
2019-03-23 22:56:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6186458468437195, "perplexity": 9394.949917361619}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203093.63/warc/CC-MAIN-20190323221914-20190324003914-00240.warc.gz"}
|
http://techiemathteacher.com/2013/11/25/solution-to-previous-problem-of-the-week/
|
# Solution to previous Problem of the Week
What is the remainder when f(x)=999x999+998x998+997x997+. . .+2x2+x is divided by x-1?
This is a finite polynomial of degree 999. For us to solve this problem we need to know that f(a) is the remainder when f(x) is divided by x-a. That is the famous remainder theorem.
The remainder when f(x)=999x999+998x998+997x997+. . .+2x2+x is divided by x-1 is also f(1).
f(1)= 999(1)999+998(1)998+997(1)997+. . .+2(1)2+(1)
f(1)= 999+998+997+. . . +2+1
Using the formula for the sum of arithmetic series we have,
$f(1)=\displaystyle\frac{n(a_1+a_n)}{2}$ where a1 and an are the first and last term respectively, n is the number of terms.
f(1)=999(1+999)/2
f(1)=499500
Here is the link to that page
### Dan
Blogger and a Math enthusiast. Has no interest in Mathematics until MMC came. Aside from doing math, he also loves to travel and watch movies.
### 4 Responses
1. Lester Salman M. Dahman says:
prove the following.in THEORY OF EQUATION
1. If R; +,∙ ›is a ring , then for any a Є R,0 ∙a = a ∙0=0, where 0 is the additive identity.
2. Let R be an abelian ring, then Øc : R[x]→R such that Øc(p(x)) = p(c) is a ring homomorphism
Can you help me to answer this question.Thank you
2. I’m also writing to let you know of the cool experience my daughter had checking yuor web blog. She learned several issues, including what it’s like to possess an amazing giving mindset to get most people without difficulty fully understand chosen tortuous issues. You really exceeded people’s expected results. Thank you for churning out such great, safe, educational and also unique tips on the topic.
3. Thank you of this blog. That’s all I’m able to say. You undoubtedly have made this web web site into an item thats attention opening in addition to critical. You certainly know a fantastic deal of about the niche, youve covered a multitude of bases. Fantastic stuff from this the main internet. All more than again, thank you for the blog.
4. I am now not positive the place you’re getting your information, however good topic. I needs to spend some time learning more or understanding more. Thanks for great information I used to be in search of this information for my mission.
|
2018-02-19 13:56:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4987938404083252, "perplexity": 2384.4265085443008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812665.41/warc/CC-MAIN-20180219131951-20180219151951-00328.warc.gz"}
|
https://en.zdam.xyz/problem/12703/
|
#### Problem 67E
67. Boyle’s Law states that when a sample of gas is compressed at a constant temperature, the pressure $P$ of the gas is inversely proportional to the volume $V$ of the gas.
(a) Suppose that the pressure of a sample of air that occupies $0.106 \mathrm{~m}^{3}$ at $25^{\circ} \mathrm{C}$ is $50 \mathrm{kPa}$. Write $V$ as a function of $P$.
(b) Calculate $d V / d P$ when $P=50 \mathrm{kPa}$. What is the meaning of the derivative? What are its units?
|
2022-08-10 01:46:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.920375645160675, "perplexity": 112.53921034892576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571097.39/warc/CC-MAIN-20220810010059-20220810040059-00095.warc.gz"}
|
https://zbmath.org/?q=an:0403.94006
|
## On generalized measures of relative information and inaccuracy.(English)Zbl 0403.94006
### MSC:
94A17 Measures of information, entropy
Full Text:
### References:
[1] Campbell L. L. (1970): Characterization of Entropy of Probability Distribution on the Real Line. - Information and Control, 21, 329-338. · Zbl 0245.94012 [2] Gallager R. G. (1968): Information Theory and Reliable Communication. - J. Wiley and Sons, Inc., N. Y. · Zbl 0198.52201 [3] Hobson A. (1969): A New Theorem of Information Theory. - J. Stat. Phys., 1, 383-391. [4] Kerridge D. F. (1961): Inaccuracy and inference. - J. Royal Stat. Soc., Ser. B 23 (1), 184 to 194. · Zbl 0112.10302 [5] Kullback S. (1968): Information Theory and Statistics. - Dover Publi., Inc., N. Y. · Zbl 0088.10406 [6] Rathie P. N., Pl. Kannappan (1972): A Direct-Divergence Function of Type $$\beta$$. - Infor- mation and Control. 22, 38 - 45. · Zbl 0231.94015 [7] Sharma B. D., R. Autar (1973): On Characterization of a Generalized Inaccuracy Measure in Information Theory. - J. Appl. Prob. 10 464-468. · Zbl 0261.94024 [8] Sharma B. D., R. Autar (1974): Relative-Information Functions and Their Type ($$\alpha$$, $$\beta$$) Generalizations. - Metrika, 21, 41 - 50. · Zbl 0277.94012 [9] Sharma B. D., H. C. Gupta (1976): Sub-additive Measures of Relative Information and Inaccuracy. - Metrika, Vol. 23, 155-165. · Zbl 0332.94007 [10] Sharma B. D., I. J. Taneja (1974): On Axiomatic Characterization of Information Theoretic Measures. - J. Stat. Phys., 10, 337-346. [11] Sharma B. D., I. J. Taneja (1975): Entropy of Type ($$\alpha$$, $$\beta$$) and Other Generalized Measures in Information Theory. - Metrika, 22 (4), 205 - 215. · Zbl 0328.94012 [12] Taneja I. J. (1974): A Joint Characterization of Directed Divergence, Inaccuracy, and Their Generalizations. - J. Statist. Phys., 11, 169-174. [13] Taneja I. J. (1976): On Measures of Information and Inaccuracy. - J. Statist. Phys., 14, 263 - 270.
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
2022-05-26 23:08:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46375468373298645, "perplexity": 4852.121111309775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662627464.60/warc/CC-MAIN-20220526224902-20220527014902-00367.warc.gz"}
|
https://people.maths.bris.ac.uk/~matyd/GroupNames/320i1/C2xC4sC4sD5.html
|
Copied to
clipboard
## G = C2×C4⋊C4⋊D5order 320 = 26·5
### Direct product of C2 and C4⋊C4⋊D5
Series: Derived Chief Lower central Upper central
Derived series C1 — C2×C10 — C2×C4⋊C4⋊D5
Chief series C1 — C5 — C10 — C2×C10 — C22×D5 — C23×D5 — C2×D10⋊C4 — C2×C4⋊C4⋊D5
Lower central C5 — C2×C10 — C2×C4⋊C4⋊D5
Upper central C1 — C23 — C2×C4⋊C4
Generators and relations for C2×C4⋊C4⋊D5
G = < a,b,c,d,e | a2=b4=c4=d5=e2=1, ab=ba, ac=ca, ad=da, ae=ea, cbc-1=b-1, bd=db, ebe=bc2, cd=dc, ece=b2c, ede=d-1 >
Subgroups: 846 in 246 conjugacy classes, 111 normal (31 characteristic)
C1, C2 [×3], C2 [×4], C2 [×2], C4 [×12], C22, C22 [×6], C22 [×10], C5, C2×C4 [×6], C2×C4 [×18], C23, C23 [×8], D5 [×2], C10 [×3], C10 [×4], C42 [×4], C22⋊C4 [×12], C4⋊C4 [×4], C4⋊C4 [×8], C22×C4 [×3], C22×C4 [×3], C24, Dic5 [×6], C20 [×6], D10 [×10], C2×C10, C2×C10 [×6], C2×C42, C2×C22⋊C4 [×3], C2×C4⋊C4, C2×C4⋊C4 [×2], C422C2 [×8], C2×Dic5 [×6], C2×Dic5 [×6], C2×C20 [×6], C2×C20 [×6], C22×D5 [×2], C22×D5 [×6], C22×C10, C2×C422C2, C4×Dic5 [×4], C10.D4 [×4], C4⋊Dic5 [×4], D10⋊C4 [×12], C5×C4⋊C4 [×4], C22×Dic5 [×3], C22×C20 [×3], C23×D5, C4⋊C4⋊D5 [×8], C2×C4×Dic5, C2×C10.D4, C2×C4⋊Dic5, C2×D10⋊C4 [×3], C10×C4⋊C4, C2×C4⋊C4⋊D5
Quotients: C1, C2 [×15], C22 [×35], C23 [×15], D5, C4○D4 [×6], C24, D10 [×7], C422C2 [×4], C2×C4○D4 [×3], C22×D5 [×7], C2×C422C2, C4○D20 [×2], D42D5 [×2], Q82D5 [×2], C23×D5, C4⋊C4⋊D5 [×4], C2×C4○D20, C2×D42D5, C2×Q82D5, C2×C4⋊C4⋊D5
Smallest permutation representation of C2×C4⋊C4⋊D5
On 160 points
Generators in S160
(1 86)(2 87)(3 88)(4 89)(5 90)(6 81)(7 82)(8 83)(9 84)(10 85)(11 96)(12 97)(13 98)(14 99)(15 100)(16 91)(17 92)(18 93)(19 94)(20 95)(21 106)(22 107)(23 108)(24 109)(25 110)(26 101)(27 102)(28 103)(29 104)(30 105)(31 116)(32 117)(33 118)(34 119)(35 120)(36 111)(37 112)(38 113)(39 114)(40 115)(41 126)(42 127)(43 128)(44 129)(45 130)(46 121)(47 122)(48 123)(49 124)(50 125)(51 136)(52 137)(53 138)(54 139)(55 140)(56 131)(57 132)(58 133)(59 134)(60 135)(61 146)(62 147)(63 148)(64 149)(65 150)(66 141)(67 142)(68 143)(69 144)(70 145)(71 156)(72 157)(73 158)(74 159)(75 160)(76 151)(77 152)(78 153)(79 154)(80 155)
(1 116 6 111)(2 117 7 112)(3 118 8 113)(4 119 9 114)(5 120 10 115)(11 106 16 101)(12 107 17 102)(13 108 18 103)(14 109 19 104)(15 110 20 105)(21 91 26 96)(22 92 27 97)(23 93 28 98)(24 94 29 99)(25 95 30 100)(31 81 36 86)(32 82 37 87)(33 83 38 88)(34 84 39 89)(35 85 40 90)(41 151 46 156)(42 152 47 157)(43 153 48 158)(44 154 49 159)(45 155 50 160)(51 141 56 146)(52 142 57 147)(53 143 58 148)(54 144 59 149)(55 145 60 150)(61 136 66 131)(62 137 67 132)(63 138 68 133)(64 139 69 134)(65 140 70 135)(71 126 76 121)(72 127 77 122)(73 128 78 123)(74 129 79 124)(75 130 80 125)
(1 51 11 41)(2 52 12 42)(3 53 13 43)(4 54 14 44)(5 55 15 45)(6 56 16 46)(7 57 17 47)(8 58 18 48)(9 59 19 49)(10 60 20 50)(21 71 31 61)(22 72 32 62)(23 73 33 63)(24 74 34 64)(25 75 35 65)(26 76 36 66)(27 77 37 67)(28 78 38 68)(29 79 39 69)(30 80 40 70)(81 131 91 121)(82 132 92 122)(83 133 93 123)(84 134 94 124)(85 135 95 125)(86 136 96 126)(87 137 97 127)(88 138 98 128)(89 139 99 129)(90 140 100 130)(101 151 111 141)(102 152 112 142)(103 153 113 143)(104 154 114 144)(105 155 115 145)(106 156 116 146)(107 157 117 147)(108 158 118 148)(109 159 119 149)(110 160 120 150)
(1 2 3 4 5)(6 7 8 9 10)(11 12 13 14 15)(16 17 18 19 20)(21 22 23 24 25)(26 27 28 29 30)(31 32 33 34 35)(36 37 38 39 40)(41 42 43 44 45)(46 47 48 49 50)(51 52 53 54 55)(56 57 58 59 60)(61 62 63 64 65)(66 67 68 69 70)(71 72 73 74 75)(76 77 78 79 80)(81 82 83 84 85)(86 87 88 89 90)(91 92 93 94 95)(96 97 98 99 100)(101 102 103 104 105)(106 107 108 109 110)(111 112 113 114 115)(116 117 118 119 120)(121 122 123 124 125)(126 127 128 129 130)(131 132 133 134 135)(136 137 138 139 140)(141 142 143 144 145)(146 147 148 149 150)(151 152 153 154 155)(156 157 158 159 160)
(1 5)(2 4)(6 10)(7 9)(11 15)(12 14)(16 20)(17 19)(21 35)(22 34)(23 33)(24 32)(25 31)(26 40)(27 39)(28 38)(29 37)(30 36)(41 50)(42 49)(43 48)(44 47)(45 46)(51 60)(52 59)(53 58)(54 57)(55 56)(61 80)(62 79)(63 78)(64 77)(65 76)(66 75)(67 74)(68 73)(69 72)(70 71)(81 85)(82 84)(86 90)(87 89)(91 95)(92 94)(96 100)(97 99)(101 115)(102 114)(103 113)(104 112)(105 111)(106 120)(107 119)(108 118)(109 117)(110 116)(121 130)(122 129)(123 128)(124 127)(125 126)(131 140)(132 139)(133 138)(134 137)(135 136)(141 160)(142 159)(143 158)(144 157)(145 156)(146 155)(147 154)(148 153)(149 152)(150 151)
G:=sub<Sym(160)| (1,86)(2,87)(3,88)(4,89)(5,90)(6,81)(7,82)(8,83)(9,84)(10,85)(11,96)(12,97)(13,98)(14,99)(15,100)(16,91)(17,92)(18,93)(19,94)(20,95)(21,106)(22,107)(23,108)(24,109)(25,110)(26,101)(27,102)(28,103)(29,104)(30,105)(31,116)(32,117)(33,118)(34,119)(35,120)(36,111)(37,112)(38,113)(39,114)(40,115)(41,126)(42,127)(43,128)(44,129)(45,130)(46,121)(47,122)(48,123)(49,124)(50,125)(51,136)(52,137)(53,138)(54,139)(55,140)(56,131)(57,132)(58,133)(59,134)(60,135)(61,146)(62,147)(63,148)(64,149)(65,150)(66,141)(67,142)(68,143)(69,144)(70,145)(71,156)(72,157)(73,158)(74,159)(75,160)(76,151)(77,152)(78,153)(79,154)(80,155), (1,116,6,111)(2,117,7,112)(3,118,8,113)(4,119,9,114)(5,120,10,115)(11,106,16,101)(12,107,17,102)(13,108,18,103)(14,109,19,104)(15,110,20,105)(21,91,26,96)(22,92,27,97)(23,93,28,98)(24,94,29,99)(25,95,30,100)(31,81,36,86)(32,82,37,87)(33,83,38,88)(34,84,39,89)(35,85,40,90)(41,151,46,156)(42,152,47,157)(43,153,48,158)(44,154,49,159)(45,155,50,160)(51,141,56,146)(52,142,57,147)(53,143,58,148)(54,144,59,149)(55,145,60,150)(61,136,66,131)(62,137,67,132)(63,138,68,133)(64,139,69,134)(65,140,70,135)(71,126,76,121)(72,127,77,122)(73,128,78,123)(74,129,79,124)(75,130,80,125), (1,51,11,41)(2,52,12,42)(3,53,13,43)(4,54,14,44)(5,55,15,45)(6,56,16,46)(7,57,17,47)(8,58,18,48)(9,59,19,49)(10,60,20,50)(21,71,31,61)(22,72,32,62)(23,73,33,63)(24,74,34,64)(25,75,35,65)(26,76,36,66)(27,77,37,67)(28,78,38,68)(29,79,39,69)(30,80,40,70)(81,131,91,121)(82,132,92,122)(83,133,93,123)(84,134,94,124)(85,135,95,125)(86,136,96,126)(87,137,97,127)(88,138,98,128)(89,139,99,129)(90,140,100,130)(101,151,111,141)(102,152,112,142)(103,153,113,143)(104,154,114,144)(105,155,115,145)(106,156,116,146)(107,157,117,147)(108,158,118,148)(109,159,119,149)(110,160,120,150), (1,2,3,4,5)(6,7,8,9,10)(11,12,13,14,15)(16,17,18,19,20)(21,22,23,24,25)(26,27,28,29,30)(31,32,33,34,35)(36,37,38,39,40)(41,42,43,44,45)(46,47,48,49,50)(51,52,53,54,55)(56,57,58,59,60)(61,62,63,64,65)(66,67,68,69,70)(71,72,73,74,75)(76,77,78,79,80)(81,82,83,84,85)(86,87,88,89,90)(91,92,93,94,95)(96,97,98,99,100)(101,102,103,104,105)(106,107,108,109,110)(111,112,113,114,115)(116,117,118,119,120)(121,122,123,124,125)(126,127,128,129,130)(131,132,133,134,135)(136,137,138,139,140)(141,142,143,144,145)(146,147,148,149,150)(151,152,153,154,155)(156,157,158,159,160), (1,5)(2,4)(6,10)(7,9)(11,15)(12,14)(16,20)(17,19)(21,35)(22,34)(23,33)(24,32)(25,31)(26,40)(27,39)(28,38)(29,37)(30,36)(41,50)(42,49)(43,48)(44,47)(45,46)(51,60)(52,59)(53,58)(54,57)(55,56)(61,80)(62,79)(63,78)(64,77)(65,76)(66,75)(67,74)(68,73)(69,72)(70,71)(81,85)(82,84)(86,90)(87,89)(91,95)(92,94)(96,100)(97,99)(101,115)(102,114)(103,113)(104,112)(105,111)(106,120)(107,119)(108,118)(109,117)(110,116)(121,130)(122,129)(123,128)(124,127)(125,126)(131,140)(132,139)(133,138)(134,137)(135,136)(141,160)(142,159)(143,158)(144,157)(145,156)(146,155)(147,154)(148,153)(149,152)(150,151)>;
G:=Group( (1,86)(2,87)(3,88)(4,89)(5,90)(6,81)(7,82)(8,83)(9,84)(10,85)(11,96)(12,97)(13,98)(14,99)(15,100)(16,91)(17,92)(18,93)(19,94)(20,95)(21,106)(22,107)(23,108)(24,109)(25,110)(26,101)(27,102)(28,103)(29,104)(30,105)(31,116)(32,117)(33,118)(34,119)(35,120)(36,111)(37,112)(38,113)(39,114)(40,115)(41,126)(42,127)(43,128)(44,129)(45,130)(46,121)(47,122)(48,123)(49,124)(50,125)(51,136)(52,137)(53,138)(54,139)(55,140)(56,131)(57,132)(58,133)(59,134)(60,135)(61,146)(62,147)(63,148)(64,149)(65,150)(66,141)(67,142)(68,143)(69,144)(70,145)(71,156)(72,157)(73,158)(74,159)(75,160)(76,151)(77,152)(78,153)(79,154)(80,155), (1,116,6,111)(2,117,7,112)(3,118,8,113)(4,119,9,114)(5,120,10,115)(11,106,16,101)(12,107,17,102)(13,108,18,103)(14,109,19,104)(15,110,20,105)(21,91,26,96)(22,92,27,97)(23,93,28,98)(24,94,29,99)(25,95,30,100)(31,81,36,86)(32,82,37,87)(33,83,38,88)(34,84,39,89)(35,85,40,90)(41,151,46,156)(42,152,47,157)(43,153,48,158)(44,154,49,159)(45,155,50,160)(51,141,56,146)(52,142,57,147)(53,143,58,148)(54,144,59,149)(55,145,60,150)(61,136,66,131)(62,137,67,132)(63,138,68,133)(64,139,69,134)(65,140,70,135)(71,126,76,121)(72,127,77,122)(73,128,78,123)(74,129,79,124)(75,130,80,125), (1,51,11,41)(2,52,12,42)(3,53,13,43)(4,54,14,44)(5,55,15,45)(6,56,16,46)(7,57,17,47)(8,58,18,48)(9,59,19,49)(10,60,20,50)(21,71,31,61)(22,72,32,62)(23,73,33,63)(24,74,34,64)(25,75,35,65)(26,76,36,66)(27,77,37,67)(28,78,38,68)(29,79,39,69)(30,80,40,70)(81,131,91,121)(82,132,92,122)(83,133,93,123)(84,134,94,124)(85,135,95,125)(86,136,96,126)(87,137,97,127)(88,138,98,128)(89,139,99,129)(90,140,100,130)(101,151,111,141)(102,152,112,142)(103,153,113,143)(104,154,114,144)(105,155,115,145)(106,156,116,146)(107,157,117,147)(108,158,118,148)(109,159,119,149)(110,160,120,150), (1,2,3,4,5)(6,7,8,9,10)(11,12,13,14,15)(16,17,18,19,20)(21,22,23,24,25)(26,27,28,29,30)(31,32,33,34,35)(36,37,38,39,40)(41,42,43,44,45)(46,47,48,49,50)(51,52,53,54,55)(56,57,58,59,60)(61,62,63,64,65)(66,67,68,69,70)(71,72,73,74,75)(76,77,78,79,80)(81,82,83,84,85)(86,87,88,89,90)(91,92,93,94,95)(96,97,98,99,100)(101,102,103,104,105)(106,107,108,109,110)(111,112,113,114,115)(116,117,118,119,120)(121,122,123,124,125)(126,127,128,129,130)(131,132,133,134,135)(136,137,138,139,140)(141,142,143,144,145)(146,147,148,149,150)(151,152,153,154,155)(156,157,158,159,160), (1,5)(2,4)(6,10)(7,9)(11,15)(12,14)(16,20)(17,19)(21,35)(22,34)(23,33)(24,32)(25,31)(26,40)(27,39)(28,38)(29,37)(30,36)(41,50)(42,49)(43,48)(44,47)(45,46)(51,60)(52,59)(53,58)(54,57)(55,56)(61,80)(62,79)(63,78)(64,77)(65,76)(66,75)(67,74)(68,73)(69,72)(70,71)(81,85)(82,84)(86,90)(87,89)(91,95)(92,94)(96,100)(97,99)(101,115)(102,114)(103,113)(104,112)(105,111)(106,120)(107,119)(108,118)(109,117)(110,116)(121,130)(122,129)(123,128)(124,127)(125,126)(131,140)(132,139)(133,138)(134,137)(135,136)(141,160)(142,159)(143,158)(144,157)(145,156)(146,155)(147,154)(148,153)(149,152)(150,151) );
G=PermutationGroup([(1,86),(2,87),(3,88),(4,89),(5,90),(6,81),(7,82),(8,83),(9,84),(10,85),(11,96),(12,97),(13,98),(14,99),(15,100),(16,91),(17,92),(18,93),(19,94),(20,95),(21,106),(22,107),(23,108),(24,109),(25,110),(26,101),(27,102),(28,103),(29,104),(30,105),(31,116),(32,117),(33,118),(34,119),(35,120),(36,111),(37,112),(38,113),(39,114),(40,115),(41,126),(42,127),(43,128),(44,129),(45,130),(46,121),(47,122),(48,123),(49,124),(50,125),(51,136),(52,137),(53,138),(54,139),(55,140),(56,131),(57,132),(58,133),(59,134),(60,135),(61,146),(62,147),(63,148),(64,149),(65,150),(66,141),(67,142),(68,143),(69,144),(70,145),(71,156),(72,157),(73,158),(74,159),(75,160),(76,151),(77,152),(78,153),(79,154),(80,155)], [(1,116,6,111),(2,117,7,112),(3,118,8,113),(4,119,9,114),(5,120,10,115),(11,106,16,101),(12,107,17,102),(13,108,18,103),(14,109,19,104),(15,110,20,105),(21,91,26,96),(22,92,27,97),(23,93,28,98),(24,94,29,99),(25,95,30,100),(31,81,36,86),(32,82,37,87),(33,83,38,88),(34,84,39,89),(35,85,40,90),(41,151,46,156),(42,152,47,157),(43,153,48,158),(44,154,49,159),(45,155,50,160),(51,141,56,146),(52,142,57,147),(53,143,58,148),(54,144,59,149),(55,145,60,150),(61,136,66,131),(62,137,67,132),(63,138,68,133),(64,139,69,134),(65,140,70,135),(71,126,76,121),(72,127,77,122),(73,128,78,123),(74,129,79,124),(75,130,80,125)], [(1,51,11,41),(2,52,12,42),(3,53,13,43),(4,54,14,44),(5,55,15,45),(6,56,16,46),(7,57,17,47),(8,58,18,48),(9,59,19,49),(10,60,20,50),(21,71,31,61),(22,72,32,62),(23,73,33,63),(24,74,34,64),(25,75,35,65),(26,76,36,66),(27,77,37,67),(28,78,38,68),(29,79,39,69),(30,80,40,70),(81,131,91,121),(82,132,92,122),(83,133,93,123),(84,134,94,124),(85,135,95,125),(86,136,96,126),(87,137,97,127),(88,138,98,128),(89,139,99,129),(90,140,100,130),(101,151,111,141),(102,152,112,142),(103,153,113,143),(104,154,114,144),(105,155,115,145),(106,156,116,146),(107,157,117,147),(108,158,118,148),(109,159,119,149),(110,160,120,150)], [(1,2,3,4,5),(6,7,8,9,10),(11,12,13,14,15),(16,17,18,19,20),(21,22,23,24,25),(26,27,28,29,30),(31,32,33,34,35),(36,37,38,39,40),(41,42,43,44,45),(46,47,48,49,50),(51,52,53,54,55),(56,57,58,59,60),(61,62,63,64,65),(66,67,68,69,70),(71,72,73,74,75),(76,77,78,79,80),(81,82,83,84,85),(86,87,88,89,90),(91,92,93,94,95),(96,97,98,99,100),(101,102,103,104,105),(106,107,108,109,110),(111,112,113,114,115),(116,117,118,119,120),(121,122,123,124,125),(126,127,128,129,130),(131,132,133,134,135),(136,137,138,139,140),(141,142,143,144,145),(146,147,148,149,150),(151,152,153,154,155),(156,157,158,159,160)], [(1,5),(2,4),(6,10),(7,9),(11,15),(12,14),(16,20),(17,19),(21,35),(22,34),(23,33),(24,32),(25,31),(26,40),(27,39),(28,38),(29,37),(30,36),(41,50),(42,49),(43,48),(44,47),(45,46),(51,60),(52,59),(53,58),(54,57),(55,56),(61,80),(62,79),(63,78),(64,77),(65,76),(66,75),(67,74),(68,73),(69,72),(70,71),(81,85),(82,84),(86,90),(87,89),(91,95),(92,94),(96,100),(97,99),(101,115),(102,114),(103,113),(104,112),(105,111),(106,120),(107,119),(108,118),(109,117),(110,116),(121,130),(122,129),(123,128),(124,127),(125,126),(131,140),(132,139),(133,138),(134,137),(135,136),(141,160),(142,159),(143,158),(144,157),(145,156),(146,155),(147,154),(148,153),(149,152),(150,151)])
68 conjugacy classes
class 1 2A ··· 2G 2H 2I 4A 4B 4C 4D 4E 4F 4G 4H 4I ··· 4P 4Q 4R 5A 5B 10A ··· 10N 20A ··· 20X order 1 2 ··· 2 2 2 4 4 4 4 4 4 4 4 4 ··· 4 4 4 5 5 10 ··· 10 20 ··· 20 size 1 1 ··· 1 20 20 2 2 2 2 4 4 4 4 10 ··· 10 20 20 2 2 2 ··· 2 4 ··· 4
68 irreducible representations
dim 1 1 1 1 1 1 1 2 2 2 2 2 4 4 type + + + + + + + + + + - + image C1 C2 C2 C2 C2 C2 C2 D5 C4○D4 D10 D10 C4○D20 D4⋊2D5 Q8⋊2D5 kernel C2×C4⋊C4⋊D5 C4⋊C4⋊D5 C2×C4×Dic5 C2×C10.D4 C2×C4⋊Dic5 C2×D10⋊C4 C10×C4⋊C4 C2×C4⋊C4 C2×C10 C4⋊C4 C22×C4 C22 C22 C22 # reps 1 8 1 1 1 3 1 2 12 8 6 16 4 4
Matrix representation of C2×C4⋊C4⋊D5 in GL6(𝔽41)
1 0 0 0 0 0 0 1 0 0 0 0 0 0 40 0 0 0 0 0 0 40 0 0 0 0 0 0 40 0 0 0 0 0 0 40
,
1 1 0 0 0 0 0 40 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 40 0 0 0 0 1 0
,
32 0 0 0 0 0 0 32 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 32 0 0 0 0 32 0
,
1 0 0 0 0 0 0 1 0 0 0 0 0 0 34 40 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1
,
1 0 0 0 0 0 39 40 0 0 0 0 0 0 34 40 0 0 0 0 7 7 0 0 0 0 0 0 1 0 0 0 0 0 0 40
G:=sub<GL(6,GF(41))| [1,0,0,0,0,0,0,1,0,0,0,0,0,0,40,0,0,0,0,0,0,40,0,0,0,0,0,0,40,0,0,0,0,0,0,40],[1,0,0,0,0,0,1,40,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,40,0],[32,0,0,0,0,0,0,32,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,32,0,0,0,0,32,0],[1,0,0,0,0,0,0,1,0,0,0,0,0,0,34,1,0,0,0,0,40,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1],[1,39,0,0,0,0,0,40,0,0,0,0,0,0,34,7,0,0,0,0,40,7,0,0,0,0,0,0,1,0,0,0,0,0,0,40] >;
C2×C4⋊C4⋊D5 in GAP, Magma, Sage, TeX
C_2\times C_4\rtimes C_4\rtimes D_5
% in TeX
G:=Group("C2xC4:C4:D5");
// GroupNames label
G:=SmallGroup(320,1184);
// by ID
G=gap.SmallGroup(320,1184);
# by ID
G:=PCGroup([7,-2,-2,-2,-2,-2,-2,-5,758,100,675,297,136,12550]);
// Polycyclic
G:=Group<a,b,c,d,e|a^2=b^4=c^4=d^5=e^2=1,a*b=b*a,a*c=c*a,a*d=d*a,a*e=e*a,c*b*c^-1=b^-1,b*d=d*b,e*b*e=b*c^2,c*d=d*c,e*c*e=b^2*c,e*d*e=d^-1>;
// generators/relations
×
𝔽
|
2020-12-02 14:04:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999786615371704, "perplexity": 8332.082344221057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141708017.73/warc/CC-MAIN-20201202113815-20201202143815-00088.warc.gz"}
|
https://math.stackexchange.com/questions/1809192/problem-with-denesting-radical-relations
|
# Problem with Denesting Radical Relations
I was messing around with nested radicals, and I came up with $\sqrt[3]{6\sqrt[3]{2}-6}=\frac {2+\sqrt[3]{2}-\sqrt[3]{4}}{\sqrt[3]{3}}$.
The $\sqrt[3]{2}$ and $\sqrt[3]{4}$ seemed familiar to another nested radical exmaple: $\sqrt[3]{\sqrt[3]{2}-1}=\frac {1-\sqrt[3]{2}+\sqrt[3]{4}}{\sqrt[3]{9}}$.
So I decided to give it a try. I divided both sides of $\sqrt[3]{6\sqrt[3]{2}-6}$ by $\sqrt[3]{6}$ and got that $\sqrt[3]{\sqrt[3]{2}-1}=\frac {2+\sqrt[3]{2}-\sqrt[3]{4}}{\sqrt[3]{9}}$
But this isn't what Ramanujan had... Something went wrong. Any ideas?
Looks like you divided the right side by $\sqrt[3]3$. The remaining $\sqrt[3]2$ should come from the numerator.
$$2+\sqrt[3]2-\sqrt[3]4=\sqrt[3]2(\sqrt[3]4+1-\sqrt[3]2)$$
|
2019-11-18 23:45:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9587609767913818, "perplexity": 336.5312335620282}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669868.3/warc/CC-MAIN-20191118232526-20191119020526-00485.warc.gz"}
|
https://forum.snap.berkeley.edu/t/measure-distances-as-fractions-of-screen-size-a-single-position-block-proposal/982
|
# Measure Distances (as Fractions of Screen Size) & a 'Single POSITION Block' Proposal
Thanks, Brian, primarily it was a fun thing (and simulation is not mathematically correct, only approximate) to do.
I agree with Jens (and you) that, at least for teaching Snap!, the way to go is - turtle geometry only.
Further down the (turtle/relative metrics) path, I want to ditch absolute metrics absolutely. (I know that the idea is too revolutionary for most adults).
What do I mean?
Take for example, MOVE 10 (steps). I want to get rid of idea of steps altogether and replace it by idea of relative size, e.g. for example 1/30th of Stage's longer side.
Why?
Because it is similar to the situation of cutting one big pizza so each of our 30 friends gets one little piece and nobody wants to really measure it in inches (or centimeters) or steps in case of Snap!, not to mention Scratch (or even ScratchJr) - so I ask you seriously: what can be more natural that using only relative metrics?
If anything, it should be based on the shorter side, so you can be sure that something smaller than 1 by 1 will fit.
But really, it should only be at the last moment that there is any fixed scale. A picture fits in a 1 by 1 square, and you don't worry about how big that is, and it might be different depending on where you render the picture -- the width of a roll of butcher paper if the picture is rendered by a floor turtle.
This is how the picture project in SICP 2.2.4 works.
But do you want this only for mathematical elegance, or are you arguing that there's a pedagogic virtue to it?
Right, this would prevent a simple MOVE 1 (if directed alongside the shorter side's direction) from going over the edge.
Yes, just in time of actually running a script. I love it that something like this has been proposed by a serious people who wrote the Structure and Interpretation of Computer Programming 2.2.4 (not that I was aware of it) and not just some silly user, like me, who insist on seeing things through the eyes of a five years old kid.
It's neither pedagogical per se (although my both parents were teachers and I did read pedagogical journals that my Dad was subscribed to), nor mathematical per se, but just a 'plain' 'childish' playfulness, I guess.
Now it is not possible to MOVE "one half of the stage" or "one third of the stage", because I must first calculate how many steps this is, when I really don't care how many steps this is, and it's not as entertaining/fun/playful as I wish it was
P.S.
I think that this idea was behind a project that I made some time ago and is listed besides one of yours among the featured projects (they are both on the last page, 3/3), titled "Following_the_fish":
Shouldn't a lot of these posts be split to a new topic?
like what happened here:
edit: by a lot of these posts i mean the ones suggesting we should get rid of the steps feature
Done.
Here's a humble wish to expand options inside "GO TO <>" primitive block. Can we have this?
For now, I can use a custom block:
<blocks app="Snap! 5.1, http://snap.berkeley.edu" version="1"><block-definition s="go to %'where' (*)" type="command" category="motion"><header></header><code></code><translations></translations><inputs><input type="%txt" readonly="true">center<options>random position
mouse-pointer
top-of-stage_(x_as_is)
bottom-of-stage_(x_as_is)
left-of-stage_(y_as_is)
right-of-stage_(y_as_is)
center</options></input></inputs><script><block s="doIfElse"><block s="reportEquals"><block var="where"/><l>center</l></block><script><block s="doGotoObject"><l><option>center</option></l></block></script><script><block s="doIfElse"><block s="reportEquals"><block var="where"/><l>random position</l></block><script><block s="doGotoObject"><l><option>random position</option></l></block></script><script><block s="doIfElse"><block s="reportEquals"><block var="where"/><l>mouse-pointer</l></block><script><block s="doGotoObject"><l><option>mouse-pointer</option></l></block></script><script><block s="doIfElse"><block s="reportEquals"><block var="where"/><l>top-of-stage_(x_as_is)</l></block><script><block s="doGotoObject"><block s="reportNewList"><list><block s="xPosition"></block><block s="reportAttributeOf"><l><option>top</option></l><l>Stage</l></block></list></block></block></script><script><block s="doIfElse"><block s="reportEquals"><block var="where"/><l>bottom-of-stage_(x_as_is)</l></block><script><block s="doGotoObject"><block s="reportNewList"><list><block s="xPosition"></block><block s="reportAttributeOf"><l><option>bottom</option></l><l>Stage</l></block></list></block></block></script><script><block s="doIfElse"><block s="reportEquals"><block var="where"/><l>left-of-stage_(y_as_is)</l></block><script><block s="doGotoObject"><block s="reportNewList"><list><block s="reportAttributeOf"><l><option>left</option></l><l>Stage</l></block><block s="yPosition"></block></list></block></block></script><script><block s="doIf"><block s="reportEquals"><block var="where"/><l>right-of-stage_(y_as_is)</l></block><script><block s="doGotoObject"><block s="reportNewList"><list><block s="reportAttributeOf"><l><option>right</option></l><l>Stage</l></block><block s="yPosition"></block></list></block></block></script></block></script></block></script></block></script></block></script></block></script></block></script></block></script></block-definition></blocks>
You can say
Is that not good enough?
Why is center in the go to block? Is go to x: 0 y: 0 not good enough?
No, because CENTER serves a pedagogic purpose: it promotes thinking in terms of vectors. The vector (0,0) doesn't completely fill that need, but the fact that you can drag a two-item list onto that input slot makes the point (sorry for the pun) even more forcefully.
Someday we'll work up the courage to have a mechanism for deprecated blocks in old projects, and we'll replace X POSITION and Y POSITION with a single POSITION block that reports a vector. Similarly, when we extended the MY block's input menu to include things like WIDTH, HEIGHT, LEFT, ... X ROTATION, ... I wanted to report vectors instead: BOUNDS, CORNERS (a list of two points), ROTATION CENTER, etc. @jens agreed in principle but thought we weren't fully ready to endorse vectors, and he also didn't want to construct a list that the user is probably just going to tear apart again. But someday.
STAGE LEFT etc. don't have that property, not unless we report a list of the left border as X and the current position as Y.
I don't think there's anything terribly wrong with adding those to the menu, but we're busy, and I don't see a big payoff.
I like your idea that someday you will "automagically" replace every "X POSITION" block used in existing projects by "ITEM <1> OF < POSITION >" very much. So, if STAGE LEFT block would report {X_of_left_border, Current_Y} list, the additional options to "MOVE TO < >" could be added and everyone would be happy !
[ quote="bh" ]
[ / quote]
...
...
I think this is somehow congruent with the whole idea of hyperization that you are introducing in version 6.0, so maybe the someday = today ?
Interesting idea. We'll see what @jens thinks!
|
2022-06-27 20:08:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40871116518974304, "perplexity": 2429.9194884026338}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103341778.23/warc/CC-MAIN-20220627195131-20220627225131-00525.warc.gz"}
|
https://lists.gnu.org/archive/html/lilypond-devel/2012-07/msg00517.html
|
lilypond-devel
[Top][All Lists]
## Re: Add function for overriding broken spanners to LilyPond. (issue 6397
From: David Nalesnik Subject: Re: Add function for overriding broken spanners to LilyPond. (issue 6397054) Date: Wed, 18 Jul 2012 19:17:36 -0500
On Wed, Jul 18, 2012 at 3:18 PM, David Kastrup wrote:
>> On Wed, Jul 18, 2012 at 8:18 PM, Thomas Morley
>>>> I haven't seen it before - looks awesome :)
>>>> (however, it would be even more awesome if it was one general function
>>>> for all purposes ;) )
>>>
>>> I wrote this snippet a year ago.
>>> Give me some days and perhaps I come up with sth more elaborated.
I took a look at this--I couldn't resist!!--and actually it wasn't too hard to incorporate Harm's (very cool) snippet into \alterBroken. I've attached a file which shows how his example would be expressed with that command.
I really like the idea of expanding the function like this. For one thing, it's a little confusing that the original \alterBroken would work with grobs like SystemStartBar (somewhat unexpectedly classed as a spanner) and not with a clef and its end-of-the-line cautionary.
This is a draft and I'd like to look over these revisions a bit before I amend the patch.
What do you think?
-David
spanners-and-breakable-items.ly
Description: Binary data
|
2019-06-25 15:13:10
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8507826328277588, "perplexity": 6647.700090852318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999838.27/warc/CC-MAIN-20190625132645-20190625154645-00295.warc.gz"}
|
https://en.wikipedia.org/wiki/Amino_acid_replacement
|
# Amino acid replacement
Amino acid replacement is a change from one amino acid to a different amino acid in a protein due to point mutation in the corresponding DNA sequence. It is caused by nonsynonymous missense mutation which changes the codon sequence to code other amino acid instead of the original.
Not all amino acid replacements have the same effect on function or structure of protein. The magnitude of this process may vary depending on how similar or dissimilar the replaced amino acids are, as well as on their position in the sequence or the structure. Similarity between amino acids can be calculated based on substitution matrices, physico-chemical distance, or simple properties such as amino acid size or charge[1] (see also amino acid chemical properties). Usually amino acids are thus classified into two types:[2]
• Conservative replacement - an amino acid is exchanged into another that has similar properties. This type of replacement is expected to rarely result in dysfunction in the corresponding protein[citation needed].
• Radical replacement - an amino acid is exchanged into another with different properties. This can lead to changes in protein structure or function, which can cause potentially lead to changes in phenotype, sometimes pathogenic. A well known example in humans is sickle cell anemia, due to a mutation in beta globin where at position 6 glutamic acid (negatively charged) is exchanged with valine (not charged).
## Physicochemical distances
Physicochemical distance is a measure that assesses the difference between replaced amino acids. The value of distance is based on properties of amino acids. There are 134 physicochemical properties that can be used to estimate similarity between amino acids.[3] Each physicochemical distance is based on different composition of properties.
Two-state characters Properties 1-5 Presence respectively of: β―CH2, γ―CH2, δ―CH2 (proline scored as positive), ε―CH2 group and a―CH3 group 6-10 Presence respectively of: ω―SH, ω―COOH, ω―NH2 (basic), ω―CONH2 and ―CHOH groups 11-15 Presence respectively of: benzene ring (including tryptophan as positive), branching in side chain by a CH group, a second CH3 group, two but not three ―H groups at the ends of the side chain (proline scored as positive) and a C―S―C group 16-20 Presence respectively of: guanido group, α―NH2, α―NH group in ring, δ―NH group in ring, ―N= group in ring 21-25 Presence respectively of: ―CH=N, indolyl group, imidazole group, C=O group in side chain, and configuration at α―C potentially changing direction of the peptide chain (only proline scores positive) 26-30 Presence respectively of: sulphur atom, primary aliphatic ―OH group, secondary aliphatic ―OH group, phenolic ―OH group, ability to form S―S bridges 31-35 Presence respectively of: imidazole ―NH group, indolyl ―NH group, ―SCH3 group, a second optical centre, the N=CR―NH group 36-40 Presence respectively of: isopropyl group, distinct aromatic reactivity, strong aromatic reactivity, terminal positive charge, negative charge at high pH (tyrosine scored positive) 41 Presence of pyrollidine ring 42-53 Molecular weight (approximate) of side chain, scored in 12 additive steps (sulphur counted as the equivalent of two carbon, nitrogen or oxygen atoms) 54-56 Presence, respectively, of: flat 5-, 6- and 9-membered ring system 57-64 pK at isoelectric point, scored additively in steps of 1 pH 65-68 Logarithm of solubility in water of the ʟ-isomer in mg/100 ml., scored additively 69-70 Optical rotation in 5 ɴ-HCl, [α]D 0 to -25, and over -25, respectively 71-72 Optical rotation in 5 ɴ-HCI, [α] 0 to +25, respectively (values for glutamine and tryptophan with water as solvent, and for asparagine 3·4 ɴ-HCl) 73-74 Side-chain hydrogen bonding (ionic type), strong donor and strong acceptor, respectively 75-76 Side-chain hydrogen bonding (neutral type), strong donor and strong acceptor, respectively 77-78 Water structure former, respectively moderate and strong 79 Water structure breaker 80-82 Mobile electrons few, moderate and many, respectively (scored additively) 83-85 Heat and age stability moderate, high and very high, respectively (scored additively) 86-89 RF in phenol-water paper chromatography in steps of 0·2 (scored additively) 90-93 RF in toluene-pyridine-glycolchlorhydrin (paper chromatography of DNP-derivative) in steps of 0·2 (scored additively: for lysine the di-DNP derivative) 94-97 Ninhydrin colour after collidine-lutidine chromatography and heating 5 min at 100 °C, respectively purple, pink, brown and yellow 98 End of side-chain furcated 99-101 Number of substituents on the β-carbon atom, respectively 1, 2 or 3 (scored additively) 102-111 The mean number of lone pair electrons on the side-chain (scored additively) 112-115 Number of bonds in the side-chain allowing rotation (scored additively) 116-117 Ionic volume within rings slight, or moderate (scored additively) 118-124 Maximum moment of inertia for rotation at the α―β bond (scored additively in seven approximate steps) 125-131 Maximum moment of inertia for rotation at the β―γ bond (scored additively in seven approximate steps) 132-134 Maximum moment of inertia for rotation at the γ―δ bond (scored additively in three approximate steps)
### Grantham's distance
Grantham's distance depends on 3 properties: composition, polarity and molecular volume.[4]
Distance difference D for each pair of amino acid i and j is calculated as: ${\displaystyle D_{ij}=[\alpha (c_{i}-c_{j})^{2}+\beta (p_{i}-p_{j})^{2}+\gamma (v_{i}-v_{j})^{2}]}$
where c = composition, p = polarity, and v = molecular volume; and are constants of squares of the inverses of the mean distance for each property, respectively equal to 1.833, 0.1018, 0.000399. According to Grantham's distance, most similar amino acids are leucine and isoleucine and the most distant are cysteine and tryptophan.
Difference D for amino acids[4]
Arg Leu Pro Thr Ala Val Gly Ile Phe Tyr Cys His Gln Asn Lys Asp Glu Met Trp 110 145 74 58 99 124 56 142 155 144 112 89 68 46 121 65 80 135 177 Ser 102 103 71 112 96 125 97 97 77 180 29 43 86 26 96 54 91 101 Arg 98 92 96 32 138 5 22 36 198 99 113 153 107 172 138 15 61 Leu 38 27 68 42 95 114 110 169 77 76 91 103 108 93 87 147 Pro 58 69 59 89 103 92 149 47 42 65 78 85 65 81 128 Thr 64 60 94 113 112 195 86 91 111 106 126 107 84 148 Ala 109 29 50 55 192 84 96 133 97 152 121 21 88 Val 135 153 147 159 98 87 80 127 94 98 127 184 Gly 21 33 198 94 109 149 102 168 134 10 61 Ile 22 205 100 116 158 102 177 140 28 40 Phe 194 83 99 143 85 160 122 36 37 Tyr 174 154 139 202 154 170 196 215 Cys 24 68 32 81 40 87 115 His 46 53 61 29 101 130 Gln 94 23 42 142 174 Asn 101 56 95 110 Lys 45 160 181 Asp 126 152 Glu 67 Met
### Sneath's index
Sneath's index takes into account 134 categories of activity and structure.[3] Dissimilarity index D is a percentage value of the sum of all properties not shared between two replaced amino acids. It is percentage value expressed by ${\displaystyle D=1-S}$, where S is Similarity.
Dissimilarity D between amino acids[3]
Leu Ile Val Gly Ala Pro Gln Asn Met Thr Ser Cys Glu Asp Lys Arg Tyr Phe Trp Isoleucine 5 Valine 9 7 Glycine 24 25 19 Alanine 15 17 12 9 Proline 23 24 20 17 16 Glutamine 22 24 25 32 26 33 Asparagine 20 23 23 26 25 31 10 Methionine 20 22 23 34 25 31 13 21 Threonine 23 21 17 20 20 25 24 19 25 Serine 23 25 20 19 16 24 21 15 22 12 Cysteine 24 26 21 21 13 25 22 19 17 19 13 Glutamic acid 30 31 31 37 34 43 14 19 26 34 29 33 Aspartic acid 25 28 28 33 30 40 22 14 31 29 25 28 7 Lysine 23 24 26 31 26 31 21 27 24 34 31 32 26 34 Arginine 33 34 36 43 37 43 23 31 28 38 37 36 31 39 14 Tyrosine 30 34 36 36 34 37 29 28 32 32 29 34 34 34 34 36 Phenylalanine 19 22 26 29 26 27 24 24 24 28 25 29 35 35 28 34 13 Tryptophan 30 34 37 39 36 37 31 32 31 38 35 37 43 45 34 36 21 13 Histidine 25 28 31 34 29 36 27 24 30 34 28 31 27 35 27 31 23 18 25
### Epstein's coefficient of difference
Epstein's coefficient of difference is based on the differences in polarity and size between replaced pairs of amino acids.[5] This index that distincts the direction of exchange between amino acids, described by 2 equations:
${\displaystyle \Delta _{a\rightarrow b}=(\delta _{polarity}^{2}+\delta _{size}^{2})^{1/2}}$ when smaller hydrophobic residue is replaced by larger hydrophobic or polar residue
${\displaystyle \Delta _{a\rightarrow b}=(\delta _{polarity}^{2}+[0.5\delta _{size}]^{2})^{1/2}}$when polar residue is exchanged or larger residue is replaced by smaller
Coefficient of difference ${\displaystyle (\Delta _{a\rightarrow b})}$[5]
Phe Met Leu Ile Val Pro Tyr Trp Cys Ala Gly Ser Thr His Glu Gln Asp Asn Lys Arg Phe 0.05 0.08 0.08 0.1 0.1 0.21 0.25 0.22 0.43 0.53 0.81 0.81 0.8 1 1 1 1 1 1 Met 0.1 0.03 0.03 0.1 0.1 0.25 0.32 0.21 0.41 0.42 0.8 0.8 0.8 1 1 1 1 1 1 Leu 0.15 0.05 0 0.03 0.03 0.28 0.36 0.2 0.43 0.51 0.8 0.8 0.81 1 1 1 1 1 1.01 Ile 0.15 0.05 0 0.03 0.03 0.28 0.36 0.2 0.43 0.51 0.8 0.8 0.81 1 1 1 1 1 1.01 Val 0.2 0.1 0.05 0.05 0 0.32 0.4 0.2 0.4 0.5 0.8 0.8 0.81 1 1 1 1 1 1.02 Pro 0.2 0.1 0.05 0.05 0 0.32 0.4 0.2 0.4 0.5 0.8 0.8 0.81 1 1 1 1 1 1.02 Tyr 0.2 0.22 0.22 0.22 0.24 0.24 0.1 0.13 0.27 0.36 0.62 0.61 0.6 0.8 0.8 0.81 0.81 0.8 0.8 Trp 0.21 0.24 0.25 0.25 0.27 0.27 0.05 0.18 0.3 0.39 0.63 0.63 0.61 0.81 0.81 0.81 0.81 0.81 0.8 Cys 0.28 0.22 0.21 0.21 0.2 0.2 0.25 0.35 0.25 0.31 0.6 0.6 0.62 0.81 0.81 0.8 0.8 0.81 0.82 Ala 0.5 0.45 0.43 0.43 0.41 0.41 0.4 0.49 0.22 0.1 0.4 0.41 0.47 0.63 0.63 0.62 0.62 0.63 0.67 Gly 0.61 0.56 0.54 0.54 0.52 0.52 0.5 0.58 0.34 0.1 0.32 0.34 0.42 0.56 0.56 0.54 0.54 0.56 0.61 Ser 0.81 0.8 0.8 0.8 0.8 0.8 0.62 0.63 0.6 0.4 0.3 0.03 0.1 0.21 0.21 0.2 0.2 0.21 0.24 Thr 0.81 0.8 0.8 0.8 0.8 0.8 0.61 0.63 0.6 0.4 0.31 0.03 0.08 0.21 0.21 0.2 0.2 0.21 0.22 His 0.8 0.8 1 1 0.8 0.8 0.6 0.61 0.61 0.42 0.34 0.1 0.08 0.2 0.2 0.21 0.21 0.2 0.2 Glu 1 1 1 1 1 1 0.8 0.81 0.8 0.61 0.52 0.22 0.21 0.2 0 0.03 0.03 0 0.05 Gln 1 1 1 1 1 1 0.8 0.81 0.8 0.61 0.52 0.22 0.21 0.2 0 0.03 0.03 0 0.05 Asp 1 1 1 1 1 1 0.81 0.81 0.8 0.61 0.51 0.21 0.2 0.21 0.03 0.03 0 0.03 0.08 Asn 1 1 1 1 1 1 0.81 0.81 0.8 0.61 0.51 0.21 0.2 0.21 0.03 0.03 0 0.03 0.08 Lys 1 1 1 1 1 1 0.8 0.81 0.8 0.61 0.52 0.22 0.21 0.2 0 0 0.03 0.03 0.05 Arg 1 1 1 1 1.01 1.01 0.8 0.8 0.81 0.62 0.53 0.24 0.22 0.2 0.05 0.05 0.08 0.08 0.05
### Miyata's distance
Miyata's distance is based on 2 physicochemical properties: volume and polarity.[6]
Distance between amino acids ai and aj is calculated as ${\displaystyle d_{ij}={\sqrt {(\Delta p_{ij}/\sigma _{p})^{2}+(\Delta v_{ij}/\sigma _{v})^{2}}}}$ where ${\displaystyle \Delta p_{ij}}$ is value of polarity difference between replaced amino acids and ${\displaystyle \Delta v_{ij}}$ and is difference for volume; ${\displaystyle \sigma _{p}}$ and ${\displaystyle \sigma _{v}}$ are standard deviations for ${\displaystyle \Delta p_{ij}}$ and ${\displaystyle \Delta v_{ij}}$
Amino acid pair distance[6]
Cys Pro Ala Gly Ser Thr Gln Glu Asn Asp His Lys Arg Val Leu Ile Met Phe Tyr Trp 1.33 1.39 2.22 2.84 1.45 2.48 3.26 2.83 3.48 2.56 3.27 3.06 0.86 1.65 1.63 1.46 2.24 2.38 3.34 Cys 0.06 0.97 0.56 0.87 1.92 2.48 1.8 2.4 2.15 2.94 2.9 1.79 2.7 2.62 2.36 3.17 3.12 4.17 Pro 0.91 0.51 0.9 1.92 2.46 1.78 2.37 2.17 2.96 2.92 1.85 2.76 2.69 2.42 3.23 3.18 4.23 Ala 0.85 1.7 2.48 2.78 1.96 2.37 2.78 3.54 3.58 2.76 3.67 3.6 3.34 4.14 4.08 5.13 Gly 0.89 1.65 2.06 1.31 1.87 1.94 2.71 2.74 2.15 3.04 2.95 2.67 3.45 3.33 4.38 Ser 1.12 1.83 1.4 2.05 1.32 2.1 2.03 1.42 2.25 2.14 1.86 2.6 2.45 3.5 Thr 0.84 0.99 1.47 0.32 1.06 1.13 2.13 2.7 2.57 2.3 2.81 2.48 3.42 Gln 0.85 0.9 0.96 1.14 1.45 2.97 3.53 3.39 3.13 3.59 3.22 4.08 Glu 0.65 1.29 1.84 2.04 2.76 3.49 3.37 3.08 3.7 3.42 4.39 Asn 1.72 2.05 2.34 3.4 4.1 3.98 3.69 4.27 3.95 4.88 Asp 0.79 0.82 2.11 2.59 2.45 2.19 2.63 2.27 3.16 His 0.4 2.7 2.98 2.84 2.63 2.85 2.42 3.11 Lys 2.43 2.62 2.49 2.29 2.47 2.02 2.72 Arg 0.91 0.85 0.62 1.43 1.52 2.51 Val 0.14 0.41 0.63 0.94 1.73 Leu 0.29 0.61 0.86 1.72 Ile 0.82 0.93 1.89 Met 0.48 1.11 Phe 1.06 Tyr Trp
### Experimental Exchangeability
Experimental Exchangeability was devised by Yampolsky and Stoltzfus.[7] It is the measure of the mean effect of exchanging one amino acid into a different amino acid.
It is based on analysis of experimental studies where 9671 amino acids replacements from different proteins, were compared for effect on protein activity.
Exchangeability (x1000) by source (row) and destination (column)[7]
Cys Ser Thr Pro Ala Gly Asn Asp Glu Gln His Arg Lys Met Ile Leu Val Phe Tyr Trp Exsrc Cys . 258 121 201 334 288 109 109 270 383 258 306 252 169 109 347 89 349 349 139 280 Ser 373 . 481 249 490 418 390 314 343 352 353 363 275 321 270 295 358 334 294 160 351 Thr 325 408 . 164 402 332 240 190 212 308 246 299 256 152 198 271 362 273 260 66 287 Pro 345 392 286 . 454 404 352 254 346 384 369 254 231 257 204 258 421 339 298 305 335 Ala 393 384 312 243 . 387 430 193 275 320 301 295 225 549 245 313 319 305 286 165 312 Gly 267 304 187 140 369 . 210 188 206 272 235 178 219 197 110 193 208 168 188 173 228 Asn 234 355 329 275 400 391 . 208 257 298 248 252 183 236 184 233 233 210 251 120 272 Asp 285 275 245 220 293 264 201 . 344 263 298 252 208 245 299 236 175 233 227 103 258 Glu 332 355 292 216 520 407 258 533 . 341 380 279 323 219 450 321 351 342 348 145 363 Gln 383 443 361 212 499 406 338 68 439 . 396 366 354 504 467 391 603 383 361 159 386 His 331 365 205 220 462 370 225 141 319 301 . 275 332 315 205 364 255 328 260 72 303 Arg 225 270 199 145 459 251 67 124 250 288 263 . 306 68 139 242 189 213 272 63 259 Lys 331 376 476 252 600 492 457 465 272 441 362 440 . 414 491 301 487 360 343 218 409 Met 347 353 261 85 357 218 544 392 287 394 278 112 135 . 612 513 354 330 308 633 307 Ile 362 196 193 145 326 160 172 27 197 191 221 124 121 279 . 417 494 331 323 73 252 Leu 366 212 165 146 343 201 162 112 199 250 288 185 171 367 301 . 275 336 295 152 248 Val 382 326 398 201 389 269 108 228 192 280 253 190 197 562 537 333 . 207 209 286 277 Phe 176 152 257 112 236 94 136 90 62 216 237 122 85 255 181 296 291 . 332 232 193 Tyr 142 173 . 194 402 357 129 87 176 369 197 340 171 392 . 362 . 360 . 303 258 Trp 137 92 17 66 63 162 . . 65 61 239 103 54 110 . 177 110 364 281 . 142 Exdest 315 311 293 192 411 321 258 225 262 305 290 255 225 314 293 307 305 294 279 172 291
## Typical and idiosyncratic amino acids
Amino acids can also be classified according to how many different amino acids they can be exchanged by through single nucleotide substitution.
• Typical amino acids - there are several other amino acids which they can change into through single nucleotide substitution. Typical amino acids and their alternatives usually have similar physicochemical properties. Leucine is an example of a typical amino acid.
• Idiosyncratic amino acids - there are few similar amino acids that they can mutate to through single nucleotide substitution. In this case most amino acid replacements will be disruptive for protein function. Tryptophan is an example of an idiosyncratic amino acid.[8]
## Tendency to undergo amino acid replacement
Some amino acids are more likely to be replaced. One of the factors that influences this tendency is physicochemical distance. Example of a measure of amino acid can be Graur's Stability Index.[9] The assumption of this measure is that the amino acid replacement rate and protein's evolution is dependent on the amino acid composition of protein. Stability index S of an amino acid is calculated based on physicochemical distances of this amino acid and its alternatives than can mutate through single nucleotide substitution and probabilities to replace into these amino acids. Based on Grantham's distance the most immutable amino acid is cysteine, and the most prone to undergo exchange is methionine.
Example of calculating stability index[9] for Methionine coded by AUG based on Grantham's physicochemical distance
Alternative codons Alternative amino acids Probabilities Grantham's distances[4] Average distance
AUU, AUC, AUA Isoleucine 1/3 10 3.33
ACG Threonine 1/9 81 9.00
AAG Lysine 1/9 95 10.56
AGG Arginine 1/9 91 10.11
UUG, CUG Leucine 2/9 15 3.33
GUG Valine 1/9 21 2.33
Stability index[9] 38.67
## Patterns of amino acid replacement
Evolution of proteins is slower than DNA since only nonsynonymous mutations in DNA can result in amino acid replacements. Most mutations are neutral to maintain protein function and structure. Therefore, the more similar amino acids are, the more probable that they will be replaced. Conservative replacements are more common than radical replacements, since they can result in less important phenotypic changes.[10] On the other hand, beneficial mutations, enhancing protein functions are most likely to be radical replacements.[11] Also, the physicochemical distances, which are based on amino acids properties, are negatively correlated with probability of amino acids substitutions. Smaller distance between amino acids indicates that they are more likely to undergo replacement.
## References
1. ^ Dagan, Tal; Talmor, Yael; Graur, Dan (July 2002). "Ratios of Radical to Conservative Amino Acid Replacement are Affected by Mutational and Compositional Factors and May Not Be Indicative of Positive Darwinian Selection". Molecular Biology and Evolution. 19 (7): 1022–1025. doi:10.1093/oxfordjournals.molbev.a004161.
2. ^ Graur, Dan (2015-01-01). Molecular and Genome Evolution. Sinauer. ISBN 9781605354699.
3. ^ a b c d Sneath, P. H. (1966-11-01). "Relations between chemical structure and biological activity in peptides". Journal of Theoretical Biology. 12 (2): 157–195. doi:10.1016/0022-5193(66)90112-3. ISSN 0022-5193. PMID 4291386.
4. ^ a b c Grantham, R. (1974-09-06). "Amino acid difference formula to help explain protein evolution". Science. 185 (4154): 862–864. Bibcode:1974Sci...185..862G. doi:10.1126/science.185.4154.862. ISSN 0036-8075. PMID 4843792.
5. ^ a b Epstein, Charles J. (1967-07-22). "Non-randomness of Ammo-acid Changes in the Evolution of Homologous Proteins". Nature. 215 (5099): 355–359. Bibcode:1967Natur.215..355E. doi:10.1038/215355a0.
6. ^ a b Miyata, T.; Miyazawa, S.; Yasunaga, T. (1979-03-15). "Two types of amino acid substitutions in protein evolution". Journal of Molecular Evolution. 12 (3): 219–236. Bibcode:1979JMolE..12..219M. doi:10.1007/BF01732340. ISSN 0022-2844. PMID 439147.
7. ^ a b Yampolsky, Lev Y.; Stoltzfus, Arlin (2005-08-01). "The Exchangeability of Amino Acids in Proteins". Genetics. 170 (4): 1459–1472. doi:10.1534/genetics.104.039107. ISSN 0016-6731. PMC 1449787. PMID 15944362.
8. ^ Xia, Xuhua (2000-03-31). Data Analysis in Molecular Biology and Evolution. Springer Science & Business Media. ISBN 9780792377672.
9. ^ a b c Graur, D. (1985-01-01). "Amino acid composition and the evolutionary rates of protein-coding genes". Journal of Molecular Evolution. 22 (1): 53–62. Bibcode:1985JMolE..22...53G. doi:10.1007/BF02105805. ISSN 0022-2844. PMID 3932664.
10. ^ Zuckerkandl; Pauling (1965). "Evolutionary divergence and convergence in proteins". New York: Academic Press: 97–166.
11. ^ Dagan, Tal; Talmor, Yael; Graur, Dan (2002-07-01). "Ratios of radical to conservative amino acid replacement are affected by mutational and compositional factors and may not be indicative of positive Darwinian selection". Molecular Biology and Evolution. 19 (7): 1022–1025. doi:10.1093/oxfordjournals.molbev.a004161. ISSN 0737-4038. PMID 12082122.
|
2019-05-23 13:08:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 12, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41459858417510986, "perplexity": 773.9871079858524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257244.16/warc/CC-MAIN-20190523123835-20190523145835-00121.warc.gz"}
|
https://www.craigxchen.com/research/
|
Nothing!
### Previous Projects:
Bressan’s conjecture on mixing flows: Consider the unit square in $$\mathbb{R}^2$$ with periodic boundary conditions (i.e., the torus) divided down the middle $$x=1/2$$ line into sets $$R$$, $$B$$ (imagine half as painted red and the other half blue). We say that a flow mixes $$R$$, $$B$$ up to scale $$\epsilon$$ with mixing factor $$\kappa \in (0,1/2)$$ if every $$\epsilon$$-ball has at least $$\kappa$$ of its points from $$R$$ and $$\kappa$$ from $$B$$ (where this is defined in terms of the Lebesgue measure). If we mix the initial condition to scale $$\epsilon$$ with some time-dependent divergence-free vector field $$f$$, how much “work” is necessary? Here, “work” is measured as
$\begin{equation*} \int_{0}^{T} \int_{\mathbb{T}^2} \lVert \nabla f(x,t) \rVert_{L^1} dxdt. \end{equation*}$
Bressan conjectured that this quantity grows at least logarithmically w.r.t. $$1/\epsilon$$. Existing literature has proven the conjecture for $$L^p$$ norms, $$p > 1$$, but the machinery used in their results does not seem to extend to the $$p=1$$ case. Alongside the conjecture, Bressan also constructed an example mixer; we will investigate some numerical approaches to this problem (e.g., machine learning) to develop our intuition as to whether or not his construction is optimal. I’m working on this project with Professor Tarek M. Elgindi. Fall 2021 independent study report available here.
Learning linear dynamical systems with memory: Consider a dynamical system with hidden state $$h_t$$, observable $$y_t$$, and control input $$u_t$$ following the dynamics
\begin{align*} h_{t+1} &= Ah_t + Bu_t \\ y_t &= Ch_t + Du_t. \end{align*}
It is known that gradient descent can learn this system (i.e., learn the parameters $$A,B,C,D$$) with polynomially (in the dimension) many samples. What happens if we allow for a higher-order system which depends on $$h_t$$ through $$h_{t-m}$$? It is possible to linearize this system byconcatenating all of the state vectors, but then the matrices to be learned will be rather sparse and this advantage will go to waste. I am working with Professor Andrea Agazzi (and some advice from Professor Jianfeng Lu) to try to prove a better sample-complexity bound for the system with memory.
Close lattice points on circles: We are interested in the question of how many lattice points can lie on an arc of length $$R^\theta$$ where $$R$$ is the radius of a circle centered at the origin and $$\theta \in [0,1)$$. The best bound in current literature blows up as $$\theta$$ approaches $$1/2$$. In this project, we produced a fully rigorous proof of the current bound as well as some related lemmas and investigated potential methods to improve the bound. At the moment, it seems like an improvement will require an entirely different approach. Mentored by Professor Lillian B. Pierce. Spring 2021 independent study report available here.
Policy gradient methods in the linear-quadratic regulator with nonlinear controls: We aim to extend convergence results of existing literature to a wider policy class. As of now, there are still very few convergence guarantees for policy gradient algorithms in settings with continuous state and action spaces (as opposed to tabular). We prove that these algorithms converge in the setting of the LQR for policies that can be written as linear combinations of globally Lipschitz functions; to prove our results we develop a new approach that involves iteratively updating a discount factor. This project is mentored by Professor Andrea Agazzi. https://arxiv.org/abs/2112.07612
|
2022-11-27 11:14:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9354255199432373, "perplexity": 376.1037097685266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710237.57/warc/CC-MAIN-20221127105736-20221127135736-00165.warc.gz"}
|
http://meetings.aps.org/Meeting/APR11/Event/145572
|
### Session B1: Solvay at One Hundred; Pais Prize Talk
10:45 AM–12:33 PM, Saturday, April 30, 2011
Room: Grand A
Sponsoring Units: DPF FHP
Chair: Daniel Kleppner, Massachusetts Institute of Technology
Abstract ID: BAPS.2011.APR.B1.1
### Abstract: B1.00001 : The Solvay Council, 1911: A kind of private congress''
10:45 AM–11:21 AM
Preview Abstract MathJax On | Off Abstract
#### Author:
Richard Staley
(Univesity of Wisconsin-Madison)
The photograph of its participants gathered around the conference table at the first Solvay Congress in physics has long presented an iconic image of physics in the early twentieth century, and the event has commonly been celebrated for its distinctive role in the propagation of quantum theory, as well as for the rich heritage in subsequent conferences that it initiated. Yet it is not often appreciated just how unusual this first congress or council'' was. Convened and funded by the Belgian industrialist Ernst Solvay, it was conceived and planned by the Berlin physical chemist Walther Nernst, with a zealous attention to detail that extended to entreating participants to keep its proceedings confidential until it had actually occurred. Kept private to facilitate later public notice, I will argue that this conference also helped fashion a distinctive (and selective) view of the past. This paper combines an examination of the planning and conduct of the congress with a study of the earliest uses of general concepts of classical'' theory from the late nineteenth century, in order to argue that the Solvay congress was important not just to the wider propagation of quantum theory, but to the formation of the conceptual framework within which we now cast this era and its physics: the contrast between classical and modern theory.
To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2011.APR.B1.1
|
2014-09-01 07:29:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2878272831439972, "perplexity": 3601.5566351493644}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535917463.1/warc/CC-MAIN-20140901014517-00421-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://blog.csdn.net/tel_Annie/article/details/80318768
|
# LeetCode-Fizz_Buzz
Write a program that outputs the string representation of numbers from 1 to n.
But for multiples of three it should output “Fizz” instead of the number and for the multiples of five output “Buzz”. For numbers which are multiples of both three and five output “FizzBuzz”.
Example:
n = 15,
Return:
[
"1",
"2",
"Fizz",
"4",
"Buzz",
"Fizz",
"7",
"8",
"Fizz",
"Buzz",
"11",
"Fizz",
"13",
"14",
"FizzBuzz"
]
n = 15,
Return:
[
"1",
"2",
"Fizz",
"4",
"Buzz",
"Fizz",
"7",
"8",
"Fizz",
"Buzz",
"11",
"Fizz",
"13",
"14",
"FizzBuzz"
]
C++代码(Visual Studio 2017):
#include "stdafx.h"
#include <iostream>
#include <vector>
#include <string>
using namespace std;
class Solution {
public:
vector<string> fizzBuzz(int n) {
vector<string> result;
for (int i = 1; i <= n; i++) {
if ((i % 3 == 0) && (i % 5 == 0)) {
result.push_back("FizzBuzz");
continue;
}
if (i % 3 == 0) {
result.push_back("Fizz");
continue;
}
if (i % 5 == 0) {
result.push_back("Buzz");
continue;
}
else {
result.push_back(to_string(i));
}
}
return result;
}
};
int main()
{
Solution s;
int n = 15;
vector<string> result;
result = s.fizzBuzz(n);
for (int i = 0; i < result.size(); i++) {
cout << result[i] << " ";
}
return 0;
}
• 评论
• 上一篇
• 下一篇
|
2018-09-19 01:29:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2607170045375824, "perplexity": 4999.707256368642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155814.1/warc/CC-MAIN-20180919004724-20180919024724-00045.warc.gz"}
|
http://mathoverflow.net/questions/116269/if-a-formal-power-series-over-the-complex-numbers-satisfies-a-polynomial-identi?sort=votes
|
# If a formal power series over the complex numbers satisfies a polynomial identity, does it imply that the power series has a radius of convergence?
Let $P(z)$ be a $\textit{formal}$ power series in $z$ that a priori may not have a non zero radius of convergence. Assume that $P(0) =0$.
Let $\Phi(w,z)$ be a polynomial in two variables, that is not identically zero. Assume that $\Phi(0,0) =0$. Suppose $\textbf{formally}$ we have the identity
$$\Phi(P(z),z) =0$$
Can we conclude that $P(z)$ has a non zero radius of convergence?
Everything is over the complex numbers $\mathbb{C}$.
-
maybe related: mathoverflow.net/questions/72677 – Michael Bächtold Dec 13 '12 at 12:03
To the person voting to close: I suspect you are assuming that $\partial \Phi(w,z)/\partial w|_{(0,0)} \neq 0$. In this case, it's pretty straightforward: By the complex implicit function theorem, there is a small neighborhood of $0$ where there is an analytic function $f$ obeying $f(0)=0$ and $\Phi(f(z), z)=0$. Since the coefficients of $P$ are determined by a unique recursion in this case, they are the same as the coefficients of $f$ and $P$ is convergent on this small neighborhood. – David Speyer Dec 13 '12 at 14:10
But to do the case where $\Phi$ is singular at the origin (e.g. $\Phi(w,z) = w^2 + w^5 - z^2$) seems to me to require Weierstrass preparation, and I need to think about some of the details, although I am confident the answer is yes. To my mind, this makes the problem hard enough to definitely belong here. – David Speyer Dec 13 '12 at 14:13
What we need here is that the field of convergent Puiseux series is algebraically closed. This is probably a well-known result but I don't know a proof myself. – François Brunault Dec 13 '12 at 14:36
Ok, here is a reference emis.de/journals/UIAM/actamath/PDF/38-279-282.pdf – François Brunault Dec 13 '12 at 14:41
The equation $\Phi(w,z)=0$ can be solved using Puiseux series. If $\frac{\partial{\Phi}}{\partial{w}}\not\equiv 0$ then there exist finitely many formal series $f(z)=\sum_{n\geq0}a_nz^{n/p}$ such that formally $\Phi(w,z)=0$. All these series are convergent. So the answer to your question is positive.
For the proof see any book titled "Algebraic functions".
-
The equation Φ(w,z)=0 can be solved using Puiseux series.'' Is this true even if $\frac{\partial{\Phi}}{\partial{w}}\neq0$ ? And will your second statement be true, if you drop the uniqueness criteria? – Ritwik Dec 13 '12 at 14:24
Thanks Alexandre for correcting my first answer which was too vaguely (and thus erroneously) stated. I guess that the «finitely many» solutions correspond to different branches of ${\Phi=0}$, am I right? Since I was implicitly thinking about reduced equation I thought those solutions were unique. – Loïc Teyssier Dec 13 '12 at 14:58
In general, finitely many solutions correspond to different branches of the implicit function $w(z)$. The curve $\Phi(z,w)=0$ may have one or several branches near $(0,0)$. But in his situation, it is assumed that $w(z)$ is a series in integer powers, so such different series correspond to different branches of the curve $\Phi(z,w)=0$. – Alexandre Eremenko Dec 13 '12 at 15:38
The result holds allowing several variables $z_1,\dots,z_n$, by using Artin approximation. (The method of proof below applies verbatim over non-archimedean fields of any characteristic, where "analytification" below may be taken in the naive sense over such fields or in the sense of rigid-analytic geometry. A variant on the argument, again using Artin approximation -- or rather its generalization proved by Popescu -- shows that if $R$ is any excellent normal local noetherian domain then its henselization $R^{\rm{h}}$ is the subring of elements of $\widehat{R}$ that satisfy a 1-variable polynomial equation over $R$ of positive degree; recall that for any local noetherian ring $R$, $R^{\rm{h}}$ is local noetherian and the map $R \rightarrow R^{\rm{h}}$ induces an isomorphism between completions.)
To make a precise statement about convergent power series, let $\Phi \in \mathbf{C}[w,z_1,\dots,z_n]$ involve $w$, and let $P \in \mathbf{C}[\![z_1,\dots,z_n]\!]$ be a formal power series such that $P(0,\dots,0) = 0$ and $\Phi(P,z_1,\dots,z_n) = 0$. We claim that $P$ converges on a ball around $(0,\dots,0)$ with positive radius. Moreover, we claim that $P$ lies in the subring of $\mathbf{C}[\![z_1,\dots,z_n]\!]$ given by the henselization $R^{\rm{h}}$ of the algebraic local ring $R = \mathbf{C}[z_1,\dots,z_n]_{(z_1,\dots,z_n)}$.
Since $\widehat{R}$ is a domain and $\Phi \in R[w]$ has positive $w$-degree, the equation $\Phi = 0$ has at most finitely many solutions in $\widehat{R}$. Thus, there is an exponent $e > 0$ such that distinct solutions in $\widehat{R}$ are distinct modulo the $e$th power of the maximal ideal $\mathfrak{m}$ of $\widehat{R}$. By the Artin approximation theorem, for any $f \in \widehat{R}$ satisfying $\Phi(f,z_1,\dots,z_n)=0$ and any $m > 0$ there exists $f_m$ in the henselization $R^{\rm{h}}$ such that $\Phi(f_m,z_1,\dots,z_n)=0$ and $f_m \equiv f \bmod \mathfrak{m}^m$. Taking $m = e$, the solutions $f, f_e \in \widehat{R}$ to $\Phi=0$ must coincide! In other words, all solutions to $\Phi=0$ in $\widehat{R}$ lie in $R^{\rm{h}}$.
By construction, $R^{\rm{h}}$ is a direct limit of local-etale $R$-algebras, so there exists a local-etale map $R \rightarrow R'$ such that all solutions to $\Phi=0$ in $\widehat{R}$ lie in $R'$ (via the canonical isomorphism $\widehat{R} \rightarrow \widehat{R'}$ and the inclusion of $R'$ into its own completion). By definition of "local-etale", there is an etale map $h:V \rightarrow \mathbf{A}^n_{\mathbf{C}}$ and a point $v \in h^{-1}(0)$ such that $O_{V,v} = R'$ as $R$-algebras. (In particular, $V$ is smooth.) Since $h$ is etale, it follows from the Zariski local structure theorem for etale morphisms and the analytic inverse function theorem in several complex variables that the analytification $h^{\rm{an}}$ is a local isomorphism. In particular, $O_{V^{\rm{an}},v}$ is identified via $h^{\rm{an}}$-pullback with the local ring $O_{(\mathbf{A}^n_{\mathbf{C}})^{\rm{an}},0}$ of convergent power series in $z_1,\dots,z_n$ at the origin.
Passing to completions on this identification of analytic local rings, we recover the identification of $O_{V,v}^{\wedge} = \widehat{R'}$ with $\widehat{R}$ induced by $h$, so it follows that under the inclusion $$R' = O_{V,v} \subset O_{V^{\rm{an}},v} = O_{(\mathbf{A}^n_{\mathbf{C}})^{\rm{an}},0}$$ the element of $R'$ that "is" $P$ (provided by Artin approximation) maps to a convergent power series near the origin that has Taylor expansion at the origin equal to $P$. Hence, $P$ has positive radius of convergence. QED
-
The same holds if $\Phi(w,z)$ is a convergent power series $\neq 0$ in $1+n$ variables (i.e. $w$ is a single variable and $z = (z_{1},\dots, z_{n})$ a set of $n$ variables): If $P$ is a formal power series satisfying $\Phi(P(z), z) =0$ and $P(0)=0$, then it is already convergent. This follows immediatly from the analytic version of Artin's Approxmiation theorem (which states that any formal implicit solution to the equation $F(w,z) = 0$ (where $F$ is a convergent power series) can be approximated in the $\mathfrak{m}$-adic topology by convergent solutions ) and the fact that the above equation has only finitely many solutions as a consequence of the Weierstrass division theorem:
Let $P(z)$ be a formal solution and set $Q(w,z)= (w - P(z))$, which is $w$-regular of order one so we can apply the Weierstrass division theorem to find $\Phi_{1}(w,z)$ and a formal series $R(z)$ so that $\Phi = Q\cdot \Phi_{1} + R$. Plugging $(P(z),z)$ into both sides yields that $0 = R$, so $\Phi = (w-P(z)) \Phi_{1}(w,z)$ and consequently $\mathbb{ord}(\Phi_{1}) = \mathbb{ord}(\Phi) -1$. If $P_{2}$ is another formal implicit solution then it follows that $\Phi(P_{2},z) =0$, so we can repeat the factorization and receive $\Phi = (w-P)(w-P_{2}) \Phi_{2}$, where $\mathbb{ord}(P_{2}) = \mathbb{ord}(P_{1}) -2$. So we see that the number of implicit formal solutions of $\Phi(w,z) =0$ is bounded by $\mathbb{ord}(\Phi(w,z))$. Now given any formal solution $P$ there exists a sequence of convergent solution which converges to $P$ (in the m-adic topology), and since the sequence consists only of a finite number of elements it has to coincide ultimately with $P$, which is therefore convergent.
|
2014-03-07 21:15:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.966758131980896, "perplexity": 134.82179612457887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999650916/warc/CC-MAIN-20140305060730-00085-ip-10-183-142-35.ec2.internal.warc.gz"}
|
https://stacks.math.columbia.edu/tag/0AZC
|
Suppose that $f : X \to Y$ is a proper morphism of varieties. Let $Z \subset X$ be a $k$-dimensional closed subvariety. We define $f_*[Z]$ to be $0$ if $\dim (f(Z)) $d = [\mathbf{C}(Z) : \mathbf{C}(f(Z))] = \deg (Z/f(Z))$ is the degree of the dominant morphism$Z \to f(Z)$, see Morphisms, Definition 28.49.8. Let$\alpha = \sum n_ i [Z_ i]$be a$k$-cycle on$Y$. The pushforward of$\alpha $is the sum$f_* \alpha = \sum n_ i f_*[Z_ i]$where each$f_*[Z_ i]$is defined as above. This defines a homomorphism $f_* : Z_ k(X) \longrightarrow Z_ k(Y)$ See Chow Homology, Section 41.12. Lemma 42.6.1. Suppose that$f : X \to Y$is a proper morphism of varieties. Let$\mathcal{F}$be a coherent sheaf with$\dim (\text{Supp}(\mathcal{F})) \leq k$, then$f_*[\mathcal{F}]_ k = [f_*\mathcal{F}]_ k$. In particular, if$Z \subset X$is a closed subscheme of dimension$\leq k$, then$f_*[Z]_ k = [f_*\mathcal{O}_ Z]_ k$. Proof. See Chow Homology, Lemma 41.12.4.$\square$Lemma 42.6.2. Let$f : X \to Y$and$g : Y \to Z$be proper morphisms of varieties. Then$g_* \circ f_* = (g \circ f)_*$as maps$Z_ k(X) \to Z_ k(Z)$. Proof. Special case of Chow Homology, Lemma 41.12.2.$\square$## Comments (2) Comment #3838 by Yuxuan on You missed a $after$\dim (f(Z)). Comment #3932 by on Sorry, I don't understand what you are saying. Please try again. ## Post a comment Your email address will not be published. Required fields are marked. In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi\$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
Unfortunately JavaScript is disabled in your browser, so the comment preview function will not work.
|
2019-06-20 12:06:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.999754011631012, "perplexity": 5473.327728180815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999210.22/warc/CC-MAIN-20190620105329-20190620131329-00225.warc.gz"}
|
https://www.lil-help.com/questions/93977/what-strategies-can-dcs-deploy-to-respond-to-their-unused-space-labor-equipment-as-dc-volumes-shrink-due-to-sales-volume-declines
|
What strategies can DCs deploy to respond to their unused space, labor, equipment as DC volumes shrink due to sales volume declines
# What strategies can DCs deploy to respond to their unused space, labor, equipment as DC volumes shrink due to sales volume declines
S
0 points
What strategies can DCs deploy to respond to their unused space, labor, equipment as DC volumes shrink due to sales volume declines? Which one do you feel is most effective, and why?
1. Will the need for DCs ultimately go away? Fully explain your rationale for your position.
2. Are DCs really a value-added function or just another operating cost burden? Briefly explain.
3. What role does Sales Strategy play in this dilemma?
strategies which can be deployed for a warehouse when the space, labour and equipment is unused due to low DC volumes is to outsource the services or lease out the facility to other companies. This strategy has been successful in optimizing the DC usage and gained popularity as the DC will act as the storage location for another company and the unutilized...
What strategies
solarc
S
0 points
#### Oh Snap! This Answer is Locked
Thumbnail of first page
Excerpt from file: WhatstrategiescanDCsdeploytorespondtotheirunusedspace,labor,equipmentasDCvolumesshrinkdue tosalesvolumedeclines?Whichonedoyoufeelismosteffective,andwhy? 2.WilltheneedforDCsultimatelygoaway?Fullyexplainyourrationaleforyourposition.
Filename: what-strategies-can-dcs-deploy-to-respond-to-their-unused-space-labor-equipment-as-dc-volumes-shrink-due-to-sales-volume-declines-94.docx
Filesize: < 2 MB
Print Length: 2 Pages/Slides
Words: NA
Surround your text in *italics* or **bold**, to write a math equation use, for example, $x^2+2x+1=0$ or $$\beta^2-1=0$$
Use LaTeX to type formulas and markdown to format text. See example.
|
2018-12-12 22:21:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2579815685749054, "perplexity": 7480.741649088467}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824119.26/warc/CC-MAIN-20181212203335-20181212224835-00393.warc.gz"}
|
https://deemocean.com/
|
## [Serial] The Automorphism Rocket
*2019/7/8
Before ALL:
Hello folks, since the summer began, I have to say a project I have planed for a long time should get into its path now. And it is a serial, so I will update the lastest processes as I do.
Basically, the goal is to build a fully fictional rocket which has a Thrust Vector Control(TVC) which allows the rocket could control the thrust direction stay toward the earth, and a panel with velocity sensors, direction sensors on so the SCM could compute.
Then, I would add more functions to the rocket, such as muti-thrusts. So finally allow the rocket to get into space(stratosphere?).
OK, there are few steps I need to do in order to make this goal comes true.
Stage One, build a TVC
Stage Two, build a control panel and write software
Stage Three, build a rocket as a whole
Then, test, test, test…
If I can make it, then add more functions and let it go the space.
Now, everything starts here.
Stage One: Build a TVC
Prototype 1:
The video was made a long time ago, so there are many things not been added compared to now I, but the idea is the same. Basically, I 3D-printed a model to physically exam my idea, and know I find there are two main problems need me to fix.
1. I need grooves and rooms to fit my two servo motors
2. The outside ring needs to control the inner ring but not directly connect the central, otherwise, the move of one axis would affect the other one, simply to say, they will be stuck.
## 手把手教你怎么搭建基岩版Minecraft我的世界服务器
What?
1.基岩版(XBOX,WIN10 UWP, 手机)
2.JAVA版(也就是之前电脑上玩的那种)
3.网易版(辣鸡)
Why?
How?
1.去阿里云(https://chuangke.aliyun.com/invite?userCode=z66uogjh)(吃我邀请码得优惠)买一台ECS服务器,最低配置就可以了,位置选靠近你地理位置的地方,系统注意了:要选Ubuntu 18.04 及以上的,第一次建议用按量付费试试水
2.这样你的服务器算是开好了,点管理,左上角找到“重置实例密码”来设定密码
3.因为我们需要19132端口来连接服务器,所以必须在安全组中添加
4.得到 public key:还记得之前我们设置过密码嘛?因为新版本的Xshell需要密匙文件才让你登录,所以我们这边再下载一下这个文件(妥善保管,不会让你下第二次的)在网络与安全-密匙对 中创建密匙,名字随意,把.pem文件保存. 然后点绑定密匙,将你的服务器绑定此密匙 然后点重启
5. 现在我们需要一个工具来连接这台服务器,别人喜欢用Putty,但我更爱Xshell
6.下面来配置服务器
1.输入apt install unzip回车下载解压命令
2.输入mkdir mc回车来创建名为”mc”的文件夹
3.输入cd /root/mc回车 进入此文件夹
4.输入wget https://minecraft.azureedge.net/bin-linux/bedrock-server-目前版本.zip回车, 下载服务器安装包
5.输入unzip bedrock-server-目前版本.zip回车, 解压
6.输入LD_LIBRARY_PATH=. ./bedrock_server回车 开启服务器 并进入服务器后台
(在后台输入 stop回车 来关闭后台)
screen -r [那几个数字] 恢复窗口
*先写到这里, 之后写怎么把地图迁移进服务器
Deemo
## The Chemistry Behind The Forming of Different Colors of M42 Nebula
PS:IT’S NOT A RESEARCH PAPER
Abstract
This paper begins with the Messier 42 nebula, talking about how different color lights form due to the electron transport in Bohr’s model. How these lights are absorbed by gas and form the absorption spectrum, then how these lights are captured by sensors with the explanation between CCD and CMOS sensors. The history of spectrometer also includes plus the motivation and the reason why people are keeping doing that.
Key Words: Messier 42, Spectrum, Bohr Model, Sensor, Photoelectric effect, Spectrometer, Wavelength, Energy transform, Frequency, Coulomb’s equation
Introduction
When we were just children, sometimes laying on a green field at night the light pollution is not that bad. People lie in the starlight, and the huge Milky Way fills all of the eyesight. Surely, it is night, but when your eyes adapt to the dim, the elegant shining beauty would amaze you, indeed. Sometimes people would notice there are something different lie on the sky, something bigger with different colors. After that people who are curious about that seek for answers, Finally, they found that is called “nebula”.
The “M42”
M42 or the Orion Nebula, mostly the most popular object in the Messier objects table. It is 1,344 ± 20 light years away(Bringmann, 2017). M42 is a diffuse nebula, which means it is formed ionized gases and diffuse lights with different wavelengths. Every day, there are stars forming, M42 is the closest factory of star from the planet Earth
This paper going introduce and explain the theory of how these colors were formed, how people transform light to digit, and the way people identify chemicals from light, plus its history, and why we do it with the foreword of M42.
Excitation and emission spectra
Light is an electromagnetic wave, it could transport energy, and because of its wave-particle duality, we can say photons have energy inside. In the Bohr Model, it could be easier to explain the whole thing because it is very clear without talking about the probability wave function or The Heisenberg Uncertainty.
As the diagram shows left, when an electron absorbs energy it would be excited, and goes outside from its ground state orbit, and it goes back to inside orbits when they release energy. The different colors of the light release directly connect to the difference of energy because the difference of colors is the result of diverse of the wavelengths, and energy relates to the frequency of the wave follows:$E=hf$
Put these two into the common equation:$v=f\lambda$ since the velocity just equal to the speed of light, which is a constant, higher frequency result in low wavelength, vice versa. So when there is an inside orbit which contain more energy because of the Coulomb’s equation: $\int{F=\int\frac{1}{4\pi\varepsilon_0}\cdot\frac{q_1q_2}{r^2}dr}=U=-W=KE=\frac{1}{4\pi\varepsilon_0}\cdot\frac{q_1q_2}{r}$
As r gets smaller, the energy gets higher, so the inner orbit emits light with higher energy, which means it is in higher frequency and low wavelength result in more red light.
The forming of three main colors
There are three main colors the M42 nebula has: Red, Blue-violet, and Green.
Red light is the result of H- radiation, mainly, it talks about Hydrogen atom in Bohr’s model, the electron jump from n=3 orbit to n=2 orbit, and result in long wavelength of 656.28 nm(H-alpha Emission, 2013) which is red.
Blue-violet light is the result of massive O-Class stars radiation at the core of the nebula. By Harvard spectral classification, stars are classified to O B A F G K M from high temperature to low temperature. There is a pithy formula: “Oh Boy! A Fine Girl Kiss Me” Since O-class is hot, which means it has high energy, so O-class has high frequency and short wavelength, result in Blue-violet light.
Green used to be mysterious in the early 20th century, because there is no known spectral lines could explain that, people even think they discovered a new element. However, by the developing of modern physics, people found the green hue is caused by the low probability transition in doubly ionized oxygen, that only could happen in the lab, but still possible happen in deep space wherein extreme condition.
Photoelectric effect: How to get a photo of a deep space object
We all know the camera could take photos, but how? Why light could be transformed to digit version, that is the contribution of the Photoelectric effect. Einstein gave the mathematical explanation for it. Mainly, if a photon carries enough energy, it could let the metal emit an electron carries with same amount kinetic energy: $KE=\frac{1}{2}mv^2$
Sensor Mechanism: CCD and CMOS
The sensor is where the camera accepts lights, and the place photoelectric happens. There are two main kinds of sensor exist in the market: CCD(Charge-coupled Device) and CMOS(Complementary Metal Oxide Semiconductor). The difference between these two main in the way they collecting voltage signals.
From Figure 6, each block is a pixel that the photoelectric effect happens, when there is a photo hit the pad, one electron would go out and create a voltage signal, the difference between CCD and CMOS is shows up on their signal processing mechanism. Each pixel of CCD only transform photo to electron signal then process line by line, but each pixel of CMOS could process independently.
In Astrophotography, people more like to use CCD because it would produce less noise and record faster. Heat is the main cause of useless data pixel because the heat would mess up the voltage signal, since CMOS process data on every pixel it would cause huge heat during a long time capturing which is very common in astrophotography, but it is much better on CCD because of it process data line by line. However, to minimize the noise, some people would cool down their sensor, so there is something called cooled CCD, people cool their sensors (normally 253K) to prevent noise as possible as they can.
Spectrometer: Way to identify the chemical by its emitted light
Since the different color of light it the result of different elements jump orbitals, we could inversely use this property to identify the element combination of one object. Then, the spectrometer is invented. The grating spectrometer has the clearest mechanism with the cheap prize. It could separate one combined light source to spectrum distribution just like how Figure 7 shows.
When a continuous light goes through a cloud, most of the spectrum would not be absorbed, but if the cloud meets the light with a certain spectrum which has the energy just enough to let its atom jump excited, the cloud would absorb it. So when people on earth observe it, they would find there are some black gaps in the continuous spectrum.
Take our star as an example, it is a G-class star with yellow looking. The black lines called Fraunhofer lines. Here in figure 9, for example, D-line represent element sodium, G-line is iron and calcium, C-line is $H\\alpha$…etc.
History of Spectrometer
Newton seems to be the first one who builds a spectrometer, but he did not use that to build some vital theory theories.
Bunsen invented a better flame source called Bunsen’s burner, now is common in labs and Kirchhoff designed the rest of it. Later, when Kirchhoff used this spectrometer to observe the sun, he found there are several black lines with different brightness, he guesses that might cause by different elements, then he uses some flames made by different sources to prove this hypothesis, and he finds he was right. After proven his hypothesis, he graphed 570 lines to indicate different elements.
The reason why we doing this
Using spectrometer could help us discover what made the early universe, and how the universe evolved because the nebula is the birth room of stars, it has a similar condition like the early universe
To get a better image of deep space target, know their spectrum is very important because people could choose a more suitable filter to cut off other disturbing spectra to improve the image quality.
And the mountain is there, a human being must climb it or disappear in inner dispute history.
Conclusion
M42 is a very beautiful nebula which is the closest star birth room, the light goes through the gas cloud be to absorb some with a certain wavelength, comes to earth with absorption spectrum. The light is captured by sensors(mostly CCDs), because of the photoelectric effect, it is transformed to a voltage signal, finally become digit data, processed by computer become a gorgeous image. Knowing that helps people know more about the universe and how it formed, people use this information to take a better photo by selecting more suitable filters. Human being knows the world around it a little more, exited their fundamental curiosity, which finally becomes the reason people come out of the earth but not root on it.
Reference
Bringmann, T., Conrad, J., Cornell, J. M., Dal, L. A., Edsjö, J., Farmer, B., … & Scott, P. (2017). DarkBit: a GAMBIT module for computing dark matter observables and likelihoods. The European Physical Journal C, 77(12), 831.
. A stellar spectral flux library: 1150–25000 Å. Publications of the Astronomical Society of the Pacific, 110(749), 863.
(n.d.). Retrieved June 2, 2019, from, https://wps.prenhall.com/wps/media/objects/476/488316/ch09.html
H-alpha Emission. (2013, May 08). Retrieved June 5, 2019, from https://ismlandmarks.wordpress.com/h-alpha-emission/
Stefano, M. (n.d.). . Retrieved June 5, 2019, from http://meroli.web.cern.ch/lecture_cmos_vs_ccd_pixel_sensor.html
Anthony, S. (2013, October 02). Every color of the Sun’s rainbow: Why are there so many missing? Retrieved June 5, 2019, from https://www.extremetech.com/extreme/167878-every-color-of-the-suns-rainbow-why-are-there-so-many-missing
Spectrometer. (n.d.). Retrieved June 5, 2019, from https://zh.wikipedia.org/wiki/Spectrometer#/media/File:Spectrometer_schematic.gif
History of spectroscopy. (2019, March 04). Retrieved June 5, 2019, from https://en.wikipedia.org/wiki/History_of_spectroscopy#/media/File:Kirchhoffs_first_spectroscope.jpg
## “The Ring”
At the wide, white house, the “master” dressed in black, stands upon the platform. “Let’s celebrate the coming of our friends, shall we?” he says to the children before him. He is their father, but not by blood. The children, dressed in white, reply “yes” one by one, with planned perfect meticulous smiles.
“Sorry, sir, but what are we celebrating?”, the youngest little “stander,” said.
The “master” askance at him with unknown mean. Saying: “Oh, you people do not hear the history yet” “Surely, you must familiar with the story before the Cold War, because that’s the thing people want to tell”
“Why?”
“Because that is the period of good.”
“How bad after that? Are we live in good?”
“People work in Caltech found if they using quantum entanglement technique, they could transform one matter to gas in sudden, and affect matters near it, transform them to the same phase as it, they called it “Phase Assimilation” .”
“That’s powerful, isn’t it?”
“That’s why the Soviet Union let it become a weapon, and create a world we now live in”
The master points outside, the child’s line of insight follows that, ends at the giant ring.
“Do you know the Rings of Saturn?”
“Yes, the rings of Saturn are the most extensive ring system of any planet in the Solar System, that orbit about Saturn. The ring particles are made almost entirely of water ice, with a trace component of rocky material. And that may be formed by objects which come inside the Roche limit and split to pieces…”
“They all taught you in the class, right? But have you ever pay attention to this giant plate around our planet earth?”
The little child stays for silence for a few seconds, looking at the ring. Whispering by his heart: “how couldn’t I notice that elegant thing?”
It is a huge ring, spreads 300 kilometers wide on the ground on average, around the whole planet. At the start of its formation, the heavier matter of earth first be phase-changed, projectile to the most outside the Roche limit zone of the earth which are the edge of matters comes together or split to pieces, people call it “Heavy Hand”. At that time, the self-rotation axis changed by the unbalanced losing of self-mass, divide the world to two half by the ring, 20% of USSR left the to the side which located with the USA, people later called it the “United hemisphere”, and the rest of the USSR body locate at the another, called the “Communized hemisphere”. The mass of earth still losing due to the gravity force from the “Heavy Hand” and the Centrifugal net force created by the self-rotating, the gravity of the earth is not that strong anymore.
“Yes, Sir, No man-made things could ever come close to the ring…yet, so we disconnect with another hemisphere for 80 years”
“That’s why!” The “master” suddenly goes rage, his eyes extended to large with a crazy laugh, “We live in such buildings, white, immutable. And train you to workers, works for me! To recover the freaking space industry!” He keeps himself in silence for a second, then “I know you would not understand me. I born at the period of the Civil War, really great time. I chose to be a rocket scientist, and rocket chose me. I am 87 years old now, at that time, I am only 17, 17! I am old now, getting old in such a pathetic world. And you, you are all pathetic.
“Sorry, sir… I do not understand, what you mean “pathetic”?”
“You would not understand…forever” The master goes back to cold immediately, no one knows what he exactly thinking about.”
“Should they be called children?” The master stand by, think in his head “These standers are puppets, they could only be free in their “mother’s” wombs” “At the day I become master from rocket scientist, I visited the “Red House” for producing standers, all the things happened still embarked in my head.” “Children originally tend to play and do things they like, people using conditioning to control their future behaviors, if babies tried to play, they would play a very loud shocking noise, I was shocked by that, to be honest. Then these babies would not cry, because…I do not know what they experienced, but they just do cry, after that, they just stop play and stand up for the next instruction. They become the standers. What a “beautiful” world”
“I let you go outside for 35 minutes,” The master said with unexpected
“OK”
“Why? Why? Why? Why does he let me go out? Am I just said anything wrong, would I be killed? What would happen?” beta thinks in his head.
34 minutes, 37 seconds
Beta open the heavy white door, it is the first time he gets out of the house after he got five years ago. He was 2 at that time, now he is seven. The door open with the sound of old, apparently the master himself does not like to walk out.
The world is shockingly the same, Beta still remember there are some men dress in grey walking slowing when he was 2 on the street, but now, things seem to be just an older version of them, just more feeling of fade. People do not talk to each other. There is no brightness in their eyes. They are still walking dead.
“Why the master wants to go out? There are no interesting things, nothing new, just like five years ago”
17 minutes, 14 seconds
The ring still big, strong, even getting larger than before, would people from two sides meet? Do they want to meet?
“It is a beautiful, elegant”
5 minutes, 32 seconds before ship across the “giant hand”
All the people suddenly stand up, do not move, freeze, like old trees but with an unbelievable strong spirit, they are waiting for something, they look at the sky, the ring.
“What’s wrong? What are they doing?”
3…
2…
1…
Beta feel there is something shined a little. Then something he would never forget in his life happens.
People cry, with a smile on their face, old men, young people, grey, white, black…
They are all celebrating. They run to the grassland stand down, smile and cry in silence…
The world becomes different… become like 80 years ago
Beta go back to the “home”, he finds the master is smiling, saying: “Our ship met the ship from another side, and we fight…”
“Our friend is back, the long depression period ends, it is the new beautiful world”
Beta look up, look at the wall, there is a maxim on there:
THE WORLD KEEPS RUNNING, NOT BECAUSE OF PEACE AND LOVE, BUT ENDLESS DISPUTE AND COUNTERMINE
## Using Conditioning to Improve Teaching Efficiency
When a little young baby just born, learning occur at every moment the person lives on the earth, study let people become completed, also helps them to live in this society. One’s future is to decide by learning.
Teachers teach students? Right? But how many of these relationships could be in a highly efficient way? When you in school, how many times you see there are some students set away from the speaker, and do their things but not actually hear what does the person who is currently talking saying? When the people talking accompany with other people’s voice, he could be very tired because he needs to higher the frequency and the intensity of the voice.
So, it is vital to get all the students’ attention at one moment with low cause, but how? Even many strict teachers set roles to his students, after few times class, there always be some students not pay their attention to the teacher at the time they need.
I want to mention my AP Biology teacher, Ms. Rosales, she is the one who demonstrates the psychology tool is such a strong tool to apply in real life.
I still remember the first thing Ms. Rosales ask us to do is something called if…then. It really simple, if she hit the table or things in a certain rhyme, the students should response that with another rhyme for quite.
It looks very simple, and the reality also, but it really let the order of class improve in a massive way. I still remember when Ms. Rosales out of the school for some personal issue, there is a teacher called Ms. Romasa, well, she is kind of nice, but she do not have the what we now called “pa-pa” skill as Ms. Rosales has, mostly, when the class getting disorder and loud, she tries to take the control by yelling in a loud way, that really takes some time and spirit, and only when students think she is angry then they become quite a little. The class still goes on, but not that orderly.
There actually are some very deep and integrated mechanisms behind that. In psychology, what Ms. Rosales apply is called Classical conditioning. There is a very famous experiment did by Palov involved with his dog.
A dog naturally has the willingness to eat meat, so they saliva when they see meat, it is called unconditioned response because it is naturally happening. However, how could we let dog saliva without using meat, but something else, something like…the sound of the ring? The sound of the ring called natural stimuli because the sound of the ring itself has nothing to do with letting dong saliva. What Palov did is ring the ring when he shows the dog with meat, and repeated it for several times, then he finds when he only rings the bell, the dog continues to saliva without meat, the natural stimuli become conditioned stimuli.
What Ms. Rosales do to the students is a similar thing from the perspective of mechanism. At the first days Ms. Rosales began to teach us, she could be very serious because she keeps let students train that. Students naturally response as quiet and tractable when their teacher gets serious. So here teacher get serious is an unconditioned stimulus, and students become quite is an unconditioned response. And the “pa-pa” hits are just natural stimuli, it is true that students know what they should do when they hear it, but their body is not be conditioned yet which means there are not that many people would respond in a quick way. However, after one week’s training, the quantity change cause a quality change, students could response the “pa-pa” sign immediately without any delay, and they would be quiet and pay attention to the teacher.
However, it is not the end, there is something called generalization, which just means the subject would response similar stimuli with one conditioned response or response the stimuli with a similar response. In Palov’s dog experiment, the dog not only responds to the sound of the ring, but other things make a similar sound, like whistle or computer “bi-bi” sound.
Just like in the class, sometimes Ms. Rosales would make different rhyme by mistake, but student response that with same high speed. Also, students would not only be quiet and pay attention to the teacher but also turn down the laptop, iPad or other devices.
The key component here is repeated, if not, extinction happens, or we can say “forget”. If Palov does not train his dog in a certain curve, finally the dog would no saliva to the sound of the ring.
Ms. Romsas had taught us for a long time period, on the first day Ms. Rosales’s class, many students “forget” to respond to the “pa-pa” hit. They got extinction.
To hold the best curve of conditioning? Try to do it in random time gap and random frequency in densely to prevent the appearing of extinction.
Hope you can apply to your life successfully.
## Why The Interval to Graph a Trig Equation in Polar Differ?
Boys, set your calculator to polar mode, and draw a graph of
r=4sin(3θ), then set the first interval of [0,π] and the second one: [0, 2π]
They looks exactly the same, isn’t it?
And if we take of the integral of these two to find the area, for the graph with interval [0,π], area = 4π, but for the graph with [0,2π] the area is 8π, thus, which one is the “true” one we want to find?
I. The Form of Trig Polar Equation
For a trig polar equation, the form is looks like: r=asin(bθ) or cos etc…
a decide the “size of the graph”
b decide the number of “blades”
II. Odd number b and Even number b
The thing is vital :
*The number of blades = 2b
-For even number of b, the blades created are separated(here b=4, so there are 8 “blades”):
-For odd number b, the “blades” are covered with each other, take example of r=4sin(3θ) with [0, 2π], it actually graphed 6 blades, but each two are in the same position covered the another one, so we could only see three. If we do the integral to calculate its area, we get the area of 6 blades.
Thus, if we want to find the area of 3 blades, you can set the interval of [0, π], or just simply divide the area with interval [0, 2π] by two
It’s a very interesting trick dealing with polar equation.
P.S. : hpbsd for me~
{ 1 comment }
## If It Is Possible to Use Electromagnet to stabilize SpaceX’s Falcon
Landing has been a big problem to Falcon. To achieve the goal of recycling space rocket, SpaceX engineers calculate orbit by many factors, many times the problem is at the moment of landing, people could see the rocket is already on the landing pad on the ocean, but the rocket just not stable and fall down then blow up.
So, if that is possible to set a pad with electromagnet to stick the rocket while landing?
## Experiment Day#1 Crystallization and Oxidization of Bismuth
My team yesterday finished the first trail of the experiment.
Which is finding the relationship between temperature while bismuth being oxidize and color of the oxide layer
And the rule is: Low temperature is fully gold, as temperature goes high, it goes purple than blue.
|
2019-07-20 02:16:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 10, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4440893828868866, "perplexity": 1998.4902371492403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526401.41/warc/CC-MAIN-20190720004131-20190720030131-00247.warc.gz"}
|
http://math.eretrandre.org/tetrationforum/showthread.php?tid=502&pid=5173&mode=threaded
|
• 0 Vote(s) - 0 Average
• 1
• 2
• 3
• 4
• 5
closed form for regular superfunction expressed as a periodic function sheldonison Long Time Fellow Posts: 633 Threads: 22 Joined: Oct 2008 08/30/2010, 03:09 AM (This post was last modified: 08/30/2010, 03:33 AM by sheldonison.) (08/28/2010, 11:21 PM)tommy1729 Wrote: that seems efficient and intresting. in fact i doubt it hasnt been considered before ?Thanks Tommy! I assume it has been considered, and probably calculated before. I think Kneser developed the complex periodic regular tetration for base e, and probably would've generated the coefficients. But I haven't seen them before. Perhaps Henryk (or someone else) could comment??? I figured out the closed form equation for a couple more terms, and I have an equation that should generate the other terms, but I'm still working it, literally as I write this post! $a_2 = (1/2)/(L - 1)$ $a_3 = (1/6 + a_2)/(L*L - 1)$ $a_4 = (1/24 + (1/2)*a_2*a_2 + (1/2)*a_2 + a_3)/(L*L*L-1)$ What I did is start with the equation: $\text{RegularSuperf}(z) = \sum_{n=0}^{\infty}a_nL^{nz}$ and set it equal to the equation $\text{RegularSuperf}(z) = \exp{(\text{RegularSuperf}(z-1))}$ Continuing, there is a bit of trickery in this step to keep the equations in terms of $L^{nz}$, instead of in terms of $L^{n(z-1)}$. Notice that $L^{n(z-1)}=L^{(nz-n)}=L^{-n}L^{nz}$. $\text{RegularSuperf}(z) = \exp{(\text{RegularSuperf}(z-1))} = \exp{( \sum_{n=0}^{\infty}\exp^{(L^{-n}a_nL^{nz})})}$ This becomes a product, with $a_0=L$ and $a_1=1$ $\text{RegularSuperf}(z) = \prod_{n=0}^{\infty} \exp{(L^{-n}a_nL^{nz})}$ The goal is to get an equation in terms of $L^{nz}$ on both sides of the equation. Then I had a breakthrough, while I was typing this post!!!! The breakthrough is to set $y=L^z$, and rewrite all of the equations in terms of y! This wraps the 2Pi*I/L cyclic Fourier series around the unit circle, as an analytic function in terms of y, which greatly simplifies the equations, and also helps to justify the equations. $\text{RegularSuperf}(z) = \sum_{n=0}^{\infty}a_ny^n = \prod_{n=0}^{\infty} \exp{(L^{-n}a_ny^n)}$ The next step is to expand the individual Tayler series for the $\exp {(L^{-n}a_ny^n)}$, and multiply them all together (which gets a little messy, but remember a0=L and a1=1), and finally equate the terms in $y^n$ on the left hand side equation with those on the right hand side equation, and solve for the individual $a_n$ coefficients. Anyway, the equations match the numerical results. I'll fill in the Tayler series substitution next time; this post is already much more detailed then I thought it was going to be! I figured a lot of this out as I typed this post! - Sheldon « Next Oldest | Next Newest »
Messages In This Thread closed form for regular superfunction expressed as a periodic function - by sheldonison - 08/27/2010, 02:09 PM RE: regular superfunction expressed as a periodic function - by sheldonison - 08/28/2010, 08:44 PM RE: regular superfunction expressed as a periodic function - by tommy1729 - 08/28/2010, 11:21 PM RE: regular superfunction expressed as a periodic function - by sheldonison - 08/30/2010, 03:09 AM RE: regular superfunction expressed as a periodic function - by Gottfried - 08/30/2010, 09:22 AM RE: regular superfunction expressed as a periodic function - by tommy1729 - 08/30/2010, 09:41 AM RE: regular superfunction expressed as a periodic function - by tommy1729 - 08/30/2010, 09:46 AM RE: regular superfunction expressed as a periodic function - by Gottfried - 08/31/2010, 08:34 PM RE: closed form for regular superfunction expressed as a periodic function - by tommy1729 - 09/03/2010, 08:19 PM RE: closed form for regular superfunction expressed as a periodic function - by sheldonison - 09/05/2010, 05:36 AM RE: closed form for regular superfunction expressed as a periodic function - by tommy1729 - 09/05/2010, 04:45 PM RE: closed form for regular superfunction expressed as a periodic function - by sheldonison - 09/07/2010, 03:54 PM RE: closed form for regular superfunction expressed as a periodic function - by tommy1729 - 09/07/2010, 07:46 PM RE: closed form for regular superfunction expressed as a periodic function - by bo198214 - 09/08/2010, 06:03 AM RE: closed form for regular superfunction expressed as a periodic function - by tommy1729 - 09/08/2010, 06:55 PM RE: closed form for regular superfunction expressed as a periodic function - by bo198214 - 09/09/2010, 10:12 AM RE: closed form for regular superfunction expressed as a periodic function - by tommy1729 - 09/09/2010, 10:18 PM
Possibly Related Threads... Thread Author Replies Views Last Post New mathematical object - hyperanalytic function arybnikov 4 223 01/02/2020, 01:38 AM Last Post: arybnikov Half-iterates and periodic stuff , my mod method [2019] tommy1729 0 346 09/09/2019, 10:55 PM Last Post: tommy1729 Is there a function space for tetration? Chenjesu 0 457 06/23/2019, 08:24 PM Last Post: Chenjesu Degamma function Xorter 0 903 10/22/2018, 11:29 AM Last Post: Xorter Periodic analytic iterations by Riemann mapping tommy1729 1 2,253 03/05/2016, 10:07 PM Last Post: tommy1729 Should tetration be a multivalued function? marraco 17 16,911 01/14/2016, 04:24 AM Last Post: marraco Introducing new special function : Lambert_t(z,r) tommy1729 2 3,661 01/10/2016, 06:14 PM Last Post: tommy1729 Natural cyclic superfunction tommy1729 3 3,064 12/08/2015, 12:09 AM Last Post: tommy1729 Tommy-Mandelbrot function tommy1729 0 1,981 04/21/2015, 01:02 PM Last Post: tommy1729 Can sexp(z) be periodic ?? tommy1729 2 3,880 01/14/2015, 01:19 PM Last Post: tommy1729
Users browsing this thread: 2 Guest(s)
|
2020-01-28 14:48:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 18, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8759695291519165, "perplexity": 2812.179051389874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251778272.69/warc/CC-MAIN-20200128122813-20200128152813-00398.warc.gz"}
|
https://www.physicsforums.com/threads/time-dependent-schrodinger-equation-for-many-particles.619626/
|
# Time-dependent Schrodinger equation for many particles
1. Jul 9, 2012
### AxiomOfChoice
If you've got, say, three particles, then the time-dependent Schrodinger equation (in units where $\hbar = 1$) for the system reads
$$i \frac{\partial \psi}{\partial t} = -\sum_{i=1}^3 \frac{1}{2m_i} \Delta_i \psi + \sum_{i<j} V(r_i - r_j)\psi,$$
right? And of course $\psi = \psi(r_1,r_2,r_3;t)$. But there isn't just ONE solution to this equation, right? There are MANY. And don't they correspond to, say, all particles being independent for large times, or one particle bound to another and the remaining one free, etc.? And I'm guessing this is at the heart of scattering theory - kind of examining the variety of long-time behaviors that can be exhibited in this case. Do I have this right?
Last edited: Jul 9, 2012
2. Jul 9, 2012
### jfy4
Yes that's right. The wave function as $t\rightarrow \pm \infty$ and it's corresponding probabilities are what we can measure. Not only $t$, but also as $r\rightarrow \infty$ which in a collider experiment is on the order of meters.
|
2017-08-21 04:33:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8938763737678528, "perplexity": 430.0969763569539}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107490.42/warc/CC-MAIN-20170821041654-20170821061654-00350.warc.gz"}
|
https://support.bioconductor.org/p/125489/
|
Question: Further clarification on when not to use duplicateCorrelation with technical replicates (RNA-seq)
1
9 weeks ago by
paul.alto50
paul.alto50 wrote:
After reading the limma manual and paper and several posts about using duplicateCorrelation with technical replicates mixed with biological replicates, I am still unsure when to use it (and why not use it).
The 2015 limma paper says about duplicateCorrelation: "More generally, the same idea is also used to model the correlation between related RNA samples, for example repeated measures on the same individual or RNA samples collected at the same time."
The duplicateCorrelation help in the limma R package says: Estimate the correlation between duplicate spots (regularly spaced replicate spots on the same array) or between technical replicates from a series of arrays.
However, several posts here
suggest not using duplicateCorrelation in the designs proposed and instead pooling technical replicates.
In this thread ( https://support.bioconductor.org/p/86867 ), Aaron Lun says that duplicateCorrelation "does better when you have samples across a large number of levels of the blocking factor". So should duplicateCorrelation be used when mixing biological and technical replicates, but only when there is a minimum number of samples/replicates/levels? If so, what are the minimums that should be observed?
modified 8 weeks ago by Aaron Lun25k • written 9 weeks ago by paul.alto50
FYI by definition biological samples from different individuals/subjects are also biological replicates. If you have e.g. multiple biological samples per subject then that is a repeated measures design that you would use duplicateCorrelation on. A repeated measures design is any where you have multiple correlated biological samples per higher-level biological unit.
Answer: Further clarification on when not to use duplicateCorrelation with technical rep
2
9 weeks ago by
Gordon Smyth39k
Walter and Eliza Hall Institute of Medical Research, Melbourne, Australia
Gordon Smyth39k wrote:
So should duplicateCorrelation be used when mixing biological and technical replicates, but only when there is a minimum number of samples/replicates/levels?
Sure, treating factor effects as random often makes more sense when the number of levels is larger, but there is no minimum number. You can apply duplicateCorrelation with only two blocks, and there examples of this in the User's Guide.
Judging from your previous question that Aaron answered, you don't actually have technical replicates at all. If you really did have pure technical replicates (sequencing the same RNA samples twice) then you would normally just sum the counts using edgeR::sumTechReps. There is an infinite variety of designs and an infinite spectrum of "semi" technical replicates that may be strongly or weakly correlated, so it is impossible to give a universal rule that covers all cases. When we advised against duplicateCorrelation in previous posts there was always an alternative, and we gave a reason for choosing the alternative.
Thank you for your reply. In my case, if I don't have real technical replicates, should I still pool them (RNA-seq) or use duplicateCorrelation?
Answer: Further clarification on when not to use duplicateCorrelation with technical rep
2
8 weeks ago by
Aaron Lun25k
Cambridge, United Kingdom
Aaron Lun25k wrote:
As Gordon suggests, the diversity of possible designs makes it difficult to suggest a hard-and-fast rule. Nonetheless, here are some thoughts:
Technical replicates: If these are generated by literally sequencing the same sample multiple times (e.g., on different lanes), just add them together and treat the resulting sum as a single sample.
Not-quite-technical replicates: These are usually things like "we took multiple samples from the same donor", so they're not fully fledged biological replicates but they aren't totally technical either. In most cases, I would just add them together and move on because I don't care about capturing the variability within levels of the blocking factor. For example, if biopsies are variable within a patient but the average expression across multiple biopsies is consistent across patients, then the latter is all I care about. ~~On the other hand, if I did expect the repeated samples to be similar, I would want to penalize genes that exhibit variation between them, so I'd like to capture that variation with duplicateCorrelation.~~ (Update: see comment below.)
Also, when adding, it is better that each repeated sample contributes evenly to the sum for a particular blocking level; this gives you a more stable sum and thus lower across-level variance. It may also be wise to use voomWithQualityWeights to adjust for differences in the number of repeated samples per donor.
Repeated samples with different uninteresting predictors: This refers to situations where repeated samples do not have the same set of predictors in the design matrix, e.g., because some repeated samples were processed in a different batch. If the repeated samples for each blocking level have the same pattern of values for those predictors (e.g., each blocking level has one repeated sample in each of three batches), summation is still possible. However, in general, this is not the case and then duplicateCorrelation must be used.
Repeated samples with different interesting predictors: This refers to situations where repeated samples do not have the same set of predictors in the design matrix, because those predictors are interesting and their effects are to be tested. The archetypical example would be to collect samples before and after treatment for each patient. Here, we can either use duplicateCorrelation or we can block on the uninteresting factors in the design matrix. I prefer the latter as it avoids a few assumptions of the former, namely that all genes have the same consensus correlation. (There's also an assumption about the distribution of the random effect, but I can't remember what it was - maybe normal i.i.d.) However, duplicateCorrelation is more general and is the only solution when you want to compare across blocking levels, e.g., comparing diseased and healthy donors when each donor also contributes before/after treatment samples.
Thanks for your reply, Aaron. You summary is very helpful. In the "Not-quite-technical replicates" scenario, my reasoning was the opposite from yours. I thought that if the replicates are expected to be similar, then I would treat them as "technical replicates" and if they are expected to be variable but still more similar than samples from a different individual, then duplicateCorrelation would correct for the "excess" similarity of the "not-quite-technical replicates". Is my reasoning flawed?
Yes, I can see how the comment was misleading, so I've reworded the answer.
My point was more about what you want to see in the DE genes that the analysis will detect. Would you be happy with DE genes that are highly variable between repeated samples, as long as they are consistent across biological replicates? If this is fine, then you don't want to model the variability between samples, and summation makes sense to mask that variability. One example would be single-cell data analysis where you might not care about cell-to-cell variability as long as the response at the population level was consistent.
I also forgot that duplicateCorrelation-based p-values don't actually penalize genes with strong variation between repeated samples; the relevant terms cancel out at some point, so my comment above was wrong and there's no benefit in that respect. Thus, it boils down to the speed and relatively assumption-free nature of summation versus the power improvement from having more samples when using duplicateCorrelation. I prefer the former.
|
2019-12-12 02:25:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47997650504112244, "perplexity": 1269.5035508274614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540534443.68/warc/CC-MAIN-20191212000437-20191212024437-00116.warc.gz"}
|
https://community.powerbi.com/t5/Desktop/Get-rid-of-arrow-in-table-column-header/m-p/150882/highlight/true
|
cancel
Showing results for
Did you mean:
Highlighted
Visitor
## Get rid of arrow in table column header?
Is it possible to remove or hide the arrows that show up in the column headers on a table?
For example, below it does not show up for Metrics Health but does for Metrics Status:
Moderator
## Re: Get rid of arrow in table column header?
@lliu16 wrote:
Is it possible to remove or hide the arrows that show up in the column headers on a table?
For example, below it does not show up for Metrics Health but does for Metrics Status:
@lliu16
Based on my test, in the table visual, by default the arrow doesn't show up unless you sort on the column. I don't find any way to hide the arrow after it apears but you can try to remove and re-add that field to the table.
|
2019-05-25 12:20:09
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8335195183753967, "perplexity": 1144.3806878540058}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258003.30/warc/CC-MAIN-20190525104725-20190525130725-00520.warc.gz"}
|
https://cob.silverchair.com/jeb/article/82/1/197/22822/Biomodal-Gas-Exchange-During-Variation-in
|
Gas exchange in the gourami, Trichogaster trichopterus, an obligate air breather, is achieved both by branchial exchange with water and aerial exchange via labyrinth organs lying within the suprabranchial chamber.
Ventilation of the suprabranchial chamber, , gas exchange ratios of both gills and labyrinth organs, and air convection requirements have been measured under conditions of hypoxia, hyperoxia or hypercapnia in either water or air.
In undisturbed fish in control conditions (27 °C), air breathing frequency was 12 breaths/h, gas tidal volume 30 μl /g, total oxygen uptake 5· 2 μ/g/h and total carbon dioxide excretion 4·1 μ/g/h, indicating a total gas exchange ratio of approximately 0·8. The aerial labyrinth organs accounted for 40% of oxygen uptake but only 15% of carbon dioxide elimination.
Hypoxia, in either inspired water or air, stimulated air breathing. Total was continuously maintained at or above control levels by an augmentation of oxygen uptake by the labyrinth during aquatic hypoxia or by the gills during aerial hypoxia. Hypoxia had no effect on partitioning between air and water. Hypercapnia in water greatly stimulated air breathing. About 60 % of total then occurred via aerial excretion, a situation unusual among air breathing fish, enabling the overall gas exchange to remain at control levels. Aerial hypercapnia had no effect on air breathing or O2 partitioning, but resulted in a net aerial CO2 uptake and a decrease in overall gas exchange ratio.
Trichogaster is thus an air breathing fish which is able to maintain a respiratory homeostasis under varying environmental conditions by exploiting whichever respiratory medium at a particular time is the most effective for O2 uptake and CO2 elimination.
Most interest in the physiology of the air breathing fishes has centred upon the relative gas exchanging performances of the gills and of the aerial exchange organ, which may be a buccopharyngeal structure, opercular cavity and gill elaborations, a modified swimbladder or combinations of these and other organs (see Johansen, 1970; Munshi, 1976 for reviews). Partitioning of O2 and CO2 transfer between air and water is highly variable between species, and is a function of gill and aerial organ surface area, blood-water and blood-air diffusion barriers, the ventilation-perfusion ratios of the individual organs and subunits, and the ability of the respective respiratory media to serve as an O2 source and CO2 sink.
The variability and extremes on both a seasonal and daily basis of respiratory gas pressures in stagnant water habitats of air breathing fishes are well documented (Dehadrai & Tripathi, 1976), and are often advanced as major factors in the adap- tational significance of bimodal breathing in fishes. Yet, it is not clear for many air breathing fish whether marked changes in the performance of the various respiratory organs can occur to fully or partially compensate for gas exchange in a hypoxic or hypercapnic medium. That is, can air breathing fish maintain a respiratory homeostasis in a dynamic environment by turning to whichever respiratory medium is at that time most effective for O2 uptake and CO2 excretion, or is the respiratory performance of their gas exchange organs fixed within fairly broad physiological and morphological limits? Ventilation of both the gills and air breathing organs of some air breathing fish have been shown to change, often disproportionately, after experimental hypoxic or hypercapnic exposure in one of the two respiratory media (see Hughes & Singh, 1970a; Wood & Lenfant, 1976; Farrell & Randall, 1978). However, a comprehensive investigation of actual gas exchange partitioning between respiratory organs and its quantitive relationship to tidal air ventilation and its control has not been made for a single species under regulated and varying environmental conditions.
The present investigation reports on bimodal gas exchange in the blue or open gourami, Trichogaster trichopterus. Trichogaster, like Anabas, Macropodus, and Betta, is an Anabantidae, whose air breathing modifications typically take the form of so-called labyrinth organs. These organs derive from the epibranchial regions of the first and second branchial arches, and extend dorsally as shelly, plate-like organs to fill the upper region of the opercular cavity, or suprabranchial chamber (Munshi, 1968; Peters, 1978). Fig 1 depicts the organization of the circulation in the gourami. All efferent blood from gill arches 1 and 2 enters the labyrinth complex, and all oxygenated blood draining the labyrinth joins into the jugular veins and flows on to the heart (Henninger, 1907; Munshi, 1968). Hence, blood in the ventral aorta is partially oxygenated, a condition typical of many air breathing fish (Satchell, 1976). Only the efferent vessels of gill arches 3 and 4 merge to form the dorsal aorta. Gill arches 1 and 2 in most Anabantidae are large and are fully developed, whereas gill arches 3 and particularly 4 are much reduced in size and filament numbers (Munshi, 1968 ; Burggren, unpublished).
Fig. 1.
Diagrammatic representation of the circulation of the air breathing fish Trichogaster trichopterus.
Fig. 1.
Diagrammatic representation of the circulation of the air breathing fish Trichogaster trichopterus.
The gourami continuously ventilates its gills with water, and never ventures onto land. However, it is an obligate air breather at temperatures above 20–25 °C (Das, 1927; Burggren, unpublished) and will quickly show signs of distress if denied access to air.
Experiments were performed on 15 adult blue gouramis, Trichogaster trichopterus (mean mass 7·97± 1·87 g), which either were hatched and reared in the laboratory or obtained from local suppliers. The fish were maintained at 27 °C in de-ionized Vancouver tap water for at least 1 month before experimentation. All experiments were carried out at 27° ± 1/2 °C on fish which had been fasting for at least 24 h.
Air breathing frequency and expired and inspired gas volumes were determined in a miniaturized, adapted version of an apparatus described by Lomholt & Johansen (1974). Individual fish were placed in a glass vessel (vol 931 ml), the top of which consisted of an inverted funnel open to the atmosphere. When the vessel was filled with water, surface access for air breathing was limited to an area 10-12 mm in diameter in the stem of the funnel. Upon surfacing, the inspiration of gas caused an increase or decrease, respectively, in the volume of water displaced by the fish. Since the funnel lumen was the sole opening of the water chamber to the atmosphere, breathing movements caused a displacement of water up or down the funnel stem (and hence a change in the hydrostatic pressure head) which was proportional to tidal volume. A Grass PT4 A volumetric pressure transducer writing out on a Bausch and Lomb VOM 7 chart recorder was connected via a cannula to the water chamber. The internal membrane in this transducer is extremely compliant compared to conventional pressure transducers, and produces a very large output signal in response to small changes in volume. Thus, while changes in the pressure head of the system produced by emptying and filling of the fish’s suprabranchial chamber were too small to be accurately measured by conventional means, these small pressure heads caused comparatively large volume changes in the volumetric transducer and hence in its output signal. The apparatus was calibrated for volume changes by using a syringe to simply add or withdraw a known volume of water from the chamber. Calibration lines were constructed by producing volume changes from 0 to 300 μl in 20 μl increments. Volume changes as small as 5 μl could be detected with this transducer, so the 75– 500 μl tidal volumes of the gourami could be readily measured. An overestimation of tidal volume by this technique would result if any portion of the fish’s head or body was raised above the surface during the ventilatory act. In undisturbed gouramis, however, only the mouth breaks the water meniscus (Peters, 1978) so errors in tidal volume from this source were negligible. The water chamber contained a magnetic stirring bar, and was immersed in a thermostatted water bath screened from the investigator. Fish were allowed to acclimate to the apparatus for at least 18 h before experiments were begun.
Air or gas mixtures delivered from a Wösthoff gas mixing pump continuously ventilated the funnel mouth. Aquatic gas partial pressures were controlled by bubbling appropriate gas mixtures through the water. Because of the very small diameter of the surface access hole and the extreme sensitivity of the transducer to changes in water level, gas mixtures could not be bubbled through the water during actual periods of ventilation monitoring. However, the water volume was sufficiently large and the individual 1 h monitoring periods short enough so that large changes in water gas tensions due to aquatic respiration did not occur during the course of the experiments (see below). The air-water interface had a very small surface area and control measurements in the apparatus without the presence of a fish revealed that significant diffusion of respiratory gases across this interface did not develop during the measurement periods.
In a second series of experiments oxygen uptake and carbon dioxide excretion in μ M/g/h by both the gills (plus any skin contribution) and the labyrinth organs of individual fish were simultaneously determined. An air-tight glass vessel (vol. 255 ml) was fitted over the inverted funnel at the top of the water chamber. By measuring the changes in gas partial pressures of both the liquid and gas phase occurring in this closed system during bimodal gas exchange by a gourami, total and as well as their partitioning between gas and water were calculated. Water and air from the respirometer were sampled with gas-tight glass syringes, and gas partial pressures in both respiratory media were determined with Radiometer O2 and CO2 electrodes connected to a modified Radiometer 27 or Beckman 160 gas analyser.
Respirometry of this type can be problematic for several reasons. If respiration by the fish is allowed to continue for a long period of time in a closed system such that large and easily measurable changes in gas partial pressures occur, then the changing quality of the respiratory media may begin to affect directly respiratory performance. Thus, partial pressure changes in the respiratory media should be kept small. The capacitances of water and air for CO2 and air for O2 are large, while the O2 capacitance for water is small. Hence, disproportionate changes in O2 and CO2 partial pressures will occur, especially with large imbalances in gas exchange partitioning by the fish. With the above considerations in mind, 1 h periods of bimodal respiration were chosen for the experimental measurement period, while in between periods the respirometer was left open and the water equilibrated with air. A 1 h period of closed respirometery during bimodal respiration usually resulted in an average decrease in of 15–25 mmHg in water and 1–2 mmHg in gas, and an average increase in of 0·6–1·2 mmHg in water and 0·2–0·4 mmHg in gas.
To measure accurately these very small changes, the scales on the gas analysers were considerably expanded. The outputs from Wösthoff gas mixing pumps were cascaded to provide humidified calibration gases in 0·5 mmHg partial pressure increments, and the electrodes were calibrated between each hour measurement period. The accurate measurement of small differences in , is particularly difficult, and can be complicated by the properties of the gas electrodes themselves (Boutilier et al. 1978). Several electrodes and electrode solutions were considered before one of exceptional stability and responsiveness was located. This electrode was fitted with a Teflon rather than silicon rubber membrane for additional stability. Further sources of error in measurement can arise from conversion of , to CO2 content using a solubility factor for distilled water if the fresh water contains carbonates (Dejours, Armand & Verriest, 1968). However, total alkalinity as CaCO3 in the tap water used in all experiments was less than 2·5 ppm, so the solubility values given for carbonate-free water were used in all calculations.
Labyrinth ventilation frequency and tidal volume were determined over a range of (1) either water or gas hypoxia and hyperoxia (37, 75, 150 and > 600 mmHg) and (2) either water or gas hypercapnia (o, 15, 30 and 45 mmHg). and were determined at a set level of gas or water hypoxia or hypercapnia . In each experiment, unless specified, at least air was available for labyrinth ventilation or the water ventilating the gills was air-equilibrated ; ie. fish were not simultaneously exposed to hypoxia and/or hypercapnia in both respiratory media.
Significance levels of all data were assessed with Student’s t test for independent means, and fudicial level of P<0·01 was chosen for differences of means.
Undisturbed Trichogaster trichopterus under control conditions surfaced to breathe approximately once every 4–6 min (Fig. 2), with a mean apnoea length of 4·7 ± 2·2 min. The ventilatory act consisted first of breaking the water surface with the mouth, followed immediately by the expiration of gas from the suprabranchial chambers. After rapidly inspiring the fish left the surface, the entire process of labyrinth ventilation requiring less than sec and often taking only sec. In normoxic conditions Trichogaster never took more than a single breath before leaving the water surface. The reader is referred to Peters (1978) for details of ventilation mechanics in the gourami. Inspired volumes during control conditions were approximately 27–32 μl /g (Figs. 3 and 4), and were 11–15% greater than expired volumes. Considerable differences in suprabranchial chamber gas volume between sucessive interbreath intervals occurred, as evidenced in the variation in the base-line volume level at the start of different expirations (Fig. 2). The progressive change toward a smaller suprabranchial chamber volume during the interbreath interval resulted from a proportionately greater labyrinth oxygen removal than carbon dioxide addition (see below). All values of suprabranchial chamber tidal volume (Vs in μl gas/g) reported below have been calculated on the basis of inspired gas volumes, measured as the volume change from the point of maximum expiration to the point at which inspiration was terminated (Fig. 2).
Fig. 2.
Representative records of suprabranchial chamber air ventilation during control con- ditions in an 8·3 g Trichogaster trichopterus. Time marker in minute intervals.
Fig. 2.
Representative records of suprabranchial chamber air ventilation during control con- ditions in an 8·3 g Trichogaster trichopterus. Time marker in minute intervals.
Fig. 3.
Effect of changes in inspired water or air oxygen partial pressure on air breathing frequency and suprabranchial chamber tidal volume. Mean values ± 1 s.E. determined in seven fish are given. Where mean values are significantly different from control levels a single asterisk (0·025 <P< 0·05) or double asterisk (P< 0·025) indicates the level of significance.
Fig. 3.
Effect of changes in inspired water or air oxygen partial pressure on air breathing frequency and suprabranchial chamber tidal volume. Mean values ± 1 s.E. determined in seven fish are given. Where mean values are significantly different from control levels a single asterisk (0·025 <P< 0·05) or double asterisk (P< 0·025) indicates the level of significance.
Fig. 4.
Effect of changes in inspired water or air carbon dioxide partial pressure on air breathing frequency and suprabranchial tidal volume. Mean values ± 1 s.E. determined in seven fish are given. Significance coding as described in legend to Fig. 3.
Fig. 4.
Effect of changes in inspired water or air carbon dioxide partial pressure on air breathing frequency and suprabranchial tidal volume. Mean values ± 1 s.E. determined in seven fish are given. Significance coding as described in legend to Fig. 3.
### Aerial ventilation responses
#### (1) Hypoxia and hyperoxia
A reduction in the inspired oxygen partial pressure of either gas or water caused a stimulation of air breathing, particularly at the lowest oxygen levels (Figs. 3, 5). At a water of 37 mmHg breathing rate was more than double control levels, and was almost trebled when gas at this same partial pressure was inspired. Although severe hypoxia proved a stimulus to aerial breathing frequency, gas tidal volume of the suprabranchial chamber showed no significant change even with the most profound hyperoxic conditions in either water or air. Total ventilation of the suprabranchial chambers increased slightly as the of water or air fell to approximately 75 mmHg, and then at progressively lower levels of O2 began to increase at a greater rate, particularly when the suprabranchial chamber was being ventilated with hypoxic gas.
Fig. 5.
Effect on suprabranchial chamber ventilation of changes in inspired water or gas oxygen and carbon dioxide partial pressures. Mean values ± 1 s.E. determined in seven fish are given. Significance coding as described in legend to Fig. 3.
Fig. 5.
Effect on suprabranchial chamber ventilation of changes in inspired water or gas oxygen and carbon dioxide partial pressures. Mean values ± 1 s.E. determined in seven fish are given. Significance coding as described in legend to Fig. 3.
The inspiration of hyperoxic gas produced a significant reduction in aerial breathing frequency from normoxic levels, whereas a non-significant change accompanied the inspiration of hyperoxic water (Fig. 3).Vs fell slightly during water hyperoxia, but did not change significantly during gas hyperoxia. The net effect was that decreased by one-half during the inspiration of hyperoxic gas, but showed a non-significant reduction during irrigation of the gills with hyperoxic water.
#### (2) Hypercapnia
The inspiration of hypercapnic gas had no effect on ventilation of the labyrinth organs in Trichogaster until a of approximately 30 mmHg had been reached (Figs. 4, 5). At and above this level of hypercapnia significant but small increases in breathing frequency occurred. Vs showed no significant changes during progressive hypercapnia, and consequently total suprabranchial chamber ventilation volume remained unchanged from control levels even during the inspiration of gas with a as high as 45 mmHg (Fig. 4). In contrast, ventilation of the gills with only mildly hypercapnic water caused a profound increase in air breathing frequency, and at a water of 45 mmHg breathing frequency had increased over control values by nearly 700% (Fig. 4). Vs was unchanged from control levels even at high levels of CO2 in the water, but as a consequence of the enormous increase in breathing frequency, more than doubled with every 15 mmHg increase in water (Fig. 5).
#### (3) Hypoxic and hyperoxic hypercapnia
The inspiration of water with both a of 60 mmHg and of 15 mmHg greatly stimulated air breathing in Trichogaster. under these conditions was 1672 ± 360 μl /g/h, compared to 550 and 650μl /g/h during comparative levels of solely water hypoxia or hypercapnia, respectively. This increase in during hypoxic hypercapnia was largely the product of an increase in breathing rate to over 60 breaths/h.
Experiments were designed to test the effect on ventilation of the suprabranchial chamber produced by an increase in water to 29–33 mmHg, both when fish had free access to air, and then during the inspiration of pure oxygen gas and water with a of greater than 600 mmHg. Whereas aquatic hypercapnia alone produced a very large increase in air ventilation from 387188 to 1979 + 400 μ/g/h (n = 4, ±S.E.), when very high O2 levels in air and water attended aquatic hypercapnia, 394 + 37 μl /g/h/, was not significantly changed from control values.
### Air-water partitioning of gas transfer
#### (1) Oxygen
Undisturbed Trichogaster under normoxic conditions consumed oxygen at a rate of approximately 5·3 μM-O2g/h at 27°C (Table 1). Of this total , approximately 42% was accounted for by the labyrinth organs, the remaining 58% of oxygen uptake arising almost entirely from gas exchange by the gills (Fig. 5). (The skin of the gourami is covered in relatively coarse, thick scales, and accounts for only about 10% of total aquatic exchange even in air exposed fish (Burggren & Haswell, 1979). Therefore it is reasonable to assume that most aquatic gas transfer is occurring across the branchial membranes.) When water was reduced to 56 + 6 mmHg, while air was still available for ventilation of the labyrinth organs, no significant reduction in total of Trichogaster occurred (Table 1). Under these conditions, however, the gills accounted for only 30 % of the oxygen uptake, with the labyrinth thus assuming the role of the major oxygen exchanging organ. When instead, the of the inspired gas rather than inspired water was reduced to 54 ± 3 mmHg, there again was no significant change in total , but the labyrinth organs of Trichogaster now accounted for less than 15% of total oxygen uptake (Fig. 6).
TABLE 1.
Oxygen uptake and carbon dioxide elimination (μ gas/g body w/h) and the gas exchange ratio of the gills and the labyrinth organs of the gourami, Trichogaster trichopterus at 27°C
Fig. 6.
Partitioning of $M˙O2$, and $M˙CO2$, between gills and labyrinth in Trichogaster trichopterus during different levels of $PO2$, and $PCO2$, in air and water. See text for details of levels of experimental hypercapnia and hypoxia. Mean values ± 1 s.E. were determined in seven fish. Significance coding as described in legend to Fig. 3.
Fig. 6.
Partitioning of $M˙O2$, and $M˙CO2$, between gills and labyrinth in Trichogaster trichopterus during different levels of $PO2$, and $PCO2$, in air and water. See text for details of levels of experimental hypercapnia and hypoxia. Mean values ± 1 s.E. were determined in seven fish. Significance coding as described in legend to Fig. 3.
The distribution of oxygen uptake between the gills and labyrinth evident during Control (normoxic) conditions was independent of increases in the of inspired water or air (Fig. 6). Total oxygen uptake by Trichogaster was significantly elevated during the inspiration of hypercapnic water, presumably reflecting the energetic costs of a greatly increased air breathing frequency (Fig. 4) and necessary movement through the water column which occurs under these conditions.
#### (2) Carbon dioxide
Total of Trichogaster under control conditions was approximately 4·1 μM- CO2/g/h (Table 1). Of this total , approximately 85 % was excreted into the water via the gills. Under control conditions then, the gas exchange ratio for the gills (Rg) was 1·20 compared to a gas exchange ratio for the labyrinth (Rl) of only 0·25. The labyrinth thus serves primarily as an organ of oxygen uptake during normoxia. The overall gas exchange ratio (Rtotal) for undisturbed Trichogaster under control conditions was 0·79 (Table 1).
The inspiration of hypercapnic water was accompanied by a considerable redistribution partitioning between air and water. Aquatic CO2 excretion fell to less than 40% of total (Fig. 5). Consequently, during aquatic hypercapnia Rlab increased to approximately 1·1 as the labyrinth became the major organ of CO2 excretion (Table 1). The inspiration of hypercapnic water was accompanied by a significant increase in . Rather than indicating an uptake of CO2 from the water, this increase was the result of a rise in metabolic rate. This is evident from the facts that was also elevated, probably reflecting the increased labyrinth ventilation effort (Fig. 5), and that there was no significant change in the overall gas exchange ratio during 1 h of aquatic hypercapnia (Table 1). Ventilation of the suprabranchial chamber with hypercapnic gas , although eliciting no changes in produced a reversal in the direction of CO2 movement across labyrinth membranes. A net uptake of 1·5 μ-CO2g/h from the labyrinth gas into the blood occurred under these conditions, but branchial and Rg rose significantly above control levels, and the overall gas exchange ratio of Trichogaster showed only a non-significant decrease (Table 1).
The partitioning of in the gourami was not influenced during the inspiration of hypoxic water or gas (Fig. 6). While either hypoxia or hypercapnia may indirectly influence overall gas transfer through changes in metabolic rate, O2 and CO2 partitioning between gills and the labyrinth organs appear to change independently of each other.
### Labyrinth convection requirement
The relationship between the air convection requirement of the labyrinth organs and suprabranchial chamber ventilation under different experimental conditions is shown for five Trichogaster trichopterus in Fig. 7. The air convection requirement under control conditions was approximately 100–220 Μl air/μ-O2 consumed. The air convection requirement of the labyrinth during the’inspiration of hypercapnic gas , which produced no change in , remained unchanged from control levels. Ventilation of the gills with hypoxic water , which stimulated large increases in , was accompanied by either no change or a small increase in the air convection requirement. Inspiration of either hypoxic gas or hypercapnic water similarly generated large increases in ventilation of the suprabranchaial chamber, but under both of these conditions the air convection requirement of the labyrinth increased to 4–10 times over that evident during control ventilation.
Fig. 7.
Relationship between air convection requirement and ventilation of the suprabranchial chamber during different environmental conditions in five Trichogaster trichopterus. See text for details of experimental conditions.
Fig. 7.
Relationship between air convection requirement and ventilation of the suprabranchial chamber during different environmental conditions in five Trichogaster trichopterus. See text for details of experimental conditions.
Gas flow in and out of the suprabranchial chamber of Trichogaster is somewhat different from that of the air breathing organs of many other fishes, in that during each expiration practically all of the gas in the suprabranchial chamber is displaced out of the mouth with water from the opercular cavity (Peters, 1978). Thus, the initial gas immediately after inspiration at the start of the breath hold must therefore be close to ambient (. Given the oxygen uptake, breathing frequency, and tidal volume of Trichogaster, it can be calculated that on average 70% of the oxygen must be removed from the suprabranchial chamber gas during the mean 5 min interbreath period, which is comparable to rates of oxygen depletion from the lungs of Protopterus and Lepidosiren, and the buccal cavity of Electrophorus (see Johansen, 1970).
The first and second gill arches of Trichogaster are well developed, however, and more than half of the oxygen uptake is obtained from water under normoxic conditions (Table 1). During aquatic hypoxia, the reduction in oxygen uptake from water is compensated for by an increase in labyrinth oxygen uptake, achieved by an augmented ventilation of the suprabranchial chamber (Fig. 5). As the water ventilating the gills becomes progressively more hypoxic, the gradient driving O2 diffusion from water into blood will rapidly deteriorate. Unless blood in the ventral aorta derived from the aerial exchange organ can be preferentially shunted through gill arches with a reduced diffusion capacity before delivery to the tissues, the loss of O2 from blood in the gills into water will become a factor limiting branchial gas exchange during severe aquatic hypoxia in these fish. The large increase in labyrinth air ventilation in Trichogaster which finally developed below a water of 60 mmHg probably reflects the deterioration of aquatic oxygen uptake due to a diminished or even reversed O2 diffusion gradient at the gills. The labyrinth was sufficiently effective under these conditions to maintain total at control (normoxic) levels. Another Anabantid, Anabas testudineas, also is able to maintain aerial in almost totally deoxygenated water (Hughes & Singh, 1970a). Munshi (1968) has shown that the third and fourth gill arches of this fish are very much reduced in surface area, and serve largely as nonexchanging shunt vessels conveying partially oxygenated blood derived jointly from systemic and labyrinth veins into the dorsal aorta. There is a paucity of branchial morphometric data for Trichogaster, but gill arches 3 and particularly 4 are also clearly reduced in length and width compared to arches 1 and 2 (Burggren, unpublished; Fig. 1). A reduction of diffusion capacity of these ‘shunt’ gill arches in Anabantids could be an important factor in gas transport to the tissues.
The gas convection requirements for the labyrinth changed little during aquatic hypoxia (Fig. 7) and Trichogaster still extracted approximately 70% of the oxygen from each breath. Air and blood almost certainly reach equilibrium in the labyrinth because of extremely small diffusion distances, of the order of only 1200 Å (Schulz, 1960). The system thus must be perfusion limited and since oxygen uptake increases, so too must blood flow to the labyrinth. Both labyrinth blood flow and ventilation therefore will increase, maintaining a ventilation-perfusion relationship for the labyrinth similar to control levels during aquatic hypoxia. Hypoxia in the gas phase caused an increase in suprabranchial chamber ventilation, but labyrinth was sharply curtailed and gill uptake increased to ensure the maintenance or slight elevation of oxygen uptake (Table 1). The gas convection requirement of the labyrinth increased 2 – 5 times during aerial hypoxia, but the gas medium in this instance also contained only one-third as much oxygen compared to the other experimental conditions.
The gourami clearly is able to maintain oxygen uptake in the face of environmental hypoxia by increasing uptake via either the labyrinth organs or the gills, depending on the suitability of the respiratory medium. Although not measured, Trichogaster probably increased gill ventilation to help maintain oxygen uptake in the face of aerial or aquatic hypoxia until, in the latter condition, the water-blood gradient for oxygen diffusion into the gills had deteriorated. Other amphibious fishes such as Amia calva Anabas testudineus and Neocerotodus forsteri, although progressively relying upon actual oxygen uptake as the aquatic environment becomes hypoxic, also increase gill ventilation frequency or stroke volume in an initial attempt to maintain (Johansen Lenfant & Grigg, 1967; Hughes & Singh, 19706; Johansen, Hanson & Lenfant, 1970; Singh & Hughes, 1973).
The partitioning of carbon dioxide excretion between the gills and the labyrinth organs of Trichogaster reveals that aerial is normally very small, as clearly reflected in the high gas exchange ratio for the gills and low exchange ratio for the labyrinth (Table 1). The preponderance of air breathing fishes normally show a similar distribution of C02 excretion between gills and aerial breathing organ (see Singh, 1976 for review). Rahn & Howell (1976) report that among bimodal breathers CO2 elimination averages 76% from the gill-skin system and 24% for the aerial system.
There are, however, important aspects of CO2 excretion in Trichogaster which become manifest only upon manipulation of CO2 levels in inspired gas or water. Exposure to hypercapnic water caused a large increase in labyrinth ventilation. Total and . increased in Trichogaster, partially as a result of the increased energy expended on repeated surfacing and labyrinth hyperventilation (Table 1, Fig. 5). The air convection requirement for the labyrinth increased during exposure to hypercapnic water, indicating little increase from control levels in labyrinth blood flow and hence blood flow through gill arches 1 and 2. This is supported by the fact that there was little change of the partitioning of oxygen uptake between labyrinth and gills.
A stepwise increase in water will initially result in a reversal of the CO2 diffusion gradient from blood to water, but this is probably a transient situation. During a 1 h exposure to water with a of 21 mmHg, approximately 2 · 4 Μ M CO2/g/h were excreted into the water, indicating not only that afferent branchial blood had risen to at least slightly above 21 mmHg , but also that this situation must have occurred relatively early into the measurement period to account for a large net aquatic excretion of CO2. Both the branchial fraction of total CO2 excretion as well as the gas exchange ratio of the gills fell significantly from control values during aquatic hypercapnia (Table 1), suggesting that the absolute magnitude of the gradient from blood to water had been reduced from control levels once a steady state was achieved.
The gas exchange ratio for aerial exchange organs in air breathing fishes rarely exceeds 0 · 4 – 0 · 6 when aquatic CO2 excretion is blocked by either air exposure or aquatic hypercapnia, and so blood progressively rises and pH falls (Hughes & Singh, 1971; Singh & Hughes, 1971; Singh, 1976; Randall, Farrell & Haswell, 1978; Wright & Raymond, 1978). Trichogaster trichopterus, however, unlike most other air breathing fishes which have been examined, was able through aerial hyperventilation to utilize its aerial exchange organ as a highly effective alternative route for CO2 excretion when experiencing aquatic hypercapnia. The of the suprabranchial chamber is kept low by the pronounced hyperventilation with air (Fig. 5), and so a very large CO2 gradient favourable for the elimination of CO2 from the labyrinth organs will develop when Trichogaster is in hypercapnic water. In addition, the presence of high levels of carbonic anhydrase in the labyrinth has been shown to aid aerial CO2 excretion in this fish (Burggren & Haswell, 1979).
Whereas aquatic hypercapnia proved to be a profound stimulus to air breathing, the inspiration of CO2 enriched gas had little or no effect on air breathing. More ever the gas convection requirement remained unchanged (Fig. 7), indicating that no gross changes in labyrinth perfusion developed. In short, the only factor affecting labyrinth CO2 transfer which changed upon a sudden exposure to aerial hypercapnia was that the , diffusion gradient from blood to labyrinth gas became reversed, since a net labyrinth uptake of 1 · 5 μM-CO2/g/h now occurred (Table 1). However, an augmented aquatic CO2 elimination developed, and net CO2 excretion in Trichogaster during exposure to aerial hypercapnia remained almost unchanged from control levels, with only a slight, non-significant decrease in the overall gas exchange ratio. Thus, although aerial hypercapnia is rarely, if ever, encountered in nature, these data show the potential for the gills to assume the total role in CO2 excretion, even when aquatic must be elevated well above normal levels.
There is a total lack of a ventilatory response in the gourami to aquatic hypercapnia in the presence of very high oxygen partial pressures. This experiment, plus the fact that some of the greatest ventilatory efforts observed in the present study occurred during mild but concomitant aquatic hypoxia and hypercapnia could be taken as evidence that labyrinth ventilatory responses in Trichogaster induced by changes in inspired CO2 could largely be attempts to augment an O2 uptake and delivery to the tissues being disrupted by a hypercapnic acidosis. Yet, there is a clear independence of and partitioning between gills and labyrinth under a variety of environmental conditions (Fig. 6). There may be as yet unknown complex interactions between O2 and CO2 sensitive elements in the regulation of air breathing in Trichogaster.
Johansen et al. (1968) have suggested that changes in the oxygen partial pressures of the gas in the mouth of the air breathing eel Electrophorus are responsible for normal spontaneous breathing, and the reduction in ventilation of the suprabranchial chamber during hyperoxic breathing and increase during hypoxic breathing in the present study indicates that 02 is also involved in regulation of breathing in Trichogaster. Oxygen chemoreceptors in Electrophorus may be located in the buccal mucosa (the air breathing organ) or in the blood pathways close to it, for ventilatory responses to changes in inspired gas were immediate in the electric eel. This is not the case in Trichogaster, where ventilation may take many seconds or minutes to respond to a stepwise change in O2 partial pressure, indicating a more central location for purported chemoreceptors involved in oxygen regulation. Since aerial hypoxia is rarely, if ever, encountered in the natural environment, there may not have been strong selection pressures for the evolution of a chemo-sensitive control system able to differentiate between the reduced systemic blood oxygen resulting from gill ventilation with hypoxic water and that resulting from labyrinth ventilation with hypoxic gas. In the natural environment, an increase in ventilation of the suprabranchial chamber stimulated by low blood oxygen levels can only enhance aerial oxygen uptake, and is therefore always an appropriate response to tissue hypoxia. Under experimental conditions of breathing severely hypoxic gas, increased labyrinth ventilation may serve to produce the decrease in blood 02 which in fact is reflexly stimulating aerial hyperventilation in the first place.
Chemoreceptors specifically responsive to carbon dioxide are not located on the labyrinth organs or directly in the efferent circulation in Trichogaster since aerial hypercapnia, unlike aquatic hypercapnia, had no influence on air breathing. However, the specific locations of oxygen and carbon dioxide sensitive elements modulating ventilation in the gourami are not clear, nor are they for aquatic fishes generally (Johansen, 1970; Elancher & Dejours, 1974; Bamford, 1974).
The arrangement of the circulation of gills and aerial exchange organ as well as the relative dependence upon water and air for CO2 and O2 transfer probably are as important as patterns of environmental oxygen and carbon dioxide fluctuations, in terms of the selective pressures for a particular control system regulating a particular respiratory gas. Although much of the physiology of bimodal gas exchange remains to be described for Trichogaster and other air breathing fishes, it is clear that the simultaneous exploitation of aerial and aquatic respiration carries with it a great respiratory flexibility, particularly in forms such as the gourami where effective aerial CO2 elimination can be achieved by the aerial organs.
The author would like to thank Dr M. S. Haswell and, in particular, Dr D. J. Randall for many useful comments during the preparation of the manuscript, and C. Milliken for preparing the figures. This study was undertaken while the author was a recipient of a joint NRC of Canada-Killam Foundation Postdoctoral Fellowship.
Bamford
,
O. S.
(
1974
).
Oxygen reception in the rainbow trout (Salmo gairdneri)
.
Comp. Biochem. Physiol
.
48A
,
69
76
.
Boutilier
,
R. G.
,
Randall
,
D. J.
,
Shelton
,
G.
&
Toews
,
D. P.
(
1978
).
Some response characteristics of CO, electrodes
.
Respir. Physiol
.
32
(
3
),
381
388
.
Burggren
,
W. W.
&
Haswell
,
M. S.
(
1979
).
Aerial CO, excretion in the obligate air breathing fish, Trichogaster trichopterus’. a role for carbonic anhydrase
.
J. exp. Biol
.
82
,
215
225
.
Das
,
B. K.
(
1927
).
III. The bionomics of certain air-breathing fishes of India, together with an account of the development of their air-breathing organs
.
Phil. Tran. R. Soc. B
216
,
183
218
.
,
P. V.
&
Tripathi
,
S. D.
(
1976
).
Environment and ecology of freshwater air-breathing teleosts
.
In Respiration of Amphibious Vertebrates
(ed.
G. M.
Hughes
), pp.
39
72
.
London
:
.
Dejours
,
P.
,
Armand
,
J.
&
Verriest
,
G.
(
1968
).
Carbon dioxide dissociation curves of water and gas exchange of water breathers
.
Respir. Physiol
.
5
,
23
33
.
Elancher
,
B.
&
Dejours
,
P.
(
1974
).
Contrôle de la respiration chez les poissons téléostéens: existence de chémorécepteurs physiolgiquenment analogues aux chémorécepteurs des Vertébrés supérieurs
.
,
Paris
280
,
451
453
.
Farrell
,
A. P.
&
Randall
,
D. J.
(
1978
).
Air-breathing mechanics in two amazonian Teleosts, Arapaima gigas and Hoplerythrirsus unitaeniatus
.
Can. J. Zool
.
56
,
939
945
.
Henninger
,
G.
(
1907
).
Die Labyrinthorgane bei Labyrinthfischen
.
Zool. Jb
.
25
,
251
304
.
Hughes
,
G. M.
&
Singh
,
B. N.
(
1970a
).
Respiration in an air-breathing fish, the climbing perch Anabas testudineas Bloch. I. Oxygen uptake and carbon dioxide released into air and water
.
J. exp. Biol
.
53
,
265
280
.
Hughes
,
G. M.
&
Singh
,
B. N.
(
1970b
).
Respiration in an air-breathing fish, the climbing perch Anabas testudineas Bloch. II. Respiratory patterns and the control of breathing
.
J. exp. Biol
.
53
,
281
298
.
Hughes
,
G. M.
&
Singh
,
B. N.
(
1971
).
Gas exchange with air and water in an air-breathing catfish Saccobranchys fossilis
.
J. exp. Biol
.
53
,
281
298
.
Johansen
,
K.
(
1970
).
Air Breathing in Fishes
.
In Fish Physiology
, (ed.
W. S.
Hoar
and
D. J.
Randall
), vol.
4
.
New York
:
.
Johansen
,
K.
,
Hanson
,
D.
&
Lenfant
,
C.
(
1970
).
Respiration in a primitive air breather, Amia calva
.
Respir. Physiol
.
9
,
162
174
.
Johansen
,
K.
,
Lenfant
,
C.
&
Grigg
,
G. C.
(
1967
).
Respiratory control in the lungfish, Neoceratodus forsteri (Krefft)
.
Comp. Biochem. Physiol
.
20
,
835
854
.
Jhansen
,
K.
,
Lenfant
,
C.
,
Schmidt-Nielsen
,
K.
&
Petersen
,
J.
(
1968
).
Gas exchange and control of ‘breathing in the electric eel, Electrophorus dectricut
.
Z. vergl. Physiologie
61
,
137
163
.
Lomholt
,
J. P.
&
Johansen
,
K.
(
1974
).
Control of breathing in Amphiphous cuchia, an amphibious fish
.
Resp. Physiol
.
21
,
325
340
.
Munshi
,
J. S. D.
(
1968
).
The accessory respiratory organs of Anabas testudineos (Bloch) (Anabantidae, Pisces)
.
Linn. Soc. Proc
.
179
,
107
126
.
Munshi
,
J. S. D.
(
1976
).
Gross and fine structure of the respiratory organs of air-breathing fish
.
In Respiration of Amphibious Vertebrates
(ed.
G. M.
Hughes
), pp.
73
104
.
London
:
.
Peters
,
H. M.
(
1978
).
On the mechanism of air ventilation in Anabantoids (Pisces: Teleostei)
.
Zoomorphologie
89
,
93
123
.
Rahn
,
H.
&
Howell
,
B. J.
(
1976
).
Bimodal gas exchange
.
In Respiration of Amphibious Vertebrates
(ed.
G. M.
Hughes
), pp.
271
285
.
London
:
.
Randall
,
D. J.
,
Farrell
,
A. P.
&
Haswell
,
M. S.
(
1978
).
Carbon dioxide excretion in the pirarcu (Arapaima gigas), an obligate air-breathing fish
.
Can. J. Zool
.
56
,
977
982
.
Satchell
,
G. H.
(
1976
).
The circulatory system of air-breathing fish
.
In Respiration of Amphibious Vertebrates
(ed.
G. M.
Hughes
), pp.
105
123
.
London
:
.
Schulz
,
H.
(
1960
).
Die submikroskopishce Morphologie des Kiemenepithels
.
Proc. 4th Intern. Cong. Electron Microscopy, Berlin
, vol.
2
, pp.
421
426
.
Berlin
:
Springer
.
Singh
,
B. N.
(
1976
).
Balance between aquatic and aerial respiration
.
In Respiration of Amphibious Vertebrates
(ed.
G. M.
Hughes
), pp.
125
164
.
London
:
.
Singh
,
B. N.
&
Hughes
,
G. M.
(
1971
).
Respiration of an air-breathing catfish Ciarias batrachus (Linn
.)
J. exp. Biol
.
55
,
421
434
.
Singh
,
B. N.
&
Hughes
,
G. M.
(
1973
).
Cardiac and respiratory responses in the climbing perch Anabas testudineos
.
J. comp. Physiol
.
84
,
205
226
.
Wright
,
W. G.
&
Raymond
,
J. A.
(
1978
).
Air breathing in a California sculpin
.
J. exp. Zool
.
203
,
171
176
.
Wood
,
S. C.
&
Lenfant
,
C. J. M.
(
1976
).
Physiology of fish lungs
.
In Respiration of Amphibious Vertebrates
(ed.
G. M.
Hughes
), pp.
257
270
.
London
:
|
2023-04-02 08:39:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5500107407569885, "perplexity": 6578.297786134619}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950422.77/warc/CC-MAIN-20230402074255-20230402104255-00032.warc.gz"}
|
https://physics.aps.org/synopsis-for/10.1103/PhysRevX.4.011017
|
# Synopsis: Clearer Quantum Vision
The use of quantum states of light can enhance the resolution of bioimaging techniques.
Unbreakable encryption schemes or quantum computers that outperform classical ones are the most-talked-about potential applications of quantum physics. But quantum effects could also help clear the vision of microscopes looking at the interior of living cells. As reported in Physical Review X, a new experimental scheme, based on the use of carefully engineered quantum states of light, allows researchers to map subcellular structures with a spatial resolution of about $10$ nanometers.
Michael Taylor at the University of Queensland, Australia, and co-workers have developed a quantum imaging method that utilizes so-called squeezed light in photonic force microscopy (PFM). PFM is an imaging method in which a nanoscale particle is embedded in a cell and moved with optical tweezers to explore the cell interior. By measuring the light scattered by the nanoparticle at different positions, the technique provides information about the local environment around the probe, including its specific interactions with molecules like membrane proteins and other cellular structures.
The resolution of PFM depends ultimately on two factors: the particle size and the measurement’s signal-to-noise, which limits the precision with which the particle position can be determined. Using squeezed states of light—quantum states that have better noise properties than classical light—Taylor et al. were able to mitigate the impact of noise. Experiments on yeast cells showed the resolution was enhanced by $14%$ compared to experiments with classical light, but the use of better squeezed-light sources could lead to an order-of-magnitude improvement, potentially allowing angstrom resolution in PFM imaging. – Matteo Rini
More Features »
### Announcements
More Announcements »
## Previous Synopsis
Nonlinear Dynamics
## Next Synopsis
Nonlinear Dynamics
## Related Articles
Fluid Dynamics
### Synopsis: Transition to Superlubricity in 2D
Studying particles sliding over a 2D potential lattice, researchers have observed a phase transition between a frictional regime and a frictionless, “superlubric” regime Read More »
Optics
### Synopsis: Two-Face Dipole
A proposed dipole source of electromagnetic waves can selectively couple its emission into either of two neighboring waveguides. Read More »
Gravitation
### Viewpoint: Moiré Effect Could Enhance Neutron Interferometry
A new and more flexible neutron interferometer design relies on the moiré effect, in which two periodic patterns are combined to give a longer-period pattern. Read More »
|
2018-04-22 21:42:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30339232087135315, "perplexity": 2195.0255101729235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945660.53/warc/CC-MAIN-20180422212935-20180422232935-00535.warc.gz"}
|
https://proofwiki.org/wiki/Unique_Factorization_Domain_is_Integrally_Closed
|
# Unique Factorization Domain is Integrally Closed
## Theorem
Let $A$ be a unique factorization domain (UFD).
Then $A$ is integrally closed.
## Proof
Let $K$ be the quotient field of $A$.
Let $x \in K$ be integral over $A$.
Let: $x = a / b$ for $a, b \in A$ with $\gcd \set {a, b} \in A^\times$.
This makes sense because a UFD is GCD Domain.
There is an equation:
$\paren {\dfrac a b}^n + a_{n - 1} \paren {\dfrac a b}^{n - 1} + \dotsb + a_0$
with $a_i \in A$, $i = 0, \dotsc, n - 1$.
Multiplying by $b^n$, we obtain:
$a^n + b c = 0$
with $c \in A$.
Therefore:
$b \divides a^n$
Suppose $b$ is not a unit.
Then:
$\gcd \set {a, b} \notin A^\times$
So $b$ is a unit, and:
$a b^{-1} \in A$
$\blacksquare$
|
2020-01-26 17:58:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9693546891212463, "perplexity": 595.0172546023564}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251690095.81/warc/CC-MAIN-20200126165718-20200126195718-00298.warc.gz"}
|
https://www.physicsforums.com/threads/passing-variables-in-standard-c.385487/
|
# Passing variables in Standard C
Hey, so I am teaching myself the C language to get a head start for a class I have to take later and am a little bit confused about something. I understand that if you call a function from within your int main, the variables you pass are actually copies of the originals and when the program returns to intmain, they will retain the last values they before the outside function was called, provided what you passed was not a pointer to an array, correct? So is there a way to edit the values of variables inside of external functions and have them retain that edited value when they return to int main, other than returning a value and saving that?
I hope this was clear, thanks for the help!
Related Engineering and Comp Sci Homework Help News on Phys.org
Mark44
Mentor
Hey, so I am teaching myself the C language to get a head start for a class I have to take later and am a little bit confused about something. I understand that if you call a function from within your int main, the variables you pass are actually copies of the originals and when the program returns to intmain, they will retain the last values they before the outside function was called, provided what you passed was not a pointer to an array, correct? So is there a way to edit the values of variables inside of external functions and have them retain that edited value when they return to int main, other than returning a value and saving that?
I hope this was clear, thanks for the help!
Yes, when you call a function, the arguments are passed by value, which means that the values of the variables or expressions are what are passed to your function. Another parameter passing mechanism is passing by reference, in which a passed variable can have its value changed.
Unlike some other programming languages, C is strictly call by value, but it is possible for a function to modify its passed parameters. The way you do this is to pass a pointer to the variable. In this case, the function has the address of the variable, and can deference the pointer to actually change what is pointed to. Hope that helps.
BTW, call it the main function, but not int main or intmain. The int indicates that this function returns an int value.
Borek
Mentor
Code:
void function(int xx,int &yy)
{
xx = 7;
yy = 8;
}
int main()
{
int x,y;
x = 3;
y = 4;
function(x,y);
// now x is 3 and y is 8
return 0;
}
Awesome, thanks!
Mark44
Mentor
Code:
void function(int xx,int &yy)
{
xx = 7;
yy = 8;
}
int main()
{
int x,y;
x = 3;
y = 4;
function(x,y);
// now x is 3 and y is 8
return 0;
}
Borek, does Standard C use references (as in C++)? I haven't kept up with what's current these days in Standard C.
Mark44
Mentor
If Standard C doesn't support the notion of references, here's the way with pointers.
Code:
void function(int xx,int *yy)
{
xx = 7;
*yy = 8;
}
int main()
{
int x,y;
x = 3;
y = 4;
// Pass x by value and y by reference.
// I.e., pass a copy of x, but pass the address of y.
function(x, &y);
// now x is 3 and y is 8
return 0;
}
nvn
Homework Helper
In standard C, I thought it should be as follows. Please correct me if I am remembering incorrectly.
Code:
void function(int xx,int *yy)
{
xx=7;
*yy=8;
}
int main()
{
int x,y;
x=3;
y=4;
function(x,&y);
/* now x is 3 and y is 8. */
return(0);
}
EDIT: And now I see Mark44 and I posted at about the same time.
Last edited:
Hurkyl
Staff Emeritus
Gold Member
I believe the single line comments ("//") are in modern standard C.
AFAIK, return has never had function call like syntax -- its usage has always been return <expression>. Of course, "(0)" is a perfectly valid expression.
nvn
|
2020-06-07 10:49:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2055475115776062, "perplexity": 1373.6743299019306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348526471.98/warc/CC-MAIN-20200607075929-20200607105929-00255.warc.gz"}
|
https://proofwiki.org/wiki/Sequence_of_Imaginary_Reciprocals/Interior
|
# Sequence of Imaginary Reciprocals/Interior
## Theorem
Consider the subset $S$ of the complex plane defined as:
$S := \set {\dfrac i n : n \in \Z_{>0} }$
That is:
$S := \set {i, \dfrac i 2, \dfrac i 3, \dfrac i 4, \ldots}$
where $i$ is the imaginary unit.
No point of $S$ is an interior point.
## Proof
From Sequence of Imaginary Reciprocals: Boundary Points, every $z \in S$ is a boundary point of $S$.
Thus no $z \in S$ is an interior point.
$\blacksquare$
|
2021-03-06 17:07:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9398163557052612, "perplexity": 434.54031179957104}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375274.88/warc/CC-MAIN-20210306162308-20210306192308-00256.warc.gz"}
|
http://tatome.de/zettelkasten/zettelkasten.php?standalone&reference=liu-et-al-2010b
|
# Show Reference: "Dimensional overlap accounts for independence and integration of stimulus-response compatibility effects"
Dimensional overlap accounts for independence and integration of stimulus–response compatibility effects In Attention, Perception, & Psychophysics, Vol. 72, No. 6. (2010), pp. 1710-1720, doi:10.3758/app.72.6.1710 by Xun Liu, Yunsoo Park, Xiaosi Gu, Jin Fan
@article{liu-et-al-2010b,
abstract = {Extensive studies have been conducted to examine various attentional control effects that stem from stimulus— stimulus ({S—S}) and stimulus-response ({S—R}) incompatibility. Among these behavioral paradigms, the best-known are the Stroop effect, the Simon effect, and Posner's cue validity effect. In this study, we designed two behavioral tasks incorporating these effects ({Simon—color-Stroop} and {Simon-spatial-Stroop}) guided by a general framework of {S—R} ensemble, the dimensional overlap theory. We analyzed various attentional effects according to dimensional overlaps among {S—S} and {S—R} ensembles and their combinations. We found that behavioral performance was independently affected by various dimensional overlaps in the {Simon—color-Stroop} task, whereas different sources of dimensional overlap in the {Simon—spatial-Stroop} task interacted with each other. We argue that the dimensional overlap theory can be extended to serve as a viable unified theory that accounts for diverse attentional effects and their interactions and helps to elucidate neural networks subserving attentional control.},
author = {Liu, Xun and Park, Yunsoo and Gu, Xiaosi and Fan, Jin},
booktitle = {Attention, Perception, \& Psychophysics},
citeulike-article-id = {13502963},
doi = {10.3758/app.72.6.1710},
keywords = {biology, conflict, cortex},
number = {6},
pages = {1710--1720},
posted-at = {2015-01-28 13:46:16},
priority = {2},
publisher = {Springer-Verlag},
title = {Dimensional overlap accounts for independence and integration of stimulus-response compatibility effects},
url = {http://dx.doi.org/10.3758/app.72.6.1710},
volume = {72},
year = {2010}
}
In the Simon task, subjects are required to respond to a stimulus with a response that is spatially congruent or incongruent to that stimulus: They have, for example, to press a button with the left hand in response to a stimulus which is presented either on the left or on the right. Congruent responses (stimulus on the left—respond by pressing a button with the left hand) is usually faster than an incongruent response.
Simon task and Stroop task are similar. A main difference is that the conflict is between a dimension of the response and a task-irrelevant stimulus dimension in the Simon task, while it is between a task-irrelevant dimension of the stimulus, the task-relevant dimension of the stimulus, and a dimension of the response, in the Stroop task.
Attention is necessary to perform the Stroop and Simon tasks.
The dimensional overlap framework can be used to classify overlap and interference between relevant (features of) stimuli and (features of) responses in psychological stimulus-response paradigms. In particular it can be used to classify types of conflict between relevant and irrelevant dimensions of stimuli and response.
In Stroop-type experiments, there is usually conflict between an irrelevant stimulus dimension, the relevant stimulus dimension, and a dimension of the response, for example the color of ink $C_I$ in which a word is written, the meaning of the word (a different color) $C_R$, and the response (saying the name of that color $C_R$).
In Simon-type experiments, there is usually conflict only between an irrelevant stimulus dimension and a dimension of the response, for example the task-irrelevant location of a stimulus and the hand with which to respond.
Liu et al. hypothesize that conflicts between stimulus dimensions and between stimulus and response dimensions are detected by different mechanisms, but resolved by the same executive control mechanism.
Liu et al. found support for their model of two conflict detection mechanisms and one conflict resolution mechanism: In their experiments, compatibility effects between stimulus dimensions and between stimulus dimensions and response dimensions were additive when both types of conflicts occurred (or both were congruent) and they canceled out when one type of conflict and one type of congruency occurred.
The frontoparietal network seems involved in executive control and orienting.
Stroop presented color words which were either presented in the color they meant (congruent) or in a different (incongruent) color. He asked participants to name the color in which the words were written and observed that participants were faster in naming the color when it was congruent than when it was incongruent with the meaning of the word.
|
2018-04-24 14:33:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4769841730594635, "perplexity": 3466.963322862483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946721.87/warc/CC-MAIN-20180424135408-20180424155408-00560.warc.gz"}
|
https://www.physicsforums.com/threads/real-gas-unable-to-reach-the-correct-expression.772910/
|
# Real gas, unable to reach the correct expression
1. Sep 25, 2014
### fluidistic
1. The problem statement, all variables and given/known data
Consider a system of N particles contained in a volume V. The Hamiltonian of the system is $H=\sum{i=1}^N \frac{\vec p_i}{2m}+\sum _{i<j}u(|\vec r_i - \vec r_j|)$ where p_i and r_i are the momentum and position of the i-th's molecule.
1)Show that the state equation of the system is $\frac{Pv}{kT}=1+v\frac{\partial Z(v,T)}{\partial v }$ where v=V/N and $Z(v,T)=\frac{1}{N}\ln \left [ \frac{1}{V^N} \int d^3r_1... d^3 r _N \Pi _{i<j}(1+f_{ij}) \right ]$
Also $f_{ij}=f(|\vec r_i -\vec r_j|)$ with $f(r)=e^{-\beta u(r)}-1$.
2. Relevant equations
Relation between P and Z: $P=-\left ( \frac{\partial A}{\partial V} \right )_{\beta,N}$
Where $A=-\frac{1}{\beta}Z_N(\beta, V)$
3. The attempt at a solution
I used the relevant equations to get $A(\beta,V,N)=-\frac{1}{\beta} \ln [Z_N(\beta,V)]$ so that $P=\frac{1}{\beta}\frac{\partial}{\partial V} \{ \ln [Z_N(\beta,V)] \}$.
Hence $\frac{PV}{kT}=V\frac{\partial}{\partial V} \{ \ln [Z_N(\beta,V)] \}$.
Dividing by N I reach $\frac{Pv}{kT}=v\frac{\partial}{\partial V} \{ \ln [Z_N(\beta, V)] \}$
Now I believe that $\frac{\partial}{\partial V} \{ \ln [Z_N(\beta,V)] \}=N\frac{\partial}{\partial v} \{ \ln [Z_N(v,\beta)] \}$.
Which yields $\frac{Pv}{kT}=Nv \frac{\partial}{\partial v} \{ \ln Z_N (\beta ,v) \}=Nv\frac{1}{Z_N(v,\beta)}\cdot\frac{\partial}{\partial v}[Z_N(v,\beta)]$.
This differs from what I should have reached and I see no way to rewrite my expression into the desired one...
Any help on what's going on is appreciated.
Last edited: Sep 25, 2014
|
2018-01-20 23:47:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8450543284416199, "perplexity": 565.9364834907044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889736.54/warc/CC-MAIN-20180120221621-20180121001621-00503.warc.gz"}
|
http://www.sciencemadness.org/talk/viewthread.php?tid=9257
|
Not logged in [Login - Register]
Sciencemadness Discussion Board » Fundamentals » Chemistry in General » Interesting rare earth metals (lanthanoids) and thier salts Select A Forum Fundamentals » Chemistry in General » Organic Chemistry » Reagents and Apparatus Acquisition » Beginnings » Responsible Practices » Miscellaneous » The Wiki Special topics » Technochemistry » Energetic Materials » Biochemistry » Radiochemistry » Computational Models and Techniques » Prepublication Non-chemistry » Forum Matters » Legal and Societal Issues » Detritus » Test Forum
Pages: 1
Author: Subject: Interesting rare earth metals (lanthanoids) and thier salts
Antwain
National Hazard
Posts: 252
Registered: 21-7-2007
Location: Australia
Member Is Offline
Mood: Supersaturated
Interesting rare earth metals (lanthanoids) and thier salts
Forgive me if there is a thread already about this, I looked and couldn't find one. There are several others concerning obtaining these metals and dealing with some specific ones. I am curious as to which of these metals and their salts are the most interesting from the point of view of the amateur chemist.
For example, I would not consider neodymium interesting because it can make very strong magnets with boron and iron, simply because it is far beyond my abilities to make these magnets myself with the equipment I have on hand. Something That I would consider interesting is europium oxide (Eu2O3), because this substance is (I believe) very strongly fluorescent under UV.
I may decide to buy some rare earth metals soon and since I can't afford to buy them all I would be interested to know if there are some more 'fun' ones that I definitely must get.
Nerro
International Hazard
Posts: 596
Registered: 29-9-2004
Location: Netherlands
Member Is Offline
Mood: Whatever...
Eu2O3 in itsself is not phosphorescent under a UV lamp, it's the Y2O3 doped variety of Eu2O3 that ís. I've made it once for a practicum. It shines a ghostly red under a UV light. Dissolving both the oxides in nitric acid and then subsequently precipitating it using ammonia was how we mixed the two intimately. Glowing it in an oven at 950°C for a few hours then removed everything but the mixed oxides. (I believe some oxalic acid was added a flux of sorts...)
#261501 +(11351)- [X]
the \"bishop\" came to our church today
he was a fucken impostor
never once moved diagonally
courtesy of bash
chloric1
International Hazard
Posts: 1039
Registered: 8-10-2003
Location: closer to the anode
Member Is Offline
Mood: Strongly alkaline
@Antwain-You should check out pottery suppliers. In The USA I know of at least 2 that sell rare earth oxides. They are not reagent grade but they will be good enough for folling around and alot cheaper.
In the theater of life its nice to know where the exit doors are located.
unionised
International Hazard
Posts: 4030
Registered: 1-11-2003
Location: UK
Member Is Offline
Mood: No Mood
"Eu2O3 in itsself is not phosphorescent under a UV lamp, it's the Y2O3 doped variety of Eu2O3 that ís. "
Are you sure that's the right way round? I think it's Eu doped Y2O3 that glows.
There are certainly Eu compounds that glow red without any Y compounds.
Sauron
International Hazard
Posts: 5351
Registered: 22-12-2006
Member Is Offline
Mood: metastable
I think you mean lanthanides not lanthanoids.
Sic gorgeamus a los subjectatus nunc.
Antwain
National Hazard
Posts: 252
Registered: 21-7-2007
Location: Australia
Member Is Offline
Mood: Supersaturated
Quote: Originally posted by Sauron I think you mean lanthanides not lanthanoids.
Possibly I do, but I copied the spelling verbatim from the periodic table on the wall half a meter to my left.
Sauron
International Hazard
Posts: 5351
Registered: 22-12-2006
Member Is Offline
Mood: metastable
Sorry, but I believe that if you check you will find that your periodic table is in error.
Lanthanides, not Lanthanoids
Antinides not Actinoids
[Edited on 7-10-2007 by Sauron]
Sic gorgeamus a los subjectatus nunc.
chemkid
National Hazard
Posts: 269
Registered: 5-4-2007
Location: Suburban Hell
Member Is Offline
Mood: polarized
Perhaps lanthanoids is an antiquated term for I have seen it used as well. Spell checkers don't seem to like it though.
Chemkid
not_important
International Hazard
Posts: 3873
Registered: 21-7-2006
Member Is Offline
Mood: No Mood
From 2004
Quote: IUPAC suggest but do not required lanthanoid and actinoid on the grounds that "-ide" inferrs a negative charge. Some newer books are switching to -oids, inertia rules the rest?
http://www.webelements.com/nexus/node/140
http://www.iupac.org/reports/provisional/abstract04/RB-prs31...
Sauron
International Hazard
Posts: 5351
Registered: 22-12-2006
Member Is Offline
Mood: metastable
Yes, I just found that as well. In this instance I think IUPAC is not serving chemistry well, but, there it is.
Their argument is that the -ide suffix should only be used to denote negative ions.
However, I am an old dog and resist new tricks. It is obvious that all of the traditional chemical names that contain such minor inconsistencies cannot be resolved in such a fashion without creating chaos.
In any case Antwain's periodic chart is merely indulging in a current fashion, it is not incorrect, merely nontraditional.
Sic gorgeamus a los subjectatus nunc.
woelen
Posts: 6788
Registered: 20-8-2005
Location: Netherlands
Member Is Offline
Mood: interested
Lanthanon is another name for lanthanide or lanthanoid. Terrible, all these naming conventions.
If you want interesting chemistry with any of the "rare" earths, then you could go for cerium (also one of the cheapest) or neodymium or praseodymium.
Cerium has a nice coordination and redox chemixtry in water. It has some intensely colored compounds and with some luck you can find salts of cerium(IV).
Neodymium and praseodymium have nice colors in their ionic compounds, but their aqueous chemistry is confined to the +3 oxidation state.
Actually, compared to the transition metals, the chemistry of the lanthanoids is not that interesting. Really interesting chemistry is obtained for vanadium, chromium, molybdenum and copper, with numerous colors, complexes and extensive redox chemistry.
Iridium is another very interesting one, but it is soooooo expensive.
The art of wondering makes life worth living...
Want to wonder? Look at http://www.oelen.net/science
I am a fish
undersea enforcer
Posts: 600
Registered: 16-1-2003
Location: Bath, United Kingdom
Member Is Offline
Mood: Ichthyoidal
Quote: Originally posted by Sauron I think you mean lanthanides not lanthanoids.
Please stop bossing other people around, especially when it concerns trivia. Lanthanoid is a perfectly valid (though lesser used term), which returns nearly 5000 hits on Google Scholar.
Now can we get back to discussing lanthanoid chemistry...
1f /0u (4|\\| |234d 7|-|15, /0u |234||`/ |\\|33d 70 937 0u7 /\\/\\0|23.
Sauron
International Hazard
Posts: 5351
Registered: 22-12-2006
Member Is Offline
Mood: metastable
I was not "bossing anyone around."
Do not mischaractarize what I posted.
I was correcting what I thought was a mistake. It was not a mistake, merely a post-2003 vogue that has not caught on, and I do NOT care how many hits it has in Google Scholar. That is not a measure of reality.
LANTHANIDE has a long and illustrious history of usage, whereas lanthanoid is a four year old johnny come lately.
Sic gorgeamus a los subjectatus nunc.
Antwain
National Hazard
Posts: 252
Registered: 21-7-2007
Location: Australia
Member Is Offline
Mood: Supersaturated
It is a moot point. You all knew what I meant. The main reason I put it in is because the FSE - ie. the one we are supposed to use before posting - is frankly crap, and I figured that including that may help it show up if anyone is interested in the future.
Yes indeed, lets get back to the topic at hand. Cotton and Wilkinson has some information on the metals and salts of these elements, but it is a far to theoretically oriented book for use in this case. None of my other books mention them except briefly. Are there any brightly coloured or for any other reason interesting compounds that can be made with these elements. I am sure that some of you must have made stuff.
PS. IUPAC can go to hell for all I care, all that matters when discussing chemistry is that one can communicate clearly. If people know unambiguously what you are talking about, then it is right. This was a major sticking point when I first became interested in chemistry in yr 10. They INSISTED that we call acetic acid "ethanoic acid" but I had already become attached to the only name it is ever known by. After raising a stink when I was marked incorrect I dragged in several random papers and they were forced to concede. (BTW the spell checker doesn't like "ethanoic" either )
Sauron
International Hazard
Posts: 5351
Registered: 22-12-2006
Member Is Offline
Mood: metastable
IUPAC has its place, but I agree with you that acetic acid is a fine name and I doubt that ethanoic acid will ever displace it.
And yes the FSE is mostly useless.
Sic gorgeamus a los subjectatus nunc.
12AX7
Post Harlot
Posts: 4803
Registered: 8-3-2005
Location: oscillating
Member Is Offline
Mood: informative
My organic chem prof says "1-methylethyl" is the 'preferred' systematic name, but nobody ever uses anything other than "isopropyl"...
And let's not forget the spoken confusion of -ane, -ene, -yne and -ine.
Tim
Seven Transistor Labs LLC http://seventransistorlabs.com/
Electronic Design, from Concept to Layout.
Need engineering assistance? Drop me a message!
Sauron
International Hazard
Posts: 5351
Registered: 22-12-2006
Member Is Offline
Mood: metastable
Let's not forget 2-propyl. Same as isopropyl or (shudder) 1-methylethyl.
Sic gorgeamus a los subjectatus nunc.
Antwain
National Hazard
Posts: 252
Registered: 21-7-2007
Location: Australia
Member Is Offline
Mood: Supersaturated
Can someone at least tell me where to find some good but general (ie. not an obscure research parer) online lit about them?
not_important
International Hazard
Posts: 3873
Registered: 21-7-2006
Member Is Offline
Mood: No Mood
I've picked up various books on the REE (lanthanides is so new wave) over the years. Most of those from the 40s and earlier 50s focus on the separation, often in the context of nuclear research. The 50s and 60 tended to be alloys, later 60s on had a lot on phosphors using REE as doping elements. More recent ones have more on coordination complexes and organic chemistry, neither of which are really home lab stuff unless you've a high field NMR or can do work in an oxygen free atmosphere.
Cerium has the most interesting inorganic chemistry, because it is easy to schlep it between Ce(III) and Ce(IV), and the Ce(IV) compounds are coloured. Eu has the Eu(II) state, which can be obtained fairly simply, but the chemistry of Eu(II) is pretty similar to barium.
these two overviews pretty much sums it up
http://www.chem.ox.ac.uk/icl/heyes/LanthAct/L7.html
http://library.lanl.gov/cgi-bin/getfile?rc000021.pdf
After that try Google books limiting the search to "full view" and using "rare earths" or specific element names.
You might find the <i>Clay Times</i> "Fluorescent Glazes" (Jon Singer), 2005 May/Jun:48-52 interesting.
Engager
National Hazard
Posts: 288
Registered: 8-1-2006
Location: Moscow, Russia
Member Is Offline
Mood: No Mood
Most of metalls from this group can be made by reacting corresponding flouride with metallic calcium in vacuum at high temperatures. I got my thullium (Tm) sample using this method. I have samples of pure metallic neodymium, sammarium, thullium, lanthanum, cerrium, gadolinium and yttrium. I can post photos of them if you want.
My favourite sample is gadolinium, it's ferromagnetic below 16C and paramagnetic above this temperature. You can see magnetic transition point transformation effects around room temperature.
Antwain
National Hazard
Posts: 252
Registered: 21-7-2007
Location: Australia
Member Is Offline
Mood: Supersaturated
Surely unless you obtained a bunch of stuff for free, the cost of TmF3 + calcium metal would be greater than simply buying the metal to start with.
Interesting that gadolinium does that. All the more so because 2 weeks ago I would not have actually understood how that can happen. I was hoping that my 'condensed matter physics' course would be good for something.
JohnWW
International Hazard
Posts: 2849
Registered: 27-7-2004
Location: New Zealand
Member Is Offline
Mood: No Mood
Because it has 7 unpaired 4f electrons, in addition to an unpaired 5d electron, in the ground state, which are the maximum possible numbers of unpaired electrons, gadolinium (and similarly curium), should theoretically be the most strongly ferromagnetic pure metals. However, I am surprised that it loses its ferromagnetism at only 16ºC.
[Edited on 12-10-07 by JohnWW]
Jdurg
Hazard to Others
Posts: 220
Registered: 10-6-2006
Location: Connecticut, USA
Member Is Offline
Mood: No Mood
With regards to the Lanthanide/Lanthanoid nomenclature, I fully agree with IUPAC stating that Lanthanoid is the proper nomenclature here. The biggest mistake you can make when working with chemicals is assume something, and if someone says that they are studying Lanthanide Chemistry how can you be certain that they are talking about the Lanthanoid elements, or just Lanthanum's negatively charged ions?
I took chemistry back in the mid 1990's in high school, and even back then I learned to call them Lanthanoids and Actinoids since the -ide nomenclature was seemingly wrong AND confusing. Two things you DON'T want in regards to chemistry.
\"A real fart is beefy, has a density greater than or equal to the air surrounding it, consists of the unmistakable scent of broccoli, and usually requires wiping afterwards.\"
chemrox
International Hazard
Posts: 2896
Registered: 18-1-2007
Location: UTM
Member Is Offline
Mood: psychedelic
lantha-whatevers and IUPAC
While they're at it can someone get IUPAC to suggest the Americans might spell and pronounce"Aluminium" properly?
Antwain
National Hazard
Posts: 252
Registered: 21-7-2007
Location: Australia
Member Is Offline
Mood: Supersaturated
@jdurg- don't suppose you mean positively charged ions
I haven't met anyone who cared a toss about the lanthanoids, in general or particular, at uni. If I ever did I would probably interrogate them further once they had expressed their interest, and not be satisfied by "yeah I study lanthanides". I am going to stick with 'its right if people know what you mean. On that not I would wager that I could find an IUPAC named chemical that would take people hours to work out. A picture is worth a thousand IUPAC words.
Pages: 1
Sciencemadness Discussion Board » Fundamentals » Chemistry in General » Interesting rare earth metals (lanthanoids) and thier salts Select A Forum Fundamentals » Chemistry in General » Organic Chemistry » Reagents and Apparatus Acquisition » Beginnings » Responsible Practices » Miscellaneous » The Wiki Special topics » Technochemistry » Energetic Materials » Biochemistry » Radiochemistry » Computational Models and Techniques » Prepublication Non-chemistry » Forum Matters » Legal and Societal Issues » Detritus » Test Forum
|
2019-08-26 07:23:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31551796197891235, "perplexity": 7722.728148474278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027331228.13/warc/CC-MAIN-20190826064622-20190826090622-00154.warc.gz"}
|
https://www.physicsforums.com/threads/assembly-code-not-running-debug-help.565450/
|
# Assembly code not running - debug help
## Homework Statement
Hello all,
I need to implement iterative (non-recursive) binary search in assembly. The array has 10 elements, starting from 0x10000100, in address 0x10000004 there's the element to search for, and the answer should be put in address 0x10000008
It should be for MIPS and I ran it in QTSpim
## Homework Equations
Here's my code:
Code:
# this program implements binary search
# the equivalent pseudo code is the following:
# first = 0
# last = size -1
# while (last - first > 1) {
# mid = (last-first)/2 + first
# if A[mid] == val
# break;
# if A[mid] > val
# last = mid;
# continue
# else
# first = mid;
# continue;
# }
#-----------------------------------------------------
.data 0x10000000
size: .word 0x0000000a # array size
.data 0x10000004
search: .word 0x0000000d # search term
.data 0x10000008
result: .word 0xblackff # result = -1
.data 0x10000100
array: .word 0x00000001 # the array
.word 0x00000005
.word 0x00000007
.word 0x00000009
.word 0x0000000b
.word 0x0000000d
.word 0x00000010
.word 0x00004000
.word 0x00050000
.word 0x00700000
.text 0x0400000
program:
sw $t0, 0 #$t0 = 0, that's our "first" pointer
sw $t1, size #$t1 - size
addi $t1,$t1, -1 # $t1 = size - 1, our "last" pointer j condition # goto condition nop condition: sub$t2, $t1,$t0 # $t2 = last - first bgt$t2, 1, while # if ($t2 > 1) goto while nop j exit # if not, goto exit nop while: div$t3, $t2, 2 #$t3 = (last - first) / 2
add $t3,$t3, $t0 #$t3 = t3 + first
lw $t5, 0($t3) # $t5 = array($t3)
lw $t6, result #$t6 = result
beq $t6,$t5, found # if value found, goto found
nop
bgt $t5,$t6, isGreater # if array[$t3] > result, goto isGreater nop addi$t0, $t3, 0 # else, first = mid j condition # check the condition and start over the loop found: sw$t3, result # result = $t3 j exit # goto exit nop isGreater: addi$t1, $t3, 0 # else, last = mid j condition # check the condition and start over the loop exit: sw$t4, 0($t6) # result =$t4
nop
end: j end # internal loop to end the program
## The Attempt at a Solution
I get those errors in the run:
> Exception occurred at PC=0x00400000
> Exception occurred at PC=0x0040007c
Any ideas what could go wrong?
Last edited:
Related Engineering and Comp Sci Homework Help News on Phys.org
NascentOxygen
Staff Emeritus
I haven't looked closely at your code. You should break your program down into manageable blocks and debug it that way.
So make the very first statement in the program an exit, to make sure something that straightforward that will run without error messages.
program:
j end
If this works,
modify to
program:
sw $t0, 0 #$t0 = 0, that's our "first" pointer
sw $t1, size #$t1 - size
addi $t1,$t1, -1 # $t1 = size - 1, our "last" pointer j end and confirm that that is going to run without errors, and so on, cautiously building up in steps to the final code. In essence, building up your program by verifying each part works before adding more. This is the method of program construction you should be following, in any case. Otherwise, you type it all in, find it doesn't run, then have no idea where to start searching for the error. Good luck. And welcome to the trials and tribulations of programming! I like Serena Homework Helper Welcome to PF, ydan87! Looks like an access violation, meaning you access data from a location in a memory location where you're not allowed to. Skimming through your code I found: Code: lw$t5, 0($t3) #$t5 = array($t3) However, no reference has been made yet to the memory location of "array". Thanks guys for the help and the warm welcome :) Welcome to PF, ydan87! Looks like an access violation, meaning you access data from a location in a memory location where you're not allowed to. Skimming through your code I found: Code: lw$t5, 0($t3) #$t5 = array($t3) However, no reference has been made yet to the memory location of "array". Ilike - I guess I should first save the address of the array somewhere, and advance through it in each iteration. Can you give me an example on how to do that exactly? Thanks in advnace I like Serena Homework Helper Ilike - I guess I should first save the address of the array somewhere, and advance through it in each iteration. Can you give me an example on how to do that exactly? I would try: Code: lw$t5, array($t3) #$t5 = array($t3) I would try: Code: lw$t5, array($t3) #$t5 = array($t3) Thanks for the quick reply After fixing that, I get these in the run: > Exception occurred at PC=0x00400000 > Bad address in data/stack read: 0x00000000 > Exception occurred at PC=0x00400088 > Bad address in data/stack read: 0x00000000 I guess we have a progress here...the previous third exection was for pc 0x0040007c and now to 0x00400088 which means that the programs advanced without reciveing error for some time. Any ideas now? I like Serena Homework Helper Well, are there any other places where you index "array" without specifying that it is "array" you are indexing? And perhaps you can relate the address 0x00400088 to a specific line your code? Well, are there any other places where you index "array" without specifying that it is "array" you are indexing? And perhaps you can relate the address 0x00400088 to a specific line your code? Well...not for the array. Because after I've loaded it to$t5 I just used $t5. However, if I may address you to the part of the code where "result" is defined. In the exit label I have this line of code: Code: sw$t4, 0($t6) # result =$t4
Is there something wrong with that one?
I like Serena
Homework Helper
Well, I'm not going to do your work for you...
What do you think?
Btw, do you have a method to inspect intermediate results?
Perhaps you can output values so you can inspect them?
That way you can see how far your program got before crashing, and you can see if the program was on the right track.
Well, I'm not going to do your work for you...
What do you think?
Btw, do you have a method to inspect intermediate results?
Perhaps you can output values so you can inspect them?
That way you can see how far your program got before crashing, and you can see if the program was on the right track.
Well, you've really helped me alot thanks :), I really shouldn't have asked you do more than what you've already done.
Can you just offer me a program of a simulator that I can do inspection of the kind you've mentioned?
I like Serena
Homework Helper
Well, you've really helped me alot thanks :), I really shouldn't have asked you do more than what you've already done.
Can you just offer me a program of a simulator that I can do inspection of the kind you've mentioned?
You should already have something like that at hand.
How would you know the result?
Mark44
Mentor
Well, you've really helped me alot thanks :), I really shouldn't have asked you do more than what you've already done.
Can you just offer me a program of a simulator that I can do inspection of the kind you've mentioned?
You said you were using QtSPIM. Besides the console window, there is a window that you can view the registers, and you can single-step through your code to watch the registers change.
Guys, I appriciate your help very much. The thing is that I've asked for help because all the things you offer me either with the code or the program QTSpin didn't work for me, so I wanted extra pairs of eyes to take a look at this.
I know you shouldn't do the work for me, but I really tried my best on this and that's the reason I've posted here.
I'll appriciate any extra help in finding the bug in the code.
For your question, Mark - I haven't seen any change in the registers while running, because the first error I get is in the first step of the code
Mark44
Mentor
The reason for the first error is that you are not doing what you think you're doing. In your pseudocode, the first thing you do is store 0 in first.
Your first line of MIPS code stores the value in $t0 in location 0, and you can't do that. It is not storing the value 0 in the$t0 register.
I don't understand your code well enough to know how the first and last variables relate to the array. Is first supposed to be the first location in the array? If not, you should have a declaration in your text section to define this variable, something like this:
Code:
first: .word 0 # comment that describes what first is supposed to be
Also, I don't believe you need to have all of those .data statements after the first one. The assembler will put your variables where they need to go. I could be wrong, but if not, you're doing work figuring out addresses that the computer can do much fast and more accurately.
The reason for the first error is that you are not doing what you think you're doing. In your pseudocode, the first thing you do is store 0 in first.
Your first line of MIPS code stores the value in $t0 in location 0, and you can't do that. It is not storing the value 0 in the$t0 register.
I don't understand your code well enough to know how the first and last variables relate to the array. Is first supposed to be the first location in the array? If not, you should have a declaration in your text section to define this variable, something like this:
Code:
first: .word 0 # comment that describes what first is supposed to be
Also, I don't believe you need to have all of those .data statements after the first one. The assembler will put your variables where they need to go. I could be wrong, but if not, you're doing work figuring out addresses that the computer can do much fast and more accurately.
The first variable should be, at the beginning, the value 0 and then it changes according to the binary search. First is also an index I use to look in the array in that index.
I needto have the .data statements because I am asked to put certain variables in certain addresses and also the result in a certain address and that's the reason for the statements.
So how should the first line appear? Just switching between t0 and 0?
Mark44
Mentor
The first variable should be, at the beginning, the value 0 and then it changes according to the binary search. First is also an index I use to look in the array in that index.
I needto have the .data statements because I am asked to put certain variables in certain addresses and also the result in a certain address and that's the reason for the statements.
So how should the first line appear? Just switching between t0 and 0?
Something like this:
Code:
.data 0x10000000
size: .word 0x0000000a # array size
.data 0x10000004
search: .word 0x0000000d # search term
.data 0x10000008
result: .word 0xblackff # result = -1
.data 0x10000100
first: .word 0 # index of start of array section to be searched
last: .word 0 # index of end of array section to be searched
mid: .word 0 # index of middle element in array section to be searched
array: .word 0x00000001 # the array
.word 0x00000005
.word 0x00000007
.word 0x00000009
.word 0x0000000b
.word 0x0000000d
.word 0x00000010
.word 0x00004000
.word 0x00050000
.word 0x00700000
.text 0x0400000
program:
lw $t1, size # load size value into$t1
addi $t1,$t1, -1 # \$t1 = size - 1, our "last" pointer
j condition # goto condition
You should have variables defined in your data section for each variable in your pseudocode. I have added some of the variables that you omitted. Since first is initialized to 0, I don't need to have code to do this, so I can start right in with storing size - 1 in the appropriate register.
Okkk i understand it now for the future :)
Thanks alot mark
|
2020-07-07 10:43:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2359776347875595, "perplexity": 1698.525977211663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655891884.11/warc/CC-MAIN-20200707080206-20200707110206-00578.warc.gz"}
|
https://www.impan.pl/pl/wydawnictwa/czasopisma-i-serie-wydawnicze/fundamenta-mathematicae/all/175/1/88744/more-on-the-ehrenfeucht-8211-fraisse-game-of-length-omega-1
|
JEDNOSTKA NAUKOWA KATEGORII A+
# Wydawnictwa / Czasopisma IMPAN / Fundamenta Mathematicae / Wszystkie zeszyty
## More on the Ehrenfeucht–Fraisse game of length $\omega _1$
### Tom 175 / 2002
Fundamenta Mathematicae 175 (2002), 79-96 MSC: 03C55, 03C75, 03C45. DOI: 10.4064/fm175-1-5
#### Streszczenie
By results of [9] there are models ${\frak A}$ and ${\frak B}$ for which the Ehrenfeucht–Fraïssé game of length $\omega _1$, ${\rm EFG}_{\omega _1}({\frak A},{\frak B})$, is non-determined, but it is consistent relative to the consistency of a measurable cardinal that no such models have cardinality $\le \aleph _2$. We now improve the work of [9] in two ways. Firstly, we prove that the consistency strength of the statement “CH and ${\rm EFG}_{\omega _1}({\frak A},{\frak B})$ is determined for all models ${\frak A}$ and ${\frak B}$ of cardinality $\aleph _2$” is that of a weakly compact cardinal. On the other hand, we show that if $2^{\aleph _0}<2^{\aleph _{3}}$, $T$ is a countable complete first order theory, and one of
(i) $T$ is unstable,
(ii) $T$ is superstable with DOP or OTOP,
(iii) $T$ is stable and unsuperstable and $2^{\aleph _0}\le \aleph _{3}$,
holds, then there are ${\cal A},{\cal B}\mathrel |\mathrel {\mkern -3mu}=T$ of power $\aleph _{3}$ such that ${\rm EFG}_{\omega _{1}}({\cal A},{\cal B})$ is non-determined.
#### Autorzy
• Tapani HyttinenDepartment of Mathematics
P.O. Box 4 (Yliopistonkatu 5)
00014 University of Helsinki, Finland
e-mail
• Saharon ShelahEinstein Institute of Mathematics
The Hebrew University of Jerusalem
Jerusalem 91904, Israel
and
Deparment of Mathematics
Rutgers University
New Brunswick, NJ 08903, U.S.A.
e-mail
• Jouko VaananenDepartment of Mathematics
P.O. Box 4 (Yliopistonkatu 5)
00014 University of Helsinki, Finland
e-mail
## Przeszukaj wydawnictwa IMPAN
Zbyt krótkie zapytanie. Wpisz co najmniej 4 znaki.
Odśwież obrazek
|
2022-09-26 23:57:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5793470740318298, "perplexity": 3074.807542181867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00023.warc.gz"}
|
http://pycbc.org/pycbc/latest/html/pycbc.types.html
|
# pycbc.types package¶
## pycbc.types.aligned module¶
This module provides a class derived from numpy.ndarray that also indicates whether or not its memory is aligned. It further provides functions for creating zeros and empty (unitialized) arrays with this class.
pycbc.types.aligned.check_aligned(ndarr)[source]
pycbc.types.aligned.empty(n, dtype)[source]
pycbc.types.aligned.zeros(n, dtype)[source]
## pycbc.types.array module¶
This modules provides a device independent Array class based on PyCUDA and Numpy.
class pycbc.types.array.Array(initial_array, dtype=None, copy=True)[source]
Bases: object
Array used to do numeric calculations on a various compute devices. It is a convience wrapper around numpy, and pycuda.
abs_arg_max()[source]
Return location of the maximum argument max
abs_max_loc()[source]
Return the maximum elementwise norm in the array along with the index location
almost_equal_elem(other, tol, relative=True)[source]
Compare whether two array types are almost equal, element by element.
If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(self[i]-other[i]) <= tol*abs(self[i]) for all elements of the array.
If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(self[i]-other[i]) <= tol for all elements of the array.
Other meta-data (type, dtype, and length) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.
Parameters: other – Another Python object, that should be tested for almost-equality with ‘self’, element-by-element. tol – A non-negative number, the tolerance, which is interpreted as either a relative tolerance (the default) or an absolute tolerance. relative – A boolean, indicating whether ‘tol’ should be interpreted as a relative tolerance (if True, the default if this argument is omitted) or as an absolute tolerance (if tol is False). ‘True’ if the data agree within the tolerance, as interpreted by the ‘relative’ keyword, and if the types, lengths, and dtypes are exactly the same. boolean
almost_equal_norm(other, tol, relative=True)[source]
Compare whether two array types are almost equal, normwise.
If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(norm(self-other)) <= tol*abs(norm(self)).
If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(norm(self-other)) <= tol
Other meta-data (type, dtype, and length) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.
Parameters: other – another Python object, that should be tested for almost-equality with ‘self’, based on their norms. tol – a non-negative number, the tolerance, which is interpreted as either a relative tolerance (the default) or an absolute tolerance. relative – A boolean, indicating whether ‘tol’ should be interpreted as a relative tolerance (if True, the default if this argument is omitted) or as an absolute tolerance (if tol is False). ‘True’ if the data agree within the tolerance, as interpreted by the ‘relative’ keyword, and if the types, lengths, and dtypes are exactly the same. boolean
astype(dtype)[source]
clear()[source]
Clear out the values of the array.
conj()[source]
Return complex conjugate of Array.
copy()[source]
Return copy of this array
cumsum()[source]
Return the cumulative sum of the the array.
data
Returns the internal python array
dot(other)[source]
Return the dot product
dtype
fill(value)[source]
imag()[source]
Return imaginary part of Array
inner(other)[source]
Return the inner product of the array with complex conjugation.
itemsize
kind
lal()[source]
Returns a LAL Object that contains this data
max()[source]
Return the maximum value in the array.
max_loc()[source]
Return the maximum value in the array along with the index location
min()[source]
Return the maximum value in the array.
multiply_and_add(other, mult_fac)[source]
Return other multiplied by mult_fac and with self added. Self is modified in place and returned as output. Precisions of inputs must match.
nbytes
ndim
numpy()[source]
Returns a Numpy Array that contains this data
precision
ptr
Returns a pointer to the memory of this array
real()[source]
Return real part of Array
resize(new_size)[source]
Resize self to new_size
roll(shift)[source]
shift vector
save(path, group=None)[source]
Save array to a Numpy .npy, hdf, or text file. When saving a complex array as text, the real and imaginary parts are saved as the first and second column respectively. When using hdf format, the data is stored as a single vector, along with relevant attributes.
Parameters: path (string) – Destination file path. Must end with either .hdf, .npy or .txt. group (string) – Additional name for internal storage use. Ex. hdf storage uses this as the key value. ValueError – If path does not end in .npy or .txt.
shape
squared_norm()[source]
Return the elementwise squared norm of the array
sum()[source]
Return the sum of the the array.
take(indices)[source]
trim_zeros()[source]
Remove the leading and trailing zeros.
vdot(other)[source]
Return the inner product of the array with complex conjugation.
view(dtype)[source]
Return a ‘view’ of the array with its bytes now interpreted according to ‘dtype’. The location in memory is unchanged and changing elements in a view of an array will also change the original array.
Parameters: dtype (numpy dtype (one of float32, float64, complex64 or complex128)) – The new dtype that should be used to interpret the bytes of self
weighted_inner(other, weight)[source]
Return the inner product of the array with complex conjugation.
pycbc.types.array.check_same_len_precision(a, b)[source]
Check that the two arguments have the same length and precision. Raises ValueError if they do not.
pycbc.types.array.common_kind(*dtypes)[source]
pycbc.types.array.complex_same_precision_as(data)[source]
pycbc.types.array.empty(length, dtype=<class 'numpy.float64'>)[source]
Return an empty Array (no initialization)
pycbc.types.array.force_precision_to_match(scalar, precision)[source]
pycbc.types.array.load_array(path, group=None)[source]
Load an Array from an HDF5, ASCII or Numpy file. The file type is inferred from the file extension, which must be .hdf, .txt or .npy.
For ASCII and Numpy files with a single column, a real array is returned. For files with two columns, the columns are assumed to contain the real and imaginary parts of a complex array respectively.
The default data types will be double precision floating point.
Parameters: path (string) – Input file path. Must end with either .npy, .txt or .hdf. group (string) – Additional name for internal storage use. When reading HDF files, this is the path to the HDF dataset to read. ValueError – If path does not end with a supported extension. For Numpy and ASCII input files, this is also raised if the array does not have 1 or 2 dimensions.
pycbc.types.array.real_same_precision_as(data)[source]
pycbc.types.array.zeros(length, dtype=<class 'numpy.float64'>)[source]
Return an Array filled with zeros.
## pycbc.types.array_cpu module¶
Numpy based CPU backend for PyCBC Array
pycbc.types.array_cpu.abs_arg_max()
pycbc.types.array_cpu.abs_arg_max_complex
pycbc.types.array_cpu.abs_max_loc()
pycbc.types.array_cpu.clear()
pycbc.types.array_cpu.cumsum()
pycbc.types.array_cpu.dot()
pycbc.types.array_cpu.empty()
pycbc.types.array_cpu.inner()
Return the inner product of the array with complex conjugation.
pycbc.types.array_cpu.inner_real
pycbc.types.array_cpu.max()
pycbc.types.array_cpu.max_loc()
pycbc.types.array_cpu.min()
pycbc.types.array_cpu.multiply_and_add()
Return other multiplied by mult_fac and with self added. Self will be modified in place. This requires all inputs to be of the same precision.
pycbc.types.array_cpu.numpy()
pycbc.types.array_cpu.ptr()
pycbc.types.array_cpu.squared_norm()
Return the elementwise squared norm of the array
pycbc.types.array_cpu.sum()
pycbc.types.array_cpu.take()
pycbc.types.array_cpu.vdot()
Return the inner product of the array with complex conjugation.
pycbc.types.array_cpu.weighted_inner()
Return the inner product of the array with complex conjugation.
pycbc.types.array_cpu.zeros()
## pycbc.types.config module¶
This module provides a wrapper to the ConfigParser utilities for pycbc. This module is described in the page here:
class pycbc.types.config.DeepCopyableConfigParser(*args, **kwargs)[source]
Bases: configparser.SafeConfigParser
The standard SafeConfigParser no longer supports deepcopy() as of python 2.7 (see http://bugs.python.org/issue16058). This subclass restores that functionality.
class pycbc.types.config.InterpolatingConfigParser(configFiles=None, overrideTuples=None, parsedFilePath=None, deleteTuples=None, skip_extended=False)[source]
This is a sub-class of DeepCopyableConfigParser, which lets us add a few additional helper features that are useful in workflows.
add_options_to_section(section, items, overwrite_options=False)[source]
Add a set of options and values to a section of a ConfigParser object. Will throw an error if any of the options being added already exist, this behaviour can be overridden if desired
Parameters: section (string) – The name of the section to add options+values to items (list of tuples) – Each tuple contains (at [0]) the option and (at [1]) the value to add to the section of the ini file overwrite_options (Boolean, optional) – By default this function will throw a ValueError if an option exists in both the original section in the ConfigParser and in the provided items. This will override so that the options+values given in items will replace the original values if the value is set to True. Default = True
check_duplicate_options(section1, section2, raise_error=False)[source]
Check for duplicate options in two sections, section1 and section2. Will return a list of the duplicate options.
Parameters: section1 (string) – The name of the first section to compare section2 (string) – The name of the second section to compare raise_error (Boolean, optional (default=False)) – If True, raise an error if duplicates are present. duplicates – List of duplicate options List
classmethod from_cli(opts)[source]
Initialize the config parser using options parsed from the command line.
The parsed options opts must include options provided by add_workflow_command_line_group().
Parameters: opts (argparse.ArgumentParser) – The command line arguments parsed by argparse
get_opt_tag(section, option, tag)[source]
Convenience function accessing get_opt_tags() for a single tag: see documentation for that function. NB calling get_opt_tags() directly is preferred for simplicity.
Parameters: self (ConfigParser object) – The ConfigParser object (automatically passed when this is appended to the ConfigParser class) section (string) – The section of the ConfigParser object to read option (string) – The ConfigParser option to look for tag (string) – The name of the subsection to look in, if not found in [section] The value of the options being searched for string
get_opt_tags(section, option, tags)[source]
Supplement to ConfigParser.ConfigParser.get(). This will search for an option in [section] and if it doesn’t find it will also try in [section-tag] for every value of tag in tags. Will raise a ConfigParser.Error if it cannot find a value.
Parameters: self (ConfigParser object) – The ConfigParser object (automatically passed when this is appended to the ConfigParser class) section (string) – The section of the ConfigParser object to read option (string) – The ConfigParser option to look for tags (list of strings) – The name of subsections to look in, if not found in [section] The value of the options being searched for string
get_subsections(section_name)[source]
Return a list of subsections for the given section name
has_option_tag(section, option, tag)[source]
Convenience function accessing has_option_tags() for a single tag: see documentation for that function. NB calling has_option_tags() directly is preferred for simplicity.
Parameters: self (ConfigParser object) – The ConfigParser object (automatically passed when this is appended to the ConfigParser class) section (string) – The section of the ConfigParser object to read option (string) – The ConfigParser option to look for tag (string) – The name of the subsection to look in, if not found in [section] Is the option in the section or [section-tag] Boolean
has_option_tags(section, option, tags)[source]
Supplement to ConfigParser.ConfigParser.has_option(). This will search for an option in [section] and if it doesn’t find it will also try in [section-tag] for each value in tags. Returns True if the option is found and false if not.
Parameters: self (ConfigParser object) – The ConfigParser object (automatically passed when this is appended to the ConfigParser class) section (string) – The section of the ConfigParser object to read option (string) – The ConfigParser option to look for tags (list of strings) – The names of the subsection to look in, if not found in [section] Is the option in the section or [section-tag] (for tag in tags) Boolean
interpolate_string(test_string, section)[source]
Take a string and replace all example of ExtendedInterpolation formatting within the string with the exact value.
For values like ${example} this is replaced with the value that corresponds to the option called example *in the same section* For values like${common|example} this is replaced with the value that corresponds to the option example in the section [common]. Note that in the python3 config parser this is ${common:example} but python2.7 interprets the : the same as a = and this breaks things Nested interpolation is not supported here. Parameters: test_string (String) – The string to parse and interpolate section (String) – The current section of the ConfigParser object test_string – Interpolated string String perform_extended_interpolation()[source] Filter through an ini file and replace all examples of ExtendedInterpolation formatting with the exact value. For values like${example} this is replaced with the value that corresponds to the option called example *in the same section*
For values like ${common|example} this is replaced with the value that corresponds to the option example in the section [common]. Note that in the python3 config parser this is${common:example} but python2.7 interprets the : the same as a = and this breaks things
Nested interpolation is not supported here.
populate_shared_sections()[source]
Parse the [sharedoptions] section of the ini file.
That section should contain entries according to:
• massparams = inspiral, tmpltbank
• dataparams = tmpltbank
This will result in all options in [sharedoptions-massparams] being copied into the [inspiral] and [tmpltbank] sections and the options in [sharedoptions-dataparams] being copited into [tmpltbank]. In the case of duplicates an error will be raised.
read_ini_file(fpath)[source]
Read a .ini file and return it as a ConfigParser class. This function does none of the parsing/combining of sections. It simply reads the file and returns it unedited
Stub awaiting more functionality - see configparser_test.py
Parameters: fpath (Path to .ini file, or list of paths) – The path(s) to a .ini file to be read in cp – The ConfigParser class containing the read in .ini file ConfigParser
sanity_check_subsections()[source]
This function goes through the ConfigParset and checks that any options given in the [SECTION_NAME] section are not also given in any [SECTION_NAME-SUBSECTION] sections.
split_multi_sections()[source]
Parse through the WorkflowConfigParser instance and splits any sections labelled with an “&” sign (for e.g. [inspiral&tmpltbank]) into [inspiral] and [tmpltbank] sections. If these individual sections already exist they will be appended to. If an option exists in both the [inspiral] and [inspiral&tmpltbank] sections an error will be thrown
## pycbc.types.frequencyseries module¶
Provides a class representing a frequency series.
class pycbc.types.frequencyseries.FrequencySeries(initial_array, delta_f=None, epoch='', dtype=None, copy=True)[source]
Models a frequency series consisting of uniformly sampled scalar values.
Parameters: initial_array (array-like) – Array containing sampled data. delta_f (float) – Frequency between consecutive samples in Hertz. epoch ({None, lal.LIGOTimeGPS}, optional) – Start time of the associated time domain data in seconds. dtype ({None, data-type}, optional) – Sample data type. copy (boolean, optional) – If True, samples are copied to a new array.
delta_f
Frequency spacing
Type: float
epoch
Time at 0 index.
Type: lal.LIGOTimeGPS
sample_frequencies
Frequencies that each index corresponds to.
Type: Array
almost_equal_elem(other, tol, relative=True, dtol=0.0)[source]
Compare whether two frequency series are almost equal, element by element.
If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(self[i]-other[i]) <= tol*abs(self[i]) for all elements of the series.
If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(self[i]-other[i]) <= tol for all elements of the series.
The method also checks that self.delta_f is within ‘dtol’ of other.delta_f; if ‘dtol’ has its default value of 0 then exact equality between the two is required.
Other meta-data (type, dtype, length, and epoch) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.
Parameters: other (another Python object, that should be tested for) – almost-equality with ‘self’, element-by-element. tol (a non-negative number, the tolerance, which is interpreted) – as either a relative tolerance (the default) or an absolute tolerance. relative (A boolean, indicating whether 'tol' should be interpreted) – as a relative tolerance (if True, the default if this argument is omitted) or as an absolute tolerance (if tol is False). dtol (a non-negative number, the tolerance for delta_f. Like 'tol',) – it is interpreted as relative or absolute based on the value of ‘relative’. This parameter defaults to zero, enforcing exact equality between the delta_f values of the two FrequencySeries. boolean – as interpreted by the ‘relative’ keyword, and if the types, lengths, dtypes, and epochs are exactly the same. ‘True’ if the data and delta_fs agree within the tolerance,
almost_equal_norm(other, tol, relative=True, dtol=0.0)[source]
Compare whether two frequency series are almost equal, normwise.
If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(norm(self-other)) <= tol*abs(norm(self)).
If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(norm(self-other)) <= tol
The method also checks that self.delta_f is within ‘dtol’ of other.delta_f; if ‘dtol’ has its default value of 0 then exact equality between the two is required.
Other meta-data (type, dtype, length, and epoch) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.
Parameters: other (another Python object, that should be tested for) – almost-equality with ‘self’, based on their norms. tol (a non-negative number, the tolerance, which is interpreted) – as either a relative tolerance (the default) or an absolute tolerance. relative (A boolean, indicating whether 'tol' should be interpreted) – as a relative tolerance (if True, the default if this argument is omitted) or as an absolute tolerance (if tol is False). dtol (a non-negative number, the tolerance for delta_f. Like 'tol',) – it is interpreted as relative or absolute based on the value of ‘relative’. This parameter defaults to zero, enforcing exact equality between the delta_f values of the two FrequencySeries. boolean – as interpreted by the ‘relative’ keyword, and if the types, lengths, dtypes, and epochs are exactly the same. ‘True’ if the data and delta_fs agree within the tolerance,
at_frequency(freq)[source]
Return the value at the specified frequency
cyclic_time_shift(dt)[source]
Shift the data and timestamps by a given number of seconds
Shift the data and timestamps in the time domain a given number of seconds. To just change the time stamps, do ts.start_time += dt. The time shift may be smaller than the intrinsic sample rate of the data. Note that data will be cycliclly rotated, so if you shift by 2 seconds, the final 2 seconds of your data will now be at the beginning of the data set.
Parameters: dt (float) – Amount of time to shift the vector. data – The time shifted frequency series. pycbc.types.FrequencySeries
delta_f
Frequency between consecutive samples in Hertz.
delta_t
Return the time between samples if this were a time series. This assume the time series is even in length!
duration
Return the time duration of this vector
end_time
Return the end time of this vector
epoch
Frequency series epoch as a LIGOTimeGPS.
get_delta_f()[source]
Return frequency between consecutive samples in Hertz.
get_epoch()[source]
Return frequency series epoch as a LIGOTimeGPS.
get_sample_frequencies()[source]
Return an Array containing the sample frequencies.
lal()[source]
Produces a LAL frequency series object equivalent to self.
Returns: lal_data – LAL frequency series object containing the same data as self. The actual type depends on the sample’s dtype. If the epoch of self was ‘None’, the epoch of the returned LAL object will be LIGOTimeGPS(0,0); otherwise, the same as that of self. {lal.*FrequencySeries} TypeError – If frequency series is stored in GPU memory.
match(other, psd=None, low_frequency_cutoff=None, high_frequency_cutoff=None)[source]
Return the match between the two TimeSeries or FrequencySeries.
Return the match between two waveforms. This is equivalent to the overlap maximized over time and phase. By default, the other vector will be resized to match self. Beware, this may remove high frequency content or the end of the vector.
Parameters: other (TimeSeries or FrequencySeries) – The input vector containing a waveform. psd (Frequency Series) – A power spectral density to weight the overlap. low_frequency_cutoff ({None, float}, optional) – The frequency to begin the match. high_frequency_cutoff ({None, float}, optional) – The frequency to stop the match. index (int) – The number of samples to shift to get the match. match (float) index (int) – The number of samples to shift to get the match.
plot(**kwds)[source]
Basic plot of this frequency series
sample_frequencies
Array of the sample frequencies.
sample_rate
Return the sample rate this would have in the time domain. This assumes even length time series!
save(path, group=None, ifo='P1')[source]
Save frequency series to a Numpy .npy, hdf, or text file. The first column contains the sample frequencies, the second contains the values. In the case of a complex frequency series saved as text, the imaginary part is written as a third column. When using hdf format, the data is stored as a single vector, along with relevant attributes.
Parameters: path (string) – Destination file path. Must end with either .hdf, .npy or .txt. group (string) – Additional name for internal storage use. Ex. hdf storage uses this as the key value. ValueError – If path does not end in .npy or .txt.
start_time
Return the start time of this vector
to_frequencyseries()[source]
Return frequency series
to_timeseries(delta_t=None)[source]
Return the Fourier transform of this time series.
Note that this assumes even length time series!
Parameters: delta_t ({None, float}, optional) – The time resolution of the returned series. By default the is determined by length and delta_f of this frequency (resolution) – series. – The inverse fourier transform of this frequency series. TimeSeries
pycbc.types.frequencyseries.load_frequencyseries(path, group=None)[source]
Load a FrequencySeries from an HDF5, ASCII or Numpy file. The file type is inferred from the file extension, which must be .hdf, .txt or .npy.
For ASCII and Numpy files, the first column of the array is assumed to contain the frequency. If the array has two columns, a real frequency series is returned. If the array has three columns, the second and third ones are assumed to contain the real and imaginary parts of a complex frequency series.
For HDF files, the dataset is assumed to contain the attribute delta_f giving the frequency resolution in Hz. The attribute epoch, if present, is taken as the start GPS time (epoch) of the data in the series.
The default data types will be double precision floating point.
Parameters: path (string) – Input file path. Must end with either .npy, .txt or .hdf. group (string) – Additional name for internal storage use. When reading HDF files, this is the path to the HDF dataset to read. ValueError – If the path does not end in a supported extension. For Numpy and ASCII input files, this is also raised if the array does not have 2 or 3 dimensions.
## pycbc.types.optparse module¶
This modules contains extensions for use with argparse
class pycbc.types.optparse.DictWithDefaultReturn[source]
default_set = False
ifo_set = False
class pycbc.types.optparse.MultiDetMultiColonOptionAction(option_strings, dest, nargs='+', const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]
A special case of MultiDetOptionAction which allows one to use arguments containing colons, such as V1:FOOBAR:1. The first colon is assumed to be the separator between the detector and the argument. All subsequent colons are kept as part of the argument. Unlike MultiDetOptionAction, all arguments must be prefixed by the corresponding detector.
class pycbc.types.optparse.MultiDetOptionAction(option_strings, dest, nargs='+', const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]
class pycbc.types.optparse.MultiDetOptionActionSpecial(option_strings, dest, nargs='+', const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]
This class in an extension of the MultiDetOptionAction class to handle cases where the : is already a special character. For example the channel name is something like H1:CHANNEL_NAME. Here the channel name must be provided uniquely for each ifo. The dictionary key is set to H1 and the value to H1:CHANNEL_NAME for this example.
class pycbc.types.optparse.MultiDetOptionAppendAction(option_strings, dest, nargs='+', const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]
pycbc.types.optparse.convert_to_process_params_dict(opt)[source]
Takes the namespace object (opt) from the multi-detector interface and returns a dictionary of command line options that will be handled correctly by the register_to_process_params ligolw function.
pycbc.types.optparse.copy_opts_for_single_ifo(opt, ifo)[source]
Takes the namespace object (opt) from the multi-detector interface and returns a namespace object for a single ifo that can be used with functions expecting output from the single-detector interface.
pycbc.types.optparse.ensure_one_opt(opt, parser, opt_list)[source]
Check that one and only one in the opt_list is defined in opt
Parameters: opt (object) – Result of option parsing parser (object) – OptionParser instance. opt_list (list of strings) –
pycbc.types.optparse.ensure_one_opt_multi_ifo(opt, parser, ifo, opt_list)[source]
Check that one and only one in the opt_list is defined in opt
Parameters: opt (object) – Result of option parsing parser (object) – OptionParser instance. opt_list (list of strings) –
pycbc.types.optparse.nonnegative_float(s)[source]
Ensure argument is a positive real number or zero and return it as float.
To be used as type in argparse arguments.
pycbc.types.optparse.positive_float(s)[source]
Ensure argument is a positive real number and return it as float.
To be used as type in argparse arguments.
pycbc.types.optparse.required_opts(opt, parser, opt_list, required_by=None)[source]
Check that all the opts are defined
Parameters: opt (object) – Result of option parsing parser (object) – OptionParser instance. opt_list (list of strings) – required_by (string, optional) – the option that requires these options (if applicable)
pycbc.types.optparse.required_opts_multi_ifo(opt, parser, ifo, opt_list, required_by=None)[source]
Check that all the opts are defined
Parameters: opt (object) – Result of option parsing parser (object) – OptionParser instance. ifo (string) – opt_list (list of strings) – required_by (string, optional) – the option that requires these options (if applicable)
## pycbc.types.timeseries module¶
Provides a class representing a time series.
class pycbc.types.timeseries.TimeSeries(initial_array, delta_t=None, epoch=None, dtype=None, copy=True)[source]
Models a time series consisting of uniformly sampled scalar values.
Parameters: initial_array (array-like) – Array containing sampled data. delta_t (float) – Time between consecutive samples in seconds. epoch ({None, lal.LIGOTimeGPS}, optional) – Time of the first sample in seconds. dtype ({None, data-type}, optional) – Sample data type. copy (boolean, optional) – If True, samples are copied to a new array.
delta_t
duration
start_time
end_time
sample_times
sample_rate
add_into(other, copy=True)
Return copy of self with other injected into it.
The other vector will be resized and time shifted with sub-sample precision before adding. This assumes that one can assume zeros outside of the original vector range.
almost_equal_elem(other, tol, relative=True, dtol=0.0)[source]
Compare whether two time series are almost equal, element by element.
If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(self[i]-other[i]) <= tol*abs(self[i]) for all elements of the series.
If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(self[i]-other[i]) <= tol for all elements of the series.
The method also checks that self.delta_t is within ‘dtol’ of other.delta_t; if ‘dtol’ has its default value of 0 then exact equality between the two is required.
Other meta-data (type, dtype, length, and epoch) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.
Parameters: other (another Python object, that should be tested for) – almost-equality with ‘self’, element-by-element. tol (a non-negative number, the tolerance, which is interpreted) – as either a relative tolerance (the default) or an absolute tolerance. relative (A boolean, indicating whether 'tol' should be interpreted) – as a relative tolerance (if True, the default if this argument is omitted) or as an absolute tolerance (if tol is False). dtol (a non-negative number, the tolerance for delta_t. Like 'tol',) – it is interpreted as relative or absolute based on the value of ‘relative’. This parameter defaults to zero, enforcing exact equality between the delta_t values of the two TimeSeries. boolean – as interpreted by the ‘relative’ keyword, and if the types, lengths, dtypes, and epochs are exactly the same. ‘True’ if the data and delta_ts agree within the tolerance,
almost_equal_norm(other, tol, relative=True, dtol=0.0)[source]
Compare whether two time series are almost equal, normwise.
If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(norm(self-other)) <= tol*abs(norm(self)).
If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(norm(self-other)) <= tol
The method also checks that self.delta_t is within ‘dtol’ of other.delta_t; if ‘dtol’ has its default value of 0 then exact equality between the two is required.
Other meta-data (type, dtype, length, and epoch) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.
Parameters: other (another Python object, that should be tested for) – almost-equality with ‘self’, based on their norms. tol (a non-negative number, the tolerance, which is interpreted) – as either a relative tolerance (the default) or an absolute tolerance. relative (A boolean, indicating whether 'tol' should be interpreted) – as a relative tolerance (if True, the default if this argument is omitted) or as an absolute tolerance (if tol is False). dtol (a non-negative number, the tolerance for delta_t. Like 'tol',) – it is interpreted as relative or absolute based on the value of ‘relative’. This parameter defaults to zero, enforcing exact equality between the delta_t values of the two TimeSeries. boolean – as interpreted by the ‘relative’ keyword, and if the types, lengths, dtypes, and epochs are exactly the same. ‘True’ if the data and delta_ts agree within the tolerance,
append_zeros(num)[source]
Append num zeros onto the end of this TimeSeries.
at_time(time, nearest_sample=False)[source]
Return the value at the specified gps time
crop(left, right)[source]
Remove given seconds from either end of time series
Parameters: left (float) – Number of seconds of data to remove from the left of the time series. right (float) – Number of seconds of data to remove from the right of the time series. cropped – The reduced time series pycbc.types.TimeSeries
cyclic_time_shift(dt)[source]
Shift the data and timestamps by a given number of seconds
Shift the data and timestamps in the time domain a given number of seconds. To just change the time stamps, do ts.start_time += dt. The time shift may be smaller than the intrinsic sample rate of the data. Note that data will be cyclically rotated, so if you shift by 2 seconds, the final 2 seconds of your data will now be at the beginning of the data set.
Parameters: dt (float) – Amount of time to shift the vector. data – The time shifted time series. pycbc.types.TimeSeries
delta_f
Return the delta_f this ts would have in the frequency domain
delta_t
Time between consecutive samples in seconds.
detrend(type='linear')[source]
Remove linear trend from the data
Remove a linear trend from the data to improve the approximation that the data is circularly convolved, this helps reduce the size of filter transients from a circular convolution / filter.
Parameters: type (str) – The choice of detrending. The default (‘linear’) removes a linear squares fit. 'constant' removes only the mean of the data. (least) –
duration
Duration of time series in seconds.
end_time
Time series end time as a LIGOTimeGPS.
epoch_close(other)[source]
Check if the epoch is close enough to allow operations
filter_psd(segment_duration, delta_f, flow)[source]
Calculate the power spectral density of this time series.
Use the pycbc.psd.welch method to estimate the psd of this time segment. The psd is then truncated in the time domain to the segment duration and interpolated to the requested sample frequency.
Parameters: segment_duration (float) – Duration in seconds to use for each sample of the spectrum. delta_f (float) – Frequency spacing to return psd at. flow (float) – The low frequency cutoff to apply when truncating the inverse spectrum. psd – Frequency series containing the estimated PSD. FrequencySeries
fir_zero_filter(coeff)[source]
Filter the timeseries with a set of FIR coefficients
Parameters: coeff (numpy.ndarray) – FIR coefficients. Should be and odd length and symmetric. filtered_series (pycbc.types.TimeSeries) – Return the filtered timeseries, which has been properly shifted to account for the FIR filter delay and the corrupted regions zeroed out.
gate(time, window=0.25, method='taper', copy=True, taper_width=0.25, invpsd=None)[source]
Gate out portion of time series
Parameters: time (float) – Central time of the gate in seconds window (float) – Half-length in seconds to remove data around gate time. method (str) – Method to apply gate, options are ‘hard’, ‘taper’, and ‘paint’. copy (bool) – If False, do operations inplace to this time series, else return new time series. taper_width (float) – Length of tapering region on either side of excized data. Only applies to the taper gating method. invpsd (pycbc.types.FrequencySeries) – The inverse PSD to use for painting method. If not given, a PSD is generated using default settings. data – Gated time series pycbc.types.TimeSeris
get_delta_t()[source]
Return time between consecutive samples in seconds.
get_duration()[source]
Return duration of time series in seconds.
get_end_time()[source]
Return time series end time as a LIGOTimeGPS.
get_sample_rate()[source]
Return the sample rate of the time series.
get_sample_times()[source]
Return an Array containing the sample times.
highpass_fir(frequency, order, beta=5.0, remove_corrupted=True)[source]
Highpass filter the time series using an FIR filtered generated from the ideal response passed through a kaiser window (beta = 5.0)
Parameters: Series (Time) – The time series to be high-passed. frequency (float) – The frequency below which is suppressed. order (int) – Number of corrupted samples on each side of the time series beta (float) – Beta parameter of the kaiser window that sets the side lobe attenuation. remove_corrupted ({True, boolean}) – If True, the region of the time series corrupted by the filtering is excised before returning. If false, the corrupted regions are not excised and the full time series is returned.
inject(other, copy=True)[source]
Return copy of self with other injected into it.
The other vector will be resized and time shifted with sub-sample precision before adding. This assumes that one can assume zeros outside of the original vector range.
lal()[source]
Produces a LAL time series object equivalent to self.
Returns: lal_data – LAL time series object containing the same data as self. The actual type depends on the sample’s dtype. If the epoch of self is ‘None’, the epoch of the returned LAL object will be LIGOTimeGPS(0,0); otherwise, the same as that of self. {lal.*TimeSeries} TypeError – If time series is stored in GPU memory.
lowpass_fir(frequency, order, beta=5.0, remove_corrupted=True)[source]
Lowpass filter the time series using an FIR filtered generated from the ideal response passed through a kaiser window (beta = 5.0)
Parameters: Series (Time) – The time series to be low-passed. frequency (float) – The frequency below which is suppressed. order (int) – Number of corrupted samples on each side of the time series beta (float) – Beta parameter of the kaiser window that sets the side lobe attenuation. remove_corrupted ({True, boolean}) – If True, the region of the time series corrupted by the filtering is excised before returning. If false, the corrupted regions are not excised and the full time series is returned.
match(other, psd=None, low_frequency_cutoff=None, high_frequency_cutoff=None)[source]
Return the match between the two TimeSeries or FrequencySeries.
Return the match between two waveforms. This is equivalent to the overlap maximized over time and phase. By default, the other vector will be resized to match self. This may remove high frequency content or the end of the vector.
Parameters: other (TimeSeries or FrequencySeries) – The input vector containing a waveform. psd (Frequency Series) – A power spectral density to weight the overlap. low_frequency_cutoff ({None, float}, optional) – The frequency to begin the match. high_frequency_cutoff ({None, float}, optional) – The frequency to stop the match. match (float) index (int) – The number of samples to shift to get the match.
notch_fir(f1, f2, order, beta=5.0, remove_corrupted=True)[source]
notch filter the time series using an FIR filtered generated from the ideal response passed through a time-domain kaiser window (beta = 5.0)
The suppression of the notch filter is related to the bandwidth and the number of samples in the filter length. For a few Hz bandwidth, a length corresponding to a few seconds is typically required to create significant suppression in the notched band.
Parameters: Series (Time) – The time series to be notched. f1 (float) – The start of the frequency suppression. f2 (float) – The end of the frequency suppression. order (int) – Number of corrupted samples on each side of the time series beta (float) – Beta parameter of the kaiser window that sets the side lobe attenuation.
plot(**kwds)[source]
Basic plot of this time series
prepend_zeros(num)[source]
Prepend num zeros onto the beginning of this TimeSeries. Update also epoch to include this prepending.
psd(segment_duration, **kwds)[source]
Calculate the power spectral density of this time series.
Use the pycbc.psd.welch method to estimate the psd of this time segment. For more complete options, please see that function.
Parameters: segment_duration (float) – Duration in seconds to use for each sample of the spectrum. kwds (keywords) – Additional keyword arguments are passed on to the pycbc.psd.welch method. psd – Frequency series containing the estimated PSD. FrequencySeries
qtransform(delta_t=None, delta_f=None, logfsteps=None, frange=None, qrange=(4, 64), mismatch=0.2, return_complex=False)[source]
Return the interpolated 2d qtransform of this data
Parameters: delta_t ({self.delta_t, float}) – The time resolution to interpolate to delta_f (float, Optional) – The frequency resolution to interpolate to logfsteps (int) – Do a log interpolation (incompatible with delta_f option) and set the number of steps to take. frange ({(30, nyquist*0.8), tuple of ints}) – frequency range qrange ({(4, 64), tuple}) – q range mismatch (float) – Mismatch between frequency tiles return_complex ({False, bool}) – return the raw complex series instead of the normalized power. times (numpy.ndarray) – The time that the qtransform is sampled. freqs (numpy.ndarray) – The frequencies that the qtransform is sampled. qplane (numpy.ndarray (2d)) – The two dimensional interpolated qtransform of this time series.
resample(delta_t)[source]
Resample this time series to the new delta_t
Parameters: delta_t (float) – The time step to resample the times series to. resampled_ts – The resample timeseries at the new time interval delta_t. pycbc.types.TimeSeries
sample_rate
The sample rate of the time series.
sample_rate_close(other)[source]
Check if the sample rate is close enough to allow operations
sample_times
Array containing the sample times.
save(path, group=None)[source]
Save time series to a Numpy .npy, hdf, or text file. The first column contains the sample times, the second contains the values. In the case of a complex time series saved as text, the imaginary part is written as a third column. When using hdf format, the data is stored as a single vector, along with relevant attributes.
Parameters: path (string) – Destination file path. Must end with either .hdf, .npy or .txt. group (string) – Additional name for internal storage use. Ex. hdf storage uses this as the key value. ValueError – If path does not end in .npy or .txt.
save_to_wav(file_name)[source]
Save this time series to a wav format audio file.
Parameters: file_name (string) – The output file name
start_time
Return time series start time as a LIGOTimeGPS.
time_slice(start, end, mode='floor')[source]
Return the slice of the time series that contains the time range in GPS seconds.
to_frequencyseries(delta_f=None)[source]
Return the Fourier transform of this time series
Parameters: delta_f ({None, float}, optional) – The frequency resolution of the returned frequency series. By the resolution is determined by the duration of the timeseries. (default) – The fourier transform of this time series. FrequencySeries
to_timeseries()[source]
Return time series
whiten(segment_duration, max_filter_duration, trunc_method='hann', remove_corrupted=True, low_frequency_cutoff=None, return_psd=False, **kwds)[source]
Return a whitened time series
Parameters: segment_duration (float) – Duration in seconds to use for each sample of the spectrum. max_filter_duration (int) – Maximum length of the time-domain filter in seconds. trunc_method ({None, 'hann'}) – Function used for truncating the time-domain filter. None produces a hard truncation at max_filter_len. remove_corrupted ({True, boolean}) – If True, the region of the time series corrupted by the whitening is excised before returning. If false, the corrupted regions are not excised and the full time series is returned. low_frequency_cutoff ({None, float}) – Low frequency cutoff to pass to the inverse spectrum truncation. This should be matched to a known low frequency cutoff of the data if there is one. return_psd ({False, Boolean}) – Return the estimated and conditioned PSD that was used to whiten the data. kwds (keywords) – Additional keyword arguments are passed on to the pycbc.psd.welch method. whitened_data – The whitened time series TimeSeries
pycbc.types.timeseries.load_timeseries(path, group=None)[source]
Load a TimeSeries from an HDF5, ASCII or Numpy file. The file type is inferred from the file extension, which must be .hdf, .txt or .npy.
For ASCII and Numpy files, the first column of the array is assumed to contain the sample times. If the array has two columns, a real-valued time series is returned. If the array has three columns, the second and third ones are assumed to contain the real and imaginary parts of a complex time series.
For HDF files, the dataset is assumed to contain the attributes delta_t and start_time, which should contain respectively the sampling period in seconds and the start GPS time of the data.
The default data types will be double precision floating point.
Parameters: path (string) – Input file path. Must end with either .npy, .txt or .hdf. group (string) – Additional name for internal storage use. When reading HDF files, this is the path to the HDF dataset to read. ValueError – If path does not end in a supported extension. For Numpy and ASCII input files, this is also raised if the array does not have 2 or 3 dimensions.
|
2021-09-21 17:16:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4172534942626953, "perplexity": 4174.421428462069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057225.57/warc/CC-MAIN-20210921161350-20210921191350-00146.warc.gz"}
|
https://bbs.hankcs.com/t/topic/2768
|
# Learning to Contextually Aggregate Multi-Source Supervision for Sequence...
Learning from multi-domain data is attractive yet non-trivial. This paper augments BiLSTM-CRF with a very simple linear transform followed by domain attention for multi-source learning.
## Approach
The structure prediction score is defined as:
$$s({x},{y}) = \sum_{t=1}^T (U_{t, {y}_{t}} + M_{{y}_{t-1},{y}_t}),$$
Their simple method is to transform the emission matrix U and transition matrix M source-wise:
$$s^{(k)}({x},{y}) = \sum_{t=1}^T \left((U A^{(k)})_{t, {y}_t} + (M A^{(k)})_{{y}_{t-1},{y}_t}\right).$$
These linear transforms are trained jointly in a trivial joint learning fashion.
To produce the final prediction, the authors propose to vote based on an attention of these sources:
\begin{align} \mathbf{A}_i^* = \sum_{k=1}^K {q}_{i,k} A^{(k)}. \end{align}
where {q}_{i,k} is an attention score produced by softmax \mathbf{q}_i= \text{softmax}(\mathbf{Q} \mathbf{h}^{(i)}),\text{where}\;\mathbf{Q}\in\mathbb{R}^{K \times 2d}.
|
2021-06-22 12:56:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.971356213092804, "perplexity": 9998.960068133785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488517820.68/warc/CC-MAIN-20210622124548-20210622154548-00237.warc.gz"}
|
https://quant.stackexchange.com/tags/greeks/new
|
# Tag Info
3
Think of this in terms of Taylor series. Let's say the option price today is $C\left(S,t\right)$ where S is the underlying price and t time. Let's say the underlying price changes by $\Delta S$ in a time interval $\Delta t$, so your P/L will be: $\mathrm{P/L}=C\left(S+\Delta S,t+\Delta t\right)-C\left(S,t\right)$ Use Taylor series to first order in t and ...
1
These papers study delta-hedging of equity options with different models. @Article{, author = {Gurdip Bakshi and Charles Cao and Zhiwu Chen}, title = {Empirical Performance of Alternative Option Pricing Models}, journal = {Journal of Finance}, year = 1997, volume = 52, number = 5, pages = {2003--2049}, } @Article{, author = {...
0
It is simpler than the other Greeks, and the reason you don't hear a lot about $\rho$ is because it has smaller impact in the scheme of things. Let's say we are in the BS world, then the rho formulae for a call or put are rather simple: $\rho_{\mathrm{Call}} = K { e^{- r_{d} \tau} }\tau { N\left (d_{2} \right ) }$ \$\rho_{\mathrm{Put}} = - K { e^{- r_{d}...
Top 50 recent answers are included
|
2019-08-19 03:29:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7760133147239685, "perplexity": 1083.1710009553271}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314641.41/warc/CC-MAIN-20190819032136-20190819054136-00030.warc.gz"}
|
http://orbit.dtu.dk/en/publications/bounds-on-the-degree-of-apn-polynomials-the-case-of-x-1--gx(9f443068-88b9-41b2-b6e9-48c880ad3486)/export.html
|
## Bounds on the degree of APN polynomials: the case of x −1 + g(x)
Publication: Research - peer-reviewJournal article – Annual report year: 2011
### Standard
Bounds on the degree of APN polynomials: the case of x −1 + g(x). / Leander, Gregor; Rodier, François.
In: Designs, Codes and Cryptography, Vol. 59, No. 1-3, 2011, p. 207-222.
Publication: Research - peer-reviewJournal article – Annual report year: 2011
### Author
Leander, Gregor; Rodier, François / Bounds on the degree of APN polynomials: the case of x −1 + g(x).
In: Designs, Codes and Cryptography, Vol. 59, No. 1-3, 2011, p. 207-222.
Publication: Research - peer-reviewJournal article – Annual report year: 2011
### Bibtex
title = "Bounds on the degree of APN polynomials: the case of x −1 + g(x)",
abstract = "In this paper we consider APN functions $${f:\mathcal{F}_{2^m}\to \mathcal{F}_{2^m}}$$ of the form f(x) = x −1 + g(x) where g is any non $${\mathcal{F}_{2}}$$-affine polynomial. We prove a lower bound on the degree of the polynomial g. This bound in particular implies that such a function f is APN on at most a finite number of fields $${\mathcal{F}_{2^m}}$$. Furthermore we prove that when the degree of g is less than 7 such functions are APN only if m ≤ 3 where these functions are equivalent to x 3.",
author = "Gregor Leander and François Rodier",
year = "2011",
doi = "10.1007/s10623-010-9456-y",
volume = "59",
pages = "207--222",
journal = "Designs, Codes and Cryptography",
issn = "0925-1022",
publisher = "Springer New York LLC",
number = "1-3",
}
### RIS
TY - JOUR
T1 - Bounds on the degree of APN polynomials: the case of x −1 + g(x)
AU - Leander,Gregor
AU - Rodier,François
PY - 2011
Y1 - 2011
N2 - In this paper we consider APN functions $${f:\mathcal{F}_{2^m}\to \mathcal{F}_{2^m}}$$ of the form f(x) = x −1 + g(x) where g is any non $${\mathcal{F}_{2}}$$-affine polynomial. We prove a lower bound on the degree of the polynomial g. This bound in particular implies that such a function f is APN on at most a finite number of fields $${\mathcal{F}_{2^m}}$$. Furthermore we prove that when the degree of g is less than 7 such functions are APN only if m ≤ 3 where these functions are equivalent to x 3.
AB - In this paper we consider APN functions $${f:\mathcal{F}_{2^m}\to \mathcal{F}_{2^m}}$$ of the form f(x) = x −1 + g(x) where g is any non $${\mathcal{F}_{2}}$$-affine polynomial. We prove a lower bound on the degree of the polynomial g. This bound in particular implies that such a function f is APN on at most a finite number of fields $${\mathcal{F}_{2^m}}$$. Furthermore we prove that when the degree of g is less than 7 such functions are APN only if m ≤ 3 where these functions are equivalent to x 3.
U2 - 10.1007/s10623-010-9456-y
DO - 10.1007/s10623-010-9456-y
M3 - Journal article
VL - 59
SP - 207
EP - 222
JO - Designs, Codes and Cryptography
T2 - Designs, Codes and Cryptography
JF - Designs, Codes and Cryptography
SN - 0925-1022
IS - 1-3
ER -
|
2017-07-27 08:56:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6387702226638794, "perplexity": 1181.0873621500182}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549427750.52/warc/CC-MAIN-20170727082427-20170727102427-00323.warc.gz"}
|
http://mathoverflow.net/feeds/question/30989
|
Does NP = "epsilon-P" (PTAS / BPP)? - MathOverflow most recent 30 from http://mathoverflow.net 2013-05-24T23:02:23Z http://mathoverflow.net/feeds/question/30989 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/30989/does-np-epsilon-p-ptas-bpp Does NP = "epsilon-P" (PTAS / BPP)? Sai Emrys 2010-07-08T02:46:05Z 2010-07-09T08:42:12Z <p>Some NP-complete optimization problems, like the knapsack problem, have a solution reachable in polynomial time that is guaranteed to be within arbitrary ε of the optimum answer. (aka PTAS - polynomial time approximation scheme)</p> <p>Some decision problems, like testing primes, have probabilistic solutions (like Rabin's) where you can get to arbitrary ε certainty of having the right answer. (aka BPP - bounded error, probabilistic, polynomial time)</p> <p>I'm aware these are very different things theoretically, but I'm going to lump them together and call them "ε-P" - i.e. problems that have 'approximate' (in certainty or optimality) solutions in polynomial time, to within whatever ε one wants.</p> <p>My question is, how many NP problems are "ε-P", like the above?</p> <hr> <p>Answer as I understand it: </p> <p>Certain problems that are "MAX SNP-hard" have no PTAS. These include: metric traveling salesman, maximum bounded common induced subgraph, three dimensional matching, maximum H-matching, MAX-3SAT, MAX-CUT, vertex cover, and independent set.</p> <p>NP-complete problems probably don't have BPPs.</p> <p>However, there's no clear <em>positive</em> answer (i.e. what NP problems <em>do</em> have a PTAS/BPP). Brownie points if you can supply one.</p> <hr> <p>FYI: I am not a mathematician. (My areas are social neuroscience, computer hacking, etc.)</p> <p>So this is probably not nearly precisely characterized enough to answer precisely, and I am not able to do so. I'm going to give a motivated explanation; please fill in the gaps and correct my errors as you see fit. My boyfriend is a mathematician (algebraic combinatorics) and can translate stuff that's over my head, so don't feel obliged to talk down to me.</p> <p>This is a pragmatic rather than theoretical question (motivated purely by curiosity), so 'good-enough' answers are good enough. ;-)</p> http://mathoverflow.net/questions/30989/does-np-epsilon-p-ptas-bpp/30990#30990 Answer by Greg Kuperberg for Does NP = "epsilon-P" (PTAS / BPP)? Greg Kuperberg 2010-07-08T02:57:08Z 2010-07-08T02:57:08Z <p>One answer is that many of them aren't, by the <a href="http://en.wikipedia.org/wiki/Hardness_of_approximation" rel="nofollow">PCP theorem</a>. This was a dramatic discovery of the early 1990s. Even the Traveling Salesman Problem does not have a PTAS unless P = NP. (See also the <a href="http://www.cs.princeton.edu/~arora/pubs/almss.ps" rel="nofollow">classic original paper</a>.)</p> http://mathoverflow.net/questions/30989/does-np-epsilon-p-ptas-bpp/30993#30993 Answer by Joel David Hamkins for Does NP = "epsilon-P" (PTAS / BPP)? Joel David Hamkins 2010-07-08T03:20:13Z 2010-07-08T03:20:13Z <p>If P=NP, then of course <em>every</em> NP problem will be in $\epsilon$-P. So we probably shouldn't expect any proofs that a particular NP problem is definitely not in $\epsilon$-P to show up here, as this would settle P $\neq$ NP.</p> <p>Meanwhile, as Greg has already noted, there are several instances of NP complete problems whose approximate versions are also NP complete. So under P $\neq$ NP, these would be negative instances. However, <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.66.9127&rep=rep1&type=pdf" rel="nofollow">This 1992 thesis</a> by Viggo Kann explains several positive instances of the phenomenon.</p> http://mathoverflow.net/questions/30989/does-np-epsilon-p-ptas-bpp/31046#31046 Answer by Peter Shor for Does NP = "epsilon-P" (PTAS / BPP)? Peter Shor 2010-07-08T14:10:20Z 2010-07-08T14:10:20Z <p>The answer to this question is essentially given in previous answers, but I'll try to state it more completely. It really depends on the problem. All NP-complete problems are equivalent in how hard it is to find their exact solution, but they vary widely in how hard it is to approximate them. Many of them can be shown hard to approximate by using the PCP theorem. A few were known to be hard to approximate before the PCP theorem. There are many which have a polynomial time approximation scheme (PTAS), and so are "easy" to approximate (for some meaning of "easy"). A few have a fully polynomial time approximation scheme (FPTAS), and so are easy to approximate (for a much more satisfying meaning of "easy").</p> <p>There are no known NP-complete problems which have probabilistic algorithms (like primality testing does) -- this would imply BPP=NP, which is something that computer scientists think is very unlikely.</p>
|
2013-05-24 23:02:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7493211627006531, "perplexity": 2870.66287502091}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00023-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://www.esaral.com/q/if-p-and-q-are-two-prime-number-52281/
|
If p and q are two prime number,
Question:
If p and q are two prime number, then what is their HCF?
Solution:
It is given that p and q are two prime numbers; we have to find their HCF.
We know that the factors of any prime number are 1 and the prime number itself.
For example, let $p=2$ and $q=3$
Thus, the factors are as follows
$p=2 \times 1$
And
$q=3 \times 1$
Now, the HCF of 2 and 3 is 1.
Thus the HCF of p and is 1
|
2022-01-26 07:30:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8252757787704468, "perplexity": 235.49123337309118}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304928.27/warc/CC-MAIN-20220126071320-20220126101320-00279.warc.gz"}
|
https://www.groundai.com/project/towards-software-analytics-modeling-maintenance-activities/
|
Towards Software Analytics: Modeling Maintenance Activities
# Towards Software Analytics: Modeling Maintenance Activities
## Abstract.
Lehman’s Laws teach us that a software system will become progressively less satisfying to its users over time, unless it is continually adapted to meet new needs. Understanding software maintenance can potentially relieve many of the pains currently experienced by practitioners in the industry and assist in reducing uncertainty, improving cost-effectiveness, reliability and more. The research community classifies software maintenance into 3 main activities: Corrective: fault fixing; Perfective: system improvements; Adaptive: new feature introduction.
In this work we seek to model software maintenance activities and design a commit classification method capable of yielding a high quality classification model. We performed a comparative analysis of our method and existing techniques based on 11 popular open source projects from which we had manually classified 1151 commits, over 100 commits from each of the studied projects. The model we devised was able to achieve an accuracy of 76% and Kappa of 63% (considered ”Good“ in this context) for the test dataset, an improvement of over 20 percentage points, and a relative improvement of 40% in the context of cross-project classification.
We then leverage our commit classification method to demonstrate two applications: {enumerate*}[label=(0)]
a tool aimed at providing an intuitive visualization of software maintenance activities over time, and
an in-depth analysis of the relationship between maintenance activities and unit tests.
Software Maintenance, Mining Software Repositories, Predictive Models, Human Factors
12
emph=[1]groupBy, map, mapValues, union , emphstyle=[1], emph=[2]String, Iterable, Array, Set, RDD, Map, Int, Long , emphstyle=[2], emph=[2]String, Iterable, Array, Set, RDD, Map, Int, Long , emphstyle=[2],
## 1. Software Evolution & Maintenance
The software evolution phenomenon was first identified in the late 60’s. The term software evolution however, was coined by Lehman only years later (Lehman, 1969, 1978, Lehman and Ramil, 2003). Initial studies in this area took place during the 70’s and concentrated primarily on measuring and interpreting the growth of software systems and evolutionary trends (Lehman and Ramil, 2003, Belady and Lehman, 1971). Belady and Lehman (1976) recognized that the process of large-scale program development and maintenance appeared to be unpredictable, its costs were high and its output was a fragile product. They advocated that one should try to reach beyond understanding and attempt to change the process for the better. Lehman et al. (2000) classify the field of software evolution research into two groups, the first considers the term evolution as a verb while the second as a noun.
The verbal view:
research is concerned with the question of “how”, and focuses on means, processes, activities, languages, methods and tools required to effectively and reliably evolve a software system.
The nounal view:
research is concerned with the question of “what” and investigates the nature of software evolution, as a phenomenon, and focuses on the nature of evolution, its causes, properties, characteristics, consequences, impact, management and control (Lehman et al., 2000, Lehman and Ramil, 2003).
Lehman et al. (2000), Lehman and Ramil (2003) suggest that both views are mutually supportive. Moreover, they suggest that the verbal view research will benefit from progress made in studying the nounal view, and both are required if the community is to advance in mastering software evolution.
Software maintenance activities are a key aspect of software evolution and have been a subject of research in numerous works (Swanson, 1976, Mockus and Votta, 2000, Meyers, 1988, Lientz et al., 1978, Levin and Yehudai, 2016, Schach et al., 2003). As a step towards enhanced Software Analytics (Buse and Zimmermann, 2010, Menzies and Zimmermann, 2013), we believe that a better understanding of software maintenance activities could help practitioners reduce uncertainty and improve cost-effectiveness (Swanson, 1976) by planning ahead and pre-allocating resources towards source code maintenance. To determine maintenance activity profiles, one must first classify the activities (i.e., developer commits to the version control system), into one of the 3 maintenance activities kinds: Corrective: fault fixing; Perfective: system improvements; Adaptive: new feature introduction.
A widely practiced method for commit classification has been inspecting the commit message3 (Mockus and Votta, 2000, Fischer et al., 2003, Śliwerski et al., 2005, Amor et al., 2006). Works employing commit message based classification reported the accuracy to average below 60% when used in the scope of a single project, and below 53% when used in the scope of multiple projects, i.e., when a single model was used to classify commits from multiple projects (Hindle et al., 2009, Amor et al., 2006). Arguably, low accuracy may be a significant barrier preventing these classification methods from being used in professional tools. It would therefore be beneficial to devise maintenance classification methods with higher accuracy (and overall classification quality). Our work is also motivated by the following observations:
1. Cross project classification quality leaves much to be desired.
Existing results rarely consider cross-project classification, which threatens external validity. Hindle et al. (2009) explored cross-project classification and reported the accuracy to be 52%, which is considerably lower than the 60% range reported by studies dealing with a single project.
2. Cohen’s Kappa is vital to determine imbalanced classification quality, but it is rarely reported.
Existing classification results rarely report Cohen’s kappa (hence forth Kappa) metric (see also Section 3.1), which accounts for cases where classification labels (a.k.a classes) are unevenly distributed. Such cases make the accuracy metric somewhat misleading. For example, if the corrective class accounted for 98% of the commits in a given dataset, and each of the remaining classes accounted for 1% of the commits, then a simple classification model which always classified commits as corrective would have an impressive accuracy of 98%. Its Kappa on the other hand, would be 0, making this model much less appealing.
3. High quality maintenance activity classification may benefit both previous and future work.
Our previous work (Levin and Yehudai, 2016) shows that source code change types as defined by Fluri and Gall (2006) are statistically significant in the context of maintenance activities defined by Mockus and Votta (2000). We believe that increasing the accuracy and Kappa characteristics of commit classification into maintenance activities could improve the quality and accuracy of individual developer maintenance profiles as well as the ability to build predictive models thereof.
In contrast to standard version control systems (VCS) and traditional diff tools which model code changes on the text level, in this work we wish to study changes in object oriented entities such as classes, methods, and fields throughout the life span of a software repository. To this end we use Fluri’s taxonomy of source code changes (Fluri and Gall, 2006) for object-oriented programming languages (OOPLs), which consists of 48 (47 an ”unknown type“) different change types, all of which are project agnostic and describe a meaningful action performed by a developer in a commit (e.g., statement_delete, statement_insert, removed_class, additional_class etc). Our work explores the following research questions:
1. Can fine-grained source code changes be utilized to improve the quality of commit classification into maintenance activities?
2. How does the quality of models which utilize fine-grained source code changes compare to that of traditional models which rely on word frequency analysis only?
3. How can our findings be useful for practitioners and researchers?
This paper is an extension of our previous work (Levin and Yehudai, 2017b), where we first suggested utilizing fine-grained source code changes to classify commits into maintenance activities. In this extended paper, we provide a detailed discussion of our commit classification and repository harvesting methods, as well as new perspectives on applications for the discussed methods and techniques. To that end, Section 4 provides detailed information about the methods we used to effectively process Big Code, and Section 8 showcases additional applications which focuses on two particular directions: {enumerate*}[label=(0)]
Software Maintenance Activity Explorer, a tool aimed at providing an intuitive visualization of software maintenance activities over time, and
an in-depth analysis of the relationship between maintenance activities and unit tests in software projects.
## 2. Related Work
The research community classifies software maintenance into 3 main activities: Corrective, Perfective and Adaptive. The interpretation of these categories, and namely, the criteria to be used to determine which commits fall under what activity type is yet to reach a consensus. Swanson (1976) and Ghezzi et al. (2002) suggested the following definitions:
• Corrective: rectify the bugs observed while the system is in use.
• Perfective: support new features or enhance performance according to user demand.
• Adaptive: run on new platforms, new operating systems or interface with new hardware or software.
Mockus and Votta (2000) used different definitions for the perfective and adaptive activities:
• Perfective: code (re-)structuring to accommodate future changes.
• Adaptive: new feature introduction.
In this study we adopt the definitions put forth by Mockus and Votta (2000) and use these definitions to devise a commit classification method that improves existing results. Having spent almost a decade and a half professionally developing commercial software for both start-ups and enterprises, the authors feel that the definitions suggested by Mockus et al. almost two decades ago, have stood the test of time and remain relevant and applicable to how modern software evolves. For example, relatively new techniques such as refactoring are now common for improving the quality of code. Despite the fact refactoring became common only years after the definition by Mockus et al. had been suggested, refactoring fits perfectly under their definition for perfective maintenance. The alternative maintenance definitions on the other hand, seem to struggle with accommodating refactoring in a sensible manner. Moreover, we favour the interpretation by Mockus and Votta of the “adaptive” maintenance as adding new features (rather than accommodating new operating systems and hardware) since it intuitively covers one of the most basic activities carried out by developers - extending existing software with new features. The alternative definition of the “adaptive” maintenance activity speaks of adapting software to new platforms, operating systems and hardware. We believe that the latter has become significantly less frequent in (modern) software evolution. Even when considering the appearance of smart-phones and other gadgets which required the adaptation of software to new platforms and hardware, the endless stream of new features developers are required to implement in today’s software seems like a much more dominant factor.
Mockus and Votta suggested the hypothesis that a textual description of the source code change (a commit to the VCS) is essential to understanding why that change was performed. To test this hypotheses, an automatic classification algorithm for maintenance activities was designed based on the textual description of changes. The automatic classification was then verified by surveying 8 developers. The survey results were in line with the automatic classification results, paving the road to text based commit classification approaches. The reported accuracy was 61%. Mockus and Votta (2000), Hindle et al. (2009), Fischer et al. (2003), Śliwerski et al. (2005), Levin and Yehudai (2016) employed similar, keywords based, techniques for classifying commits into maintenance activities.
Recent work explored using additional information such as commits’ author and module, to classify commits both within a single software project, and cross-projects (Hindle et al., 2009). Within a single project, the reported accuracy ranged from 35% to 70% (accuracy fluctuated considerably depending on the project). In a cross-project scope, Hindle et al. (2009) reported the classification accuracy to be 52%. A slightly different technique was used by Amor et al. (2006), who explored classifying maintenance activities in the FreeBSD project by applying a Naive Bayes classifier on commits’ comments without an apparent use of keywords. In FreeBSD, the reported accuracy of classifying a random sample (whose size was not specified) was 70% (within the scope of the FreeBSD project).
A summary of the existing results for commit classification into maintenance activities can be found in Table 1. In this work we were able to improve upon previous results and achieve an accuracy of 76% and Cohen’s kappa of 63% in the context of cross-project commit classification, an improvement of over 20 percentage points and a relative improvement of 40% in accuracy compared to previous results.
In contrast to prior studies which typically used the commit message to devise commit classification models, in this work we leverage fine grained source code changes in combination with the commit message to achieve superior model quality. In addition, we design and evaluate our models in a cross project scope (see also Table 4), rather than a single project scope. That is, after performing the per-project stratified sampling to obtain the ground truth dataset (see also Section 4), our subsequent model training and evaluation do not limit the commits to a single project, and are performed on heterogeneous commits (see also Table 4).
We also extend our previous work (Levin and Yehudai, 2017a) which studied the co-evolution of test maintenance and code maintenance and showed that maintenance activities can be successfully used to model the number of test methods and test classes in software projects. In particular, we provide statistical evidence showing that software maintenance activities play an important role in modeling test (method and class) counts.
## 3. Research Method
Our research method consists of the following stages:
1. Select candidate software repositories and harvest their commit data such as commit message and source code changes performed in the commits (see Section 4).
2. Create a labeled dataset by sampling commits and manually labeling them. Each label is a maintenance activity, i.e. one of the following: corrective, perfective, or adaptive (see Section 5).
1. Inspect the agreement level on the manually classified commits by having both authors independently classify a 10% sample of commits (see Section 5).
3. Devise predictive models that utilize source code changes for the task of commit classification into maintenance activities (see Section 6).
4. Evaluate the devised models using two mutually exclusive datasets obtained by splitting the labeled dataset into {enumerate*}[label=(0)]
5. a training dataset, consisting of 85% of the labeled dataset, and
6. a test dataset, consisting of the remaining 15% of the labeled dataset. The test dataset was never used as part of the training process (see Section 7).
### 3.1. Statistical Methods
Picking the optimal classifier for a real-world classification problem is hardly a simple task (Fernández-Delgado et al., 2014), however, Random Forest (RF) (Ho, 1998, Breiman, 2001) and Gradient Boosting Machine (GBM) (Friedman, 2001, Caruana and Niculescu-Mizil, 2006, Caruana et al., 2008) based classifiers are generally considered well performing (Caruana and Niculescu-Mizil, 2006, Fernández-Delgado et al., 2014). In addition, we also use J48, a variation of the C4.5 (Quinlan, 2014) algorithm. The RF implementation (Andy Liaw, 2015, Liaw and Wiener, 2002) and the GBM’s one (Ridgeway and Others, 2015, Ridgeway, 2007) are most likely to outperform the simpler J48 (Frank et al., 2005, Hornik et al., 2009, Witten and Frank, 2005), but the latter, in contrast to the formers, is capable of providing a human readable representation of its decision tree. We find this ability valuable since inspecting the decision tree may reveal further insights. An example of a decision tree produced by the J48 classifier can be found in Figure 2, which depicts our keyword based commit classification model described in Section 6.
To evaluate the different commit classification models we employ common statistical measures for classification performance. For a given class , is the number of commits correctly classified as class ; is the number of commits incorrectly classified as class ; is the number of commits of class that were incorrectly classified.
• Precision , the number of commits correctly classified as class , divided by the total number of commits classified as class .
• Recall , the number of commits correctly classified as class , divided by the actual number of class commits in the dataset.
• Accuracy , the proportion of correctly classified commits out of all classified commits.
• No Information Rate (NIR), measures the accuracy of a trivial classifier which classifies all commits with using a single class, the one that is most frequent, in our case - corrective.
• Kappa , Cohen’s kappa, often considered helpful as a measure that can handle both multi-class and imbalanced class problems (see Section 1). Cohen’s kappa measures the agreement between the predictions and the actual labels based on both the actual and predicted distributions.
• P-Value [Accuracy NIR], the -value for the null hypothesis that the ”Accuracy NIR“ (i.e., the accuracy of a given predictive model) . A low -value allows one to reject the null hypothesis in favor of the alternative hypothesis that the ”Accuracy NIR“.
## 4. Data Collection
We use GitHub (GitHub Inc., 2010) as the data source for this work due to its popularity (GitHub Inc., 2018) and rich query options (GitHub Inc., 2015, 2013). Candidate repositories were selected according to the following criteria, aimed to capture data-rich repositories that:
1. Used the Java programming language (our tools were Java oriented)
2. Had more than 100 stars (i.e. more than 100 users have ”liked“ these repositories)
3. Had more than 60 forks (i.e., more than 60 users have ”cloned“ these repositories to their private/organization accounts)
4. Had their code updated since 2016-01-01 (i.e., these repositories are active)
5. Were created before 2015-01-01 (i.e., these repositories have existed for several years)
6. Had size over 2,000 KB (i.e. these repositories are of considerable size)
The criteria aimed at capturing data abundant projects, i.e., projects with plenty of revisions that were still being actively developed. We found that while popularity related metrics such as stars and forks were a good start, after sampling some of the candidates we identified a number of projects that had little data (revisions) and were therefore not an ideal choice for our study. A closer examinations of these projects revealed that more than a few of them turned out to be visually pleasing Android User Interface (UI) controls which had gone viral. To mitigate this, we set a threshold on the repository size in an attempt to filter out small (yet widely popular) projects with little data to analyze.
In light of limited resources we reduced the final candidate set to 11 well known projects from the open source arena, representing various software domains such as IDEs, programming languages (that were implemented in Java), distributed database and storage platforms, and integration frameworks. Following is the list of projected studies in this work (see also Table 2):
1. RxJava - a library for composing asynchronous and event-based programs for the Java VM.
2. Intellij Community Edition - a popular IDE for the Java programming language.
3. HBase - a distributed, scalable, big data store.
4. Drools - a business rules management system solution.
5. Kotlin - a statically typed programming language for the JVM, Android and the browser by JetBrains.
6. Hadoop - a framework that allows for the distributed processing of large data sets across clusters of computers.
7. Elasticsearch - a distributed search and analytics engine.
8. Restlet - a RESTful web API framework for Java.
9. OrientDB - a distributed graph database with the flexibility of documents in one product.
10. Camel - an open source integration framework based on known enterprise integration patterns.
11. Spring Framework - an application framework and inversion of control container for the Java platform.
Fine-grained source code changes are not directly available in traditional VCSs, Git included, and we therefore had to extract them based on the pre-change and post-change revisions of the changed Java files (which are available in the VCSs). The task of extracting fine-grained source code changes by comparing two source code files on the abstract syntax tree (AST) level was addressed by the ChangeDistiller (Fluri et al., 2007, S.E.A.L UZH, 2011) and GumTreeDiff (Falleri et al., 2014, Falleri and Morandat, 2014) projects. Both projects share a common trait, they were designed to operate on two ASTs at a time (typically two subsequent versions of a particular class), and do not support analyzing an entire source code repository’s commit history. In order to distill (harvest) fine-grained source code changes from an entire repository’s commit history, our solution design needed to address two main concerns:
1. Multiple revisions. In the context of modern VCS systems at any given time there is only one revision of each file available in the working tree of a given source code repository. Branches are either a different directory on the file-system5, or require switching to, in which case they swap the current revision for the new one in-place6. Since we are interested in analyzing a given file throughout all its revisions we need to work around this limitation so that for every revision we have the file’s revisions and available to the AST comparison tool.
2. Multiple files. A source code repository consists of numerous source code files, created and removed at different points in time throughout the repository’s life-cycle. In order to analyze the entire repository an analysis needs to take place for all the source code files (and revisions).
The next stage was to build a mechanism that would replay all the changes made to a given repository according to its commit history so that the fine-grained source code changes could be recorded and repeat this process for every studied repository (see LABEL:lst:distillChangesRepo). The Git VCS system (Torvalds, 2007), arguably the most popular VCS system in recent years (StackOverflow, 2017, 2018), and the one used by the prevalent repository hosting platform GitHub Inc. (2010), allows one to create a series of patch files, representing the repository’s commit history (see also LABEL:lst:prepareRepo). By applying these patches in a chronological order, one can essentially replay the changes made to a source code repository throughout its commit history (see LABEL:lst:distillChangesPair, LABEL:lst:recordPatchContent and LABEL:lst:distillChangesPatches).
Given that we wish to analyze repositories, after downloading (cloning) the repositories from GitHub, for each repository where we created a series of patch files , where is the latest revision number for repository . We only considered the master branch7, which is the default branch name in Git. In exceptional cases where the master branch did not exist, we searched for the trunk branch, which is the default branch name in Subversion and can sometimes be found in Git repositories that follow Subversion’s naming patterns8. Each patch file is responsible for transforming repository from revision to revision , where is the empty repository. By initially setting repository to revision (i.e. the initial revision) and then applying all patches in a sequential manner, the revision history for that repository is essentially replayed. Conceptually, this is equivalent to having all developers perform their commits sequentially one by one according to their chronological order.
We chose ChangeDistiller to perform the fine-grained source code change extraction (i.e., the distillerTool in LABEL:lst:distillChangesPair) due to its popularity in the research community (Gall et al., 2009, Fluri et al., 2008, Martinez et al., 2013, Giger et al., 2011, 2012, Falleri et al., 2014, Fluri et al., 2007, Fluri and Gall, 2006, Fluri et al., 2009) and its native Java support. ChangeDistiller required that both the before and after revisions of a source code file were present as physical files on the file system to perform the analysis (S.E.A.L UZH, 2014). This design choice presented some challenges in the face of analyzing multiple projects at scale. Fortunately, ChangeDistiller is an open source tool (S.E.A.L UZH, 2011) and we were able to easily obtain the source code and surgically resolve this and other issues we encountered. After the distilling stage was completed, the resulting datasets were manipulated using Apache Spark (Apache Spark, 2016, ), a state of the art framework for large data processing.
Harvesting a real-world software project may yield a great amount of fine-grained source code changes, easily adding up to millions and dozens of millions of records. Manipulating a dataset of this magnitude is no longer as trivial as inputting it into a spreadsheet or even massaging it in a native R environment (R Development Core Team, 2008). As data sizes have outpaced the capabilities of single machines both in terms of memory capacity and CPU speed, users need new frameworks to scale out their computations. As a result, there has been an explosion of new cluster programming models targeting diverse computing workloads (Zaharia et al., 2016) in the “Big Data” (Diebold, 2012) ecosystem.
Our framework of choice for this work was Apache Spark (Apache Spark, 2016, ) (henceforth Spark). Spark has one of the largest developer and user communities9 and we found its programming model quite intuitive. It also offers a native Scala language (Scala, 2015, ) application programming interface10 (API), which was a great fit in light of the authors’ prior experience with Scala.
One of the fundamental abstractions in Spark is the resilient distributed datasets (RDD) (Zaharia et al., 2012). Spark exposes RDDs through a functional programming API where users can pass local functions to run on the cluster (local or distributed). Operations on an RDD are divided into transformations and actions. Transformations derive new RDDs from existing ones, while actions compute and return a concrete result to the program. Spark evaluates RDDs lazily, allowing it to find an efficient plan for the user’s computation. In this regard, transformations return a new RDD objects representing the result of a computation but do not immediately compute it. The actual computation takes place when an RDD action is called.
We extensively used Spark to produce data aggregations to significantly reduce a dataset’s size so it is sufficiently compact to lend itself to interactive exploration in the R environment. Most of our data aggregations begin with reading all the fine-grained source code changes we have already harvested on a per-project basis and stored them as files on disk, see LABEL:lst:fineGrainedChanges. Transformations are highlighted in blue, Scala type annotations are in violet. Type annotations for local variables can often be omitted in Scala, we explicitly provide them in some of the cases for the sake of clarity.
The variable projects is a collection of project names, over which we iterate and apply a map transformation (see bookmark in LABEL:lst:fineGrainedChanges) that builds an RDD from each project’s fine-grained source code changes stored as text files on disk. Each line in these files is a string concatenation of values separated by a “” (pound) sign. We split the lines by the pound sign (see bookmark in LABEL:lst:fineGrainedChanges) so that each element in the resulting RDD is of type Array[String]. Since we have multiple projects the perProjectData variable is of type Set[RDD[Array[String]]]. This set of RDDs is then unified into a single RDD for further manipulation using the union operation provided by Spark (see bookmark ). Each element in this RDD is an array of strings representing parsed lines from the original files. Since RDDs are lazy data structures, no actual processing is done at this point, and it will only take place once an action (e.g., printing, counting, etc.) is invoked on the fineGrainedChanges RDD (as indicated in LABEL:lst:fineGrainedChanges-more-grouping-global)
The aggregations we perform on the fineGrainedChanges RDD usually fall into one of the following categories:
• Per-commit, to explore commit level activity
• Per-developer, to explore developer level activity
• Per-project, to explore project level activity
• Global, to explore the entire dataset’s properties
For example, to compute the frequencies of the different fine-grained source code changes per commit, i.e., how many times each fine-grained source code change appeared in the commits in our dataset we use the code in LABEL:lst:fineGrainedChanges-by-commit.
This computation (LABEL:lst:fineGrainedChanges-by-commit) uses the groupBy and mapValues transformations. The groupBy transformation takes an element from the collection it is applied on, i.e., fineGrainedChanges, and extracts a key that is used to group all elements with the same key into a single group. Since we would like to compute the frequencies of the different fine-grained source code changes per commit, we first group our records per commit. To accomplish this we specify the key to be the commit id11. This groupBy transformation (see bookmark in LABEL:lst:fineGrainedChanges-by-commit) derives a new RDD where each element is a pair of type (String, Iterable[Array[String]]). The first tuple component (a.k.a “key”) is the commit id, and the second (a.k.a “value”) is a collection of all the elements that had this particular key. Next we apply a mapValues transformation (see bookmark in LABEL:lst:fineGrainedChanges-by-commit) which iterates over these pairs and transforms their value while retaining the key. The transformation logic we provide to mapValues is one that calculates the frequencies of each fine-grained source code change, see LABEL:lst:count-frequency.
The countFrequencies is a method which receives an iterable of lines and represents all changes performed in a given commit, it returns a mapping (Map[String, Int]) between the fine-grained source code change type (e.g., “ADDITIONAL_CLASS”) and its frequency. Note that countFrequencies does not operate on RDDs but on Scala native collections. One of the benefits of using Spark’s Scala API is that it is consistent with Scala’s native collections. In particular, the name and semantics of the mapValues and groupBy transformations for Scala collections and Spark RDDs are the same.
First each line is mapped to its corresponding fine-grained source code change type (bookmark in LABEL:lst:count-frequency), then all values are grouped using the identity key extractor (bookmark in LABEL:lst:count-frequency), forming tuples where the key is the fine-grained source code type and the value is a collection of all the corresponding fine-grained source code change types equal to the key. Finally, we map the tuples’ values (bookmark in LABEL:lst:count-frequency) to the sizes of their value component. This results in tuples where the key is the fine-grained source code change type, and the value is the key’s frequency. Since the keys in these tuples are the fine-grained source code change types, we end up with a mapping (Map[String, Int]) between the fine-grained source code change type (e.g., “ADDITIONAL_CLASS”) and its frequency.
For example, if a project’s raw data file contains the following pound separated values:
1a2b3c#PARAMETER_INSERT#file1.java
1a2b3c#DOC_DELETE#file2.java
1a2b3c#PARAMETER_INSERT#file1.java
1a2b3c#PARAMETER_INSERT#file1.java
1a2b3c#DOC_DELETE#file2.java
The map transformation (bookmark in LABEL:lst:count-frequency) results in:
{PARAMETER_INSERT}
{DOC_DELETE}
{PARAMETER_INSERT}
{PARAMETER_INSERT}
{DOC_DELETE}
The groupBy transformation (bookmark in LABEL:lst:count-frequency) results in:
(PARAMETER_INSERT -> {PARAMETER_INSERT, PARAMETER_INSERT, PARAMETER_INSERT})
(DOC_DELETE -> {DOC_DELETE, DOC_DELETE})
The mapValues transformation (bookmark in LABEL:lst:count-frequency) results in:
(PARAMETER_INSERT -> 3)
(DOC_DELETE -> 2)
The perCommitFrequencies RDD (see LABEL:lst:fineGrainedChanges-by-commit) will therefore contain the element:
(1a2b3c -> {PARAMETER_INSERT -> 3, ADDITIONAL_FUNCTIONALITY -> 1, DOC_DELETE -> 2})
Per-developer and per-project aggregations are performed similarly to what we have shown for the per-commit aggregation, the main change being the key passed to the groupBy transformation (see bookmarks and in LABEL:lst:fineGrainedChanges-more-grouping). Global operations require no prior aggregations and can be performed directly on the fineGrainedChanges RDD, see LABEL:lst:fineGrainedChanges-more-grouping-global.
## 5. Creating a ground truth dataset
The first author manually classified a randomly sampled set of 100 commits from each of the studied 11 repositories. To improve classification quality the projects’ issue tracking systems, e.g. JIRA (Atlassian, 2014), was often used. The JIRA contained the tickets occasionally referenced in developers’ commits messages (e.g., “[PRJ-NAME 1234] Fixed some bug”). Such tickets (a.k.a. issues) typically contain additional information about the feature or bug the referencing commit was trying to address. Moreover, tickets sometimes had their own classification labels such as ”feature request“, ”bug“, ”improvement“ etc., but unfortunately they were not very reliable as developers were not always consistent with their labeling (classification). For instance, in some cases bug fixes were labeled as ”improvement“, and while fixing a bug is indeed an improvement, according to the maintenance activities we use (Mockus and Votta, 2000), bug fixes should be classified corrective while improvements should be classified perfective. Some developers used the term ”fix“ even when they referenced feature requests, e.g. ”fixed issue #N“, where ”issue #N“ spoke of a new feature or an improvement that did not necessarily report a bug. These observations are consistent with Herzig et al. (2013) who reported that 33.8% of the bug reports they studied were misclassified.
In cases where the lack of supporting information (e.g., not enough information in the corresponding ticket and / or commit message) prevented us from classifying a certain commit with satisfactory confidence, that commit was discarded from the dataset and replaced by a new one, selected randomly from the same project repository (by re-sampling a commit). If we were unable to classify the replacement commit as well, we would repeat this routine until we found a commit that we were able to confidently classify. Further rules of thumb we used for classifying were as follows:
• Javadoc and comment updates were considered perfective maintenance.
Rational: these changes improve the system.
• Fixing a broken unit test or build was considered corrective maintenance.
Rational: we assume that tests break in the presence of bugs.
• Adding new unit test(s) was considered perfective maintenance.
Rational: we assume that new tests improve coverage.
We conjecture that more often than not, developers who add tests aim to improve system coverage.
• Performance improvements that resulted from an open ticket in the issue tracking system were considered corrective maintenance.
Rational: we assume that tickets that were reported on performance issues resulted from pains on the user side, and addressing these pains is more corrective in nature than perfective.
• Performance improvements that did NOT result from an open ticket in the issue tracking system were considered perfective maintenance.
Rational: we assume that developers may occasionally seize an opportunity to improve code performance, however, if there were no users suffering the problem being fixed, we consider the maintenance to be of a perfective nature, rather than corrective one.
We made efforts to avoid class starvation (i.e., not having enough instances of a certain class) by inspecting the proportion of each class within a given sample for a given project. An imbalanced training dataset could substantially degrade models’ performance, and in case we detected a considerable imbalance in some project’s classes, we added more commits of the starved class from the same project by means of repeatedly sampling and manually classifying commits until a commit of the starved class was found.
To alleviate the challenges involved in reproducing our study we have made our dataset publicly accessible online (Levin and Yehudai, 2017c). This dataset consists of 1151 manually classified commits, 100-115 commits from each of the 11 studied project. Among these commits 43.4% (500 instances) were corrective, 35% (404 instances) were perfective, and 21.4% (247 instances) were adaptive. The commits in this dataset sum up to 33,149 fine-grained source code changes.
In order to inspect manual classification agreement, we randomly selected 110 commits out of the 1151 commits, 10 random commits from each of the 11 projects, and had both authors classify it. At first the agreement stood at 79%. After discussing the conflicts and sharing the guidelines in more detail, the agreement level rose to 94.5%. According to the one sample proportion test (Altman, 1990), the error margin for our observed agreement level was 4.2%, and the estimated asymptotic 95% confidence interval was [90.3%, 98.7%]. This indicates that both authors were in agreement about the labels for the vast majority of cases once they employed the same guidelines (see Section 5). Regarding some of the commits, no consensus was reached. Consider a commit with the following message: “add hasSingleArrayBackingStorage allow for- optimization only when there really is a single array, and not when there is- a multi dimensional one”. One of the annotators had labeled it “Corrective”, assuming this commit fixed a bug, while the other had labeled it “Perfective” assuming this was an optimization which improved performance but did not necessarily fix a known bug. Since there was no JIRA ticket associated with this commit it was difficult to ascertain which label is more plausible. Similarly, consider a commit with the message: “Timeouts for row lock and scan should- be separate”. Based on the message, this commit could be considered any of the maintenance activities, it could be fixing a bug, improving design (by separating concerns) or adding a new feature (e.g., allowing different timeouts for lock and scan). In this particular case, the referenced JIRA ticket indicated it was an “improvment” and thus “Perfective”, but had it not been for the JIRA ticket it would have been quite challenging to determine the associated maintenance activity.
## 6. Commit Classification Models
We performed our statistical computations in the R statistical environment (R Development Core Team, 2008), where we extensively used the R caret package (Kuhn et al., 2017, Kuhn, 2017) for the purpose of model training and evaluation.
We split the labeled dataset into a training dataset and a test dataset, 85% and 15% respectively, in order to have the test dataset completely isolated from any training procedures. The split was performed by using R’s createDataPartition function (Kuhn, 2018), with the percentage of data that goes to training set to . The createDataPartition function uses random sampling within the labels (Corrective, Perfective, Adaptive) in an attempt to balance the class distributions within the splits, see also Table 3 for a detailed description of the train and test splits.
The model training phase consists of using 5 time repeated 10-fold validation for each compound model on the training dataset (which boils down to performing a 10-fold cross validation process 5 different times and averaging the results). Then, the trained models were evaluated using the test dataset - the 15% split that did not take part in the model training process.
### 6.1. Utilizing word frequency analysis
First we classified the test dataset (the 15% of the entire labeled dataset) using a naive method to set an initial baseline. The naive method is based on a classification technique described in our previous work (Levin and Yehudai, 2016), and consists of searching for pre-defined words (see Table 5), and assigning the most frequent class (i.e., corrective) in case none of the keywords were present in the commit message, see Table 6 for more details. Assigning the most frequent class to an instance is far from ideal, however, when models find no features to rely on, using the overall distribution of the training dataset is a common technique (also called ’No Information Rate’, see Section 3.1).
The results showed that 34.8% of the commits in the test dataset (60 commits) did not have any of the keywords present in their commit message, and were therefore automatically classified corrective. In addition, the low recall of the perfective class was particularly notable, as opposed to the high recall of the corrective class (which accounts for most of the commits in the classified dataset). The noticeable difference between the micro-averaged and macro-averaged F1 scores, 0.56 vs. 0.46 respectively, also indicates that the current model (based on the naive method) does not perform equally well for all classes.
The high percentage of commits without any keywords prompted us to try to fine-tune the keywords we were searching for. We performed an additional experiment using the same classification method, only this time the keywords were obtained by employing a word frequency analysis and normalization for the commit messages. This time 28% of the commits did not have any of the keywords present in their commit message. These findings led us to believe that the high number of commit messages containing none of the keywords could be playing a significant role in determining the overall classification quality.
### 6.2. Utilizing source code changes
Techniques for dealing with missing values in classification problems are broadly covered by Saar-Tsechansky and Provost (2007), who describe two common methods used to overcome such issues: {enumerate*}
imputation, where the missing values are estimated from the data that are present, and
reduced-feature models, which employ only those features that will be known for a particular test case (i.e., only a subset of the features that are available for the entire training dataset), so that imputation is not necessary. Since our dataset consists of two different data types, keywords and source code changes, we use reduced-feature models, which are reported to outperform imputation and represent our use-case more naturally. In addition, since the missing feature patterns in our dataset are known in advance, i.e., given a commit only the keywords can be missing, its source code changes are always present, we can pre-compute and store two models; one to be used when all features are present (keywords source code changes), and the other when only a subset is available (source code changes only). We define the notion of a compound model (similarly to the “classifier lattice” described by Saar-Tsechansky and Provost) which uses two separate models for classifying commits with, and without (pre-defined) keywords in their commit message. The classify routine of the compound model is pseudo-coded in LABEL:compound-classify-cod.
Given a commit , the compound model first checks if ’s commit message has any keywords, if so, the model defined as is used to classify (see bookmark in LABEL:compound-classify-cod), otherwise (i.e., no keywords found in ’s commit message), the model defined as is used to classify (see bookmark in LABEL:compound-classify-cod). Each of the models and may or may not be a reduced-feature model, depending on whether it employs the full set of features (both keywords and source code changes), or only a subset of it (either keywords or source code changes).
We define and to be one of the following model types:
• Keywords model, which relies solely on keywords to classify commits. The features used by this model are keywords obtained by performing the following transformations on the commit message field:
1. Stripped special characters
2. Made lower case (case-folding)
3. Stripped English stopwords
4. Stripped punctuation
5. Striped white-spaces
6. Performed stemming
7. Adjusted frequencies so that each comment can contribute a given word only once
8. Stripped custom words such as developer names, projects names, VCSs lingo (e.g., head, patch, svn13, trunk, commit), domain specific terms (e.g., http, node, client): ”patch“, ”hbase“, ”checksum“, ”code“, ”version“, ”byte“, ”data“, ”hfile“, ”region“, ”schedul“, ”singl“, ”can“, ”yarn“, ”contribut“, ”commit“, ”merg“, ”make“, ”trunk“, ”hadoop“, ”svn“, ”ignoreancestri“, ”node“, ”also“, ”client“, ”hdfs“, ”mapreduc“, ”lipcon“, ”idea“, ”common“, ”file“, ”ideadev“, ”plugin“, ”project“, ”modul“, ”find“, ”border“, ”addit“, ”changeutilencod“, ”clickabl“, ”color“, ”column“, ”cach“, ”jbrule“, ”drool“, ”coprocessor“, ”regionserv“, ”scan“, ”resourcemanag“, ”cherri“, ”gong“, ”ryza“, ”sandi“, ”xuan“, ”token“, ”contain“, ”shen“, ”todd“, ”zhiji“, ”tan“, ”wangda“, ”timelin“, ”app“, ”kasha“, ”kashacherri“, ”messag“, ”spr“, ”camel“, ”http“, ”now“, ”class“, ”default“, ”pick“, ”via“.
9. We then selected the 10 most frequent words from each of the three maintenance activities in the test dataset:
• Corrective: {enumerate*}[label=(0),before=,font=]
• fix
• test
• issu
• use
• fail
• bug
• report
• set
• error
• npe
• Perfective: {enumerate*}[label=(0),before=,font=]
• test
• remov
• use
• fix
• refactor
• method
• chang
• improv
• new
• support
• implement
• new
• allow
• use
• method
• test
• set
• chang
It can be seen that some of the words (as obtained by our commit message word frequency analysis) overlap between maintenance activities. The words ”test“ and ”use“ appear in all three maintenance activities; the word ”fix“ appears in both the corrective and perfective maintenance activity; the words ”method“, ”chang“, ”add“ and ”new“ appear both in the perfective and adaptive maintenance activities; and the word ”set“ appears both in the corrective and adaptive maintenance activities. These word overlaps may indicate that keywords alone are insufficient to accurately classify commits into maintenance activities, and need to be augmented with additional information in order to improve classification accuracy.
For the purpose of building the Keywords model type, we remove multiple occurrences of the same word (so that each word appears only once in the combined list) and remain with the following set of words: {enumerate*}[label=(0),font=]
11. allow
12. bug
13. chang
14. error
15. fail
16. fix
17. implement
18. improv
19. issu
20. method
21. new
22. npe
23. refactor
24. remov
25. report
26. set
27. support
28. test
29. use.
• (Source Code) Changes based model, which relies solely on source code changes to classify commits. The features used by this model are source code change types (Fluri and Gall, 2006) obtained by distilling commits, as described earlier in this section.
• Combined (Keyword + Source Code Change Types) model, which uses both keywords and source code change types to classify commits. The features used by this type of models consist of both keywords and source code change types.
A word-cloud visualization of the keyword distribution in each of the maintenance activities can be found in Figure 3, Figure 4, Figure 5. A summary of the model components can be found in Table 7.
For example, a commit where two methods were added (fine-grained source code change type ”additional_functionality“), and one statement was updated (fine-grained source code change type ”statement_updated“) and has a commit message that says ”Refactored blob logic into separate methods“ will be treated differently by each of the model types indicated in Table 7.
The Keywords model extracts features represented by tuples of size 20, and given the commit above would extract the following features: with “1” in the coordinates that represent the words ”refactor“ and ”method“. The count of each keyword is at most one, i.e., duplicate keywords are counted only once. Source code changes are ignored, since the Keywords model type does not consider source code changes.
The Changes model extracts features represented by tuples of size 48 (since there are 48 different source code change types), and given the commit above would extract the following features: with ”2“ in the coordinate that represents the fine-grained source code change type ”additional_functionality“ and “1” in the coordinate that represents ”statement_updated“. In contrast to the case of the Keywords model, all occurrences of every fine-grained source code change type are counted in. Keywords in the commit message are ignored, since the Changes model type does not consider keywords.
The Combined model extracts features represented by tuples of size 68 ( 48 fine-grained source code change types + 20 keywords), and given the commit above would extract the following features: , with ”2“ in the coordinate that represents the fine-grained source code change type ”additional_functionality“, and ”1“ in the coordinates that represent the fine-grained source code change type ”statement_updated“, the keyword ”refactor“, and the keyword ”method“. The Combined model type captures both keywords and fine-grained source code change types - hence its name.
In the next sections we evaluate and compare different compound models by considering the different combinations of their and model components. The evaluation process consists of the following steps:
1. Select the model component
2. Select the model component
3. Select an underlying classification algorithm for the compound model, which determines the algorithm to be used by each of the model components and (J48, GBM, or RF, see also Section 3.1).
## 7. Evaluation
We describe an exhaustive set of combinations for selecting the pair of models in Table 8, where the pairs can be one of the three model types defined in Table 7. Each row in Table 8 represents a compound model, defined by the selection of . The classification accuracy and Kappa achieved by a given compound model are reported in the corresponding Accuracy and Kappa columns. The best performing compound model for each classification algorithm is highlighted in lime-green, and the keywords based model (where both and , are of the Keywords model type) is highlighted in orange so that it can be easily compared to compound models that utilize fine-grained source code changes.
Following our main research questions (see Section 1), the accuracy and Kappa results for each compound model during the training (see Table 8) reveal that the compound models that use either or achieve higher accuracy and Kappa when compared to models with the same component but with , regardless of the underlying classification algorithm (J48, GBM or RF). This comes as no surprise, as one could expect keyword based models would have trouble accurately classifying commits that do not have any keywords in their commit message. Table 8 also reveals that models that rely solely on commit messages have higher accuracy and Kappa than models that rely solely on fine-grained source code changes (under all three algorithms).
Further accuracy and Kappa statistics pertaining to the training stage of the best performing model for each algorithm can be found in Table 9 and Table 10 respectively. From Table 9 and Table 10 we can learn that during the training stage, the RF model consistently outperforms the J48 and even the GBM model, in both accuracy and Kappa, across all of the cuts: minimum, 1-st quartile (25-th percentile), median, mean, 3-rd quartile (75-th percentile) and maximum. In particular, the minimum accuracy and Kappa of the RF are notably higher than its competitors.
A comparison between the best compound models from each of the underlying classification algorithm category can be found in Figure 1. The top performing models were then used to classify the test dataset, consisting of 15% of the entire labeled dataset, see Table 11. The ultimate winner was the RandomForest compound model with and . A detailed confusion matrix for this champion model can be found in Table 12.
The decision tree built by the J48 algorithm for our keyword based model (see Figure 2) provides some interesting insights regarding its classification process. The word ”fix“ is the single most indicative word of corrective commits, which aligns well with our intuition, according to which commits that fix faults are likely to include the ”fix“ noun or verb in the commit message. Given that ”fix“ did not appear, the words ”support“ and ”allow“ are most indicative of adaptive commits, presumably these words are used by developers to indicate the support of a new feature, or the fact that something new is now ”allowed“ in the system. The combination ”implement chang“ (stemmed), given that ”fix“, ”support“ and ”allow“ did not appear, is very indicative of either perfective or corrective commits, if however, ”implement“ is not accompanied by the word ”chang“ (stemmed), the commit is likely to be adaptive. The (stemmed) word ”remov“, given that the words ”fix“, ”support“, ”allow“ and ”implement“ did not appear, is very indicative of perfective commits, perhaps because developers often use it to describe a modification where they remove an obsolete mechanism in favor of a new one.
We also visualized the keyword frequency in maintenance activities using a word-cloud (see Figure 3, Figure 4, Figure 5), which revealed that the word ”test“ is particularly common in perfective commits, but is generally common in all three maintenance activity types. The word ”use“ is also common in all three maintenance activity types, but is particularly frequent in the perfective maintenance activity. The words ”fix“, ”remov“ and ”support“ are quite distinctive of their corresponding maintenance activity types: corrective, perfective and adaptive (respectively). The word ”add“ is common in adaptive commits, as well as ”allow“.
Similarly, we visualized the fine-grained source code changes frequencies using a source-code-change-type-cloud which revealed that statement related changes, e.g., ”statement_insert“, ”statement_update“ and ”statement_delete“ are the most common change types in all three maintenance activities (corrective, perfective, adaptive). The fine-grained source code change type ”additional_functionality“ is common in both perfective and adaptive commits, but less so in corrective commits.
The term-cloud and J48 keyword based decision tree visualizations provide an intuition for why J48 is likely to outperform a simple word-frequency based classification. In contrast to the word-cloud, which provides ”flat“ frequencies, the J48 is capable of capturing information pertaining to the presence of multiple keywords in the same commit message, as indicated by the decision tree.
We depict the 20 most important predictors for our champion RF model in Table 13. The rank score is scaled from to and is based on the contribution each predictor makes towards the quality of the RF classification model. Not all predictors are equally important for all three maintenance activities. Some play a bigger role in classifying one maintenance activity over the others. It is worth noting that numerous fine-grained source code changes are ranked high in the list, which confirms their contribution to the model’s quality.
## 8. Applications
Lehman’s Laws teach us that a software system will become progressively less satisfying to its users over time, unless it is continually adapted to meet new needs. The field of software evolution research can be classified into two groups, the first considers the term evolution as a verb while the second as a noun (Lehman et al., 2000). The verbal view is concerned with the question of “how”, and focuses on means, processes, activities, languages, methods and tools required to effectively and reliably evolve and maintain a software system. The nounal view is concerned with the question of “what” and investigates the nature of software evolution, as a phenomenon, and focuses on the nature of evolution, its causes, properties, characteristics, consequences, impact, management and control. Both views are mutually supportive (Lehman et al., 2000, Lehman and Ramil, 2003). Moreover, they advocate that the verbal view research will benefit from progress made in studying the nounal view, and both are required if the community is to advance in mastering software evolution. We follow this thinking and put forth two applications.
### 8.1. Software Maintenance Activity Explorer
In the spirit of the verbal view (Lehman et al., 2000) which focuses on studying the means, methods and tools required to effectively evolve a software system, we implement a tool for exploring software maintenance activities aimed to assist practitioners. The Software Maintenance Activity Explorer tool (Levin, 2017) is aimed at providing an intuitive visualization of software maintenance activities over time. We believe this visualization may be useful to project and team managers who seek to recognize inefficiencies and monitor the health of a software project and its corresponding source code repository. The Software Maintenance Activity Explorer was built with Few’s (2009) and Cleveland’s (1985) principles in mind, which advocate for encoding data using visual cues such as variation in size, shape, color, etc’. We chose stacked bar diagrams to visualize data since they allow for an easy comparison both between maintenance activities within a given time frame (e.g., what maintenance activity dominated a given time frame), and between different time frames (e.g., which of the time frames had more maintenance of a given type). In addition, bar diagrams allow users to quickly detect anomalies such as peaks and deeps in one maintenance activity or another compared to past periods.
#### Project Activity Visualization
The project activity visualization (see Figure 6) allows users to examine the volumes of the different maintenance activities over time, and can be sliced and diced according to a specified date range and an activity period (e.g., from date x until date y, in time frames of 28 days). The stacked bar plot allows for an easy comparison between the maintenance activity types, as well as trend detection.
#### Developer Activity Visualization
The developer activity visualization (see Figure 7) is a segmentation of the data by a specific developer. Users can examine the data for a specific developer, adjusting the period of interest and date range. Developers identity can be determined by their name, email or both, a feature that can be useful when developers perform commits using different emails, e.g., when working on an open source project from both their private account and their cooperate account.
#### Publicly Accessible Data
The Software Maintenance Activity Explorer’s about page provides an option to explore the data in-line (see Figure 8), or download it in a CSV format for an offline analysis.
#### Publicly Accessible Code
The code for this tool is publicly available on GitHub (Levin, 2018).
We conjecture that a balanced maintenance activity profile, i.e., a profile which includes all three maintenance activity kinds (corrective, perfective, adaptive) may help developers be more effective and engaged with the project they work on. It may also be the case that different project managers will choose different thresholds for what a balanced (or unbalanced) profile is, in the context of their project. Nonetheless, once these thresholds have been set our method provide means to identify opportunities for improvement. This may be of particular interest in open source projects, which tend to heavily rely on community efforts. To that end, well balanced maintenance activity profiles may be something the community needs to drive development forward and ensure that the project gets a fair share of new features, bug fixing, and design improvements - activities which tend to compete for resources in real-world scenarios.
We use our dataset and the software maintenance activity explorer to identify homogeneous activity profiles, i.e., profiles of developers who performed only one kind of maintenance activity, see Figure 8(a) and Figure 8(b).
The visualization offered by our tool makes it easier to identify these homogeneous maintenance activity profiles and encourage developers to take on a more varied set of tasks. We performed the homogeneous maintenance activity profiles test for 10 projects (see also Table 2) in our study and report the results in Table 14. According to our data, the Camel project had an extremely low portion of homogeneous maintenance activity profiles. It may be the case that Camel’s contributors were indeed developers who were inclined towards heterogeneous maintenance activities. Alternatively, one could suggest a number of possible scenarios. It is possible that the Camel project had a significant number of contributors whose contribution to the project did not include Java code, i.e., it revolved around documentation, configuration files, and so forth. This would mean that the percentage of homogeneous maintenance profiles is actually higher and it might be best to compute it by considering only Java contributors. Another possibility is that the number of contributors to the Camel project significantly increased since we had originally processed its commit history. In which case it would be necessary to re-collect and re-process the project’s data to produce a more accurate result.
Our analysis indicates that homogeneous maintenance activity profiles were not uncommon in the projects we inspected (see Table 14). We believe that unbalanced (i.e., where a significant disproportion between maintenance activities is present), and homogeneous maintenance activity profiles in particular, are an opportunity for managers to reach out to developers and suggest taking on tasks that will balance their maintenance activity profiles. A possible way to identify suitable tasks would be using projects’ task management systems (e.g., a JIRA system) which provide contextual and detailed information about the available tasks. We also hope that this kind of tool will empower both managers and developers to monitor the ongoing maintenance activities and assist in keeping them varied and balanced. Moreover, such a tool may serve as an alerting mechanism in situations which call for special attention, e.g., when the proportion of unbalanced maintenance profiles exceeds a given threshold.
### 8.2. Utilizing Software Maintenance Activities to Model Test Counts
In the spirit of the nounal view (Lehman et al., 2000) which investigates the nature of software evolution as a phenomenon, we conduct a study which leverages our method to demonstrate the importance of maintenance activities for modeling the number of tests in a software project (see Section 8.2).
Automated testing, and automatic unit tests (Hamill, 2004) in particular, is a popular technique for improving software quality. As this technique is gaining popularity and becoming ubiquitous among practitioners it is beneficial to have a good understating of its nature, which as it turns out can be alluding. Beller et al. (2015) conducted a large-scale field study, where 416 software engineers were closely monitored over the course of five months. Their findings indicate that software developers spend a quarter of their work time engineering tests, whereas they think they test half of their time.
In our previous work (Levin and Yehudai, 2017a) we studied 61 open source projects (Levin and Yehudai, 2017d) and established a connection between maintenance activities and test (method and classes) counts in software projects. In this section we extend our previous results and focus on the viability of maintenance activities to modeling the number of test methods and test classes in a software project.
The generalized regression models (GLM, McCullagh and Nelder (1989), Venables and Ripley (2013)) we devised were of the following form:
TestM(prj)=ConstantM+|Predictors|∑i=1(coeffMi∗predictorMi(prj))
where: {itemize*}[label=]
is the test metric we model;
is the set of predictors;
are the predictor coefficients;
are predictor values; and
is the model constant.
The corresponding models for and can be found in Table 15.
All predictors were log transformed to alleviate skewed data, a common practise when dealing with software metrics (Shihab, 2012, Camargo Cruz and Ochimizu, 2009). Statistically significant predictors of interest are highlighted in lime-green, and the standard error is reported in parenthesis below the estimated coefficients. In addition to the variables we are directly interested in, such as the , and we also use , and as control variables, in order to reduce the effect of lurking variables which correlate both with the predictors and the predicted (outcome) variable. Control variables are highlighted in light-bisque.
The ANOVA type-II analysis computes the changes in the model given any single predictor is dropped and it therefore does not depend on the order of the predictors in the model. Employing ANOVA type-II analysis helps in avoiding situations where regression models may lead to the conclusion that certain predictors possess greater explanatory powers than others only because they appear first (Hassan, 2017). The ANOVA type-II analysis for the predictive models and can be found in Table 16 Table 17 respectively. Each row indicates the change in the residual deviance and the “AIC” measure (Akaike information criterion, an estimator of the relative quality of statistical models) induced by removing a given predictor from the model. The statistical significance for each row is indicated in the rightmost column. By inspecting the “AIC” column in Table 16 Table 17 we learn which predictors can be excluded in order to achieve a lower (better) AIC. By inspecting the “Deviance” column we learn a given predictor’s contribution to “explaining” the predicated variables. The “base” model’s deviance and AIC are indicated in the “none” row.
It is statistically significant that removing the predictor will result in the model’s deviance rising from 72 to 95 and its AIC rising from 993 to 1,015. Higher deviance indicates that the new model will have less explanatory power, and higher AIC indicates that it will be worse than the one it is compared to, i.e., the model where the was present. Similar arguments can be applied to the predictor. The ANOVA analysis confirms that both perfective and corrective maintenance activities are vital to the model, and an attempt to remove either will significantly and adversely affect the model’s quality.
Also worth noting is the LOC predictor, its AIC and deviance indicate that it demonstrates statistically significant high explanatory power in both predictive models. This implies that the size of the project has a considerable effect on the number of test methods and test classes it contains.
Following the insights provided by these test regression models, we performed a deeper inspection of two outlier projects, ”XPrivacy” and ”Omni-Notes” (see Figure 10), that had extremely high values of corrective activity (per 1 LOC) combined with a low number of tests (per 1 LOC).
Our analysis of XPrivacy did not reveal any unit tests in its codebase. Its README page on GitHub had a designated testing section which revealed that a separate application had been written for testing purposes. The test application’s (GitHub) project was nowhere as popular as XPrivacy itself (more than 1.5K stars vs. less than 10 stars) implying it may not have been widely used by developers upon contributing code. It is possible that since the test application project was separate from the original application, it was not executed frequently (and automatically) enough, rendering it less effective in preventing defects. This may account for the high amount of corrective activity performed in this project. Omni-Notes, the second outlier project we inspected, had only 12 tests spread over 8 suites according to our analysis. Its README page on GitHub also had a designated section for testing which specified the build command developers should execute when contributing code. While the presence of a designated test section in its README page may indicate testing was quite important to the project’s owner, the great amount of corrective activity performed in this project may suggest it could have benefited from more unit tests. Gaining fine grained visibility into anomalies (e.g., as indicated in Figure 10) will allow managers to identify potential issues by examining abnormal values even without knowing the root cause. Having identified potential issues, mangers can then shift focus towards investigation and resolution.
To conclude this section, while regression models do not provide means to ascertain causality, the negative correlation between corrective commits and tests (i.e., both methods and classes) is worth considering. Potentially, one could argue that projects with tests may only need little corrective activity due to the high quality of the codebase. The opposite direction, may imply that corrective activity may be required when the test count of a project is low, and the codebase’s quality is poor. It is also possible, that test counts and corrective commits do not have a cause and effect relationship at all, in which case they just tend to happen together and are connected via a lurking variable. Either of these narratives requires further evidence before it can be reliably established, but to the very least, the empirically evident negative correlation between corrective activity and tests is yet another reminder of the relationship between automated testing and the nature and volume of the maintenance activities a project is likely to require in the future.
### 8.3. Future Applications
Identifying Anomalies In Development Processes The manager of a large software project should aim to control and manage its maintenance activity profiles, i.e., the volume of commits made in each maintenance activity. Monitoring for unexpected spikes in maintenance activity profiles and investigating the reasons (root cause) behind them could assist managers and other stakeholders to plan ahead and identify areas that require additional resource allocation. For example, lower corrective profiles could imply that developers are neglecting bug fixing. Higher corrective profiles could imply an excessive bug count. Finding the root cause in cases of significant deviations from predicted values may reveal essential issues the removal of which can improve projects’ health. Similarly, exceptionally well performing projects can be a good subject for case studies, so as to identify positive patterns.
Improving development team’s composition Building a successful software team is hardly a trivial task as it involves a delicate balance between technological and human aspects (Gorla and Lam, 2004, Guinan et al., 1998). We believe that by using commit classification it would be possible to build reliable developer maintenance activity profiles which could assist in composing balanced teams. We conjecture that composing a team that heavily favors a particular maintenance activity (e.g. adaptive) over the others could lead to an unbalanced development process and adversely affect the team’s ability to meet typical requirements such as developing a sustainable number of product features, adhering to quality standards, and minimizing technical debt so as to facilitate future changes.
## 9. Threats to validity
Threats to Statistical Conclusion Validity are the degree to which conclusions about the relationship among variables based on the data are reasonable.
• Classification Models. Our commit classification results were based on manually classifying 1151 commits, over 100 commits from each of the studied 11 projects. The projects originated from various professional domains such as IDEs, programming languages, distributed database and storage platforms, and integration frameworks. Each compound model was trained using 5-time repeated 10-fold cross validation. In addition, our commit classifications evaluations demonstrated -value below 0.01, supporting the statistical validity of the hypothesis accuracy NIR with high confidence.
• Regression Models. Our dataset for the regression analysis consisted of 61 projects and over 240,000 commits. Both the model coefficients and the predictions were annotated with statistical significance levels to indicate the strength of the signal. Most of the coefficients were statistically significant (). To compare distributions we used the Wilcoxon-Mann-Whitney test and reported its high significance level ().
We assume commits are independent, however, it may be the case that commits performed by the same developer share common properties.
Threats to Construct Validity consider the relationship between theory and observation, in case the measured variables do not measure the actual factors.
• Manual Commit Classification. We took the following measures to mitigate manual classification related errors:
1. Projects’ issue tracking systems were used, and often provided additional information pertaining to commits.
2. Commits that did not lend themselves to classification due to lack of supporting information were removed from the dataset and replaced by other commits from the same repository (see Section 5).
3. A sample of 10% out of all manually labeled commits was independently classified by both authors. The observed agreement level was 94.5%, and the asymptotic 95% confidence interval for the agreement level was [90.3%, 98.7%] indicating that both authors agreed about the labels for the vast majority of cases.
• Fined-grained Source Code Change Extraction. ChangeDistiller and the VCS mining platform we have built on top of it are both software programs, and as such, are not immune to bugs which could result in inaccurate or incomplete data.
• Test Maintenance Classification. We used a widely practiced conventions and heuristics (Maven Surefire Plugin, 2017, Zaidman et al., 2011) for detecting JUnit test methods and test classes. However, the use of heuristics may lead to undetected test maintenance.
• Data Cleaning. Prior to devising regression models, we removed extreme data points using a technique suggested in (Hubert and Vandervieren, 2008). Despite the fact we removed only 10% of the data, this process could have introduced bias into the dataset we operated on.
Threats to External Validity consider the generalization of our findings.
• Programming Language Bias. All analyzed commits were in the Java programming language since the tool we used to distill fine grained source code changes (ChangeDistiller) was Java oriented. It is possible that developers who use other programming languages, have different maintenance activity patterns which have not been explored in the scope of this work.
• Open Source Bias / GitHub. The repositories studied in this paper were all popular open source projects from GitHub, selected according to the criteria described in Section 4. It may be the case that developers’ maintenance activity profiles are different in an open source environment when compared to other environments.
• Popularity Bias. We intentionally selected the popular, data rich repositories. This could limit our results to developers and repositories of high popularity, and potentially skew the perspective on characteristics found only in less popular repositories and their developers.
• Limited Information Bias. The entire dataset, both the training and the test datasets, contained only those commits that we were able to manually classify. At the stage of VCS inspection it can be essentially impossible to actually ascertain the maintenance activities of commits that do not provide enough information traces (comment, ticket id, etc.). The true maintenance activity for such commits may only be known to the developers who made them, and even they may no longer recall it soon after they have moved on to their next task.
• Mixed Commits. Recent studies (Nguyen et al., 2013, Kirinuki et al., 2014) report that commits may involve more than one type of maintenance activity, e.g. a commit that both fixes a bug, and adds a new feature. Our classification method does not currently account for such cases, but this is definitely an interesting direction to be considered for future work (see Section 10).
• Activity Boundary. In this work we assume a commit serves as a logical boundary of an activity. It may be the case, that developers perform test maintenance as part of activities that span multiple commits. Such work patterns were not considered in the scope of this work, but are definitely an interesting direction for future work in this area.
## 10. Summary
We suggested a novel method for classifying commits into maintenance activities and used it to devise and evaluate a number of models that utilize fine-grained source code changes and the commit message for the purpose of cross-project commit classification into maintenance activities. These models were then evaluated and compared using the accuracy and Kappa metrics with different underlying classification algorithms. Our champion model showed a promising accuracy of 76% and Kappa of 63% when applied on the test dataset which consisted of 172 commits originating from various projects. These results show an improvement of over 20 percentage points, and a relative improvement of over 40% when compared to previous results (Table 1). A comparison between the widely used classifier and our champion classifier can be found in Table 6 and Table 12, respectively. Our evaluation was based on studying 11 popular open source projects from various professional domains, from which we manually classified 1151 commits, 100 from each of the studied projects. The suggested models were trained using repeated cross validation on 85% of the dataset, and the remaining 15% of the dataset were used as a test set.
We conclude that the answer to RQ 1. is that fine-grained source code changes can indeed be successfully used to devise high quality models for commit classification into maintenance activities.
The answer to RQ 2. is that models that utilize source code changes are capable of outperforming the reported accuracy of word frequency based models (Hindle et al., 2009, Amor et al., 2006) from 60% to 75%, even when classifying cross-project commits. In addition, we make the following observations based on our study:
• Using text cleaning and normalization, our word frequency based models were able to achieve an accuracy of 68-69% with Kappa of 51-53% for cross-project commits classification (see Table 8).
• Compound models employing both (commit message) word frequency analysis and source code change types for the task of cross-project commit classification were able to achieve up to 73% accuracy with Kappa 59% during the training stage, and up to 76% accuracy with Kappa of 63%, considered ”Good“ (Altman, 1990), for the test dataset.
• The RF algorithm outperformed the GBM and J48 in classifying cross-project commits (see Table 11 and Table 12).
To explore RQ. 3 we demonstrated two applications for our classification and repository harvesting methods, one in the spirit of the verbal view, and the other in the spirit of the nounal view.
• The Software Maintenance Activity Explorer, a tool that is aimed at providing an intuitive visualization of code maintenance activities over time. It provides users with both project wide, and developer centring views of maintenance activities over various periods of time. We then showed how the software maintenance activity explorer and our dataset can be used to identify homogeneous maintenance activity profiles, which we believe managers should be made aware of and act upon.
• Detecting software projects which may be lacking in tests and potentially require extensive corrective maintenance. The suggested application employs insights obtained from modeling the relationship between commit classification (into maintenance activities) and the number of test methods in a software project.
## 11. Future Work
We believe that our methods and results can be leveraged to further explore numerous directions in the field of software evolution and software analytics in particular. For example, it would be interesting to learn whether our software maintenance activity explorer could appeal to practitioners working on open source and/or commercial projects. It would also be beneficial to learn what real-life tasks they believe this tool can help with, and/or what changes they would like to suggest to make it useful for their needs. In addition, it may be of particular interest to get feedback from developers who took part in the projects we analyzed as part of our publicly available version of the software maintenance activity explorer16.
Some commits may involve more than one type of maintenance activity, and some activities may span more than one commit. It would therefore be beneficial to explore whether extended activities and mixed commits lend themselves to automatic and accurate classification.
The availability of an accurate classification model may make it possible to automatically classify an unprecedentedly large number of projects and commit activities. This, in turn, could shed new light on the distribution of maintenance activities in software projects (Schach et al., 2003, Lientz et al., 1978), a subject the research community is yet to agree upon.
### Footnotes
1. ccs: Software and its engineering Software evolution
2. ccs: Software and its engineering Maintaining software
3. Also known as “commit comment”.
4. Updated as of 2018, the original study was conducted in 2016.
5. As implemented in Subversion, see also http://svnbook.red-bean.com/en/1.7/svn.branchmerge.using.html.
6. As implemented in Git, see also https://git-scm.com/book/en/v2/Getting-Started-Git-Basics.
8. See also “Recommended Repository Layout”, http://svnbook.red-bean.com/en/1.7/svn.tour.importing.html.
9. As indicated by a survey conducted by databricks in 2016, see https://goo.gl/w92BB5.
10. Spark provides APIs for a growing number of other programming languages, see https://spark.apache.org/docs/2.3.0/api.html.
11. Also known as “commit hash” in git, see also https://git-scm.com/book/en/v2/Git-Basics-Viewing-the-Commit-History.
12. The entire labeled dataset, consisting of 1151 labeled commits, is publicly available at https://doi.org/10.5281/zenodo.835534, see also Levin and Yehudai (2017c).
13. Subversion is commonly abbreviated to SVN after its command name svn.
14. The total number of contributors is updated as of 2018, maintenance activity profiles were computed as part of the original study conducted in 2016.
15. Due to certain technical difficulties we had to exclude the IntelliJ Community Edition project from homogeneous maintenance activity analysis.
16. Available at https://soft-evo.shinyapps.io/maintenance-activities.
### References
1. D. G. Altman. Practical statistics for medical research. CRC press, 1990.
2. J. J. Amor, G. Robles, J. M. Gonzalez-Barahona, and A. Navarro. Discriminating development activities in versioning systems: A case study. In Proceedings PROMISE. Citeseer, 2006.
3. L. B. A. C. Andy Liaw, Matthew Wiener. randomforest: Breiman and cutler’s random forests for classification and regression. [Online; accessed Nov-2016].
4. Apache Spark, 2016. Lightning-fast cluster computing. http://spark.apache.org/, 2014. [Online; accessed 11-April-2016].
5. Atlassian. The #1 software development tool used by agile teams. [Online; accessed 20-Mar-2017].
6. L. Belady and M. Lehman. Programming System Dynamics Or Thmeta-dynamics of Systems in Maintenance and Growth. IBM Thomas J. Watson Research Center, 1971.
7. L. A. Belady and M. M. Lehman. A model of large program development. IBM Systems journal, 15(3):225–252, 1976.
8. M. Beller, G. Gousios, A. Panichella, and A. Zaidman. When, how, and why developers (do not) test in their ides. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, pages 179–190. ACM, 2015.
9. L. Breiman. Random forests. Machine learning, 45(1):5–32, 2001.
10. R. P. Buse and T. Zimmermann. Analytics for software development. In Proceedings of the FSE/SDP workshop on Future of software engineering research, pages 77–80. ACM, 2010.
11. A. E. Camargo Cruz and K. Ochimizu. Towards logistic regression models for predicting fault-prone code across software projects. In Proceedings of the 2009 3rd International Symposium on Empirical Software Engineering and Measurement, pages 460–463. IEEE Computer Society, 2009.
12. R. Caruana and A. Niculescu-Mizil. An empirical comparison of supervised learning algorithms. In Proceedings of the 23rd international conference on Machine learning, pages 161–168. ACM, 2006.
13. R. Caruana, N. Karampatziakis, and A. Yessenalina. An empirical evaluation of supervised learning in high dimensions. In Proceedings of the 25th international conference on Machine learning, pages 96–103. ACM, 2008.
14. W. S. Cleveland, R. McGill, et al. Graphical perception and graphical methods for analyzing scientific data. Science, 229(4716):828–833, 1985.
15. F. X. Diebold. On the origin (s) and development of the term Big Data. PIER Working Paper, 2012.
16. J. Falleri and F. Morandat. Gumtree - a neat code differencing tool. [Online; accessed 11-March-2017].
17. J. Falleri, F. Morandat, X. Blanc, M. Martinez, and M. Monperrus. Fine-grained and accurate source code differencing. In ACM/IEEE International Conference on Automated Software Engineering, ASE ’14, Vasteras, Sweden - September 15 - 19, 2014, pages 313–324, 2014.
18. M. Fernández-Delgado, E. Cernadas, S. Barro, and D. Amorim. Do we need hundreds of classifiers to solve real world classification problems. J. Mach. Learn. Res, 15(1):3133–3181, 2014.
19. S. Few. Now you see it: simple visualization techniques for quantitative analysis. Analytics Press, 2009.
20. M. Fischer, M. Pinzger, and H. Gall. Populating a release history database from version control and bug tracking systems. In Software Maintenance, 2003. ICSM 2003. Proceedings. International Conference on, pages 23–32. IEEE, 2003.
21. B. Fluri and H. C. Gall. Classifying change types for qualifying change couplings. In Program Comprehension, 2006. ICPC 2006. 14th IEEE International Conference on, pages 35–45. IEEE, 2006.
22. B. Fluri, M. Wursch, M. PInzger, and H. C. Gall. Change distilling: Tree differencing for fine-grained source code change extraction. Software Engineering, IEEE Transactions on, 33(11):725–743, 2007.
23. B. Fluri, E. Giger, and H. C. Gall. Discovering patterns of change types. In Automated Software Engineering, 2008. ASE 2008. 23rd IEEE/ACM International Conference on, pages 463–466. IEEE, 2008.
24. B. Fluri, M. Würsch, E. Giger, and H. C. Gall. Analyzing the co-evolution of comments and source code. Software Quality Journal, 17(4):367–394, 2009.
25. E. Frank, M. Hall, G. Holmes, R. Kirkby, B. Pfahringer, I. H. Witten, and L. Trigg. Weka. In Data Mining and Knowledge Discovery Handbook, pages 1305–1314. Springer, 2005.
26. J. H. Friedman. Greedy function approximation: a gradient boosting machine. Annals of statistics, pages 1189–1232, 2001.
27. H. C. Gall, B. Fluri, and M. Pinzger. Change analysis with evolizer and changedistiller. IEEE Software, 26(1):26, 2009.
28. C. Ghezzi, M. Jazayeri, and D. Mandrioli. Fundamentals of software engineering. Prentice Hall PTR, 2002.
29. E. Giger, M. Pinzger, and H. C. Gall. Comparing fine-grained source code changes and code churn for bug prediction. In Proceedings of the 8th Working Conference on Mining Software Repositories, pages 83–92. ACM, 2011.
30. E. Giger, M. Pinzger, and H. C. Gall. Can we predict types of code changes? an empirical analysis. In Mining Software Repositories (MSR), 2012 9th IEEE Working Conference on, pages 217–226. IEEE, 2012.
31. GitHub Inc. New year, new company. [Online; accessed 18-April-2016].
32. GitHub Inc. A whole new code search. [Online; accessed 11-April-2016].
33. GitHub Inc. About the search api. [Online; accessed 11-April-2016].
34. GitHub Inc. Github - the largest open source community in the world. https://github.com/about, 2018. [Online; accessed 18-October-2018].
35. N. Gorla and Y. W. Lam. Who should work with whom?: building effective software project teams. Communications of the ACM, 47(6):79–82, 2004.
36. P. J. Guinan, J. G. Cooprider, and S. Faraj. Enabling software development team performance during requirements definition: A behavioral versus technical approach. Information Systems Research, 9(2):101–125, 1998.
37. P. Hamill. Unit Test Frameworks: Tools for High-Quality Software Development. ” O’Reilly Media, Inc.”, 2004.
38. A. E. Hassan. Empirical evaluations in software engineering research: A personal perspective. [Online; accessed 11-February-2018].
39. K. Herzig, S. Just, and A. Zeller. It’s not a bug, it’s a feature: how misclassification impacts bug prediction. In Proceedings of the 2013 International Conference on Software Engineering, pages 392–401. IEEE Press, 2013.
40. A. Hindle, D. M. German, M. W. Godfrey, and R. C. Holt. Automatic classication of large changes into maintenance categories. In Program Comprehension, 2009. ICPC’09. IEEE 17th International Conference on, pages 30–39. IEEE, 2009.
41. T. K. Ho. The random subspace method for constructing decision forests. IEEE transactions on pattern analysis and machine intelligence, 20(8):832–844, 1998.
42. K. Hornik, C. Buchta, and A. Zeileis. Open-source machine learning: R meets Weka. Computational Statistics, 24(2):225–232, 2009.
43. M. Hubert and E. Vandervieren. An adjusted boxplot for skewed distributions. Computational statistics & data analysis, 52(12):5186–5201, 2008.
44. H. Kirinuki, Y. Higo, K. Hotta, and S. Kusumoto. Hey! are you committing tangled changes? In Proceedings of the 22nd International Conference on Program Comprehension, pages 262–265. ACM, 2014.
45. M. Kuhn. The caret package. [Online; accessed Nov-2016].
46. M. Kuhn. caret v6.0-80, createdatapartition. [Online; accessed 29-Jul-2018].
47. M. Kuhn, A. W. C. K. A. E. T. C. Z. M. B. K. t. R. C. T. M. B. R. L. A. Z. L. S. Y. T. C. C. Jed Wing, Steve Weston, and T. Hunt. caret: Classification and regression training. [Online; accessed Nov-2016].
48. M. M. Lehman. The programming process. internal IBM report, 1969.
49. M. M. Lehman. Programs, cities, studentsj-limits to growth? In Programming Methodology, pages 42–69. Springer, 1978.
50. M. M. Lehman and J. F. Ramil. Software evolution-background, theory, practice. Information Processing Letters, 88(1):33–44, 2003.
51. M. M. Lehman, J. F. Ramil, and G. Kahen. Evolution as a noun and evolution as a verb. In SOCE 2000 Workshop on Software and Organisation Co-evolution, volume 9, page 31, 2000.
52. S. Levin. Software maintenance activities explorer. [Online; accessed 11-February-2018].
53. S. Levin. Software maintenance explorer. [Online; accessed 11-November-2018].
54. S. Levin and A. Yehudai. Using temporal and semantic developer-level information to predict maintenance activity profiles. In Proc. ICSME, pages 463–468. IEEE, 2016.
55. S. Levin and A. Yehudai. The co-evolution of test maintenance and code maintenance through the lens of fine-grained semantic changes. In 2017 IEEE International Conference on Software Maintenance and Evolution, ICSME 2017, Shanghai, China, September 20-22, 2017, pages 35–46, 2017a. doi: 10.1109/ICSME.2017.9.
56. S. Levin and A. Yehudai. Boosting automatic commit classification into maintenance activities by utilizing source code changes. In Proceedings of the 13th International Conference on Predictive Models and Data Analytics in Software Engineering, PROMISE, pages 97–106, New York, NY, USA, 2017b. ACM. ISBN 978-1-4503-5305-2.
57. S. Levin and A. Yehudai. 1151 commits with software maintenance activity labels (corrective,perfective,adaptive), July 2017c.
58. S. Levin and A. Yehudai. Statistics for the studied 61 open source projects. [Online; accessed 11-February-2018].
59. A. Liaw and M. Wiener. Classification and regression by randomforest. R News, 2(3):18–22, 2002.
60. B. P. Lientz, E. B. Swanson, and G. E. Tompkins. Characteristics of application software maintenance. Communications of the ACM, 21(6):466–471, 1978.
61. M. Martinez, L. Duchien, and M. Monperrus. Automatically extracting instances of code change patterns with ast analysis. arXiv preprint arXiv:1309.3730, 2013.
62. Maven Surefire Plugin. Inclusions and exclusions of tests. [Online; accessed Jan-2017].
63. P. McCullagh and J. A. Nelder. Generalized linear models, volume 37. CRC press, 1989.
64. T. Menzies and T. Zimmermann. Software analytics: so what? IEEE Software, (4):31–37, 2013.
65. W. Meyers. Interview with wilma osborne. IEEE Software, 5(3):104–105, 1988.
66. A. Mockus and L. G. Votta. Identifying reasons for software changes using historic databases. In Software Maintenance, 2000. Proceedings. International Conference on, pages 120–130. IEEE, 2000.
67. H. A. Nguyen, A. T. Nguyen, and T. N. Nguyen. Filtering noise in mixed-purpose fixing commits to improve defect prediction and localization. In Software Reliability Engineering (ISSRE), 2013 IEEE 24th International Symposium on, pages 138–147. IEEE, 2013.
68. J. R. Quinlan. C4. 5: programs for machine learning. Elsevier, 2014.
69. R Development Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2008. ISBN 3-900051-07-0.
70. G. Ridgeway. Generalized boosted models: A guide to the gbm package. Update, 1(1):2007, 2007.
71. G. Ridgeway and Others. R gbm package. [Online; accessed Nov-2016].
72. M. Saar-Tsechansky and F. Provost. Handling missing values when applying classification models. Journal of machine learning research, 8(Jul):1623–1657, 2007.
73. Scala, 2015. The Scala programming language. https://www.scala-lang.org/, 2015. [Online; accessed 11-February-2018].
74. S. R. Schach, B. Jin, L. Yu, G. Z. Heller, and J. Offutt. Determining the distribution of maintenance categories: Survey versus measurement. Empirical Software Engineering, 8(4):351–365, 2003.
75. S.E.A.L UZH. The changedistiller repository. [Online; accessed 26-March-2017].
76. S.E.A.L UZH. The changedistiller api. [Online; accessed 26-March-2017].
77. E. Shihab. An exploration of challenges limiting pragmatic software defect prediction. PhD thesis, Citeseer, 2012.
78. J. Śliwerski, T. Zimmermann, and A. Zeller. When do changes induce fixes? In ACM sigsoft software engineering notes, volume 30, pages 1–5. ACM, 2005.
79. StackOverflow. Developer survey results 2017. [Online; accessed 1-Nov-2017].
80. StackOverflow. Developer survey results 2018. [Online; accessed 26-March-2018].
81. E. B. Swanson. The dimensions of maintenance. In Proceedings of the 2nd international conference on Software engineering, pages 492–497. IEEE Computer Society Press, 1976.
82. L. Torvalds. Tech talk: Linus torvalds on git. [Online; accessed 11-Mar-2018].
83. W. N. Venables and B. D. Ripley. Modern applied statistics with S-PLUS. Springer Science & Business Media, 2013.
84. I. H. Witten and E. Frank. Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann, San Francisco, 2nd edition, 2005.
85. M. Zaharia, M. Chowdhury, T. Das, A. Dave, J. Ma, M. McCauley, M. J. Franklin, S. Shenker, and I. Stoica. Resilient distributed datasets: A fault-tolerant abstraction for in-memory cluster computing. In Proceedings of the 9th USENIX conference on Networked Systems Design and Implementation, pages 2–2. USENIX Association, 2012.
86. M. Zaharia, R. S. Xin, P. Wendell, T. Das, M. Armbrust, A. Dave, X. Meng, J. Rosen, S. Venkataraman, M. J. Franklin, et al. Apache spark: a unified engine for big data processing. Communications of the ACM, 59(11):56–65, 2016.
87. A. Zaidman, B. Van Rompaey, A. van Deursen, and S. Demeyer. Studying the co-evolution of production and test code in open source and industrial developer test processes through repository mining. Empirical Software Engineering, 16(3):325–364, 2011.
You are adding the first comment!
How to quickly get a good reply:
• Give credit where it’s due by listing out the positive aspects of a paper before getting into which changes should be made.
• Be specific in your critique, and provide supporting evidence with appropriate references to substantiate general statements.
• Your comment should inspire ideas to flow and help the author improves the paper.
The better we are at sharing our knowledge with each other, the faster we move forward.
The feedback must be of minimum 40 characters and the title a minimum of 5 characters
|
2020-07-12 12:57:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4055347442626953, "perplexity": 2768.993994188084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657138718.61/warc/CC-MAIN-20200712113546-20200712143546-00572.warc.gz"}
|
https://www.cheenta.com/sum-of-series-from-smo-2013-problem-number-29/
|
# Sum of Series from SMO - 2013 - Problem Number 29
Try this beautiful problem from Sum of Series from SMO, Singapore Mathematics Olympiad, 2013.
## Sum of Series from SMO, 2013
Let m and n be two positive integers that satisfy
$\frac {m}{n} = \frac {1}{10\times 12} + \frac {1}{12 \times 14} + \frac {1}{14 \times 16} + \cdot +\frac {1}{2012 \times 2014}$
Find the smallest possible value of m+n .
• 10570
• 10571
• 16001
• 20000
### Key Concepts
Greatest Common Divisor (gcd)
Sequence and Series
Number Theory
Challenges and Thrills - Pre - College Mathematics
## Try with Hints
We can start this kind some by using the concept of series and sequence .......
In this problem we can see that the series as
$\frac {m}{n}$ =$\frac {1}{10 \times 12}$ +$\frac {1}{12 \times 14}$ +$\cdot \cdot$+
$\frac {1}{2012 \times 2014}$
So sum of this series is
$\frac {m}{n} = \frac {1}{4} \displaystyle\sum _{k = 5}^{1006} \frac {1}{k(k+1)}$
Now do the rest of the sum ..................
If you are really stuck after the first hint here is the rest of the sum...............
From the above hint we can continue this problem by breaking the formula more we will get :
= $\frac {1}{4} \displaystyle\sum_{k=5}^{1006} \frac {1}{k} - \frac {1}{k+1}$
Now replacing by the values:
$\frac {1}{4} (\frac {1}{5} - \frac {1}{1007})$
Please try to do the rest.....................
This is the last hint as well as the final answer....
If we continue after the last hint...
$\frac {m}{n} = \frac {501}{10070}$
Since gcd(501,10070) = 1
we can conclude by the values of m= 501 and n = 10070
So the sum is m+n = 10571 (Answer).
## Subscribe to Cheenta at Youtube
This site uses Akismet to reduce spam. Learn how your comment data is processed.
### Cheenta. Passion for Mathematics
Advanced Mathematical Science. Taught by olympians, researchers and true masters of the subject.
|
2021-06-22 02:13:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.720009446144104, "perplexity": 2765.0351392205816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488504969.64/warc/CC-MAIN-20210622002655-20210622032655-00467.warc.gz"}
|
https://math.stackexchange.com/questions/57024/quasi-compact-and-compact-in-algebraic-geometry
|
# quasi-compact and compact in algebraic geometry
In reading Hartshorne,a topological space is quasi-compact if each open cover has a finite subcover(P80).Isn't it the definition for compactness of topological spaces?Am I right?Is quasi-compactness only in use in algebraic geometry in place of compactness?Or do we have another definition for compactness in algebraic geometry?Will someone be kind enough to say something on this?Thank you very much!
## 2 Answers
Hartshorne reserves "compact" for Hausdorff spaces, which many spaces in algebraic geometry fail to be. I'm not sure how prevalent this distinction is.
• In algebraic geometry many spaces are (quasi-)compact, so it sounds strange to consider $\mathbb{A}^n_{\mathbb{C}}$ a "compact" space. Therefore one prefers to use the adjective "quasi-compact". Finally, the corresponding concept in algebraic geometry of "compact space" is "proper". – Andrea Aug 12 '11 at 7:52
Before we use any terminology here, consider two conditions on a topological space $$X$$:
1. Every open cover of $$X$$ has a finite subcover.
2. For any two distinct points of $$X$$, there is an open set containing each such that the two open sets are disjoint. (This is the Hausdorff condition.)
Some texts use “$$X$$ is compact” to mean just (1); then “compact Hausdorff” is used to mean (1) and (2).
Other texts use “$$X$$ is compact” to mean both (1) and (2); then “quasi-compact” is used to mean just (1).
In the setting of algebraic geometry (e.g., the book by Hartshorne) just about every space under consideration satisfies (1). (This is because the topology is the Zariski topology, in which (a) it is pretty trivial that the whole space is closed and (b) one can show that every closed set has the property that every open cover has a finite subcover.) So in this context, not only will your space satisfy (1), but this fact is even pretty trivial. Therefore, a term that just indicates that (1) is satisfied will not be very useful in algebraic geometry.
For that reason, algebraic geometers will tend to follow the second convention, and use “compact” to mean that both conditions (1) and (2) are satisfied. But then there are situations in which they wish to indicate just (1), and so that's how you end up with instances of “quasi-compact” indeed appearing in books like Hartshorne to mean what others might think of as just “compact.”
By the way, the MO discussion quoted in the comment by user Ch Zh (namely https://mathoverflow.net/questions/16971/compact-and-quasi-compact) serves mostly as a discussion of this phenomenon for those who are already familiar with it, rather than as an explanation of the phenomenon for those who are unfamiliar with it. But confusion resulting from this distinction is somewhat inevitable; see, for example, this math.SE post as well:
Necessity of being Hausdorff in the definition of compactness?
• Very nice answer! – Paul Frost Oct 29 '18 at 16:38
|
2021-04-21 13:15:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8722395300865173, "perplexity": 202.89550465375612}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039544239.84/warc/CC-MAIN-20210421130234-20210421160234-00473.warc.gz"}
|
https://www.onooks.com/tag/let-textbfx-x_1/
|
Categories
## Vector of multivariate normal distribution
Let $\textbf{X} = (X_1, X_2, X_3)^T$ and $\textbf{Y} = (Y_1, Y_2, Y_3)^T$ be independent vectors with multivariate normal distribution, with means $\mu_X$ and $\mu_Y$ and covariance matrices $\Sigma_X$ and $\Sigma_Y$ with non-zero determinant. Let $A_{2 \times 3}$ and $B_{3 \times 3}$ be lineary independent matrices. Find distribution of $(\textbf{X}^TA^T, \textbf{Y}B^T)^T$. This is what I’ve done […]
|
2020-04-08 11:51:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8358428478240967, "perplexity": 133.0571994799747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371813538.73/warc/CC-MAIN-20200408104113-20200408134613-00096.warc.gz"}
|
http://www.physicsforums.com/showthread.php?t=53252
|
# Circuit with 2 sources of emf
by joshanders_84
Tags: circuit, sources
P: n/a Treat the EMF sources and their resistances as separate components and apply the loop rule from any point on the circuit (I suggest using one of the EMF sources). Remember the current is constant throughout the system: $$16V - IR_1 - IR_2 - 8V - IR_3 - IR_4 = 0$$ I applied the law counterclockwise starting from EMF 1 but like I said you could start from anywhere. Solving for I you obtain the familiar $$I = \frac{\varepsilon_1 - \varepsilon_2}{R_1+R_2+R_3+R_4}$$ which looks a lot like $$I = \frac {\sum \varepsilon}{\sum R}$$ Hope this helps. Sorry but I don't know the LaTeX for that pretty little E my physics book uses so I figured lowercase epsilon suffices :)
Emeritus Sci Advisor PF Gold P: 5,196 Circuit with 2 sources of emf Oh I see...the notation for the EMF's of the sources: try the \mathcal function--it gives you uppercase scripted characters, should you need them: $$\mathcal{E}$$ Sometimes for source voltages we just write $V_s$ instead. But since these are not ideal sources, it's good to distinguish between the EMF, which is defined as the potential difference between the two source terminals when no load is connected, vs. the actual voltage across the source when in this series circuit. If you already knew all of this...sorry to bore you to tears. I like these scripted letters...hmm...let's see...Laplace Transform: $$\mathcal{L} \{f(t)\}$$ it's cool...
|
2014-07-31 21:50:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7480842471122742, "perplexity": 1125.210711326405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510273676.54/warc/CC-MAIN-20140728011753-00466-ip-10-146-231-18.ec2.internal.warc.gz"}
|
https://stats.stackexchange.com/questions/211710/using-lasso-only-for-feature-selection
|
Using LASSO only for feature selection
In my machine learning class, we have learned about how LASSO regression is very good at performing feature selection, since it makes use of $l_1$ regularization.
My question: do people normally use the LASSO model just for doing feature selection (and then proceed to dump those features into a different machine learning model), or do they typically use LASSO to perform both the feature selection and the actual regression?
For example, suppose that you want to do ridge regression, but you believe that many of your features are not very good. Would it be wise to run LASSO, take only the features that are not near-zeroed out by the algorithm, and then use only those in dumping your data into a ridge regression model? This way, you get the benefit of $l_1$ regularization for performing feature selection, but also the benefit of $l_2$ regularization for reducing overfitting. (I know that this basically amounts to Elastic Net Regression, but it seems like you don't need to have both the $l_1$ and $l_2$ terms in the final regression objective function.)
Aside from regression, is this a wise strategy when performing classification tasks (using SVMs, neural networks, random forests, etc.)?
• Yes, Using lasso for feature selection for other models is a good idea. Alternatively tree based feature selection could also be fed to other models May 9, 2016 at 23:31
• The lasso only performs features selection in linear models -- it doesn't test for higher-order interactions or nonlinearity in the predictors. For an example of how that might be important: stats.stackexchange.com/questions/164048/… Your mileage may vary.
– Sycorax
May 10, 2016 at 1:02
Almost any approach that does some form of model selection and then does further analyses as if no model selection had previously happened typically has poor proporties. Unless there are compelling theoretical arguments backed up by evidence from e.g. extensive simulation studies for realistic sample sizes and feature versus sample size ratios to show that this is an exception, it is likely that such an approach will have unsatisfactory properties. I am not aware of any such positive evidence for this approach, but perhaps someone else is. Given that there are reasonable alternatives that achieve all desired goals (e.g. the elastic net), it this approach is hard to justify using such a suspect ad-hoc approach instead.
• agreed.... the point is everything has to fit within a crossvalidation framework... so you should do some nested cross validation to do the two separate regularisations (otherwise you will run into problems), and nested crossvalidation is using less data for each part. May 10, 2016 at 16:56
Besides all the answers above: It is possible to calculate an exact chi2 permutation test for 2x2 and rxc tables. Instead of comparing our observed value of the chi-square statistic to an asymptotic chi-square distribution we need to compare it to the exact permutation distribution. We need to permute our data in all possible ways keeping the row and column margins constant. For each permuted dataed set we caluclated the chi2 statistics . We then compare our observed chi2 with the (sorted) chi2 statistics The ranking of the real test statistic among the permuted chi2 test statistics gives a p-value.
• Could you add detail to your answer, please? In its current form, it is not clear how one would calculate the exact chi2 test. Aug 11, 2016 at 15:45
|
2022-05-25 14:02:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6097022891044617, "perplexity": 660.9895275703606}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662587158.57/warc/CC-MAIN-20220525120449-20220525150449-00133.warc.gz"}
|
https://www.electricalexams.co/when-a-d-c-source-is-switched-is-purely-inductive/
|
# When a D.C source is switched in purely inductive, the current response is
When a D.C source is switched in purely inductive, the current response is
### Right Answer is: A straight line passing through the origin
#### SOLUTION
When a D.C source is switched in purely inductive, the current response is a straight line passing through the origin.
Detailed Explanation:-
Energy is stored in the electromagnetic field of an inductor when it is connected to a dc voltage source. The buildup of current through the inductor occurs in a predictable manner, which is dependent on the time constant of the circuit.
Switching of DC is an example of step excitation
The current through the inductor is given by:
Important Reluctance Motor MCQ
$V = Lfrac{{di}}{{dt}}$
$frac{V}{L} = frac{{di}}{{dt}}$
Since V is step excitation, V/L is also step excitation
$frac{{di}}{{dt}} = step;response$
Integrating the above expression
$i = ramp;response$
Hence current through the inductor is a straight line passing through the origin.
Scroll to Top
|
2022-08-20 02:41:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6339206695556641, "perplexity": 982.3442009399533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573876.92/warc/CC-MAIN-20220820012448-20220820042448-00756.warc.gz"}
|
https://meta.discourse.org/t/do-unreviewed-translations-ship-into-releases/18171
|
# Do unreviewed translations ship into releases?
(Anton) #1
As I got the reviewer rights, it’s now interesting to know if non-reviewed translations get included automatically into releases of Discourse?
(Erick Guan) #2
Yes, it is. Everything changes fast.
(Jonathan Feist) #3
Check out the Swedish locale. Incomplete and apparently not perfectly translated where it has been translated. So I guess some are not fully reviewed but we are not yet at version 1 so I suppose this is to be expected.
|
2018-06-21 17:36:06
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8639013767242432, "perplexity": 4902.509503077276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864256.26/warc/CC-MAIN-20180621172638-20180621192638-00297.warc.gz"}
|
https://tex.stackexchange.com/questions/256343/how-to-use-custom-colorscheme-with-tdplotsphericalsurfaceplot-from-tikz-3dpl?noredirect=1
|
# How to use custom colorscheme with \tdplotsphericalsurfaceplot from tikz-3dplot
As the title suggest I am trying to use a different color scheme than is currently the default in the tikz-3dplot package for the \tdplotsphericalsurfaceplot command if one uses the option parametricfill.
Here is a MWE with two colors that I would like to have instead of the once used right now. Why two colors? Because I use a colorfunction that simply devides my parameter space into two regions: colorFunc(\t)=ifthenelse(\t<pi/2,0,1);
\documentclass{article}
\usepackage{tikz,tikz-3dplot}
\definecolor{color1}{RGB}{ 86, 180, 233} % Skyblue
\definecolor{color2}{RGB}{230, 159, 0} % Orange
\begin{document}
\tdplotsetmaincoords{70}{135}
\begin{tikzpicture}[scale=1,line join=bevel,tdplot_main_coords,
fill opacity=1,
declare function={ colorFunc(\t)=ifthenelse(\t<pi/2,0,1);}]
\pgfsetlinewidth{.1pt}
\tdplotsphericalsurfaceplot[parametricfill]{64}{32}%
{1.3*(1/8*sqrt(5/pi)*(1+3*cos(2*\tdplottheta)))}%
{black}%
{colorFunc(\tdplottheta*2)}%
{\draw[color=black,thick,->] (0,0,0) -- (1,0,0) node[anchor=north east]{$x$};}%
{\draw[color=black,thick,->] (0,0,0) -- (0,1,0) node[anchor=north west]{$y$};}%
{\draw[color=black,thick,->] (0,0,0) -- (0,0,1) node[anchor=south]{$z$};}%
\end{tikzpicture}
\end{document}
Note: Maybe it is possible to somehow compute the color-coordinates of my two colors and set them in my colorFunc. However, it seems that the plotrange is rescaled, so no matter which values I enter into my colorFunc the resulting plot is always the same, i.e. colorFunc(\t)=ifthenelse(\t<pi/2,0,pi*2/3); yields the same plot.
• In this case we have the result of the colorFunc is 0 or 1, what's the role of those two numbers, try to change them with 90 and 180 colorFunc(\t)=ifthenelse(\t<pi/2,90,180); – Salim Bou Aug 13 '15 at 21:06
• In my last paragraph I wrote that changing the values of the colorFunc does not change anything. Unfortunately :/. – NOhs Aug 14 '15 at 14:15
• In my case with 90 and 180 the colors are reversed – Salim Bou Aug 14 '15 at 14:45
• Ah. I see. This would be due to the periodicity of the colorfunction. My bad.. But this still does not allow me to choose the colors I want. – NOhs Aug 14 '15 at 14:48
It seems like you are already able to customize the color. I am not quiet sure what is your problem. So I would just explain everything.
# What is the 6th argument of \tdplotsphericalsurfaceplot?
(I am talking about the blank you filled by colorFunc(\tdplottheta*2))
This blank allows you to fill in a math expression which is based on \tdplotr, \tdplottheta, and \tdplotphi, and returns a real number. Intuitively, if your expression returns 60, than the color HSB(60,1,1) is used, which is the pure yellow. Similarly if your expression returns 300, it is HSB(300,1,1) the purple.
Technically, your expression is put into
\pgfmathsetmacro{\colorarg}{#5}
(it is #5 but not #6 because of, well, we would see it later)
Therefor you can write everything accepted by PGF-math engine. In the most extreme case, you can pass random(0,360).
# Why do I get two colors even if my expression returns a constant?
Things get weird if the plot contains negative radius. In your case, cos is sometimes as negative as -1, producing the doughnut part of the plot. In the following figure, I wrote 3 as the expression. So normally it should be HSB(3,0,0), a reddish color. But how is the doughnut now cyanish?
The answer is that tikz-3dplot assumes users use \tdplotphi as the expression. Since the point (-r,θ,φ) coincides with the point (r,θ+180,-φ), tikz-3dplot adds 180 to your expression whenever it comes to the case of negative radius.
To get over this, make sure your plot is always of positive radius, or make sure that your expression satisfies
func(-r,θ,φ) = func(r,θ+180,-φ)+180 (mod 360)
For instance func(-r,θ,φ) = 5θ + φ^2/90
# I want full access of color
Redefine \tdplotdosurfaceplot. This command is used to draw single pieces of rectangles. (And its 5th argument is \tdplotsphericalsurfaceplot's 6th argument.) Thus it is a bit long and contains ugly details. So please focus on the \ifthenelse{\equal{#6}{parametricfill}} part
\renewcommand\tdplotdosurfaceplot[6]{
\pgfmathsetmacro{\nextphi}{\curphi + \tdplotsuperfudge*\viewphistep}
\begin{scope}[opacity=1]
\tdplotcheckdiff{\nextphi}{360}{\origviewphistep}{#2}{}
\tdplotcheckdiff{\nextphi}{0}{\origviewphistep}{#2}{}
\tdplotcheckdiff{\nextphi}{90}{\origviewphistep}{#3}{}
\tdplotcheckdiff{\nextphi}{450}{\origviewphistep}{#3}{}
\end{scope}
\foreach \curtheta in{\viewthetastart,\viewthetainc,...,\viewthetaend}{
\pgfmathsetmacro{\curlongitude}{90 - \curphi}
\pgfmathsetmacro{\curlatitude}{90 - \curtheta}
\ifthenelse{\equal{\leftright}{-1.0}}{\pgfmathsetmacro{\curphi}{\curphi - \origviewphistep}}{}
\pgfmathsetmacro{\tdplottheta}{mod(\curtheta,360)}
\pgfmathsetmacro{\tdplotphi}{mod(\curphi,360)}
\pgfmathparse{\tdplotphi < 0}
\ifthenelse{\equal{\pgfmathresult}{1}}{\pgfmathsetmacro{\tdplotphi}{\tdplotphi + 360}}{}
\pgfmathparse{\tdplottheta > \tdplotuppertheta}
\pgfmathsetmacro{\logictest}{1 - \pgfmathresult}
\pgfmathparse{\tdplottheta < \tdplotlowertheta}
\pgfmathsetmacro{\logictest}{\logictest * (1 - \pgfmathresult)}
\pgfmathsetmacro{\tdplottheta}{\tdplottheta + \viewthetastep}
\pgfmathparse{\tdplottheta > \tdplotuppertheta}
\pgfmathsetmacro{\logictest}{\logictest * (1 - \pgfmathresult)}
\pgfmathparse{\tdplottheta < \tdplotlowertheta}
\pgfmathsetmacro{\logictest}{\logictest * (1 - \pgfmathresult)}
\pgfmathparse{\tdplotphi > \tdplotupperphi}
\pgfmathsetmacro{\logictest}{\logictest * (1 - \pgfmathresult)}
\pgfmathparse{\tdplotphi < \tdplotlowerphi}
\pgfmathsetmacro{\logictest}{\logictest * (1 - \pgfmathresult)}
\pgfmathsetmacro{\tdplotphi}{\tdplotphi + \viewphistep}
\pgfmathparse{\tdplotphi < 0}
\ifthenelse{\equal{\pgfmathresult}{1}}{\pgfmathsetmacro{\tdplotphi}{\tdplotphi + 360}}{}%
\pgfmathparse{\tdplotphi > \tdplotupperphi}
\pgfmathsetmacro{\logictest}{\logictest * (1 - \pgfmathresult)}
\pgfmathparse{\tdplotphi < \tdplotlowerphi}
\pgfmathsetmacro{\logictest}{\logictest * (1 - \pgfmathresult)}
\pgfmathsetmacro{\tdplottheta}{\curtheta}
\pgfmathsetmacro{\tdplotphi}{\curphi}
%%%%%%% not important ↑↑↑↑↑↑
%%%%%%% yes important ↓↓↓↓↓↓
\ifthenelse{\equal{#6}{parametricfill}}{
\pgfmathsetmacro\r{(\x+.4)}
\pgfmathsetmacro\g{(\y+.4)}
\pgfmathsetmacro\b{(\z+.8)/2}
\definecolor{tdplotfillcolor}{rgb}{\r,\g,\b}
\color{tdplotfillcolor}
}{}
\pgfsetstrokeopacity{0}
%%%%%%% yes important ↑↑↑↑↑↑
%%%%%%% not important ↓↓↓↓↓↓
\ifthenelse{\equal{\leftright}{-1.0}}{\pgfmathsetmacro{\curphi}{\curphi + \origviewphistep}}{}
\ifthenelse{\equal{\logictest}{1.0}}{%
\pgfmathsetmacro{\tdplotphi}{\curphi + \viewphistep}
|
2019-11-21 07:51:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7486640214920044, "perplexity": 2298.210552540051}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670743.44/warc/CC-MAIN-20191121074016-20191121102016-00269.warc.gz"}
|
https://q2liu.wordpress.com/2015/01/03/the-wiki-game/
|
## The WikiGame and Graphs
Today, I had an urge to play the WikiGame. I haven’t played this game since middle school, but, hey, it’s winter break so why not? The objective of the game is given a random starting article on Wikipedia, try to arrive at some other predetermined article’s page only by clicking links within the article. A player may race against another player. Whoever reaches the destination article with the fewest clicks wins. (Other variations of the game may have other victory conditions.) Neither player may use Control + F at any time during the game.
In the true spirit of the game, the articles are generated randomly by using Wikipedia’s Random Article link. A example play of the game is as follows. Given Ysé Tardan-Masquelier as the starting article and Omaha, Nebraska as the ending article, the following series of clicks is a correct solution.
Some interpretations of the rules forbid using countries or cities as links but for simplicity of example, I’ll pretend this rule doesn’t exist. A shorter path may exist between the two articles. I highly suspect I can go directly from France to the United States but given the “no Control + F” rule, I must read the entire article to find the link. As tempting as it is to learn all about France, I decided to skip ahead to “head of state.”
It’s obviously very interesting to come upon esoteric Wikipedia articles during the course of the game. But all my rambling about this game does have some other purpose than another bout of nostalgia, an algorithmic purpose!
Shortest Paths
If one imagines that all articles on Wikipedia represent nodes in a graph and links represent directed edges from one node to another, then the WikiGame (as presented here) easily reduces to the problem of finding the shortest path (all edges have cost 1) between two nodes. A simple breadth-first search can solve this problem. To see a python implementation of a WikiGame solver using breadth-first search, visit this.
Diameter of WikiGame
The trickier, and I think, more interesting question is what is the longest shortest path–the diameter–between any two pages (excluding pages that have no paths between them). Or in other words, given $N$ pages and let $d(p_i, p_j)$ represent the length of the shortest path from page $p_i$ to page $p_j$, compute $\max_{0\leq i\neq j\leq N-1} \left\{d(p_i, p_j)\right\}$. The problem shouldn’t be theoretically “hard” with respect to the total number of links and articles. For a given graph, one may find the diameter by using any polynomial time all-pairs shortest path algorithm such as the Floyd-Warshall algorithm and finding the maximum of all returned shortest paths. The algorithm is polytime in the number of nodes and edges in the input graph. By the most recent statistics, a graph created from English Wikipedia articles and links would have ~5 million nodes and so at most ~50 million directed edges. A pretty sizable graph but I think searchable given enough motivation.
Largest Strongly Connected Component
Turns out there’s been quite a few people who wondered about how many degrees of separation exists between two pages on Wikipedia. The Six Degrees of Wikipedia page provides surprising examples of articles that are separated by at most six degrees (i.e. the shortest path consists of at most 6 clicks). Also, it appears my question of the longest shortest path has been investigated before here. The article also presents the largest strongly connected component (one may reach an article from any other article in the component). Dolan, the author, found that the “center” of Wikipedia (disregarding dates and years) is an article on the “United Kingdom.” He defines the center as an article in the largest strongly connected component from which it takes the fewest average number of clicks to get to any other article in the component. From “United Kingdom,” it takes an average of 3.45 clicks to any other article in the component. The data he used in his study was taken on March 3, 2008 so it’s possible that some of these statistics have changed since then.
Other Graph Characteristics and Questions
The results presented were very interesting, but now I wonder what other information we can gain from articles on Wikipedia by considering other graph characteristics like maximum cliques in the graph (maximum number of pages separated by at most one degree from each other), max independent set (maximum number of pages that are separated by greater than one degree of separation), maximum spanning tree (not sure what the Wikipedia interpretation is here), or TSP tour (whether every article may be visited once starting with an article using a minimum number of clicks). Admittedly, some of the problems I presented are NP-hard, but I think approximations may also be interesting, especially for the TSP tour of Wikipedia (does it exist? what is the approximate minimum length of such a tour?) If such a tour exists then we find out something interesting about how information is organized on Wikipedia, that is, stated very vaguely, every piece of information is “connected” in some way to every other piece of information.
|
2017-10-17 23:58:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5129932165145874, "perplexity": 534.6768263289354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822625.57/warc/CC-MAIN-20171017234801-20171018014801-00861.warc.gz"}
|
http://machinelearninguru.com/deep_learning/tensorflow/basics/variables/variables.html
|
## Introduction to TensorFlow Variables: Creation, Initialization
This tutorial deals with defining and initializing TensorFlow variables.
## Introduction
Definign variables is necessary because the hold the parameter. Without having parameters, training, updating, saving, restoring and any other operations cannot be performed. The defined variables in TensorFlow are just tensors with certain shapes and types. The tensors must be initialized with values to become valid. In this tutorial, we are going to explain how to define and initialize variables. The source code is available on the dedicated GitHub repository.
## Creating variables
For variable generation, the class of tf.Variable() will be used. When we define a variable, we basically pass a tensor and its value to the graph. Basically the following will happen:
• A variable tensor that holds a value will be pass to the graph.
• By using tf.assign, an initializer set initial variable value.
Some arbitrary variables can be defined as follows:
Defining Variables
import tensorflow as tf
from tensorflow.python.framework import ops
#######################################
######## Defining Variables ###########
#######################################
# Create three variables with some default values.
weights = tf.Variable(tf.random_normal([2, 3], stddev=0.1),
name="weights")
biases = tf.Variable(tf.zeros([3]), name="biases")
custom_variable = tf.Variable(tf.zeros([3]), name="custom")
# Get all the variables' tensors and store them in a list.
all_variables_list = ops.get_collection(ops.GraphKeys.GLOBAL_VARIABLES)
In the above script, line 15 gets the list of all defined variables from the defined graph. The "name" key, define a specific name for each variable on the graph
## Initialization
Initializers of the variables must be run before all other operations in the model. For an analogy, we can consider the starter of the car. Instead of running an initializer, variables can be restored too from saved models such as a checkpoint file. Variables can be initialized globally, specifically, or from other variables. We investigate different choices in the subsequent sections.
### Initializing Specific Variables
By using tf.variables_initializer, we can explicitly command the TensorFlow to only initialize certain variable. The script is as follows
:
Custom variable initialization
# "variable_list_custom" is the list of variables that we want to initialize.
variable_list_custom = [weights, custom_variable]
# The initializer
init_custom_op = tf.variables_initializer(var_list=all_variables_list)
Noted that custom initialization does not mean that we don't need to initialize other variables! All variables that some operations will be done upon them over the graph, must be initialized or restored from saved variables. This only let's us to realize how we can initialize specific variables by hand.
### Golobal variable initialization
All variables can be initialized at once using the tf.global_variables_initializer(). This op must be run after the model being fully constructed. The script is as below:
Global Variable Initialization
# Method-1
# Add an op to initialize the variables.
init_all_op = tf.global_variables_initializer()
# Method-2
init_all_op = tf.variables_initializer(var_list=all_variables_list)
Both the above methods are identical. We only provide the second one to demonstrate that the tf.global_variables_initializer() is nothing but tf.variables_initializer when you yield all the variables as the its input argument.
### Initilization of a variables using other existing variables
New variables can be initialized using other existing variables' initial values by taking the values using initialized_value().
Initialization using predefined variables' values
# Create another variable with the same value as 'weights'.
WeightsNew = tf.Variable(weights.initialized_value(), name="WeightsNew")
# Now, the variable must be initialized.
init_WeightsNew_op = tf.variables_initializer(var_list=[WeightsNew])
As it can be seen from the above script, the WeightsNew variable is initialized with the values of the weights predefined value.
## Running the session
All we did so far was to define the initilizers' ops and put them on the graph. In order to truly initialize variables, the defined initializers' ops must be run in the session. The script is as follows:
Running the session for initialization
with tf.Session() as sess:
# Run the initializer operation.
sess.run(init_all_op)
sess.run(init_custom_op)
sess.run(init_WeightsNew_op)
Each of the initializers has been run separated using a session.
## Summary
In this tutorial, we walked through the variable creation and initialization. The global, custom and inherited variable initialization have been investigated. In the future posts, we investigate how to save and restore the variables. Restoring a variable eliminate the necessity of its initialization.
Go Top
|
2018-01-22 12:01:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33591240644454956, "perplexity": 2987.689321856646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891316.80/warc/CC-MAIN-20180122113633-20180122133633-00210.warc.gz"}
|
https://socratic.org/questions/a-circuit-with-a-resistance-of-3-omega-has-a-fuse-with-a-capacity-of-4-a-can-a-v-3
|
A circuit with a resistance of 3 Omega has a fuse with a capacity of 4 A. Can a voltage of 2 V be applied to the circuit without blowing the fuse?
Jan 23, 2016
The current in a circuit where a voltage of $2$ $V$ passes through a resistance of $3$ $\Omega$ is given by: $I = \frac{V}{R} = \frac{2}{3}$ $A$. This is well short of the capacity of the fuse, so the fuse will not blow.
Explanation:
Ohm's Law relates voltage $V$ $\left(V\right)$, current $I$ $\left(A\right)$ and resistance $R$ $\left(\Omega\right)$:
$V = I R$
In this case we want to know the current, so we rearrange to make $I$ the subject:
$I = \frac{V}{R} = \frac{2}{3}$ $A$
The current flowing in the circuit is $\frac{2}{3}$ $A$. A fuse is designed to 'blow' (burn out) if the current in the circuit is more than its rated value, in this case $4$ $A$. This is to protect the rest of the circuit from excessive current which generates heat.
The current in this circuit, $\frac{2}{3}$ $A$, is considerably less than $4$ $A$, so the fuse will not blow.
|
2022-07-01 05:16:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 24, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5846289992332458, "perplexity": 339.62699660808124}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103920118.49/warc/CC-MAIN-20220701034437-20220701064437-00685.warc.gz"}
|
https://math.stackexchange.com/questions/387920/taking-a-derivative-of-the-same-equation-in-different-form-produces-different-re
|
# Taking a derivative of the same equation in different form produces different results
I have a feeling this question is going to have an obvious answer, but I'm left a bit puzzled. I have the following equation:
$$\tag 1 \frac{1}{S(x)} = \frac{W}{12}\cdot(1 - U(x)^2)$$, where $W$ is a constant
If I take a derivative of that equation with respect to $x$ as is, I come with the following results (solving for $S'(x)$):
$$\tag 2 S'(x) = \frac{W}{6}(S(x)^2)\cdot U(x)\cdot U'(x)$$
However, if I solve for $S(x)$ first and then take the derivative $(S(x) = (\frac{12}{W})/(1 - U(x)^2))$, I get the following result (again, solving for $S'(x)$):
$$\tag 3 S'(x) = \frac{24}{W}\cdot\frac{U(x)}{(1 - U(x)^2)^2}\cdot U'(x)$$
Shouldn't I arrive at the same answer regardless of the form of the equation? Or am I missing something fundamental here? Both $S(x)$ and $U(x)$ are continuous.
• Have you tried to use your original equation $(1)$ to see what you obtained in $(2)$ and $(3)$ is the same?
– Pedro
May 10, 2013 at 20:33
• What makes you think these are different? May 10, 2013 at 20:34
Use that $$S^2=\frac{12^2}{W^2}\frac{1}{(1-U^2)^2}$$ to prove what you get is the same
|
2022-10-06 09:59:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8848186731338501, "perplexity": 135.55597247044048}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337803.86/warc/CC-MAIN-20221006092601-20221006122601-00282.warc.gz"}
|
https://phys.libretexts.org/Bookshelves/Electricity_and_Magnetism/Book%3A_Electromagnetics_I_(Ellingson)/01%3A_Preliminary_Concepts/1.02%3A_Electromagnetic_Spectrum
|
$$\require{cancel}$$
# 1.2: Electromagnetic Spectrum
Electromagnetic fields exist at frequencies from DC (0 Hz) to at least 1020 Hz – that’s at least 20 orders of magnitude! At DC, electromagnetics consists of two distinct disciplines: electrostatics, concerned with electric fields; and magnetostatics, concerned with magnetic fields. At higher frequencies, electric and magnetic fields interact to form propagating waves. Waves having frequencies within certain ranges are given names based on how they manifest as physical phenomena. These names are (in order of increasing frequency): radio, infrared (IR), optical (also known as “light”), ultraviolet (UV), X-rays, and gamma rays (γ-rays). See Table $$\PageIndex{1}$$ and Figure $$\PageIndex{1}$$ for frequency ranges and associated wavelengths.
Definition: Electromagnetic Spectrum
The term electromagnetic spectrum refers to the various forms of electromagnetic phenomena that exist over the continuum of frequencies
The speed (properly known as “phase velocity”) at which electromagnetic fields propagate in free space is given the symbol $$c$$, and has the value $$\cong 3.00 \times 10^{8}$$ m/s. This value is often referred to as the “speed of light.” While it is certainly the speed of light in free space, it is also the speed of any electromagnetic wave in free space. Given frequency $$f$$, wavelength is given by the expression
$\underbrace{\lambda = \frac { c } { f }}_{\text{in free space}}$
Table $$\PageIndex{1}$$ shows the free space wavelengths associated with each of the regions of the electromagnetic spectrum. This book presents a version of electromagnetic theory that is based on classical physics. This approach works well for most practical problems. However, at very high frequencies, wavelengths become small enough that quantum mechanical effects may be important. This is usually the case in the X-ray band and above. In some applications, these effects become important at frequencies as low as the optical, IR, or radio bands. (A prime example is the photoelectric effect; see “Additional References” below.) Thus, caution is required when applying the classical version of electromagnetic theory presented here, especially at these higher frequencies.
Table $$\PageIndex{1}$$: The electromagnetic spectrum. Note that the indicated ranges are arbitrary but consistent with common usage.
Regime Frequency Range Wavelength Range
$$\gamma$$-Ray $$\mathrm{> 3 \times 10^{19} \; Hz}$$ < 0.01 nm
X-Ray $$\mathrm{3 \times 10^{16} \; Hz \; – \; 3 \times 10^{19} \; Hz}$$ 10–0.01 nm
Ultraviolet (UV) $$\mathrm{2.5 \times 10^{15} \; – \; 3 \times 10^{16} \; Hz}$$ 120–10 nm
Optical $$\mathrm{4.3 \times 10^{14} \; – \; 2.5 \times 10^{15} \; Hz}$$ 700–120 nm
Infrared (IR) $$\mathrm{300 \; GHz \; – \; 4.3 \times 10^{14} \; Hz}$$ 1 mm – 700 nm
Radio $$\mathrm{3 \; kHz – 300 \; GHz}$$ 100 km – 1 mm
The radio portion of the electromagnetic spectrum alone spans 12 orders of magnitude in frequency (and wavelength), and so, not surprisingly, exhibits a broad range of phenomena. This is shown in Figure $$\PageIndex{1}$$.
Figure $$\PageIndex{1}$$: Electromagnetic Spectrum.
Table $$\PageIndex{2}$$: The radio portion of the electromagnetic spectrum, according to a common scheme for naming ranges of radio frequencies. WLAN: Wireless local area network, LMR: Land mobile radio, RFID: Radio frequency identification
Band Frequencies Wavelengths Typical Applications
EHF 30-300 GHz 10–1 mm 60 GHz WLAN, Point-to-point data links
UHF 300–3000 MHz 1–0.1 m TV broadcasting, Cellular, WLAN
VHF 30–300 MHz 10–1 m FM & TV broadcasting, LMR
HF 3–30 MHz 100–10 m Global terrestrial comm., CB Radio
MF 300–3000 kHz 1000–100 m AM broadcasting
LF 30–300 kHz 10–1 km Navigation, RFID
VLF 3–30 kHz 100–10 km Navigation
Table $$\PageIndex{3}$$: The optical portion of the electromagnetic spectrum.
Band Frequencies Wavelengths
Violet 668–789 THz 450–380 nm
Blue 606–668 THz 495–450 nm
Green 526–606 THz 570–495 nm
Yellow 508–526 THz 590–570 nm
Orange 484–508 THz 620–590 nm
Red 400–484 THz 750–620 nm
|
2021-11-30 00:24:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.789053738117218, "perplexity": 2726.453988984942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358847.80/warc/CC-MAIN-20211129225145-20211130015145-00324.warc.gz"}
|
http://dotat.at/:/feed.html?2017
|
Here are some links to interesting web pages which I have encountered. This list is gatewayed to Delicious, Twitter, and LiveJournal. My main blog where I post longer pieces is also on LiveJournal.
The canonical URL for this page is <http://dotat.at/:/feed.html>. There's an Atom version at <http://dotat.at/:/feed.atom>. You can see which links are most popular. You can get versions with short or long links.
Jan Feb Mar --- --- --- --- --- --- --- --- ---
<< 2017 >>
Tony Finch is <dot@dotat.at>
$dotat: doc/web/cgi/url,v 1.16903 2017/03/30 23:00:01 fanf2 Exp$
|
2017-03-31 00:30:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2843680679798126, "perplexity": 2627.4988215407893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218205046.28/warc/CC-MAIN-20170322213005-00536-ip-10-233-31-227.ec2.internal.warc.gz"}
|
http://naturalmath.wikispaces.com/Divide+a+fraction+by+a+fraction?responseToken=0df4ecf8c443f5ae4f6692d7f940b23c5
|
# Divide a fraction by a fraction
Division of fractions is rare in everyday life because it’s too cumbersome for casual use. It is also rare in sciences, where people use decimals. We only want division of fractions to avoid a hole in our math theory. We have division, and we have fractions... Does our division work for fractions? If so, how?!
## One-page wonders
These are books about fraction division you can fold out of single piece of paper.
By Carol Cross and Madison Cross Sugg:
## Prerequisite concepts for fraction division
- ratio
- proportion (equivalent ratios)
- common denominators
- units and changes in unit sizes (unitizing)
If you understand these ideas, you are ready to…
## Learn To Divide Fractions In One Page!
Example Divide $\frac{2}{5}$ by $\frac{3}{4}$. Rephrase the question as, “What is the ratio of $\frac{2}{5}$ to $\frac{3}{4}$?” Visual method: the rectangle model Look at the picture until you see the answer! Hints: Slice one side of the rectangle into 5 and the other into 4. There are 20 total units.$\frac{2}{5}$ means 2*4 units out of 20$\frac{3}{4}$ means 3*5 units out of 20Their ratio is 2*4 to 3*5, or $\frac{8}{15}$ Numeric method 1: common denominator Write the ratio $\frac{2}{5}:\frac{3}{4}$Find a common denominator $\frac{2*4}{5*4}:\frac{3*5}{4*5}$ or $\frac{8}{20}:\frac{15}{20}$20 times more on both sides makes the ratio equivalent to 8:15Write as a fraction $\frac{8}{15}$ Numeric method 2: ratio to one Rephrase the question as, "The ratio $\frac{2}{5}:\frac{3}{4}$ is equivalent to the ratio of what to 1?"4 times more on both sides makes the ratio equivalent to $\frac{2*4}{5}:3$3 times less on both sides makes the ratio equivalent to $\frac{2*4}{5*3}:1$Carry out the multiplications: $\frac{8}{15}$
Create your own examples and solve them visually and numerically. People who worked out many examples summarized this algorithm. If you work with enough examples, you will come up with this or your own algorithm, too.
Algorithm
The result of dividing a fraction by a fraction is, again, a fraction. To find its numerator and numerator, cross-multiply. To find the numerator, multiply the numerator of the dividend by the denominator of the divisor. To find its denominator, multiply the denominator of the dividend by the numerator of the divisor.
For dividing positive or negative fractions, use the same rules that apply to integers to determine the signs.
$\frac{2}{5}:(-\frac{3}{4})=-\frac{8}{15}$and $(-\frac{2}{5}):(-\frac{3}{4})=\frac{8}{15}$
|
2017-06-25 12:14:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7654741406440735, "perplexity": 1282.9110303957054}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320491.13/warc/CC-MAIN-20170625115717-20170625135717-00609.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/prealgebra/prealgebra-7th-edition/chapter-7-section-7-1-percents-decimals-and-fractions-exercise-set-page-475/26
|
## Prealgebra (7th Edition)
$\dfrac{1}{50}$
To write a percent as a fraction, drop the % symbol and multiply by $\dfrac{1}{100}$. $2\% \rightarrow 2 \rightarrow \dfrac{2}{100} = \dfrac{1}{50}$
|
2018-09-25 23:01:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8037353157997131, "perplexity": 4261.128513522195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267162563.97/warc/CC-MAIN-20180925222545-20180926002945-00508.warc.gz"}
|
https://math.stackexchange.com/questions/3170450/find-all-functions-f-bbbr-to-bbbr-such-that-for-all-x-y-z-in-bbbr
|
# Find all functions $f:\Bbb{R} \to \Bbb{R}$ such that for all $x,y,z \in \Bbb{R}$ , $f(xf(x)+f(y))=x^2+y$
Find all functions $$f:\Bbb{R} \to \Bbb{R}$$ such that for all $$x,y, \in \Bbb{R}$$ , $$f(xf(x)+f(y))=x^2+y$$
We can easily get a strong condition $$f(f(y))=y$$ by setting $$x=0$$ . By this equation we know $$f$$ is injective and surjective. I got lost from there. By observation I know $$f(x)=x$$ and $$f(x)=-x$$ are solution. So I was trying to make $$x^2+y=f(xf(x)+f(y))$$ close to $$f(x)^2+y$$ or $$x^2+f(y)$$. Any hints would be helpful.
• If you substitute $x$ by $f(x)$, then using $f(f(x))=x$ you get $f(x)^2=x^2$. Apr 1, 2019 at 12:01
• Let $f$ be such a function. For all $x,y\in\mathbb{R}$ we have $$f(xf(x)+f(y))=x^{2}+y=f(-xf(-x)+f(y))$$ As $f$ is bijective it follows that $$xf(x)+f(y)=-xf(-x)+f(y)\Leftrightarrow f(x)=-f(-x).$$ From this it immediately follows that $f(0)=0$. Also let $x\in\mathbb{R}$ such that $f(x)=1$, then $x^{2}=f(xf(x)+f(0))=f(x)=1$, so $x=\pm1$. For all $x\in\mathbb{R}$ it now also follows that $$f(x(f(x))-f(x^{2}))=x^{2}-x^{2}=0\Leftrightarrow xf(x)=f(x^{2})$$ Apr 1, 2019 at 12:14
• artofproblemsolving.com/community/q1h1675275p10669235
– Sil
Apr 6, 2019 at 11:07
Let $$f:\mathbb{R}\to\mathbb{R}$$ and for $$x,y\in\mathbb{R}$$ denote by $$P(x,y)$$ the assertion $$f(xf(x)+f(y))=x^2+y$$. Assume that $$f$$ satisfies $$P(x,y)$$ for all $$x,y\in\mathbb{R}$$. Then $$P(0,y):\quad f(f(y))=y\implies f\text{ bijective}\\ P(f(x),y):\quad f(f(x)f(f(x))+f(y))=f(x)^2+y\implies x^2+y=f(xf(x)+f(y))=f(f(x)f(f(x))+f(y))=f(x)^2+y\\ \implies f(x)^2=x^2$$ We are no left to prove that either $$f(x)=x$$ for all $$x$$, or $$f(x)=-x$$ for all $$x$$, i.e. that $$f$$ doesn't jump around between $$x\mapsto x$$ and $$x\mapsto -x$$. For this, assume that there are $$a,b\in\mathbb{R}\backslash\{0\}$$ with $$f(a)=a$$ and $$f(b)=-b$$. Then $$P(a,b):\quad f(a^2-b)=a^2+b$$ Now if $$f(a^2-b)=a^2-b$$ then $$a^2-b=a^2+b$$ and thus $$b=0$$, contradiction. But if $$f(a^2-b)=-(a^2-b)$$ then $$-(a^2-b)=a^2+b$$ and thus $$a=0$$, so again contradiction. Therefore, such $$a,b$$ don't exist, and thus either $$f(x)=x\ \forall x$$ or $$f(x)=-x\ \forall x$$, and one can verify easily that this are indeed solutions to the equation.
Plugging in $$x=y=-1$$ shows that $$f(0)=0$$ and plugging in $$x=0$$ then shows that $$f(f(y))=y$$. This implies that $$f$$ is invertible, and plugging in $$y=0$$ shows that $$f(xf(x))=x^2=f(f(x^2)),$$ and hence $$xf(x)=f(x^2)$$ for all $$x\in\Bbb{R}$$. This shows that $$f(-x)=-f(x)$$ for all $$x\in\Bbb{R}$$, and that $$f(x^2+y)=xf(x)+f(y)=f(x^2)+f(y),\tag{1}$$ for all $$x,y\in\Bbb{R}$$, from which it follows that $$f$$ satisfies Cauchy's functional equation $$f(x+y)=f(x)+f(y).$$ Much has been said about this functional equation, which has many pathological solutions. Note that this means $$f$$ is $$\Bbb{Q}$$-linear, and if $$f$$ is either continuous at a point, bounded on an interval or monotonic on an interval, then $$f$$ is $$\Bbb{R}$$-linear and so $$f(x)=cx$$ for some $$c\in\Bbb{R}$$.
In an earlier version of this answer I rushed to the conclusion that $$f(x)=cx$$, which quickly implies that $$c=\pm1$$ and indeed both functions $$f(x)=\pm x$$ satisfy the functional equation.
• I fail to see how you can conclude that $f(x)=\pm x$ at the end. Apr 1, 2019 at 14:02
• @FlorisClaassens So do I, unfortunately I rushed to my conclusion. Apr 2, 2019 at 0:10
\begin{align} &\text{As already noted, we have }\qquad\qquad\qquad\qquad\qquad f(f(y)) = y \implies f(x) = f^{-1}(x) \\ &\text {Also, using the substitution } x\to y \text{ we get}\,\,\quad f((y+1)f(y)) = y^2+y \to f(0) = 0\\ &\text{using } f= f^{-1}:\quad f(x*f(x) + f(y)) = x^2+y \implies x*f(x) + f(y) = f(x^2+y)\\ &\text{from above line we then get}\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad f(x) = \frac{f(x^2+y) -f(y)}{x} \end{align}
From $$f(x) = \frac{f(x^2+y) -f(y)}{x}$$ we now obtain $$xf(x) = f(x^2)$$ by setting $$y=0$$.
A substitution $$x\mapsto f(x)$$ now yields $$f(x)x = f(f(x)^2)$$.
Combining those two equations, we get $$f(x^2) = f(f(x)^2) \implies x^2 = f(x)^2 \implies f(x) = \pm x$$
|
2022-06-28 04:20:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 71, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9914896488189697, "perplexity": 149.73605805055558}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103347800.25/warc/CC-MAIN-20220628020322-20220628050322-00285.warc.gz"}
|
https://vrcacademy.com/formulas/z-test-p-value/
|
## Z-test p Value Calculator
Use this calculator to compute the p-value of test based on normal distribution.
## $p$-Value of Z-test
If the test statistic $Z$ has standard normal distribution, then the $p$-value of the test
a. left-tailed hypothesis is $p$-value = $P(Z\leq z_{obs})$.
b. right-tailed hypothesis is $p$-value = $P(Z\geq z_{obs})$.
c. two-tailed hypothesis is $p$-value = $2P(Z\geq |Z_{obs}|)$.
|
2020-04-08 09:13:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8076297640800476, "perplexity": 2952.561810173284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371810807.81/warc/CC-MAIN-20200408072713-20200408103213-00152.warc.gz"}
|
http://www.gradesaver.com/textbooks/math/calculus/calculus-early-transcendentals-8th-edition/chapter-3-section-3-2-the-product-and-quotient-rules-3-2-exercises-page-188/14
|
# Chapter 3 - Section 3.2 - The Product and Quotient Rules - 3.2 Exercises: 14
$y'=\dfrac{2-x}{2\sqrt{x}(2+x)^{2}}$
#### Work Step by Step
$y=\dfrac{\sqrt{x}}{2+x}$ Differentiate using the quotient rule: $y'=\dfrac{(2+x)(\sqrt{x})'-(\sqrt{x})(2+x)'}{(2+x)^{2}}=...$ Rewrite the square root using fractionary power and continue with the differentiation process: $...=\dfrac{(2+x)(x^{1/2})'-(x^{1/2})(2+x)'}{(2+x)^{2}}=...$ $=\dfrac{(2+x)(\dfrac{1}{2}x^{-1/2})-(x^{1/2})(1)}{(2+x)^{2}}=...$ Evaluate the products and simplify: $...=\dfrac{x^{-1/2}+\dfrac{1}{2}x^{1/2}-x^{1/2}}{(2+x)^{2}}=\dfrac{x^{-1/2}-\dfrac{1}{2}x^{1/2}}{(2+x)^{2}}=\dfrac{\dfrac{1}{\sqrt{x}}-\dfrac{\sqrt{x}}{2}}{(2+x)^{2}}=\dfrac{\dfrac{2-x}{2\sqrt{x}}}{(2+x)^{2}}=\dfrac{2-x}{2\sqrt{x}(2+x)^{2}}$
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
|
2018-04-19 18:13:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7783224582672119, "perplexity": 1022.0535381525559}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937015.7/warc/CC-MAIN-20180419165443-20180419185443-00208.warc.gz"}
|
https://www.arpinvestments.com/arl/a-frail-new-world
|
Bespoke alternative investment solutions for institutional investors
Before you use our website we need you to know this. Click on each section for the full information.
We do not offer investment advice to private investors
Absolute Return Partners does not offer investment advice to private investors (Retail Clients as defined by the UK Financial Conduct Authority). All such investors are advised to contact Quartet Investment Managers on +44 20 8939 2920 or visit quartet-im.com.
Our website does not give investment advice
The information contained on the website you are about to access (Website) is for information purposes only and does not constitute and should not be construed as advice on which reliance should be placed, nor is it an offer by Absolute Return Partners LLP (ARP) to enter into any contract or investment agreement or a solicitation to buy or sell any investment in any jurisdiction or in any circumstances. Any information provided in relation to a specific fund is not intended to provide a sufficient basis on which to make any investment decision as any such decision requires careful study of the offering memorandum of the relevant fund.
Information about Unregulated Collective Investment Schemes is not intended for the general public
No information on the Website is intended to amount to the financial promotion of Unregulated Collective Investment Schemes which are not authorized or recognised by the UK Financial Conduct Authority (FCA) and cannot be promoted to the general public. Any such information is intended solely for certain classes of investors permitted to receive it under relevant legislation and regulations, including investors falling within the qualifying categories set out the Conduct of Business Rules contained in the FCA Handbook or in the Financial Services and Markets Act 2000 (Promotion of Collective Investment Schemes) (Exemptions) Order 2005, in each case as amended or replaced from time to time. This is because such investors are sufficiently experienced and sophisticated to understand the risks associated with such investments, including the possibility of a substantial loss or complete loss of their investment.
Our website stores information on your device
Our website uses technologies, such as cookies, to distinguish you from other users of our website, which helps us to provide you with a good experience when you browse our website and also allows us to improve our site. We do not use these technologies for third-party related advertising or for storing/collecting personal information. For more information please see our Privacy Policy.
Before accessing the Website you should carefully read the terms set out in our Terms of Website Use and our Privacy Policy as these will apply to the entire contents of the Website and to any correspondence between us and you.
By accessing any part of the Website you are indicating that you accept these terms and that you agree to abide by them. If you do not accept these terms, do not use the Website.
February 2016
# A Frail New World
We argue why the long-term outlook for GDP growth and for returns on risk assets is uninspiring. We are often ‘accused’ of allowing the negative long-term demographic outlook to colour our view on risk assets in general, but we argue why the demographic outlook is only one of (at least) four factors, which will hold back GDP growth as well as returns on risk assets in the years to come.
##### Preview
I may be surprised. But I don’t think I will be.
Andrew Strauss, Cricketer
U.S. bank lending is also negatively impacted by regulatory constraints, even if it is less visible to the naked eye. Pre-GFC, personal income and credit card debt grew at approximately the same rate in the U.S. and, once the crisis was (largely) over, one would have expected that trend to resume, but it hasn’t (Chart 2). Even if depressed levels of personal income are taken into account, total credit card debt is still some $350 billion short of the trend line. The most likely cause? Regulatory constraints in the banking industry. The anecdote from Europe and the credit card example from the U.S. basically tell one and the same story – that regulatory authorities on both sides of the Atlantic are desperate to avoid a repeat of the GFC, which has caused so much damage. If that anxiety hasn’t already resulted in a lower GDP growth rate, it certainly will. There is no doubt in my mind that the regulatory changes already implemented are only the beginning of a lot more to come. ## 2. The end of the debt super-cycle Some of the more drama-seeking commentators have called QE the biggest money printing experiment in history but, in reality, it is not. What they did in the Weimar Republic or in Zimbabwe was money printing; QE is not. I have been there before, and do not intend to repeat myself; suffice to say that whilst QE is not money printing, it has still had a rather dramatic effect on asset prices (both bonds and equities), and on financial leverage as well. Having said that, overall leverage began to rise decades ago, long before anyone had ever heard of QE. Debt super-cycles last 50-75 years on average (Source: Ray Dalio, Bridgewater Associates), and one school of thought is that the debt super-cycle we are in now started in the aftermath of World War II, as lots of re-construction was needed. However, if one looks at long-term debt charts, one could also argue that the growth in debt didn’t really take off in a meaningful way until the early 1980s – at about the same time as the start of the great bull market in both bonds and equities (Chart 3). In a debt super-cycle, as the cycle advances, economic growth is increasingly driven by a combination of growth in debt and money (as in money supply). Having said that, there are obviously limits to how much spending growth can be financed by debt and money. When that point is reached, you are at the end of the debt super-cycle. John Maynard Keynes called it Push on a String, when he first described the phenomenon in 1935. Nowadays, it is often called the Liquidity Trap. What we are now beginning to see are the first signs of Push on a String. When that happens, monetary policy is already so accommodating that further rate cuts will have virtually no effect on economic growth. QE also becomes largely ineffective, as risk premia are too low to drive investors to assume more risk. The very low returns that we currently see across almost all asset classes is a classic sign that we are approaching the end of the debt super-cycle. Another sign is the no-growth in money supply. The U.S. Fed stopped the third instalment of QE in October 2014; at about the same time, the rapid growth in U.S. money supply stopped and has been moving broadly sideways ever since (Chart 4). With private debt having peaked almost everywhere (Chart 5), and without an ever increasing supply of money, it is no wonder that economic growth is finding it difficult to gather momentum, and nor should we be surprised that risk assets are struggling. Whether one can conclude that the end of QE equals the end of the debt super-cycle is another question. Very accommodating monetary authorities have certainly contributed to the growth in lending, but one cannot point fingers at only one guilty party. After all, it isn’t enough that banks are willing to lend. Borrowers must also be willing to borrow. ## Wealth-to-GDP to normalise Not for the first time, I have taken advantage of Woody Brock’s extensive knowledge when writing this letter. (Note: Woody Brock is an economic consultant to Absolute Return Partners.) The following is actually quite theoretical in nature, and I apologise in advance, if I lose one or two readers over the next few paragraphs, but here we go. An advance in econometrics in recent years has led to findings that weren’t previously possible. One such finding is that certain ratios – e.g. wealth-to-GDP – have well defined long-term mean values. The mean value for U.S. wealth–to-GDP is 3.7. (The same approach can be applied to P/E ratios, and the mean value for the U.S. P/E ratio is approx. 15.) The ratio is no less than 4.75 at present, implying that wealth is going to drop significantly at some point in the future (or, at the very least, grow much more slowly than GDP for an extended period of time). It ought to be said that a drop in wealth-to-GDP from 4.75 to 3.7 would in all likelihood result in vast losses of wealth. The last time the Americans experienced a significant and sustained loss of wealth was during the 1966-81 period, where wealth fell by over 3% per year. Then, in the early 1980s, the great bull market took over, which resulted in almost incomprehensible gains in wealth. Total U.S. wealth, as defined by the Federal Reserve, went from$11.5 trillion in 1981 to $85 trillion in 2015, leading to a wealth-to-GDP ratio of 4.75. It is a large part of this increase in wealth that would have to be given up again, if the ratio goes back to its long-term mean value. (Note: I have only referred to U.S. numbers here, as I don’t have access to corresponding numbers from anywhere else, but they are not likely to be vastly different in most other countries, given that the great bull market has been almost global in nature.) In order to understand the logic behind it all, think of wealth as capital and GDP as output. The wealth-to-GDP ratio is therefore the capital-to-output ratio, and a ratio of 3.7 implies that it takes$3.7 of capital to produce \$1 of output. Hence, the ratio is effectively a capital efficiency ratio, and the lower the ratio is, the more efficiently a country utilizes the capital at its disposal. I note that the U.S. enjoys one of the lowest mean values in the world. Here in Europe, the long-term mean value of wealth-to-GDP is 4.7 by comparison.
Now to the tricky part. Why is the ratio stable – at least in the long run? The reasons lie deep within growth theory, which few investors will ever have heard of and even fewer will understand (me included). If the nature of the production function is Cobb-Douglas in mathematical terms, then the capital-to-output ratio – and hence the wealth-to-GDP ratio – must be long-term stable; and it is.
(Note: A Cobb–Douglas function is a production function that is typically used to represent the relationship between the amounts of inputs (particularly capital and labour) and the amount of output that can be produced by those inputs.)
There is another way to think of this. If the amount of capital relative to output is very high (as it is at present), then the return on that capital will be reduced by competition, and people will save less. This will lower the amount of capital over time and will thus lower the capital-to-output ratio.
The obvious next question to ask is when will the ratio mean-revert? There is not much point in knowing that wealth-to-GDP will drop at some point, if it won’t happen in our lifetime. This is, however, not so simple; there is no mathematical solution to that question. That said, if one understands the circumstances that have driven the wealth-to-GDP ratio to 4.75, one can arrive at some conclusions.
The easy one first. When you plot the ratio over the last 100 years, no single set of data explain much of the variations in the value of the wealth-to-GDP ratio. In other words, no single factor can explain why wealth-to-GDP is so high at present. In many ways, it would be much simpler if the extraordinarily low interest rates that we enjoy at the moment could explain all the variation, but that is not the case. A combination of factors appear to be at work.
Back in December, I wrote an Absolute Return Letter named The Next Driver of Productivity. In the letter, I argued that automation will intensify in the years to come, and that many jobs will be lost as a result. Automation requires capital, of which there is plenty at the moment, and at a very affordable price. As significant amounts of capital is ploughed into automation, the capital-to-output ratio (and hence the wealth-to-GDP ratio) will fall, or so I think.
One last thought. When I first tuned into this topic, I thought wealth-to-GDP was high mainly as a result of low interest rates, but could it be the other way round? Could it be that interest rates are not (only) low because of QE, but (also) because the world is awash in capital? It certainly looks like it, even if almost the entire world tend to think that QE is to blame.
## A deteriorating demographic outlook
The final reason why I think both GDP and asset prices will undershoot most expectations in the years to come is the worsening demographic outlook, which I have written quite extensively about in recent months, so allow me to skate over this topic fairly quickly.
Remember what I said a few months ago. At the most basic level, only two factors drive GDP growth, and that is workforce growth and productivity growth. Productivity changes can be quite difficult to predict, but workforce growth is straightforward. With a high degree of accuracy, we can forecast what is going to happen to the workforce for many years to come, and it is not a pretty picture (Chart 6).
Of all the major nations around the world, only the U.S. will experience actual growth in the workforce, and it will be very modest. In Japan, which will be hit the hardest (together with South Korea and China), the fall in the workforce translates into approximately -1% in annual rGDP terms. In other words, only if productivity grows by more than 1%, will Japan see any growth in GDP.
The fall in Europe’s workforce will impact European GDP growth by approximately -0.5% per year, and here in the UK, where the impact is modest compared to most other countries, the impact will only be marginally negative.
So the question begs: How much can productivity grow, and could it offset the impact from a fall in the workforce? The two fastest growing periods post World War II was (i) the mid-1950s to the mid-1960s, and (ii) the mid-1990s to the mid-2000s. The first was driven by the infrastructure revolution (better roads, private cars for everyone and an emerging airline industry), whilst the second is now known as the dot com boom.
Despite the significant impact those inventions had, overall productivity didn’t rise by more than 2-2½ % per annum (Chart 7). I am therefore very comfortable with my prediction that productivity enhancements will not fully offset the lower GDP growth to be expected from a fall in the workforce. GDP growth can only disappoint for many years to come.
## Where could I possibly go wrong?
Regrettably, I am not like the economist in my little opening story. I do occasionally get things wrong, even if it is rare (hmm). If I get this one wrong, it could happen in a number of areas.
As far as demographics are concerned, DM countries could decide to open the gates and let in millions of migrants. Not very likely to happen if you ask me, but it could. Secondly, the trend towards increased automation could possibly raise productivity to levels we have never experienced before. I don’t think it would ever fully offset the loss of GDP from a fall in the workforce, but it could mute the effect.
Could wealth-to-GDP stay high forever? We already know that there are dramatic technology shocks in the pipeline over the next several years. I discussed that in December. We also know that the amount of capital needed per unit of GDP is rising with an increase in automation. Could you possibly deduce from that, that the capital-to-output ratio will stay at elevated levels forever as a result?
For that to happen, the production function of our economy would have to change. Paul Jones at Stanford University has shown that, over the very long term, the production function is indeed Cobb-Douglas in nature and will not change. I can therefore confidently say that timing is all I could be wrong on, as far as normalisation of the wealth-to-GDP ratio is concerned.
The debt super-cycle next. The private sector debt super-cycle has almost certainly come to an end. The scars from 2008 are still horrific and have fundamentally changed the attitude towards leverage – both amongst borrowers and lenders.
Having said that, the private sector is only half the story (see Chart 5 again). Governments all over the world continue to grow debt and, whilst such a strategy will only work as long as interest rates remain very low, the case of Japan has demonstrated to the rest of the world that public debt can be almost unimaginably high.
Finally, as far as regulatory changes are concerned, the more GDP slows as a result of a falling workforce, the more lightly regulatory authorities are likely to regulate the banking industry. It is, after all, in nobody’s interest that there is no GDP growth at all.
## Conclusion
It should now be obvious that one or two things could indeed be done to address the challenge(s) we are facing. That said, should the regulatory authorities for example decide to go relatively easy on our banks, there are still plenty of other structural trends that will happen regardless, which between them will hold back GDP growth – and returns on risk assets.
Bond yields are likely to stay comparatively low for much longer than many expect, partly because inflation is likely to remain subdued, as it almost always is when GDP growth is low, and partly because policy rates will be kept relatively low in order to stimulate economic growth. Having said that, policy rates will, a few years from now, likely be meaningfully higher in the U.S. than in Europe.
Equities are likely to deliver very modest returns, at least when compared to the returns we have enjoyed since the early 1980s. If GDP only grows between 0% and 1.5% per annum as we expect, and risk-free rates of return stay near current levels, assuming the equity risk premium doesn’t meaningfully change, mid-single digit annual returns are the best we can hope for going forward. Those estimates are obviously average numbers, and actual equity returns could vary substantially from those averages from one year to the next.
Commodity returns may behave a little differently. At first glance, low GDP growth is obviously not conducive to high commodity returns, but China has had a profound effect on many commodity prices since the Chinese rout started last year; hence the commodity story is quite different from the equity story, but more about that next month.
##### Investment Megatrends
Our investment philosophy, and everything we do at ARP, is driven by the long-term Investment Megatrends which are identified and routinely debated by our investment team.
##### Related Investment Megatrends
Our investment philosophy, and everything we do at ARP, is driven by the long-term Investment Megatrends which are identified and routinely debated by our investment team. Read more about related Megatrend/s for this article:
|
2020-07-13 15:32:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26930439472198486, "perplexity": 1421.0023330439838}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657145436.64/warc/CC-MAIN-20200713131310-20200713161310-00156.warc.gz"}
|
http://mathhelpforum.com/calculus/44630-area-region-enclosed-polar-equation-print.html
|
# Area of a region enclosed by polar equation
• July 27th 2008, 06:53 PM
auslmar
Area of a region enclosed by polar equation
I have this problem that reads "Find the area enclosed by $r^2$ = $4cos(\theta)
$
."
so, what I ended up with
2 $
\int_a^b$
$cos(\theta)$ $d\theta$
I may have messed up in finding the correct integral, but I think the problem is that I don't know what to choose for $a$ or $b$. Can anyone help explain how I would go about choosing proper values for $a$ and $b$?
P.S. I seem to be having trouble with the LaTex, is anyone else having trouble?
• July 27th 2008, 07:00 PM
Chris L T521
Quote:
Originally Posted by auslmar
I have this problem that reads "Find the area enclosed by $r^2$ = $4cos(\theta)
$
."
so, what I ended up with
2 $
\int_a^b$
$cos(\theta)$ $d\theta$
I may have messed up in finding the correct integral, but I think the problem is that I don't know what to choose for $a$ or $b$. Can anyone help explain how I would go about choosing proper values for $a$ and $b$?
P.S. I seem to be having trouble with the LaTex, is anyone else having trouble?
Have you tried to graph the Lemniscate? The limits are a little tricky, but its not that complex...take into account that $4$ $\cos\theta$ can't be negative...
Just wondering, what did you get for your limits of integration? You're integral set up is correct.
--Chris
• July 27th 2008, 07:34 PM
Chris L T521
Quote:
Originally Posted by auslmar
I have this problem that reads "Find the area enclosed by $r^2$ = $4cos(\theta)
$
."
so, what I ended up with
2 $
\int_a^b$
$cos(\theta)$ $d\theta$
I may have messed up in finding the correct integral, but I think the problem is that I don't know what to choose for $a$ or $b$. Can anyone help explain how I would go about choosing proper values for $a$ and $b$?
P.S. I seem to be having trouble with the LaTex, is anyone else having trouble?
http://img.photobucket.com/albums/v4...e2143a3-19.jpg
Does this make sense?
--Chris
• July 27th 2008, 07:37 PM
Soroban
Hello, auslmar!
By the way, this is not a lemniscate.
. . It could be called "the square root of a circle."
Quote:
I have this problem that reads: Find the area enclosed by $r^2 = 4$ $\cos\theta$
so, what I ended up with: .2 $
\int_a^b$
$cos(\theta)$ $d\theta$
I may have messed up in finding the correct integral.
Set r = 0 and solve for θ.
. . 4 cos θ .= .0 . . θ .= .±½π
and there are our limits . . .
• July 27th 2008, 08:01 PM
Chris L T521
Quote:
Originally Posted by Soroban
Hello, auslmar!
By the way, this is not a lemniscate.
. . It could be called "the square root of a circle."
Um...
http://img.photobucket.com/albums/v4...e2143a3-20.jpg
http://img.photobucket.com/albums/v4...Lemniscate.jpg
It might not be a lemniscate per se, but it has the form of one.
--Chris
|
2014-03-15 17:31:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 34, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9411276578903198, "perplexity": 422.563421955418}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678699073/warc/CC-MAIN-20140313024459-00061-ip-10-183-142-35.ec2.internal.warc.gz"}
|
http://merajutnusantara.com/o72xhsj/percentage-word-problems-year-7.html
|
# Percentage word problems year 7
Grade 7 math word problems with answers are presented. ", as opposed to "per cent". Here you will find our selection of Percentage Word Problems worksheets, which focus on how to find a missing percentage and also how to solve percentage of number problems by the Math Salamanders The latin word Centum means 100, for example a Century is 100 years. So, take a stroll down memory lane to remember all of our past Word of the Year selections. Solved examples with detailed answer description, explanation are given and it would be easy to understand. It is an opportunity for us to reflect on the language and ideas that represented each year. 30)(340 mg) = 102 mg Now subtract the 30% decrease from the original dosage. Printable worksheets and online practice tests on word-problems-on-percentage for Year 7. Year 3 (age 7-8) Number and Place Value Fraction word problems. Year 6 Ratio: Solve Problems Involving Percentages Maths Mastery Challenge Cards - Use this set of challenge cards to reinforce your teaching of year 6 ratio: problem solving maths mastery and test your students' knowledge. A class of sixty voted for class president. This free percentage calculator computes a number of values involving percentages, including the percentage difference between two given values. $2. • Recall known facts, including fraction to decimal conversions; use known facts to derive unknown facts, including products such as 0. This math video tutorial explains how to calculate the percent of change using the percent increase and decrease formula. So, if a family originally paid £1000 per year for their energy, and their bill was increasing by £200, this is a 20% increase. 3, 7. ppt (PowerPoint 2003/93kb) Counting . The Sweater Shack is offering a 20% discount on sweaters. Percentage Practice Worksheet 2 - Students will find the percent of a given number. Now that you can do these difficult algebra problems, you can trick your friends by doing some fancy word problems; these are a lot of fun. Problems. When finished with this set of worksheets, students will be able to solve word problems involving ratios, fractions, mixed numbers, and fractional parts of whole numbers. 7. Quickly access your most used files AND your custom generated worksheets! Please login to your account or become a member and join our community today to utilize this helpful feature. 4. Algebra 1 Here is a list of all of the skills students learn in Algebra 1! These skills are organized into categories, and you can move your mouse over any skill name to preview the skill. N Worksheet by Kuta Software LLC Kuta Software - Infinite Pre-Algebra Name_____ Percent Word Problems Date_____ Period____ Start studying Percent Word Problems. 0. 50 per bag. Welcome to the Math Salamanders Year 6 Mental Maths Tests. Showing top 8 worksheets in the category - 7th Grade Math Word Problems. 7+, 11+, 13+, SATs, GCSE, and A Level - Expert Tutors . answer: 65% 3. 62 Groceries -$124. Decimals, Percents, Algebra, and Geometry. Word problems are the best math problems, and we're here to help you solve them. Percentage questions in non-calculator tests tend to work with comparatively easy percentages, and here are the steps you take to work out a number as a percentage of another, for example, 45 as a percentage of 180: Label the smaller number as the ‘part’ and the larger number the ‘whole thing. Divide by the remaining number Jerry, an electrician, worked 7 months out of the year. This set of worksheets includes a mix of addition and subtraction word problems. 1. Our online percentage trivia quizzes can be adapted to suit your requirements for taking some of the top percentage quizzes. Danny just hired a new employee to work in your bakeshop. This will . To start practicing, just click on any link. Skip navigation This ensemble of percentage worksheets is tailor-made for students of Grade 6 and Grade 7. The diameter of the half circle is 12 inches. 00 If you borrow $675 for six years at an interest rate of 10%, how much interest will you pay? Word Usage: Percent or Percentage? The words percent and percentage are closely related—does it matter how they are used in a sentence? Read this word usage tip to find out. Math Worksheets Word Problems Mixed Addition and Subtraction Word Problems Word Problems: Mixed Addition and Subtraction Word Problems. Discusses the difference between 'absolute' and 'relative' values, and how these values can be misused. The relationship between principal (P), interest rate (r), length of time the money is invested (t), and earned interest (I) is given by the following formula: Two-step and multi-step problems explained for primary-school parents and ideas for the types of problems children might be asked to solve. to 4 p. This year he has 62 ninth graders and 104 tenth graders. 99 Electricity -$45. If it is reduced to a width of 3 in then how tall will it be? Word Problems Calculators: (38) lessons If you cannot find what you need, Given the 3 items of a markup word problem, cost, markup percentage, and sale price HFCC Learning Lab PERCENT WORD PROBLEMS Arithmetic - 11 Many percent problems can be solved using a proportion. What is the balance in an account at the end of 10 years if $2,500 is deposited today and Calculate the Simple Interest for the Word Problems: 1. 00, how much do you have left each week after you pay for the following? Rent -$443. A. 03, 0. 5 miles more In multi-step word problems, one or more problems have to be solved in order to get the information needed to solve the question being asked. Fractions Word Problems – Grade 8 Solve these on a separate sheet of paper. These worksheets practice math concepts explained in Fraction and Decimal Word Problems: No Problem! (ISBN: 978-0-7660-3371-9), written by Rebecca Wingard-Nelson. The sections below contain two-word problem worksheets for students, in section Nos. How much did the small business pay in taxes last year? For Questions #11-12: A magazine advertises that a subscription price of $29. The following word problems may require you to add, subtract, multiply or divide fractions. Imagine the chaos at a birthday party without Fraction and Decimal Worksheets for Year 6 (age 10-11) In Year 6 there is even more work on equivalent fractions and using common factors to simplify fractions, often called cancelling. Final Thoughts about Math Word Problems. How we use bar modelling here at Third Space Learning. Examples. In the previous Lesson, we saw how to find what percent one number is of another. The percentage is expressed in its equivalent fraction form and you then multiply it by the quantity to get the answer. 30 X 50 to find the percent: . If you want to teach kids how to solve percent problems, then it is a good idea to give them different kinds of problems. π \pi π. Therefore, Sophie has 7 dimes. Designed to help children to visualise the relationships between numbers when answering word problems involving ratio and scaling quantities. The following collection of resources have been assembled by the TES Maths Panel. We'll also solve interesting word problems involving percentages (discounts, taxes, and tip calculations). Summary: In this lesson we learned how to solve word problems involving decimals. exponential equations worksheet algebra 1. aj050. Write his test score as a percent. Practice solving percentage word problems by setting up a proportion and solving. In other words, the old value is £500 and it has been increased by 6%. Example #7: Algebra word problems can be as complicated as example #7. A pair of trainers normally costs £75, but they are offered for 10% off in the sale. Next, solve the resulting equation by dividing both sides by 15. If you still do not understand them, I strongly encourage you to study them again and again until you get it. From problems that require basic calculations to more complex percentage word problems, kids should be able to solve them all. How much time does he take to walk a distance of 20 km? Example 4. 3. So calculate 30% of 340: (0. Percent vs Percentage My Dictionary says "Percentage" is the "result obtained by multiplying a quantity by a percent". . Percentage Problems . Common errors • In a word problem, pupils may not recognise On this page you will find: a complete list of all of our math worksheets, lessons, math homework, and quizzes. Now 10% of £75 is £7. Students apply their understanding of percent 7( . Students use BrainPOP resources to define and give examples of ratios, proportion, and percents, then solve related real-world word problems and equations. 5% of £288. Then he will get a 10% raise for the remainder of the year. In British English, percent is usually written as two words (per cent), although percentage and percentile are written as one word. Also, in the above problem, encourage your child to make connections between the 7. Though there are many online options available for Word Problems courses, My11. This Key stage 3 Maths syllabus page details all aspects of the year 7 Maths curriculum. Centre for Teaching and Learning | Academic Practice | Academic Skills | Digital Resources Use equations to solve harder percentage problems. 9. 5. division worksheets grade 7 percentage word problems year 6 worksheets Larson algebra 2 extra practice answers infinite algebra 1 answers algebra and trigonometry algebraic expressions terms and coefficients super star pre algebra with pizzazz answers get the message algebra with pizzazz answers types of word problems in algebra All You Add Is Algebra. uk for more great papers covering maths, english, verbal reasoning and non-verbal reasoning Time Look at the timetable below and then answer the questions that follow:- Train 1 Train 2 Train 3 Train 4 Umberside 7. In American English, percent is the most common variant (but per mille is written as two words). 501 Math Word Problems is designed for many audiences. The exercises are designed to aid your study of mathematics by reinforcing important mathematical skills needed to succeed in the everyday Basic "Percent of" Word Problems (page 1 of 3) Sections: Basic percentage exercises, Markup / markdown , General increase / decrease When you learned how to translate simple English statements into mathematical expressions, you learned that "of" can indicate "times". 8. Maths Home Learning Task Year 7 Problem Solving Name Tutor Group Break up problems into simpler parts and give a logical Look at the diagram showing the word Fractions, decimals and percentages (Year 8) • Begin to use the equivalence of fractions, decimals and percentages to compare proportions. Solving percentage problems is an important skill that you'll learn in 7th grade. 4) 7. The heat is turned up on geometry as they start to introduce just a little bit of trigonometry too. Since very few problems in life are clear cut Finding a percentage. What integer x makes x/9 lie between 71/7 and 113/11? 0. Julie got 98 out of 140 for a science exam, what percentage did she get? A. These printable math worksheets for every topic and grade level can help make math class fun for students and simple for teachers. Proportion Word Problems Date_____ Period____ Answer each question and round your answer to the nearest whole number. How to Teach the Bar Model Method to Ace Arithmetic and Word Problems in KS1 & KS2 Maths. The price of a TV -set is 1,150$. Everyone can develop their ability to mentally calculate 50%, 25%, 10%, 75% and There are also other notes and worksheets for years 7 to 11. If he sold 360 kilograms of pears that day, how many kilograms did he sell in the morning and how many in the afternoon? Percentage Increase and Decrease Year 6 Reasoning and Problem Solving with answers. This is a 2 to 3 ratio CHAPTER 2 WORD PROBLEMS Sec. Year 7 - Percentages - Homework Complete the questions below in the back of your book. 17 < 2 < 1. Percentages for class 7 Two step word problems using 10%, 25%, 50% and 75%. 18. Callie's grandmother pledged $0. Year 7 Maths Curriculum. 9) Bar Graph and Word Problems Students practice their bar graph reading skills by interpreting the information provided and answering questions on this math printable. The chef has 25 pounds of strip loin. Welcome to the Word Problems for 11 Plus course, the only course you need to learn, revise and practice Word Problems for the 11 Plus Exam. Try this Multi-step word problems worksheet to practice with problems that are similar to examples 5 and 6 above. Percentage Practice Worksheet 1 - Percent word problems. “Banks have lowered the rate of interest on fixed deposits from 8. Help your child get ahead with Education resources, designed specifically with parents in mind. It provides 501 problems so you can flex your muscles with a variety of mathematical concepts. Here you will find notes, practice questions and solutions for GCSE, arranged by subject area (Number, Algebra, Shape and Space, Handling Data), and by topic. Musher Math Word Problems Worksheet This worksheet offers fifth graders a chance to learn about a fascinating true story, and also provides some great related word problems practice. 55% of 200 4. The NRICH Project aims to enrich the mathematical experiences of all learners. Worksheets by Grade. k12. Listed below are word lessons that focus on giving students instruction on how to solve most types of word problems commonly found in algebra, geometry, and trigonometry. m. Percentage Of . of another kind of mixed nuts that contain 40% peanuts. 30 Multiply . 80% of 125 6. James Hind, of Nottingham Trent University, said the confusing question is above the level it was set for and to reach a conclusion it is best to try a number of #1 Jessica bought 8/9 of a pound of chocolates and ate 1/3 of a pound. Encourage children to use diagrams to help them solve the problem. For example, if the interest rate is 10% per year, how many years will it take before you double your money? (A spreadsheet would be useful. For ease of grading, identical worksheets, including the answers, are printed in section Nos. The price of a watch Level 7-8 Numbers - Percentages - Increases and Decreases There are three ways to represent fractions of numbers - fractions, decimals and percentages. This activity would make a great introductory lesson or a check for understanding in class or at home. Percentages - Word Problems 1. 1 algebra 2. There are three types of word problems associated with percents: Type A: . You'll learn to use percentages to calculate tips and taxes, and you'll also learn to calculate the percent of change between two amounts. Rotation Math Problems. Answer Key. Numeracy Word Problems . Find the fraction. In this Lesson, we will emphasize a second type of word problem. quick math test. Read each problem carefully to choose the correct operation. (c). hardest math equation. Resources. 23 Questions There are 321,200 people who played tennis last year, this year the number of Multi-step Percentages Maths Word Problems Differentiated Worksheets 4th question of the first set in Two-Step Word Problems Percentage of Amounts is not clearer Percents Here is a list of all of the skills that cover percents! These skills are organized by grade, and you can move your mouse over any skill name to preview the skill. 3. All problems like the following lead eventually to an equation in that simple form. Showing top 8 worksheets in the category - Year 7 Maths. Fractions, decimals and percentages are found everywhere throughout mathematics and in your daily life.$453. 95% of 250 7. plus is one of the most effective on the market. algebra 2 polynomial functions worksheet. The result is d = 7. If he walks at 9 mph, he covers 7. kasandbox. What percentage of his pencils was red? Question 3 Some examples of percent word problems. What is the growth factor? Explain. The least common multiple of 5 and 3 is 15, so multiply the numerator and denominator of the each fraction to make the denominators 15. Jennifer What percent of the total calories come from fat? 7. " This selection will show you how to solve word problems involving percents. A Collection of Math Word Problems for Grades 1 to 6 Add/Subtract - One Step Word Problem Set 7 Word Problem Set 8. Here you will find a wide range of Mental Maths Worksheets aimed at Year 6 children which will help your child to learn number facts and practise their number skills. solve for x grade 9. To start practising, just click on any link. Percentage Basics 1 - illustrates 25%, 50%, 75% and 100% Percentage Basics 2 - illustrates 20%, 10%, 90% and 80% The above sheets can both be cut into three and used for discussion. Home; Word Doc PDF. Out of 90 candies, 30% are chocolate candies. any of the problem solving questions in this booklet can be solved using a bar modelling method. Dividing decimals may present a challenge for your child. In one hour the employee burned 625 chocolate chip cookies. Using the Proportion Method to Solve Percent Problems There are a variety of ways to solve percent problems, many of which can be VERY confusing. Find how much it depreciates over one year by dividing the cost by 5;. (a). The tax rate was 9. Children should see that sometimes it is easier to scale numbers in one 'direction' than the other, and that they can choose the most efficient method. Proportion and Percent Worksheet - Can your students solve these percentage word problems? Proportion to Percentage - Finding percentages worksheet. 08. As word problems often involve a narrative of some sort, they are occasionally also referred to as story problems and may vary in the amount of language used. These worksheets provide students with real world word problems that students can solve with grade 5 math concepts. These money word problems worksheets engage students with real world problems and applications of math skills. Keep It Simple for Students (KISS) By Jerome Dancis Executive Summary, Introduction and Conclusions. Version 1 - Order of Operations and Decimal Products; Version 2 - Fractional and Decimal Free Math 7 Worksheets. If a person walks at 4 mph, he covers a certain distance. For problems that . As of September 3, 2019, all WORD Problem tutorials have been reprogrammed as lessons with answers . Percentage. Jerry, an electrician, worked 7 months out of the year. How much Percentage One Quantity is of Another? Percentage of a Number. National Curriculum Objectives. A builder mixes one part of cement with four parts of sand to make mortar. These word problems help children hone their reading and analytical skills; understand the real-life application of math operations and other math topics. sd. 4 x 6Mra Zdne 2 5wli Ftehe pIvn efGisn 1iWtmen 1PUrXeB-vA DlBg3e cb ErRaC. YEAR 1 - Place value - White Rose - WEEK 4 - Block 1 - Autumn Percentages level 5 to 7 · Kimberley Jane Anderson This Word Problems Worksheet will produce problems that focus on finding and working with percentages. In Chicago in the year 2000, there were approximately 1. They can be downloaded for free by registering on the TES website. Percentages. 55 How many of each type of coin does she have? Solution Let x be the number of quarters. RP. You have the option to select the types of numbers, as well as the types of problem you want. The resource Decimals, Percentages and Money contains an activity designed to enable students to make connections between word problems involving money and their mathematical representations. each weekday. Practice solving word problems involving percents. A collection of short problems on fractions, decimals and percentages. “Voters turnout in the poll was over 70%”. Grade 6 Percents Word Problems Name: _____ Class: _____ Question 1 Father gave me some money. (b). These percentage word problems worksheets are appropriate for 3rd Grade, 4th Grade, 5th Grade, 6th Printable worksheets and online practice tests on Percentage for Grade 7. The trim Practice solving word problems involving percents. • Most of the CAT percentage questions will combine more than one topic in the problems and will require you to use some degree of logic to make equations in the problems. IXL will track your score, and the questions will automatically increase in difficulty as you improve! 7th Grade Math Word Problems. A plethora of exercises like finding the percent of the shaded region, finding percent of a whole numbers and decimals, comparing quantities, well-researched word problems and a lot more are available here. many are first-year students? A series of percentage problems for Year 5 (some tricky ones) a Resource! You want it? We'll make it; 24/7 customer support (with real people!) Sign Up Now Percentages - Topic wise seperated questions from one hundred 11+ Papers with detailed answers. Example 1. Fred's employer will withhold 15% of his gross salary for federal tax, state tax. Percentages for class 7 Seventh Grade Do Now Math Worksheets. 20. Fill in the missing spaces: Fraction. World of Percentage ‘World of Percentage’ is a highly recommended online percentage resource for kids who are gradually getting introduced to the complex world of percentage word problems. Let me remind you that 1% of some number is one hundredth (1/100) part of the number. You have the option to select the types of numbers, Here you will find our selection of Percentage Word Problems worksheets, which focus on how to find a missing percentage and also how to solve percentage of Can I solve word problems involving percentages? 1. 30 X 50 = 15 15 is 30% of 50. 7th Grade Math: Percent of Change and Percentages. Work in a step by step manner . The problems are grouped by addition and subtraction (appropriate for second or third grade students), or multiplication and division (appropriate for fourth or fifth grade students who have mastered decimal division), or combinations of all four operations. 30 7. They are all aligned to the 7th grade core math learning standards. The formula to calculate this is: (£200/£1000) x 100 = 20% Using percentages like this is also useful for comparing changes to different numbers, where it can be difficult to see at a glance what the impact has been. Word Problems Royal Russell School Sample Year 7 Maths Practice Paper. Fun maths practice! Improve your skills with free problems in 'Solve percent equations: word problems' and thousands of other practice lessons. Year 1. For example, you will not find division problems in the grade 1 material. What percent of the new mixture is peanuts? We have free math worksheets suitable for Grade 7. and social security. The author interviews a math professor: Math expert, Dr. Demonstrates how to set up and solve 'increase-decrease' word problems. At the end of one year he will receive an additional 8% raise if he does well on his performance review. Below are four sample problems showing how to use Chebyshev's theorem to solve word problems. We used the following skills to solve these problems: reading and writing decimals, comparing and ordering decimals, estimating decimal sums and differences, and adding and subtracting decimals. Math word problem worksheets for grade 5. Test your understanding of simple percentages with this self-marking quiz. Great to kick off or end a class after a math lesson. Solving Proportion Word Problems Answer each question and round your answer to the nearest whole number. 50 and the 3 as "seventy-five cents" and "three. Can you work out the answers? Use the multiple choice boxes on the right to select your answer. Free pdf worksheets from K5 Learning's online reading and math program. If 3 be added to both, the fraction becomes 3/4. What percent Sometimes the information needed to solve a percent word problem is not stated directly. 13 to a percentage by multiplying by 100. 340 mg – 102 mg = 238 mg. 11 May 2011. 9); Percent of change from 30 to 57 is 90%. The rule for using percent and percentage is straightforward. Which amount did I choose? Question 2 Mike had 180 blue and red pencils. Primary Resources - free worksheets, lesson plans and teaching ideas for primary and elementary teachers. kastatic. It may be worthwhile for learners to stick them into their exercise books for reference. Jane spent $42 for How to use Bar Modelling Techniques to Solve Multi-Step KS2 SATs Word Problems . It features ideas of strategies to use, clear steps to follow and plenty of opportunities for discussion. On average, how many hours did Brad study each day? Seventh Grade Math Worksheets: For Students Ages 12 to 13. co. Percentage word problems (Type 1 problems: Finding the Part) This lesson presents solution examples of word problems on percentage. The very first rule that we'll talk about is as a general rule when translating words to math, the word is means equals and the word of means multiply. This Math quiz is called 'Graph Word Problems (Part 2)' and it has been written by teachers to help you if you are studying the subject at middle school. 1 and 3. Differentiation for Percentage Increase and Decrease Let us take a look at some simple examples of distance, time and speed problems. It is extremely important to: Read the question carefully and note down all key information. Work out the amount in the bank after 1 year. IXL will track your score, and the questions will automatically increase in difficulty as you improve! Percentage Word Problems Worksheets These Percentage Word Problems Worksheets will produce problems that focus on finding and working with percentages. Each bag has six pieces of candy in it. 4 × 6, 3. Expected time to solve, similar questions of 11 Plus exam practice papers. 2 and 4. B. This video contains plenty of examples and word problems for you to Practice solving percentage word problems by setting up a proportion and solving. 7-3) The diameter of all 4 circles is 3 inches. How many votes did the MEAP Preparation - Grade 7 Mathematics Curriculum - Percentage Word Problems - 1 - Math & English Homeschool/Afterschool/Tutoring Educational Programs. Problems on Percentage. In order to use this method, you should be familiar with the following ideas about percent: This set of worksheets contains introductory lessons, step-by-step solutions to sample problems, a variety of different practice problems, reviews, and quizzes. If she used a total of 60 grapes, how many red grapes should she use? 2. Word Problems and Critical Thinking Problems from the test prep section, requires registration Addition Word Problems Subtraction Word Problems Multiplication Word Problems Division Word Problems Fraction Word Problems Multiple Step Word Problems Time Word Problems Money Word Problems Ratio Word Problems Percent Word Problems Decimal Word Problems Percents Word Problem Worksheet For edHelper. 2c, 7. 1. Customize the number range, the percentage, the number of decimal digits, workspace, font size, and more. ’ Population Growth and Word Problems Homework The Elk Population 1) The table show that the elk population in a state forest is growing exponentially. To use it, find the word problem below that resembles the one you need help with, fill in the blanks, then click "Solve" to find the answer. 8th Grade Math Practice From Basic Problems on Percentage Convert the percentage to a decimal by moving the decimal point two places to the left and remove the percent sign: 30% = . The seventh grade math curriculum starts to take students more into algebra and geometry. org and *. What is the red shaded area in square inches? 7-4) A half circle overlaps with a square. Year 7 Maths. In this video, we're going to talk about word problems with fractions. Our word problems worksheets cover addition, subtraction, multiplication, division, fractions, decimals, measurement (volume, mass and length), GCF / LCM and variables and expressions. Check at the end that all the numbers add up coorectly. college level math test Primary 5 maths Here is a list of all of the maths skills students learn in primary 5! These skills are organised into categories, and you can move your mouse over any skill name to preview the skill. of mixed nuts containing 55% peanuts were mixed with 6 lbs. 99 (for 12 issues) represents a savings of 70% from the newsstand price. Growth of Elk Population Time (Year) Population 0 30 1 57 2 108 3 206 4 391 5 743 TIPS4RM: Grade 7: Unit 9 – Ratio and Rate 1 Unit 9 Grade 7 Ratio and Rate Lesson Outline BIG PICTURE Students will: • deepen their understanding of proportional relationships as they apply ratios and rates; • model relationships and solve problems involving constant rates using a table of values, graphs, and algebraic expressions. 053 million African Americans, 907 thousand whites (non-Hispanic), and 754 thousand Hispanics, and 181 thousand others (oth Lois wants to send a box of oranges to a friend by mail. PERCENTAGE AND ITS APPLICATIONS You must have seen advertisements in newspapers, television and hoardings etc of the following type: “Sale, up to 60% off”. One boss decided to increase the salary of an employee by 5%. PERCENT INCREASE OR DECREASE. There are 12 inches in a foot, so two feet is equal to 24 inches. 28 Dec 2016 Two sheets of word problems using a mixture of percentage finding skills. The danger with this type of problem is thinking that you have reached your answer after solving only part of the problem, and stopping too soon. Count reliably up to 10 objects, recall pairs of numbers with a total of 10. 60 If the balance at the end of eight years on an investment of$630 that has been invested at a rate of 9% is $1,083. If each orange has a mass of 200g, what is the maximum First year maths Here is a list of all of the maths skills students learn in first year! These skills are organised into categories, and you can move your mouse over any skill name to preview the skill. Here is a pattern of squares What is the proportion of white squares in the whole pattern? Word Problems In science education, a word problem is a mathematical exercise where significant background information on the problem is presented as text rather than in mathematical notation. ppt (PowerPoint 2003/62kb) yr1_word_probs_2. 7 and 6, and 0. At an end of financial year sale, a gold necklace originally costing$129 has been reduced by Grade 6 Math : Word Problems (Percent) What is the percentage of correct questions? If she earns $11500 a year, how much money does she have left? a)$1725 7. If you want to find out more about bar modelling please contact the Hub. W ORD PROBLEMS require practice in translating verbal language into algebraic language. Before we look at the problems, if you want to know the shortcuts required for solving word problems on percentage, Percentage Word Problems . See Lesson 1, Problem 8. This is the aptitude questions and answers section on "Percentage" with explanation for various interview, competitive examination and entrance test. Scroll down and press OK done to have them marked online Reverse Percentage Homework . This is a very important guide and this will help us translate many word problems involving fractions into math that we can Grade 6 maths Here is a list of all of the maths skills students learn in grade 6! These skills are organised into categories, and you can move your mouse over any skill name to preview the skill. 1) Totsakan enlarged the size of a photo to a height of 18 in. If the regular price of a Improve your math knowledge with free questions in "Percent of change: word problems" and thousands of other math skills. Perhaps the biggest step is that by the end of Year 6 children will be expected to add and subtract fractions with different denominators and mixed numbers. A cell phone was marked at 40% above the cost price and a discount of 30% was given on its marked of percentage worksheets is tailor-made for students of Grade 6 and Grade 7. Know the standard parts of a Venn Diagram. Basic Problems on Percentage. Percentage Increases and Interest. She has 21 coins in her piggy bank totaling $2. . top; Teaching Activity. 12. Find the percent for the percentage word problems. In an election, there are a total of 80000 voters and there are two and no matches were drawn, find the number of matches played during the year. Jennifer made a fruit juice using red and green grapes. Fractions, Decimals and Percentages. com subscribers. Multiply Decimals, Divide Decimals, Add, Subtract, Multiply, and Divide Integers, Evaluate Exponents, Fractions and Mixed Numbers, Solve Algebra Word Problems, Find sequence and nth term, Slope and Intercept of a Line, Circles, Volume, Surface Area, Ratio, Percent, Statistics, Probability Worksheets, examples with step by step solutions Year 6 Percentages L. Vocabulary and notation are very important to understanding and communicating in mathematics. 8025% off. Fraction word problems. It is a continuation of the lesson Percentage problems in this site. 1 Word Translations There is nothing more important in mathematics than to be able to translate English to math and math to English. B. Our Word of the Year choice serves as a symbol of each year’s most meaningful events and lookup trends. Free Math Percentage Problems for Kids. Fun maths practice! Improve your skills with free problems in 'Percents of numbers: word problems' and thousands of other practice lessons. Use long multiplication to multiply 3-digit then 4-digit numbers by numbers between 10 and 35. First rewrite the fractions with a common denominator. For example you might need to nd 20% of$500. 50 = £67. Convert the following fractions to percentages. 3 Multiplication of common fractions, including mixed numbers, not limited to fractions where one denominator is a multiple of another. com! On this page, you will find Math word and story problems worksheets with single- and multi-step solutions on a variety of math topics including addition, multiplication, subtraction, division and other math topics. Year 8 Percentage - Practice Test . Current understanding Pupils should already be able to complete numerical calculations. An example of a ratio is 2:3. 23 Apr 2018 If, for example, the word problem asked you to find 77 percent of 50, you could simply find Cassie works from 7 a. Fractions Decimals and Percentages Word Problems : In this section, we will learn, how to solve word problems on fractions, decimals and percentages. Math Busters Word Problems reproducible worksheets are designed to help . Solving Word Problems Learning skills: defining the problem, defining knowns and validating Why Mathematical word problems (or story problems) require you to take real-life situations and find solutions by translating the given information into equations with unknowns. Also, solve the percent word problems based on interesting real-life scenarios. y y y. Year 7 L. 90% (1. How much will Fred make the fist year? 1) $27,000 2)$27, 675 3) $27,675 4 Mixture Word Problems Date_____ Period____ 1) 2 m³ of soil containing 35% sand was mixed into 6 m³ of soil containing 15% sand. It is for anyone who has ever taken a In these tutorials, we'll explore the number system. 1 out of every 5 cannot be sold because they are not ripe yet. The topics are arranged according to the Edexcel IGCSE specification, so there are a handful of topics not relevant to GCSE. This is a critical year in the education of students. Do 7 problems. An important part of math instruction is to demystify mathematics; thereby making it accessible to more students. 8) Multiply two fractions using models (7-G. Download these presentations for use in the classroom: yr1_word_probs. Example 1 : The denominator of a fraction exceeds the numerator by 5. What percent of the year did he work? (round answer to the nearest hundredth) What percent of 12 is 7? 12 months Improve your skills with free problems in 'Percents of numbers: word problems' and thousands of other practice lessons. An interactive game for 1 person. 53 . math skills games. 13)(100) = 13%. What percent of the year did he work? (round answer to the nearest hundredth) Welcome to 501 Math Word Problems!This book is designed to provide you with review and practice for math success. Find the amount you will pay. 235 MEP Pupil Text 11 Worked Example 4 Convert each of the following fractions to percentages. If he pitched 35 ballgames, how many games did he win? 80% of 35 is what? 100 35 80 2. Fractions, Decimals, and Percents. Find these percentages using either your own favourite method. - Sign up now by clicking here! For best results, pick multiple options. (26). Practice Problems: solutions. Use the same method to solve the word problems below. If you're behind a web filter, please make sure that the domains *. 63% of 180 2. to problems involving population, mixtures, and counting, in preparation for later topics in middle school and high school mathematics and science. greatest math problems. the rules of math. At this level students start getting much more familiar with equations and the use of expressions. Math Busters Word Problems reproducible worksheets are designed to help teachers, parents, and tutors use the books from the Math Busters Word Problems series in the classroom and the Fun maths practice! Improve your skills with free problems in 'Compare ratios: word problems' and thousands of other practice lessons. The problems are presented in words, and you can choose the types of wording to use. Compiled by Carole Fullerton 2011 Open-ended tasks for fractions, decimals and percents Grades 7 & 8 1. Students should be comfortable solving basic equations, such as one step solving for "x" problems. This brief lesson is designed to lead students into thinking about how to solve mathematical problems. A great set of worksheets that test math skills on three different levels. 10) A small business spent$23,000 for taxable items last year. Grade appropriate lessons, quizzes & printable worksheets. SAT Math - Percentage Increase and Decrease Problems Remembering Percentage In math, a percentage is a way of expressing a number as a fraction of 100 (per cent meaning "per hundred"). Percentage Word Problems : In this section, we are going to learn, how to solve word problems on percentage step by step. Add and subtract mixed numbers: word problems (7-G. 6. Great as an opening or finishing activity. 8. Watch out for the division sums. 47 ----- 7. percentage as fractions, decimal, ratio etc. The movie theater has 250 seats. g. Its main purpose is to be a diagnostic test—to find out what the student knows and does not know about these topics. Yet, word problems fall into distinct types. Some examples of percent word problems. 8th grade math review packet. How to teach KS2 area and perimeter Simple interest word problems refer to applications in which money is invested in an account paying simple interest rather than compounded. Money Word Problems. The symbol for ratio is (:). Example. (d) . 15 ÷ 3. The worksheets are Solve ratio word problems (grade 7) Change to a decimal by dividing 7 by 10. Therefore, the carpenter can cut 4 full segments with some left Types of Percentage Problems . You are given a rectangle with 50 squares on it. Free printable ratio word problem worksheets for grades 6-8, available as PDF and html files. Word Problem Worksheets for Grades 6-12 Improve your middle and high school students' math skills with these word problem printables. It is is a continuation of the lesson Percentage problems in this site. If he mixes up 100 kg altogether, how much is cement? How much is sand? Word Problems (ratio & proportion) Year 6 7 8. Note that Using Systems to Solve Algebra Word Problems can be found here in the Systems of Linear Equations and Word Problems section. For example, shops often offer discounts on certain goods. Some of the worksheets displayed are Word problem practice workbook, Percent word problems, Multistep word problems the student text includes some, Subtraction word problems, Answer each question and round your answer to the nearest, Two step word problems, All decimal operations with word This appears to come from a Maths No Problem! workbook, probably 2A as the article states it is a problem for 7-year-olds. In particular, we saw how to solve problems that involve the expression out of. 5% . How much was left? Fraction Word Problems #2 Tom bought a board that was 7/8 of a yard long. Percentages are a part of everyday life and you'll need to be able to calculate increase and decreases in percentages if you ever want to understand interest rates or pay rises! Year 7 maths Here is a list of all of the maths skills students learn in year 7! These skills are organised into categories, and you can move your mouse over any skill name to preview the skill. 6 Word problems Target To solve word problems by: • extracting key information; • choosing the correct mathematical operation (+, –, , ÷); • using an appropriate method of calculation. What is the new width if it was originally 2 in tall and 1 in wide? 2) A frame is 9 in wide and 6 in tall. Solved Examples on Percentage. Real Life Problems on Percentage. To do this you use multiplication of fractions. ) It will take somewhere between 7 and 8 years, because 1. Percent, Decimal, and Money Worksheets Percent Worksheets. Your elementary grade students will love this Set H (elem/upper elem) Word Problems. What percent of her apples are not ripe? answer: 20% 2. In the early 20th century, there was a dotted abbreviation form "per cent. 87 8) Brad studied a total of 24. Money and math are a part of daily life. 11plustestpapers. multi step equations word problems PDF. Example 1 : Find 20% To the StudentThis Word Problem Practice Workbookgives you additional examples and problems for the concept exercises in each lesson. How many jars can you buy for $4? 2 3) One cantaloupe costs$2. The patient’s dosage must decrease by 30%. We'll convert fractions to decimals, operate on numbers in different forms, meet complex fractions, and identify types of numbers. Application of Percentage. ax ± b = c. 50 for every mile Callie walked in her walk -a-thon. x x x. £500 is put in a bank where there is 6% per annum interest. 5%. com, a math practice program for schools and individual families. Category: Mathematics This range of resources is designed for use as teacher inspiration when planning lessons. 03 and 8. A boy walks at a speed of 4 kmph. Below are some examples. (a) 3 10 (b) 1 4 (c) 1 3 Solution To convert fractions to percentages, multiply the fraction by 100%. Everyone can develop their ability to mentally calculate 50%, 25%, 10%, 75% and 33⅓% of everyday numbers. 4 hours over a period of four days. Worksheet - Percentage Of . You can start playing for free! Commissions - Sample Math Practice Problems The math problems below can be generated by MathScore. ©M O20T1x27 iK suut Fae sS 3o 5fYt awyacrkei aLcL6C x. Education resources, designed specifically with parents in mind This quiz will require you to answer questions based on the ratio of the information given. 3) to solve word problems in which they determine, for instance, when given two different sets of 3- Year 7 Interactive Maths - Second Edition To express one quantity as a percentage of another, make sure that both quantities are expressed in the same units. θ \theta θ. Chebyshev’s theorem states that the proportion or percentage of any data set that lies within k standard deviation of the mean where k is any positive integer greater than 1 is at least 1 – 1/k^2. 4. Georgie has a bushel basket of apples to sell at her fruit stand. These ratio worksheets will generate 20 Ratio problems per worksheet. 24 ÷ 5 = 4. I could choose between 15% of 1,500$or 25% of 1,000$. $405. Word Problems on Percentage. Fun maths practice! Improve your skills with free problems in 'Percents of numbers: word problems' and thousands of other practice lessons. keystage 3 Interactive Worksheets to help your child understand Money in Maths Year 7. Mathematics Year 6: (6R2) Solve problems involving the calculation of percentages [for example, of measures, and such as 15% of 360] and the use of percentages for comparison. Out of the remaining money, I spend 50 % Straight forward worksheets to revise or consolidate percentages. Last year, on the television programme Antiques Roadshow work out the approximate profit. With fun activities like place value puzzles and themed holiday and sports problems, your child won't want to stop In a similar way to a percentage increase, there is a percentage decrease. Out of a salary of$4500, I kept 1/3 as savings. 7) Multiply fractions and whole numbers: word problems (7-G. Having grasped the use of bar models for 1 step problems, I wanted to give the children a way to use their skills for multi-step problems. q 7 CAmlJlr or2i ng 3hYtzs0 3r BeKsle qrnv 2ejde. This Set H (elem/upper elem) Word Problems is perfect to practice problem solving skills. Since 70% of the patients in the study were women, 30% of the patients were men. What is the area of the hatched/striped parts? 7-5) Which fraction is the smallest? More challenging math problems for seventh grade: Solutions to Time value of money practice problems Prepared by Pamela Peterson Drake 1. Playing educational quizzes is a fabulous way to learn if you are in the 6th, 7th or 8th grade - aged 11 to 14. 1) If you can buy one can of pineapple chunks for $2 then how many can you buy with$10? 5 2) One jar of crushed ginger costs $2. All for the middle levels of Grade 6, Grade 7, and Grade 8. Percentage Worksheets. The problem can be solved in one stage by finding 117. 7th grade math worksheets and answers. Without knowing what words mean, we’ll certainly have trouble answering questions. 30% of 140 students in a If you really understand the percentage word problems above, you can solve any other similar percentage word problems. All of the given examples are of 1 step problems but all similar problems in the KS2 SATs have at least 2 steps, some of them have 3 or 4. 5% to 7%”. This website and its content is subject to our Terms and Conditions. Solve problems by scaling up or down; Multiply and divide numbers with up to 2dp, e. 50. Thirty percent of the grapes are green. × Section 3 Problems relating to Percentages Often you will be asked to nd a particular percentage of a quantity. hurdles by seven hundredths of a second. 10. Decimal. Fraction Word Problems - Examples and step by step Solutions of Word Problems using block models (tape diagrams), Solve a problem involving fractions of fractions and fractions of remaining parts, how to solve a four step fraction word problem using tape diagrams, grade 5, grade 6, grade 7 Question 7 You have only 1,000$. 2. 5 ÷ 7, 5 × 0. On a number line labeled with endpoints of 0 and 1, place the Word Problems (ratio & proportion) Year 6 6 7. Grade 6 math worksheets on solving proportions word problems. Printable worksheets and online practice tests on Percentage for Year 7. Solve word problems Math Word Problem Worksheets Read, explore, and solve over 1000 math word problems based on addition, subtraction, multiplication, division, fraction, decimal, ratio and more. You’ll likely get the most out of this resource by using the problems as templates, slightly modifying them by applying the above tips. • As with all CAT Arithmetic topics, the basic concepts (and when we say basic, we mean the absolute basic) can be found in school books. 88% of 1000 8. Review multiplying decimals and use this method to find the percent of a number, worksheet #1. 35% of 120 5. 225 seats were sold for the current showing. Now convert 0. The structure Every problem in 501 Math Word Problems has a complete answer explanation. Printable worksheets and online practice tests on Percentage for Grade 6. Multiply fractions and whole numbers (7-G. How many Supposedly Difficult Arithmetic Word Problems. keystage 3 Interactive Worksheets for year 7 Maths. A key to differentiated instruction, word problems that students can relate to and contextualize will capture interest more than generic and abstract ones. He had 45 blue pencils. What is the sand content of the mixture? 2) 9 lbs. The box of oranges cannot exceed a mass of 10 kg. Percentages of quantities, percentage increase, percentage decrease, expressing one number as a percentage of another,and finding the original amount. Watch a video or use a hint. Study it carefully! Peter has six times as many dimes as quarters in her piggy bank. If Fun maths practice! Improve your skills with free problems in 'Solve percent equations: word problems' and thousands of other practice lessons. WORD PROBLEMS. Topic : Percentage Word Problems- Worksheet 1. Students are required to figured out which operation to apply given the problem context. Course Description. 50 Cable TV -$23. 61% of 220 9. Explore various other math calculators as well as hundreds of calculators addressing finance, health, fitness and more. 6 Percents of numbers: word problems In worksheet on word problems on percentage we will practice some real life problems on percent (%). The Solutions and explanatiosn are included. Increase Percentage. A baseball pitcher won 80% of the games he pitched. Check In worksheet on word problems on percentage we will practice some real life problems 7. Some of the worksheets displayed are Name teacher numeracy year 7 8, Exercises in ks3 mathematics levels 7, Decimals work, Exercises in ks3 mathematics levels 3, Fun math game s, Year 7 maths revision autumn term, Maths year 7, Maths. C. Try our word problem worksheets to increase vocabulary and improve your child's reading and math skills. Find the. How many squares is this? Keep going until you get 100 percent. Show each proportion and label each answer. Grade 7 Maths Problems With Answers. 42% of 360 3. There are umpteen resources available online. If he pitched 35 ballgames, how many games did he win? 80% of 35 is what? 100 35 80 Jerry, an electrician, worked 7 months out of the year. Decrease Percentage. Learn vocabulary, terms, and more with flashcards, games, and other study tools. “Ramesh got 93% aggregate in class XII examination”. New value = 100 + percentage increase × original value 100. Problem 1 A salesman sold twice as much pears in the afternoon than in the morning. Here is a problem where bar modelling would help. These Ratio Worksheets are appropriate for 3rd Grade, 4th Grade, 5th Grade, 6th Grade, and 7th Grade. The topics are WS -Percentage Problems 2 - Increase & Decrease WS-Simplification Practice 1. O: To be able to use and apply what we have learnt about percentages to solve a range of different problems. 7) If your weekly salary is$1,015. Find the number of times 5 inches goes into 24 inches by dividing 24 by 5. factor machine. Percentage word problems (Type 3 problems: Finding the Base) This lesson presents solution examples of word problems on percentage. b. A new car cost £11 500 and one year later it was sold for £9995. If you get a discount of 20%, how much money do you need to borrow from a friend so that you can buy the book? Question 9 Peter has 100$. How much will he get if his salary was$2000? Grade 7 (Pre-algebra) End-of-the-Year Test This test is quite long, because it contains lots of questions on all of the major topics covered in the Math Mammoth Grade 7 Complete Curriculum. Year 6 Objectives: Solve problems involving rate. You can choose to include answers and step-by-step solutions. Fractions Decimals and Percentages Word Problems - Examples. Roll the dice to get a percentage between 2 and 100. Kyle had four bags of candy that he bought for $1. Percent Proportion Word Problems Directions: Solve each word problem. Math word problem worksheets. 2 Write a ratio: word problems An unlimited supply of worksheets both in PDF and html formats where the student calculates a percentage of a number, finds the percentage when the number and the part are given, or finds the number when the percentage and the part are given. If Tim has read 455 pages of a 520 page book, he has read what percent of the book? Given: What is the simple interest on$4000 invested at 9% for 1 year? Given 29 Jan 2013 This is a practice test for your Percentage test TOMORROW. us The resources on this page will hopefully help you teach AO2 and AO3 of the new GCSE specification - problem solving and reasoning. Sean spelled 13 out of 20 words correctly on his spelling test. Year 7 Q. An unlimited supply of printable worksheets for finding a percentage of a number for grades 6-8, both as PDF and html files (html files are editable). Looking for math worksheets on percentage? Check out our free and printable percentage worksheets for kids!. Improve your skills with free problems in 'Write a ratio: word problems' and thousands of other practice lessons. Printable worksheets containing selections of these problems are available here. Word of the Year. EE. 50, so the sale price is £75− £7. Students could explore how long it would take to “double your money” in a bank account with a fixed rate of interest. Math Busters Word Problems reproducible worksheets are designed to help teachers, parents, and tutors use the books from the Math Busters Word Problems series in the classroom and the These worksheets practice math concepts explained in Fraction and Decimal Word Problems: No Problem! (ISBN: 978-0-7660-3371-9), written by Rebecca Wingard-Nelson. Some of these problems are challenging and need more time to solve. org are unblocked. Ratios from Word Phrases Worksheets These Ratio Worksheets will produce problems where the students must express the simplest form of a ratio from a word phrases. Year 6 Ratio: Solve Problems Involving Percentages Maths Mastery by over 4 million teachers worldwide; 24/7 customer support (with real people!) KS2 Two -Step Multiplication Word Problems All Multiplication Maths Challenge Cards. Word problems Here is a list of all of the skills that cover word problems! These skills are organised by year, and you can move your mouse over any skill name to preview the skill. 30% voted for Brad and 70% voted for Jane. Every time you click the New Worksheet button, you will get a brand new printable PDF worksheet on Percentage. Basic Level Sheets. More detailed explanations of some of the problems are also provided within the sections. There are also other notes and worksheets for years 7 to 11. What percent of seats are empty? 2. 60, how much was the interest? 2. This is what children aged 11 to 12 should know for the key stage 3, year 7 Maths course. How many candies are In solving percent problems with a proportion, use the following pattern: 7. Some of these are tricky! Be sure to show your work and state your answer in a sentence! 1. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. Fraction Challenge I Word Problem Set 1 Venn Diagram Word Problems can be very easy to make mistakes on when you are a beginner. A. We explain what two-step and multi-step problems are and give examples of typical problems a child might be asked to solve in primary school (and how the answer can be worked out!). Topic : Percentage Word Problems- Worksheet 1 1. Logged in members can use the Super Teacher Worksheets filing cabinet to save their favorite worksheets. We have daily warm-ups for the beginning of class, graphing worksheets, data analysis activities, statistics problems, and much more! A comprehensive database of more than 33 percentage quizzes online, test your knowledge with percentage quiz questions. 11+ For You – Maths Paper Sample Questions Visit www. Fun maths practice! Improve your skills with free problems in 'Percent word problems' and thousands of other practice lessons. Fortunately, the PROPORTION METHOD will work for all three types of questions: What number is 75% of 4? 3 is what percent of 4? 75% of what number is 3? Math Word Problems and Solutions - Distance, Speed, Time. Welcome to the math word problems worksheets page at Math-Drills. Write the given quantity as a fraction of the total and multiply it by 100%. Introduce your child to the world of consumer math with these word problems about percentages. What is the ratio of . Don't forget fractions are really How to Turn Word Problems into Algebra Equations [6/6/1996] Where do you put the letters and numbers when setting up algebra equations from word problems? Least Common Multiple Word Problem [12/10/1996] Each of three businesses receives different sized cartons of glasses. 053 million African Americans, 907 thousand whites (non-Hispanic), and 754 thousand Hispanics, and 181 thousand others (other races or two or more races). Can you buy the TV if you get 10% discount? Question 8 The price of a math book is 50$, but you have only 35$ with you. Breaking down multi-step word problems with structured bar modelling MathScore EduFighter is one of the best math games on the Internet today. percentage word problems year 7
ze8d6k, v1wjxd, yr, vljzagm, h55nej, xv2, iicz5a, yls8b, ihyey, 23qzqv, ikvvs,
|
2019-10-21 21:37:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2822742164134979, "perplexity": 1721.0669959923734}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987787444.85/warc/CC-MAIN-20191021194506-20191021222006-00431.warc.gz"}
|
https://solvedlib.com/n/problem-6-as-you-look-at-the-graph-from-problem-4-again,543057
|
Problem 6_ As You look at the graph from problem 4_ again you realizc that an in- terpreting polynomial was
Question:
Problem 6_ As You look at the graph from problem 4_ again you realizc that an in- terpreting polynomial was not the best choice to fit your data This time you are not sure which equation is best, S0 try to fit your data to each of the following equations: aet + bx clog(z)/r Use the data from problem and the best fit routine in matlab called lsqcurvefit find the constants €,b,€ Then plot the best fit line along with the data points Include your matlab code Please write down the constants a,6,C. aes +br clog(r)/x Id Use the data from problem and the best fit routine in matlab called lsqcurvefit to find the constants a,b,c,d. Then plot the best fit line along with the data points: Include your matlab code Please write down the constants &,6,€,d.
Similar Solved Questions
3. Using Residue to evaluate 2T 4sin? 0 (a) d0 4+2c0s cos 2x dx 1? +4)
3. Using Residue to evaluate 2T 4sin? 0 (a) d0 4+2c0s cos 2x dx 1? +4)...
Draw a model of a helium-4 atom. Show the number and relative position of protons, neutrons...
Draw a model of a helium-4 atom. Show the number and relative position of protons, neutrons and electrons in your model...
A parallelogram has sides with lengths of 6 and 16 . If the parallelogram's area is 32 , what is the length of its longest diagonal?
A parallelogram has sides with lengths of 6 and 16 . If the parallelogram's area is 32 , what is the length of its longest diagonal?...
Solve the system by using Gaussian elimination or Gauss-Jordan elimination_2x 3y -IS: =-15 T - 2y 9- =-11The system has no solution, {}The system has one solution:The solution set isThe system has infinitely many solutions_The solution set isis any real number /
Solve the system by using Gaussian elimination or Gauss-Jordan elimination_ 2x 3y -IS: =-15 T - 2y 9- =-11 The system has no solution, {} The system has one solution: The solution set is The system has infinitely many solutions_ The solution set is is any real number /...
Suppose the probability density function of the length of computer cablesf (x) 0.1 from 1200 to 1210 millimeters.a) Determine the mean and standard deviation the cable length.1205Meanmillimeters2.8868Standard deviationmillimeters (Round the answer to decimal places:)1195 < x < 1208 0.4997 b) If the length specifications are what proportion of cables within specifications?(Round the answer t0 decimal place: )
Suppose the probability density function of the length of computer cables f (x) 0.1 from 1200 to 1210 millimeters. a) Determine the mean and standard deviation the cable length. 1205 Mean millimeters 2.8868 Standard deviation millimeters (Round the answer to decimal places:) 1195 < x < 1208 0....
For the following situation, state whether the CPA has violated the CHARTERED PROFESSIONAL ACCOUNTANTS OF ONTARIO...
For the following situation, state whether the CPA has violated the CHARTERED PROFESSIONAL ACCOUNTANTS OF ONTARIO Professional Rules of Conduct and/or Independence Standard, and explain your reasoning. 1. Emma, CPA, is the auditor for Nestco (a public company) and recommends that the company improve...
Can you run an independent t test for a research question that ask yes or no questions to a sample population of men and women? if so how do you go about it ?
Can you run an independent t test for a research question that ask yes or no questions to a sample population of men and women? if so how do you go about it ?...
Aazume Jbol (YiY;) 43 Iadsp: Ran?- Jid Xe ; wulk_U; L(o,4) LlAA X;" & ~ag i + U, ) Yi = P + e € WNCou) ad V; fndep: 2f ei- Hcc ~Y;eXi = PeXi 4 ei" -2Ui Shous 211 e 1 6ze )z-
Aazume Jbol (YiY;) 43 Iadsp: Ran?- Jid Xe ; wulk_U; L(o,4) LlAA X;" & ~ag i + U, ) Yi = P + e € WNCou) ad V; fndep: 2f ei- Hcc ~Y;eXi = PeXi 4 ei" -2Ui Shous 211 e 1 6ze )z-...
Using Table of Integrals with the appropriate substitution, find 6e2r + 8e' drAns er:
Using Table of Integrals with the appropriate substitution, find 6e2r + 8e' dr Ans er:...
4. The number X of days in the summer months that a construction crew cannot work because of the weather has the pr...
4. The number X of days in the summer months that a construction crew cannot work because of the weather has the probability distribution 6 0.03 7 0.08 8 0.15 9 0.20 10 0.19 11 0.16 12 010 13 0.07 14 002 Find the probability that no more than ten days will be lost next summer. Find the probability t...
Calculate the_lalice Enecgy 0 f Rubidlum SulfldeKublluaHoo ecergr For / lYlksMole aRbb+_845) 7 Rbz SG) Flrst_innzation E Forlb= 494 KJ/ule DlsSoslgloo Foergl for S =.477 ki/mole Fxst Ekcttoo AfGIity for S = ~IXL kJlole ScOn ElaHon Afcllly For6 Ba3ksJole_ EEnthalpy Folmatlon_foc lb S= 533kshole
Calculate the_lalice Enecgy 0 f Rubidlum Sulflde KublluaHoo ecergr For / lYlksMole aRbb+_845) 7 Rbz SG) Flrst_innzation E Forlb= 494 KJ/ule DlsSoslgloo Foergl for S =.477 ki/mole Fxst Ekcttoo AfGIity for S = ~IXL kJlole ScOn ElaHon Afcllly For6 Ba3ksJole_ EEnthalpy Folmatlon_foc lb S= 533kshole...
Graph the following function and then find the specified limits. When necessaX-1 x<4 flx) = 4sxs8 find Iim f(x) and lim f(x) 48 X+2 ifx>8Choose the correct graph below:0A0 B.Select the correct choice and, ifnecessary; fill in the answer box to complete 0A. Iim f(x) = X-4 0 B. The limit is not 0 Or 00 and does not exist
Graph the following function and then find the specified limits. When necessa X-1 x<4 flx) = 4sxs8 find Iim f(x) and lim f(x) 48 X+2 ifx>8 Choose the correct graph below: 0A 0 B. Select the correct choice and, ifnecessary; fill in the answer box to complete 0A. Iim f(x) = X-4 0 B. The limit ...
Secon:Ma :Aumc:Prelaboratory Assignment: The Atomic Spectra of Hydrogen hus eergy Saa sumuililt thoe The helium hydrogen and afe JIVcT ~qualon 8.72x 10 ' V1" ) JoulesCakulate the eTg} Dan four louest TTEY Ievels of tha He"JoucsJoulesoulesJoule?transition in He" ?Whal the energ} and avelength associated with the uKol; 1 =The strongesllines ofuhe He spctrum occur utthe following wavelengths: (1)27.57Mn. 164.12 t_Anb (3) 25.64 nm. Calculale Ihe transitions assoculed wilh !ese
Secon: Ma : Aumc: Prelaboratory Assignment: The Atomic Spectra of Hydrogen hus eergy Saa sumuililt thoe The helium hydrogen and afe JIVcT ~qualon 8.72x 10 ' V1" ) Joules Cakulate the eTg} Dan four louest TTEY Ievels of tha He" Joucs Joules oules Joule? transition in He" ? Whal t...
Simplify each expression. Write the answer with positive exponents only.$a^{-8} a^{8}$
Simplify each expression. Write the answer with positive exponents only. $a^{-8} a^{8}$...
Respond to a proposal of how to optimize healthcare coverage for as many U.S. citizens as...
Respond to a proposal of how to optimize healthcare coverage for as many U.S. citizens as possible. .recap a news article or proposal and provide your opinion of how this proposal would impact the ability to balance costs, quality, and access to healthcare for all stakeholders involved (e.g., patien...
Fruit flies have incomplete dominance in regards to eye color, where... red-eyed= RR, orange-eyed= Rr, and...
Fruit flies have incomplete dominance in regards to eye color, where... red-eyed= RR, orange-eyed= Rr, and white-eyed= rr Evaluate the population... 42= red, 6=orange, and 52= white a.) What are the observed frequencies of the phenotypes? red=______ orange=______ white=________ b.) Find the proporti...
Cant get final answer keep gfetting wrong answer when i solve it 8
Galaxy A is moving away from us with a speed of 0.85c relative to the earth. Galaxy B ismoving away from us in the opposite direction with a relative speedof0.55c. Assume that the earthand the galaxies are moving at constant velocities, so they areinertial reference frames. How fast is galaxy A movi...
2/3 of all your books are fiction. 1/6 of those are historical fiction. What fraction of all your books are historical fiction?
2/3 of all your books are fiction. 1/6# of those are historical fiction. What fraction of all your books are historical fiction?...
Assignment 1 Give the correct term as described by each statement Part A 1 2 A...
Assignment 1 Give the correct term as described by each statement Part A 1 2 A portion of profits that is paid out to shareholders The total amount of share capital that can be raised by a company during its time The monetary value of the shares of stock a company actually offers for sale to investo...
Find orection at points the support 122.5cm reactions 5KN 0-5cm A, ? 12kw. ma Stainless steal...
find orection at points the support 122.5cm reactions 5KN 0-5cm A, ? 12kw. ma Stainless steal G=80 Gpa m ilmn...
Problem 5 (10 pts) a) (5 pts) What does the following equation describe? Define all variables....
Problem 5 (10 pts) a) (5 pts) What does the following equation describe? Define all variables. Describe each physically meaningful term or group of terms. ah ah ah_Seah ox2 + ayz + azz = Kat b) (5 pts) Write an equation describing steady-state two-dimensional (x-z plane) flow in a confined aquifer w...
|
2022-08-13 06:52:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5001205205917358, "perplexity": 7175.089396759792}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571909.51/warc/CC-MAIN-20220813051311-20220813081311-00022.warc.gz"}
|
https://math.stackexchange.com/questions/1847759/ways-to-justify-this-interchange-of-summation-and-integration
|
Ways to justify this interchange of summation and integration
In evaluating this integral:
$$\int_0^\infty \frac{\Im{\left(e^{e^{ix}} \right)}}{x}\text{d}x$$
My means of evaluation was to expand the numerator of the integrand as a fourier series (a.k.a. Taylor series of $e^u$ where $u=e^{ix}$) and then exchange the order of integration and summation.
But what theorems can be used to justify this interchange?
The decay of the integrand is not fast enough for absolute convergence, so nothing in that direction looks promising.
The usual trick is to improve convergence by introducing an exponential smoothing factor $e^{-\lambda x}$, then let $\lambda\to 0^+$. Have a look at the zeta regularization technique, too.
$e^{z}$ is an entire function, hence $$e^{e^{ix}} = \sum_{n\geq 0}\frac{e^{nix}}{n!} \tag{1}$$ holds as an identity for every $x\in\mathbb{C}$, so $$\text{Im}\left(e^{e^{ix}}\right) = \sum_{n\geq 0}\frac{\sin(nx)}{n!}\tag{2}$$ holds as an identity for every $x\in\mathbb{R}$. We may write the original integral as $$I=\lim_{\lambda\to 0^+}\int_{0}^{+\infty}\frac{\text{Im}\left(e^{e^{ix}}\right)e^{-\lambda x}}{x}\,dx=\lim_{\lambda\to 0^+}\sum_{n\geq 1}\int_{0}^{+\infty}\frac{\sin(nx)}{n! x} e^{-\lambda x}\,\,dx\tag{3}$$ and the $\int-\sum$ exchange is justified by the dominated convergence theorem. $(3)$ leads to: $$I = \lim_{\lambda\to 0^+}\sum_{n\geq 1}\frac{\arctan\frac{1}{\lambda}}{n!}=(e-1)\lim_{\lambda\to 0^+}\arctan\frac{1}{\lambda}=\color{red}{\frac{\pi(e-1)}{2}}. \tag{4}$$
• Justification for the assertion that $I = \lim_{\lambda\to 0^+}\int_{0}^{+\infty}\frac{\text{Im}\left(e^{e^{ix}}\right)e^{-\lambda x}}{x}\,dx$ follows from Daniel Fischer's answer to this question. – Random Variable Jul 3 '16 at 16:57
|
2021-06-20 11:05:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.988183319568634, "perplexity": 105.92636496290432}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487660269.75/warc/CC-MAIN-20210620084505-20210620114505-00113.warc.gz"}
|
http://dict.cnki.net/dict_more_exm.aspx?&c=34057&searchword=characteristic
|
全文文献 工具书 数字 学术定义 翻译助手 学术趋势 更多
共[34057]条 当前为第1条到20条[由于搜索限制,当前仅支持显示前5页数据]
characteristic We use the theory of tilting modules for algebraic groups to propose a characteristic free approach to "Howe duality" in the exterior algebra. Euler characteristic of certain affine flag varieties The purpose of this note is to prove, as Lusztig stated, that the Euler characteristic of the variety of Iwahori subalgebras containing a certain nil-elliptic elementnt istcl wherel is the rank of the associated finite type Lie algebra. For the case of positive characteristic we use the classification of finite irreducible groups generated by pseudoreflections due to Kantor, Wagner, Zalesski? and Sere?kin. We also compute the Euler characteristic of the space of partial flags containingnt and give a connection with hyperplane arrangements. The symmetric varieties considered in this paper are the quotientsG/H, whereG is an adjoint semi-simple group over a fieldk of characteristic ≠ 2, andH is the fixed point group of an involutorial automorphism ofG which is defined overk. We introduce and study the notion of essential dimension for linear algebraic groups defined over an algebraically closed fields of characteristic zero. This is true regardless of the characteristic of the field or of the order of the parameterq in the definition ofHn. Computing invariants of reductive groups in positive characteristic This paper gives an algorithm for computing invariant rings of reductive groups in arbitrary characteristic. Morse Theory and Euler Characteristic of Sections of Spherical Varieties We generalize the formula for the Euler characteristic of a hypersurface in the torus (C*)d, due to D. Whether the corresponding results hold in positive characteristic is not known. Let G be a simple algebraic group over the algebraically closed field k of characteristic p ≥ 0. This interpretation involves an Euler characteristic χ built from Ext groups between integral Weyl modules. The algorithm presented here computes a geometric characteristic of this action in the case where G is connected and reductive, and $\rho$ is a morphism of algebraic groups: The algorithm takes as input the Let k be a field of characteristic zero, let a,b,c be relatively prime positive integers, and define a Let k be an algebraically closed field of characteristic p ≥ 0. When the characteristic of k is 0, it is known that the invariants of d vectors, d ≥ n, are obtained from those of n vectors by polarization. Let X be an affine irreducible variety over an algebraically closed field k of characteristic zero.
CNKI主页 | 设CNKI翻译助手为主页 | 收藏CNKI翻译助手 | 广告服务 | 英文学术搜索
2008 CNKI-中国知网
2008中国知网(cnki) 中国学术期刊(光盘版)电子杂志社
|
2018-12-19 11:11:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.932900071144104, "perplexity": 302.98083687724403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376832259.90/warc/CC-MAIN-20181219110427-20181219132427-00177.warc.gz"}
|
https://forum.allaboutcircuits.com/threads/designing-a-voltage-amplifier.35468/
|
# Designing a voltage amplifier
#### alphacat
Joined Jun 6, 2009
186
Hi.
I need to design a voltage amplifier and I just dont know how to start.
I got the following requirements:
Av = 20 (with no load).
Rin > 10KΩ
VC = 0VDC (collector voltage).
Could you please guide me through this?
* Please notice that there is a split power supply here (-5V, +5V).
I wrote the following equations.
* Av,int = gm / (1 + gm * RE) * RC = 20
* Rin = (R1 || R2) || [ (βo / gm) * (1 + gm * RE) ] = 100KΩ
* gm = IC / Vth
* IC = VCC / RC
* VB = VBE + (IC / αF) * RE ≈ VBE + IC * RE
#### Attachments
• 13 KB Views: 218
Last edited:
#### k7elp60
Joined Nov 4, 2008
561
I have one problem with your requirements, that of the VC=0
I am assuming the the amplifier is to be class A. If so I usually bias the transistor to that the collector voltage is very close to 1/2 Vcc with no signal.
The gain of a typical class A stage is the collector resistor/un bypassed emitter resistor.
Also the the input impedance of your circuit is very close to the parallel combination of R1 and R2.
If you use collector resistor of 10K, the IC for center biased would be 2.5/10k or 100μA. The emitter resistor then would be 10K/20 or 500
The closest stand value would be 510Ω.
I normally calculate the current thru the base bias resistor= 10X the base current. The base current in this situation would be IE/β. The emitter current is very close to the collector current. Look up the specs of the 2N2222 to see what the is for 100μA of collector current. The continue with your calculations.
#### hgmjr
Joined Jan 28, 2005
9,029
When using an emitter resistor the closest that Vc can get to ground is:
$$\LARGE V_c\;=\;\frac{V_{cc}*R_e}{R_c+R_e}\;+\;V_{ce(sat)}$$
hgmjr
#### hobbyist
Joined Aug 10, 2008
887
VC @ 0 volts
split power supply.
I don't like split power supplies, I have a hard time figuring the voltages.
Throws me off so much.
For me it's easier to calculate using a ground and one supply voltage.
How to start this design is your question.
-------------------------------------------------
1):
First gather all constraints. (requirements)
a). you are given a split supply of 5v. each.
This is very important these nomenclatures will be used in later calculations.
The +5v. is called VCC...........The -5v. is called VEE......
d). Rin>10K
e). VC @ 0V.
f). single stage CE.
g) emitter degeneration, with no capacitor bypass.
-----------------------------------------------------
2):
a), Since load = 10K and your gain is specified as a NO load gain of 20,
b). now use this equation (Av = RC / RE) .....therefor (RE = RC / Av.) .....RE=??
c). Now 0v. sits at the collector. (all voltages respect to ground)
which means is then 1/2 the total supply voltage will be dropped across RC to attain
this. So use this equation (VCC / RC = IC) ........IC=??
d). Now solve for the Voltage DROP across RE.... (VRE = IC x RE).........VRE=??
e). Here is where it gets a little tricky, if your not careful.
VB ={ VEE + (VRE + Vbe)}, this is solved like this, VEE with respect to ground = -5v.
So this partial loop works as follows, (VEE + VRE + Vbe) =VB
which is (-5v. + VRE +Vbe) = VB, You should get a neg. value at this point.
f). Once you get that value, then to continue the loop, you need to algebraicly solve for the
voltage DROP NEEDED across R1 (VR1) that will equal a positive value of VCC.
So when you algebraicly add VCC into this loop you will end back at the begining which
is 0v.
Looks like this, (starting from ground) VEE + VB + VR1 + VCC = 0v.
{-5v. + VB + VR1 -5v.} = 0v.
VR1 =??
g). Now make R2> Rin.
h). Now solve for the voltage DROP ACROSS R2......(VR2)
Use this equation, (VRE + Vbe) = VR2
solve for current thrugh this resistor use this equation, IR2 = (VR2 / R2)= .......IR2=??
I). Solve for R1 use this equation, R1 = (VR1 / IR2) ......R1=??
-----------------------------------------------------------------------------
I simulated it,and had to adjust R1 a little higher than calculated to get close to VC @ 0v.
Remember the voltages are refrenced to ground, where ground is considered the connectrion between both battery supplies.
Have fun.
Last edited:
#### alphacat
Joined Jun 6, 2009
186
Thank you so much Hobbyist and K7elp60!!
I learned so much from the way you started off.
I'd like to ask you 2 questions please regarding how you handled it.
1.
You started off by choosing a collector resistor.
Hobbyist, you said its value should match the load resistor, which is reasonable (so at least we will get half of the output voltage transferred to the load).
What made you start off by picking first RC, and say not picking first IC or any other parameter?
2.
You concluded that
Av = RC / RE.
However, the exact formula is:
Av = gm / (1 + gm * RE) * RC
Meaning, you assumed that:
gm * RE >> 1
However, according to K7elp60's calculations:
gm * RE = 4mS * 500Ω = 2 ~ 1
But according to Hobbyist's calculations:
gm * RE = 20mS * 500Ω = 10 >> 1
So I assume that you always need to check that indeed
gm * RE >> 1,
isnt it?
---EDIT---
Hobbyist, in the equation you wrote for VB:
VB = VEE + VRE + VBE
Should we assume that
VBE ≈ 0.6V ?
Last edited:
#### Jony130
Joined Feb 17, 2009
5,145
Calculations may look like this:
Rc=5V/Ic=5V/1mA=5K=5.1KΩ.
Re≈Rc/Av=5.1K/20=255=220Ω.
Ve≈Vee+Ic*Re=-5V+1mA*220Ω=-5V+220mV=-4.78V
Vb=Ve+Vbe=-4.78V+0.65V=-4.13V
R2 = (Vb-Vee) / (10*Ib)
R1= (Vcc- Vb) / (11*Ib)
So if we assume β=100; Ib=10uA
R2=8.7K=8.1K
R1=83k=81KΩ
But if R1=8.1K we don't get Rin >10K
So we need to find bjt with β larger then 200 e.g. BC548B. (of-course we can change Ic by increase Rc)
Or increase R2 and R1 to:
R2 = (Vb-Vee) / (5*Ib) = 16KΩ
R1= (Vcc- Vb) / (6*Ib)
= 150KΩ
Last edited:
#### hobbyist
Joined Aug 10, 2008
887
Hi,
I should have pointed out, that I was using
first order approximations.
That usually gets you into the ballpark with values close enough to readjust as nesecary.
Your methods are going into a more refined order of calculations, which I leave that up to you, because you have more of an understanding of it then I do. I could learn from you on that order.
The reason I chose RC, was just to get a start on the design, knowing a value for the load (10K), gave me something to work with, and knowing a value for Av. was an easy way to proceed with the design.
I chose R2 to be around 10 x RE to keep the base current from loading the divider, again that is a first order approximation.
A more refined way would be to add in the parrallel resistances plus the (B+1*RE+re) factor ect...
I hope I was able to give you a starting point, which it looks like you now have a good handle on it, so now you can use the more refined equations to work with this design.
Have fun...
Study joni130's post
He sums it up real nice an neatly, with nice short easy to follow equations.
Thankyou for taking the time to reply back, to let us know that you got our posts...
#### alphacat
Joined Jun 6, 2009
186
I really thank you guys.
I managed to design it with your help.
according to the simulation:
Av,int = 18.8
Rin = 56KΩ
Rout =19.89KΩ
In order to get Rin, I kept the VCC and VEE working, and calculated v_in / i_in, its correct right?
And to get Rout, I kept VCC and VEE working, neutralized Vin, connected a voltage source instead of load (from coupling capacitor at output to ground) and calculated v_out/i_out, is it right?
Last edited:
#### Jony130
Joined Feb 17, 2009
5,145
In order to get Rin, I kept the VCC and VEE working, and calculated v_in / i_in, its correct right?
And to get Rout, I kept VCC and VEE working, neutralized Vin, connected a voltage source instead of load (from coupling capacitor at output to ground) and calculated v_out/i_out, is it right?
Yes, its correct.
And you use very low Ic current and this could cause problems in real circuit.
#### alphacat
Joined Jun 6, 2009
186
Hi Jony.
Thank you again.
Why could low IC cause problems?
#### hobbyist
Joined Aug 10, 2008
887
I just quickly seen your schem.
Your VEE is the neg. 5v. but the schematic, shows the neg. term. on both supplies as grounded.
I don't know if your simulation picks up on that, or goes by the absolute value written (-5v.) but the total supply should be 10 volts.
with the 2 batteries in series, and the ground in between.
Hope this makes sense.
If your simulator goes by the actual pic.
then neg. term of VEE needs to be connected to the emitter.
#### alphacat
Joined Jun 6, 2009
186
Hi Hobbyist.
Spice recognizes negative voltages.
So VEE is indeed -5V and not 5V as you might think.
I designed a voltage buffer as well (I will actually build this dual-stage amplifier this week).
I noticed that you can't use a coupling capacitor between the two stages.
Its like I needed to set ahead the CE's output to 0VDC and the CC's input to 0VDC, in order to be able to connect them to each other.
Am I missing something?
Like, what happens if the CE is designed to have an output of 2.5VDC,
while the CC is designed to be biased by 0VDC?
#### hobbyist
Joined Aug 10, 2008
887
Hi
Using dual supplies is pretty new to me too.
So I am kind of stumbling through this myself.
First may I suggest that you put both batteries in series, with the neg. term on one connected to the pos term. on the other, and then write for the voltage "5v." on each, that way it would be much easier to analyze the circuit. And then place the ground in the connection of both of these batteries.
I'm not quite understanding why such high value resistors in the base network.
I'll design a class A stage using your first requirements, using my simulator, and see if I can make things more clearer.
I got to say your doing a good job, at working at this, and trying to get a good handle on it.
I'll be back later, with a schem...
#### PRS
Joined Aug 24, 2008
989
Thank you so much Hobbyist and K7elp60!!
I learned so much from the way you started off.
I'd like to ask you 2 questions please regarding how you handled it.
1.
You started off by choosing a collector resistor.
Hobbyist, you said its value should match the load resistor, which is reasonable (so at least we will get half of the output voltage transferred to the load).
What made you start off by picking first RC, and say not picking first IC or any other parameter?
2.
You concluded that
Av = RC / RE.
However, the exact formula is:
Av = gm / (1 + gm * RE) * RC
Meaning, you assumed that:
gm * RE >> 1
However, according to K7elp60's calculations:
gm * RE = 4mS * 500Ω = 2 ~ 1
But according to Hobbyist's calculations:
gm * RE = 20mS * 500Ω = 10 >> 1
So I assume that you always need to check that indeed
gm * RE >> 1,
isnt it?
---EDIT---
Hobbyist, in the equation you wrote for VB:
VB = VEE + VRE + VBE
Should we assume that
VBE ≈ 0.6V ?
Yes, assume .6 volts for transistor base to emitter drops. Hobbyist did very well with his explanation.
#### hobbyist
Joined Aug 10, 2008
887
Hi
Here is 2 schematics.
This is just to get VC close to ground potential.
Didn't do any simulation with it, to check for linear amplification, and gain and such.
First shows the VC using calculations, and nominal resistor values only.
The second is the VC after adjusting R1.
I had to make R2 greater than 10 x RE so as to meet the Rin requirements. So I chose 12K ohms.
However (B+1) x RE is included when going further in calculations.
All calculations wer done on a first order (approximation) Only.
SIDE NOTE:
By making R1 higher seemed to cause the VC to be closer to design value, HOWEVER,
that may not have been the better thing to do, because if it goes to high then the base current will begin loading and controling the network, thereby causing fluctuations, at the output when its parameters change.
That's why in designing a circuit, it is best to prototype it, and check all parameters, (measurements and all testings), for the desired results.
Last edited:
#### PRS
Joined Aug 24, 2008
989
Here's my approach.
*The parameter Av=20 no load gives you the freedom to pick whatever Rc you want. RL becomes irrelavent. But Av=20 no load means Rc must be 20 times greater than Re, for Av=Rc/Re
* Rin >= 10k means your R1 and R2 resistors in parallel must be greater than 10k to the extent that when you put them in parallel with Rin=B*Re you satisfy the requirement.
* Making the voltage at the collector = 0 is a matter of math. Vc = Vcc -IcRc = 0
* Incidently, there are no requirements on frequency response so high resistances are usable.
Taking all of this into consideration, I'd go with hobbyist's thought and make Rc=10K to maximize the power to the load.
Given that, Re must be <= 10k/20= 500 ohms. But use the standard value Re=470 ohms.
Now the question of the current, Ic. It must be selected such as to give 0 volts at the collector. So this must satisfy Vc=Vcc-IcRc=0. We know Vcc and Rc so Ic must be 0.5 mA. Therefore the voltage at the emitter must be Ve=IeRe= .5mA. Therefore Ve=.5*470=.235 volts. Which makes Vb=.835 volts. (note the appoximation Ic=Ie is due to 10% resistor values and variable Beta of the xistor. To get exactly 0 volts you need a variable resistor in the bias circuit, probably R2.)
Here comes another approximation. The junction of R1 and R2 are taken as simple voltage division, ignoring the small current into the base of the xistor. And this, again, is due to the 10% approximation rule. We are not using exact equations here, but first order equations and non-ideal resistors.
So Vb=.835 volts. Excuse me if I simplify the dual power supply by supposing a single supply of 10 volts. It makes life easier.
[eq 1] [R2/(R1+R2)]*Vcc=Vb This is one equation for determining R1 and R2. The other is this. To make the amplifier work for any value of current gain (Beta) you use the rule of thumb: Idivider= Ic/10. So we will have .5mA/10=.05 mA.
Therefore [eq 2] Vcc*2/(R1+R2)=.05mA. (Vcc*2 is just both supplies taken together).
Solving eq 2 we get R1+R2=200k
And puting this into eq 1 we get R2=16.7k and then from eq2 we get R1=183k
So use R1=180k and R2=15k, uh oh. Do we have Rin>=10k? Let's find out: Rin' is the resistance looking into the xistor. Assuming a Beta of 100, this reflects Re=470 into the base as Beta*470=47000. Thus we have"
Rin=1/47000 + 1/180000 + 1/15000 = 10,695 ohms and we passed!
But to get exactly 0 volts at the collector use a pot for R2.
Last edited:
#### Jony130
Joined Feb 17, 2009
5,145
All this problems with finale adjustment of a R1, R2 is caused by Ve voltage is smeller then Vbe.
If we "proper" design the amplifier with Ve large then Vbe and Idivider= Ic/10 rule. Then the influence of a BJT parameters decrease significantly.
For example for Ve=1V; and Ve=2V and Hfe chance from 95 to 450.
So for Ve=1V ; Ic change from 509uA to 563uA (change only by 10.6%).
------ Ve=2V ; Ic change from 505uA to 545uA (change by 8%)
And for Ve=0.22mV; Ic change from 240uA to 428uA (change by 78%).
So if we design the simple amplifier its always good to choose Ve large then Vbe. And large Ve improves thermal stability to.
And to set the gain we need to add a extra Re2 resistor plus capacitor Ce.
But if you wont better discrete amplifier ( DC-coupling) why don't you use long tailed pair as a input stage.
This type of amplifiers is very easy to design and they have much better parameters then simple BJT amplifiers.
#### Attachments
• 19.3 KB Views: 32
• 18.9 KB Views: 25
• 19.6 KB Views: 25
• 49.9 KB Views: 30
• 16.4 KB Views: 129
Last edited:
#### PRS
Joined Aug 24, 2008
989
Very well said, Joni, and you made good points. But doesn't this go well beyond the simplicity of the original circuit?
#### PRS
Joined Aug 24, 2008
989
Here's a circuit based on the design considerations and my calculations above. I changed R2 from 15 k to 18 k (standard values), and got nearly 0 volts at the collector. The waveform is fine as you can see in the simulation attached. Hope this helped. By the way, to get 0 volts at the collector dead on, you need a variable resistor in place of R1 or R2 or in series with one of them.
#### Attachments
• 158.6 KB Views: 35
#### Audioguru
Joined Dec 20, 2007
11,251
Why does Jony's simulator show the negative battery connected backwards?
|
2020-04-04 16:15:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6151379346847534, "perplexity": 4881.269319867088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370524043.56/warc/CC-MAIN-20200404134723-20200404164723-00123.warc.gz"}
|
http://electronics.stackexchange.com/questions/29825/does-layer-order-matter-in-a-four-layer-pcb/29827
|
# Does layer order matter in a four layer PCB?
The PCB doesn't have blind vias. In a four layer PCB the inner layers will often be a ground and a power plane. Does it matter which one you place closer to the component side?
-
It does. You want ground plane to be closest to your main signal layer. – Armandas Apr 13 '12 at 8:46
In an ideal world $V_{CC}$ is the same as ground for AC signals, and then the order doesn't matter. In practice ground and power net impedances aren't zero and there are noise signals between them.
The picture shows how signals on the top layer are coupled to the $V_{CC}$ layer, not to ground. Especially HF signals you'll want on the layer closest to ground, here layer 4.
For low power, LF designs the order will not make much difference.
Picture from this paper
-
|
2013-12-13 15:40:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4381439685821533, "perplexity": 1826.1403416268295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164954485/warc/CC-MAIN-20131204134914-00092-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://byjus.com/question-answer/if-a-and-b-are-two-matrices-such-that-ab-b-ba-a-then-a/
|
Question
# If $$A$$ and $$B$$ are two matrices such that $$AB = B, BA = A$$ then $$[A$$ and $$B$$ are not null matrices].
A
A2=A
B
B2=0
C
AB is idempotent
D
AB is nilpotent
Solution
## The correct option is A $$A^{2} = A$$$$AB=B$$ $$\&$$ $$BA=A$$$$\Rightarrow AB=B$$$$\Rightarrow (BA)B=B$$$$\Rightarrow B(AB)=B$$$$\Rightarrow B.B=B$$$$\Rightarrow B^2=B$$Similarly, $$A^2=A$$Hence, the answer is $$A^2=A.$$Maths
Suggest Corrections
0
Similar questions
View More
People also searched for
View More
|
2022-01-21 02:34:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8812011480331421, "perplexity": 2567.820760684384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302715.38/warc/CC-MAIN-20220121010736-20220121040736-00138.warc.gz"}
|
https://testbook.com/question-answer/a-4-0-ampere-ammeter-has-a-resistance-of-0-01-ome--615da22e83637eb6c396ad50
|
A 4.0 ampere ammeter has a resistance of 0.01 Ω. Determine the efficiency of the instrument.
This question was previously asked in
UPPCL JE Electrical 7 Sept 2021 Official Paper (Shift 2)
View all UPPCL JE Papers >
1. 25 A/W
2. 25.01 A/W
3. 16 A/W
4. 1.6 A/W
Option 1 : 25 A/W
Detailed Solution
Effiiciency of Instruments:
The efficiency of instruments is the ratio of voltage and current rating to the power rating.
For ammeter, the current rating is considered and for voltmeter, the voltage rating is considered.
Formula:
For Ammeter: P = I2R
For Voltmeter: $$P=\frac{V^2}{R}$$
Application:
The given instrument is ammeter,
I = 4 A
R = 0.01 Ω
Hence, P = I2R = 42 × 0.01 = 0.16 W
Hence, efficiency = I/P = 4/0.16 = 25 A/W
|
2022-01-18 16:21:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7092740535736084, "perplexity": 9970.604796444486}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300934.87/warc/CC-MAIN-20220118152809-20220118182809-00264.warc.gz"}
|
https://docs.ropensci.org/changes/articles/changes.html
|
## Some simple steps to success!
1. Start a new repository (or download an existing one). What is a repository, you ask? A repository is a folder in which you store your project. The folder saves previous versions of your project and allows you to easily access them. For example:
#Create a brand new repository in a new or existing local project. Defaults to your current working directory
create_repo("~/Desktop/myproject")
download_repo("url")
1. Make some changes: work on your project as normal. Don’t forget to save your changes. [insert picture/code here]
2. Review and visualise the changes you have made to your project. [insert picture here]
changes()
1. Once you are happy with your changes, record them in your repository. Your repository records a snapshot of your current project, adding it to the list of previous versions.
[insert picture here]
# automatically performs all of the steps to record your changes in your repository
record("a message to your future self")
## Fixing stuff, moving back, recording stuff
1. Look at your history of records
# print a history of your past records in your console:
timeline()
1. Fixing stuff!
# Made a mistake? Return your project to your last record:
scrub()
# ...or to another previous record of your choice:
retrieve(1)
# take a peek into any older record
go_to(2)
1. Saving stuff online When you work on your computer, you are usually working in what is called the ‘local’ environment. The local environment encompasses anything housed on your computer’s hard drive. If you want to collaborate or make your work available to others, it’s a good idea to put it ‘in the cloud,’ in other words, in a ‘remote’ repository. These are housed on a server somewhere else in the world, and can be accessed online. This can also provide additional safety in case you lose your work in your local environment.
# Synchronize your work with a new or existing remote repository
sync("url") # (not yet implemented)
# An example workflow
# load the package
library(changes)
# make a new repository in your working directory
# (you only need to do this the first time you work with the project)
create_repo("~/Desktop/myproject")
# tell the repository if there are files (e.g. large data output files) you
# don't want to keep copies of
ignore("output/results.csv")
# (you can always change your mind)
unignore("output/results.csv")
cat("this is fun!\n", file = "README.md", append = TRUE)
# see which files have changed
changes()
# record the changes you have made
# you can keep working on and adding files in this folder, and recording your
# changes regularly with record()
# If you make a change you don't want to keep, you can undo them and go back to
# your last record with scrub()
cat("I could do this all day.\n", file = "README.md", append = TRUE)
changes()
scrub()
changes()
# you can look at your history of records...
timeline()
# ... and go back in time to recover the project at any one of your records
go_to(2)
# all of the files will have been changed back to how they were at record 2
timeline()
# we can always go back to the future
go_to(3)
# if you want to start again from a previous record, you can do retrieve() to
# bring that record to the end of your timeline
retrieve(2)
# all the work you recorded since then will still be stored, in case you need it
# later
|
2020-11-26 01:51:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23119117319583893, "perplexity": 2260.7451213861737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141185851.16/warc/CC-MAIN-20201126001926-20201126031926-00355.warc.gz"}
|
https://mhasoba.github.io/TheMulQuaBio/notebooks/20-ModelFitting-NLLS.html
|
# Model Fitting using Non-linear Least-squares#
## Introduction#
In this Chapter, you will learn to fit non-linear mathematical models to data using Non-Linear Least Squares (NLLS).
Specifically, you will learn to
• Visualize the data and the mathematical model you want to fit to them
• Fit a non-linear model
• Assess the quality of the fit, and whether the model is appropriate for your data
• Compare and select between competing models
We will work through various examples. These assume that you have at least a conceptual understanding of what Linear vs Non-linear models are, how they are fitted to data, and how the fits can be assessed statistically. You may want to see the Linear Models lecture (you can also watch the video), and the NLLS Lecture lecture first (you can also watch the video).
You may also (optionally) want to see the lecture on model fitting in Ecology and Evolution in general.
We will use R. For starters, clear all variables and graphic devices and load necessary packages:
rm(list = ls())
graphics.off()
## Traits data as an example#
Our first set of examples will focus on traits.
A trait is any measurable feature of an individual organism. This includes physical traits (e.g., morphology, body mass, wing length), performance traits (e.g., biochemical kinetics, respiration rate, body velocity, fecundity), and behavioral traits (e.g., feeding preference, foraging strategy, mate choice). All natural populations show variation in traits across individuals. A trait is functional when it directly (e.g., mortality rate) or indirectly (e.g., somatic development or growth rate) determines individual fitness. Therefore, variation in (functional) traits can generate variation in the rate of increase and persistence of populations. When measured in the context of life cycles, without considering interactions with other organisms (e.g., predators or prey of the focal population), functional traits are typically called life history traits (such as mortality rate and fecundity). Other traits determine interactions both within the focal population (e.g., intra-specific interference or mating frequency) and between the focal population/species and others, including the species which may act as resources (prey, for example). Thus both life history and interaction traits determine population fitness and therefore abundance, which ultimately influences dynamics and functioning of the wider ecosystem, such as carbon fixation rate or disease transmission rate.
## Biochemical Kinetics#
The properties of an organism’s metabolic pathways, and the underlying (enzyme-mediated) biochemical reactions (kinetics) are arguably its most fundamental “traits”, because these drive all “performance” traits, from photosynthesis and respiration, to movement and growth rate.
The Michaelis-Menten model is widely used to quantify reaction kinetics data and estimate key biochemical parameters. This model relates biochemical reaction rate ($$V$$) (rate of formation of the product of the reaction), to concentration of the substrate ($$S$$):
(7)#$V = \frac{V_{\max} S}{K_M + S}$
Here,
• $$V_{\max}$$ is the maximum rate that can be achieved in the reaction system, which happens at saturating substrate concentration (as $$S$$ gets really large), and
• $$K_M$$ is the Michaelis or half-saturation constant, defined as the substrate concentration at which the reaction rate is half of $$V_{\max }$$. This parameter controls the overall shape of the curve, i.e., whether $$V$$ approaches $$V_{\max}$$ slowly or rapidly. In enzyme catalyzed reactions, it measures how loosely the substrate binds the enzyme: large $$K_M$$ indicates loose binding of enzyme to substrate, small $$K_M$$ indicates tight binding (it has units of the substrate concentration, $$S$$).
Biochemical reactions involving a single substrate are often well fitted by the Michaelis-Menten kinetics.
The Michaelis-Menten model.
Let’s fit the Michaelis-Menten model to some data.
### Generating data#
Instead of using real experimental data, we will actually generate some “data” because that way we know exactly what the errors in the data are. You can also import and use your own dataset for the fitting steps further below.
We can generate some data as follows.
First, generate a sequence of substrate concentrations from 1 to 50 in jumps of 5, using seq() (look up the documentation for seq()).
S_data <- seq(1,50,5)
S_data
1. 1
2. 6
3. 11
4. 16
5. 21
6. 26
7. 31
8. 36
9. 41
10. 46
Note that because we generated values only at intervals of, there will be 50/5 = 10 “substrate” values.
Now generate a Michaelis-Menten reaction velocity response with V_max = 12.5 and K_M = 7.1:
V_data <- ((12.5 * S_data)/(7.1 + S_data))
plot(S_data, V_data)
Note that our choice of $$V_{\max} = 12.5$$ and $$K_M = 7.1$$ is completely arbitrary. As long as we make sure that $$V_{\max} > 0$$, $$K_H > 0$$, and $$K_M$$ lies well within the lower half of the the range of substrate concentrations (0-50), these “data” will be physically biologically sensible.
Now let’s add some random (normally-distributed) fluctuations to the data to emulate experimental / measurement error:
set.seed(1456) # To get the same random fluctuations in the "data" every time
V_data <- V_data + rnorm(10,0,1) # Add 10 random fluctuations with standard deviation of 1 to emulate error
plot(S_data, V_data)
That looks real!
## Fitting the model using NLLS#
Now, fit the model to the data:
MM_model <- nls(V_data ~ V_max * S_data / (K_M + S_data))
Warning message in nls(V_data ~ V_max * S_data/(K_M + S_data)):
“No starting values specified for some parameters.
Initializing ‘V_max’, ‘K_M’ to '1.'.
Consider specifying 'start' or using a selfStart model”
This warning arises because nls requires “starting values” for the parameters (two in this case: V_max and K_M) to start searching for optimal combinations of parameter values (ones that minimize the RSS). Indeed, all NLLS fitting functions / algorithms require this. If you do not provide starting values, nls gives you a warning (as above) and uses a starting value of 1 for every parameter by default. For simple models, despite the warning, this works well enough.
Tip
Before proceeding further, have a look at what nls()’s arguments are using ?nls, or looking at the documentation online.
We will address the issue of starting values soon enough, but first let’s look at how good the fit that we obtained looks.
### Visualizing the fit#
The first thing to do is to see how well the model fitted the data, for which plotting is the best first option:
plot(S_data,V_data, xlab = "Substrate Concentration", ylab = "Reaction Rate") # first plot the data
lines(S_data,predict(MM_model),lty=1,col="blue",lwd=2) # now overlay the fitted model
This looks OK.
Note
We used the predict() function here just as we did in any of the linear models chapters (e.g., here). In general, you can use most of the same commands/functions (e.g., predict() and summary()) on the output of a nls() model fitting object as you would on a lm() model fitting object. Please have a look at the documentation of the predict function for nls before proceeding.
However, the above approach for plotting is not the best way to do it, because predict(), without further arguments (see its documentation), by default only generates predicted values for the actual x-values (substrate) data used to fit the model. So if there are very few values in the original data, you will not get a smooth predicted curve (as you can see above). A better approach is to generate a sufficient number of x-axis values and then calculate the predicted line. Let’s do it:
coef(MM_model) # check the coefficients
V_max
12.9636891133139
K_M
10.6072253230542
Substrate2Plot <- seq(min(S_data), max(S_data),len=200) # generate some new x-axis values just for plotting
Predict2Plot <- coef(MM_model)["V_max"] * Substrate2Plot / (coef(MM_model)["K_M"] + Substrate2Plot) # calculate the predicted values by plugging the fitted coefficients into the model equation
plot(S_data,V_data, xlab = "Substrate Concentration", ylab = "Reaction Rate") # first plot the data
lines(Substrate2Plot, Predict2Plot, lty=1,col="blue",lwd=2) # now overlay the fitted model
That looks much better (smoother) than the plot above !
### Summary stats of the fit#
Now lets get some stats of this NLLS fit. Having obtained the fit object (MM_model), we can use summary() just like we would for a lm() fit object:
summary(MM_model)
Formula: V_data ~ V_max * S_data/(K_M + S_data)
Parameters:
Estimate Std. Error t value Pr(>|t|)
V_max 12.964 1.221 10.616 5.42e-06 ***
K_M 10.607 3.266 3.248 0.0117 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.8818 on 8 degrees of freedom
Number of iterations to convergence: 5
Achieved convergence tolerance: 9.503e-06
This looks a lot like the output of a linear model, and to be specific, of a Linear Regression. For starters, compare the above output with the output of summary(genomeSizeModelDragon) in this section of the Linear Regression chapter.
So here are the main things to note about the output of summary() of an nls() model object:
• Estimates are, as in the output of the lm() function for fitting linear models, the estimated values of the coefficients of the model that you fitted ($$V_{\max}$$ and $$K_M$$). Note that although we generated our data using $$V_{\max} = 12.5$$ and $$K_M = 7.1$$, the actual coefficients are quite different from what we are getting with the NLLS fitting ($$\hat{V}_{\max} = 12.96$$ and $$\hat{K}_M = 10.61$$). This is because we added random (normally-distributed) errors when we generated the “data”. This tells you something about how experimental and/or measurement errors can distort your image of the underlying mechanism or process.
• Std. Error, t value, and Pr(>|t|) and Residual standard error have the same interpretation as in the output of lm() (please look back at the Linear Regression Chapter)
• Number of iterations to convergence tells you how many times the NLLS algorithm had to adjust the parameter values till it managed to find a solution that minimizes the Residual Sum of Squares (RSS)
• Achieved convergence tolerance tells you on what basis the algorithm decided that it was close enough to the a solution; basically if the RSS does not improve more than a certain threshold despite parameter adjustments, the algorithm stops searching. This may or may not be close to an optimal solution (but in this case it is).
The last two items are specific to the output of an nls() fitting summary(), because unlike Ordinary Least Squares (OLS), which is what we used for Linear regression, NLLS is not an exact procedure, and the fitting requires computer simulations; revisit the Lecture for an explanation of this. This is all you need to know for now. As such, you do not need to report these last two items when presenting the results of an NLLS fit, but they are useful for problem solving in case the fitting does not work (more on this below).
As noted above, you can use the same sort of commands on a nls() fitting result as you can on a lm() object.
Thus, much of the output of NLLS fitting using nls() is similar to the output of an lm(), and can be further analyzed or processed using analogous functions such as coef(), residuals(), and confint().
For example, you can get just the values of the estimated coefficients using coef() as you did above for the plotting.
## Statistical inference using NLLS fits#
So what do we do with the results of an NLLS fit? What statistical inferences can be made? We will address this issue here at a basic level, and then revisit it using model selection further below (in the Allometric growth example).
### Goodness of fit#
Assessing whether an NLLS-fitted model is “significant” based on an F-value, by performing an Analysis of Variance (ANOVA) on the regression results as one can do for a Linear Model using the anova() function on a linear model fit is not advisable. Try anova(MM_model), and see what happens:
anova(MM_model)
Error in anova.nls(MM_model): anova is only defined for sequences of "nls" objects
Traceback:
1. anova(MM_model)
2. anova.nls(MM_model)
3. stop("anova is only defined for sequences of \"nls\" objects")
So a ANOVA calculation is not implemented for a NLLS fit. The third item in the error message / traceback above says that you can however use anova() to compare two nested models. That is, you can do something like anova(model1,model2), where model 1 is nested within model 1, as we did previously to ask if a simpler model explains significantly more variation in the data than a more complicated one (with one or more addtional terms in the equation).
To understand why you cannot use an ANOVA to assess the significance of a NLLS model fit, recall that the objective of an anova() function applied to a linear model fit is o compare the fitted model to a null model, asking whether you can reject that null model (hypothesis). The problem with doing so for a nonlinear model is that unfortunately it is not clear what the null hypothesis should be when the model is not a linear combination of terms (the definition of a linear model). For an F-statistic to be meaningful, the null hypothesis model must be nested within the alternative model (in which case you can indeed use the anova() function using anova(Model1,Model2)).
To make this clear, let’s recall the equation for a simple linear regression:
$y = \beta_0 + \beta_1 x$
The null model for this is
$y = \beta_0$
This then allows the TSS, RSS and ESS to be calculated in a meaninglful way, as we did before, leading to an ANOVA based assessment of significance of the regrssion moel fit to the data.
Now consider a nonlinear model like the Michaelis-Menten eqn (7). What would be an appropriate null model for this equation?
Thus, the best ways to assess a NLLS model’s fit are:
• Compare it’s likelihood to those of other alternative models’ fits (which may include your best guess for a null model).
• Examine whether the fitted coefficients are reliable, i.e., are significant, based on their (low) standard errors, (high) t-values, and (low) p-values (see the example model’s summary above).
#### R-squared values#
To put it simply, unlike an R$$^2$$ value obtained by fitting a linear model, that obtained from NLLS fitting is not reliable, and should not be used. The reason for this is somewhat technical (e.g., see this paper) and we won’t go into it here. But basically, NLLS R$$^2$$ values do not always accurately reflect the quality of fit, and definitely cannot be used to select between competing models (Model selection, as you learned previously). Indeed R$$^2$$ values obtained from NLLS fitting even be negative when the model fits very poorly! We will learn more about model selection with non-linear models later below.
### Confidence Intervals#
One particularly useful thing you can do after NLLS fitting is to calculate/construct the confidence intervals (CI’s) around the estimated parameters in our fitted model. This is analogous to how we would in the OLS fitting used for Linear Models:
confint(MM_model)
Waiting for profiling to be done...
A matrix: 2 × 2 of type dbl
2.5%97.5%
V_max10.64047817.00502
K_M 4.92454722.39247
The Waiting for profiling to be done... message reflects the fact that calculating the standard errors from which the CI’s are calculated requires a particular computational procedure (which we will not go into here) when it comes to NLLS fits.
Calculating confidence intervals can be useful because,
1. As you learned here, here, and here (among other places) you can use a coefficient/parameter estimate’s confidence intervals to test whether it is significantly different from some reference value. In our CI example above, the intervals for K_M do in fact include the original value of $$K_M = 7.1$$ that we used to generate the data.
2. Confidence intervals can also be used to do a quick (but not robust - see warning below) test of whether coefficient estimates of the same model coefficient obtained from different populations (samples) are significantly different from each other (If their CI’s don’t overlap, they are significantly different).
Warning
You can compare estimates of the same coefficient (parameter) from samples of different populations: if their CI’s don’t overlap, this indicates a statistically significant difference between the two populations as far as that coefficient is concerned (e.g., for 95% CI’s this means at the 0.05 level of significance or p-value). However, the opposite is not necessarily true: when CI’s overlap, there may still be a statistically significant difference between the coefficients (and therefore, populations).
## The starting values problem#
Now let’s revisit the issue of starting values in NLLS fitting. Previously, we fitted the Michaelis-Menten Model without any starting values, and R gave us a warning but managed to fit the model to our synthetic “data” using default starting values.
### Fitting the model with starting values#
Lets try the NLLS fitting again, but with some particular starting values:
MM_model2 <- nls(V_data ~ V_max * S_data / (K_M + S_data), start = list(V_max = 12, K_M = 7))
Note that unlike before, we got no warning message about starting values.
As will become apparent below, using sensible starting values is important in NLLS fitting.
Note
How do you find “sensible” starting values for NLLS fitting? This very much depends on your understanding of the mathematical model that is being fitted to the data, the mechanistic interpretation of its parameters, and the specific dataset. For example, in the Michaelis-Menten Model example, we can know that $$V_{\max}$$ is maximum reaction velocity and $$K_M$$ is the value of the substrate concentration at which half of $$V_{\max}$$ is reached. So we can choose starting values by “eye-balling” a particular dataset and determining approximately what the V_maxand K_M are be for that particular dataset. In this particular case, we chose V_max = 12 and K_M = 7 because looking at the data plot above, these values seem to be reasonable guesses for the two parameters.
Let’s compare the coefficient estimates from our two different model fits to the same dataset:
coef(MM_model)
coef(MM_model2)
V_max
12.9636891133139
K_M
10.6072253230542
V_max
12.9636297453629
K_M
10.6070555266004
Not too different, but not exactly the same!
In contrast, when you fit linear models you will get exactly the same coefficient estimates every single time, because OLS is an exact procedure.
Now, let’s try even more different start values:
MM_model3 <- nls(V_data ~ V_max * S_data / (K_M + S_data), start = list(V_max = .01, K_M = 20))
Compare the coefficients of this model fit to the two previous ones:
coef(MM_model)
coef(MM_model2)
coef(MM_model3)
V_max
12.9636891133139
K_M
10.6072253230542
V_max
12.9636297453629
K_M
10.6070555266004
V_max
0.816129930954726
K_M
-19.4993655809431
The estimates in our latest model fit are completely different (in fact, K_M is negative)! Let’s plot this model’s and the first model’s fit together:
plot(S_data,V_data) # first plot the data
lines(S_data,predict(MM_model),lty=1,col="blue",lwd=2) # overlay the original model fit
lines(S_data,predict(MM_model3),lty=1,col="red",lwd=2) # overlay the latest model fit
As you would have guessed from the really funky coefficient estimates that were obtained in MM_model3, this is a pretty poor model fit to the data, with the negative value of K_M causing the fitted version of the Michaelis-Menten model to behave strangely.
Also, you could have used the nicer plotting approach that was introduced before.
Let’s try with even more different starting values.
nls(V_data ~ V_max * S_data / (K_M + S_data), start = list(V_max = 0, K_M = 0.1))
Error in nlsModel(formula, mf, start, wts, scaleOffset = scOff, nDcentral = nDcntr): singular gradient matrix at initial parameter estimates
Traceback:
1. nls(V_data ~ V_max * S_data/(K_M + S_data), start = list(V_max = 0,
. K_M = 0.1))
2. nlsModel(formula, mf, start, wts, scaleOffset = scOff, nDcentral = nDcntr)
3. stop("singular gradient matrix at initial parameter estimates")
The singular gradient matrix at initial parameter estimates error arises from the fact that the starting values you provided were so far from the optimal solution, that the parameter searching in nls() failed at the very first step. The algorithm could not figure out where to go from those starting values. In fact, the starting value we gave it is biologically/ physically impossible, because V_max can’t equal 0.
Let’s look at another pair of starting values that causes the model fitting to fail:
nls(V_data ~ V_max * S_data / (K_M + S_data), start = list(V_max = -0.1, K_M = 100))
Error in nls(V_data ~ V_max * S_data/(K_M + S_data), start = list(V_max = -0.1, : singular gradient
Traceback:
1. nls(V_data ~ V_max * S_data/(K_M + S_data), start = list(V_max = -0.1,
. K_M = 100))
In this case, the model fitting did get started, but eventually failed, again because the starting values were too far from the (approximately) optimal values ($$V_{\max} \approx 12.96, K_M \approx 10.61$$).
Note
There are other types of errors (other than the “singular gadient matrix” one) that the nlls fitting can run into because of poor starting parameter values.
And what happens if we start really close to the optimal values? Let’s try:
MM_model4 <- nls(V_data ~ V_max * S_data / (K_M + S_data), start = list(V_max = 12.96, K_M = 10.61))
coef(MM_model)
coef(MM_model4)
V_max
12.9636891133139
K_M
10.6072253230542
V_max
12.9636623970243
K_M
10.6071489188677
The results of the first model fit and this last one are still not exactly the same! This drives home the point that NLLS is not an “exact” procedure. However, the differences between these two solutions are minuscule, so the main thing to take away is that if the starting values are reasonable, NLLS is exact enough.
Note that and even if you started the NLLS fitting with the exact parameter values with which you generated the data before introducing errors (so use start = list(V_max = 12.5, K_M = 7.1) above instead), you would still get the same result for the coefficients (try it). This is because the NLLS fitting will converge back to the parameter estimates based on the actual data, errors and all.
## A more robust NLLS algorithm#
The standard NLLS function in R, nls, which we have been using so far, does the NLLS fitting by implementing an algorithm called the Gauss-Newton algorithm. While the Gauss-Newton algorithm works well for most simple non-linear models, it has a tendency to “get lost” or “stuck” while searching for optimal parameter estimates (that minimize the residual sum of squares, or RSS). Therefore, nls will often fail to fit your model to the data if you start off at starting values for the parameters that are too far from what the optimal values would be, as you saw above (e.g., when you got the singular gradient matrix error).
Some nonlinear models are especially difficult for nls to fit to data because such model have a mathematical form that makes it hard to find parameter combinations that minimize the residual sum of squared (RSS). If this does not makes sense, don’t worry about it.
One solution to this is to use a different algorithm than Gauss-Newton. nls() has one other algorithm that can be more robust in some situations, called the “port” algorithm. However, there is a better solution still: the Levenberg-Marqualdt algorithm, which is less likely to get stuck (is more robust than) than Gauss-Newton (or port). If you want to learn more about the technicalities of this, here are here are good places to start (also see the Readings list at the end of this chapter).
To be able to use nlsLM, we will need to switch to a different NLLS function called nlsLM. In order to be able to use nlsLM, you will need the nls.lm R package, which you can install using the method appropriate for your operating system (e.g., linux users will launch R in sudo mode first) and then use:
> install.packages("minpack.lm")
Now load the minpack.lm package:
require("minpack.lm")
Now let’s try it (using the same starting values as MM_model2 above):
MM_model5 <- nlsLM(V_data ~ V_max * S_data / (K_M + S_data), start = list(V_max = 12, K_M = 7))
Now compare the nls and nlsLM fitted coefficients:
coef(MM_model2)
coef(MM_model5)
V_max
12.9636297453629
K_M
10.6070555266004
V_max
12.9636298178077
K_M
10.6070557350073
Close enough.
Now, let’s try fitting the model using all those starting parameter combinations that failed previously:
MM_model6 <- nlsLM(V_data ~ V_max * S_data / (K_M + S_data), start = list(V_max = .01, K_M = 20))
MM_model7 <- nlsLM(V_data ~ V_max * S_data / (K_M + S_data), start = list(V_max = 0, K_M = 0.1))
MM_model8 <- nlsLM(V_data ~ V_max * S_data / (K_M + S_data), start = list(V_max = -0.1, K_M = 100))
coef(MM_model6)
coef(MM_model7)
coef(MM_model8)
V_max
12.9636487209143
K_M
10.6071097943627
V_max
12.9636360068123
K_M
10.6070734344093
V_max
12.9636401038953
K_M
10.6070851518134
Nice, these all worked with nlsLM even though they had failed with nls!
But nlsLM also has its limits. Let’s try more absurd starting values:
nlsLM(V_data ~ V_max * S_data / (K_M + S_data), start = list(V_max = -10, K_M = -100))
Error in nlsModel(formula, mf, start, wts): singular gradient matrix at initial parameter estimates
Traceback:
1. nlsLM(V_data ~ V_max * S_data/(K_M + S_data), start = list(V_max = -10,
. K_M = -100))
2. nlsModel(formula, mf, start, wts)
3. stop("singular gradient matrix at initial parameter estimates")
Here again, NLLS fitting fails because these starting values are just too far from the best solution.
## Bounding parameter values#
You can also bound the starting values, i.e., prevent them from exceeding some minimum and maximum value during the NLLS fitting process.
For example let’s first re-run the fitting without bounding the parameters (and some relatively-far-from-optimal starting values):
nlsLM(V_data ~ V_max * S_data / (K_M + S_data), start = list(V_max = 0.1, K_M = 0.1))
Nonlinear regression model
model: V_data ~ V_max * S_data/(K_M + S_data)
data: parent.frame()
V_max K_M
12.96 10.61
residual sum-of-squares: 6.22
Number of iterations to convergence: 12
Achieved convergence tolerance: 1.49e-08
Now, the same with parameter bounds:
nlsLM(V_data ~ V_max * S_data / (K_M + S_data), start = list(V_max = 0.1, K_M = 0.1), lower=c(0.4,0.4), upper=c(100,100))
Nonlinear regression model
model: V_data ~ V_max * S_data/(K_M + S_data)
data: parent.frame()
V_max K_M
12.96 10.61
residual sum-of-squares: 6.22
Number of iterations to convergence: 10
Achieved convergence tolerance: 1.49e-08
So when you added bounds, the solution was found in two fewer iterations (not a spectacular improvement, but an improvement nevertheless).
Note
The nls() function too has an option to provide lower and upper parameter bounds, but that is only in effect available when using algorithm = "port" (only available for a particular algorithm).
However, if you bound the parameters too much (to excessively narrow ranges), the algorithm cannot search sufficient parameter space (combinations of parameters), and will fail to converge on a good solution. For example:
nlsLM(V_data ~ V_max * S_data / (K_M + S_data), start = list(V_max = 0.5, K_M = 0.5), lower=c(0.4,0.4), upper=c(20,20))
Nonlinear regression model
model: V_data ~ V_max * S_data/(K_M + S_data)
data: parent.frame()
V_max K_M
16.09 20.00
residual sum-of-squares: 9.227
Number of iterations to convergence: 3
Achieved convergence tolerance: 1.49e-08
Here the algorithm converged on a poor solution, and in fact took fewer iterations (3) than before to do so. This is because it could not explore sufficient parameter combinations of V_max and K_M as we have narrowed the range that both these parameters could be allowed to take during the optimization too much.
## Diagnostics of an NLLS fit#
NLLS regression carries the same three key assumptions as Linear models:
• No (in practice, minimal) measurement error in explanatory/independent/predictor variable ($$x$$-axis variable)
• Data have constant normal variance — errors in the $$y$$-axis are homogeneously distributed over the $$x$$-axis range
• The measurement/observation errors are Normally distributed (Gaussian)
At the very least, it is a good idea to plot the residuals of a fitted NLLS model. Let’s do that for our Michaelis-Menten Model fit:
hist(residuals(MM_model6))
The residuals look OK. But this should not come as a surprise because we generated these “data” ourselves using normally-distributed errors!
You may also want to look at further diagnostics, as we did previously in the case of Linear models. The most convenient way to do this is to use the nlstools package. We will not go into it here, but you can have a look at its documentation. Note that you will need to install this package as it is not one of the core (base) R packages. nlstools has some other handy utilities as well; for example, its preview command allows you to visualise how good the starting values are (by evaluating your non-linear function at those values) in advance of the actual fitting.
Note
For the remaining examples, we will switch to using nlsLM instead of nls.
## Allometric scaling of traits#
Now let’s move on to a very common class of traits in biology: physical traits like body weight, wing span, body length, limb length, eye size, ear width, etc.
We will look at a very common phenomenon called allometric scaling. Allometric relationships between linear measurements such as body length, limb length, wing span, and thorax width are a good way to obtain estimates of body weights of individual organisms. We will look at allometric scaling of body weight vs. total body length in dragonflies and damselfiles.
Allometric relationships take the form:
(8)#$y = a x^b$
where $$x$$ and $$y$$ are morphological measures (body length and body weight respectively, in our current example), the constant is the value of $$y$$ at body length $$x = 1$$ unit, and $$b$$ is the scaling “exponent”. This is also called a power-law, because $$y$$ relates to $$x$$ through a simple power.
Let’s fit a power low to a typical allometric relationship: The change in body weight vs change in body length. In general, this relationship is a allometry; that is, body weight does not increase proportionally with some measure of body length.
First, let’s look at the data. You can get the data here (first click on link and use “Save as” or Ctrl+S to download it as a csv).
$$\star$$ Save the GenomeSize.csv data file to your data directory, and import it into your R workspace:
MyData <- read.csv("../data/GenomeSize.csv") # using relative path assuming that your working directory is "code"
A data.frame: 6 × 16
<chr><chr><chr><dbl><dbl><int><dbl><dbl><dbl><dbl><dbl><dbl><dbl><dbl><dbl><int>
2AnisopteraAeshnidaeAeshna constricta 1.760.0640.22871.976.8410.7254.4146.0045.48411.15517.383
3AnisopteraAeshnidaeAeshna eremita 1.85 NA10.31278.806.2716.1956.3351.2449.47460.72574.331
4AnisopteraAeshnidaeAeshna tuberculifera1.780.1020.21872.446.6212.5353.2949.8448.82468.74591.422
5AnisopteraAeshnidaeAeshna umbrosa 2.00 NA10.20773.054.9211.1157.0346.5145.97382.48481.441
6AnisopteraAeshnidaeAeshna verticalis 1.59 NA10.22066.256.4811.6448.1345.9144.91400.40486.971
Anisoptera are dragonflies, and Zygoptera are Damselflies. The variables of interest are BodyWeight and TotalLength. Let’s use the dragonflies data subset.
So subset the data accordingly and remove NAs:
Data2Fit <- subset(MyData,Suborder == "Anisoptera")
Data2Fit <- Data2Fit[!is.na(Data2Fit$TotalLength),] # remove NA's Plot the data: plot(Data2Fit$TotalLength, Data2Fit$BodyWeight, xlab = "Body Length", ylab = "Body Weight") Or, using ggplot: library("ggplot2") ggplot(Data2Fit, aes(x = TotalLength, y = BodyWeight)) + geom_point(size = (3),color="red") + theme_bw() + labs(y="Body mass (mg)", x = "Wing length (mm)") You can see these body weights of dragonflies does not increase proportionally with body length – they curve upwards w.r.t. wing length (so the allometric constant $$b$$ in eqn (8) mustbe greater than 1), instead of increasing as a straight line (in which case $$b = 1$$ (isometry, instead of allometry). Now fit the model to the data using NLLS: nrow(Data2Fit) 60 PowFit <- nlsLM(BodyWeight ~ a * TotalLength^b, data = Data2Fit, start = list(a = .1, b = .1)) ### NLLS fitting using a model object# Another way to tell nlsLM which model to fit, is to first create a function object for the power law model: powMod <- function(x, a, b) { return(a * x^b) } Now fit the model to the data using NLLS by calling the model: PowFit <- nlsLM(BodyWeight ~ powMod(TotalLength,a,b), data = Data2Fit, start = list(a = .1, b = .1)) Which gives the same result as before (you can check it). ### Visualizing the fit# The first thing to do is to see how well the model fitted the data, for which plotting is the best first option. So let’s visualize the fit. For this, first we need to generate a vector of body lengths (the x-axis variable) for plotting: Lengths <- seq(min(Data2Fit$TotalLength),max(Data2Fit$TotalLength),len=200) coef(PowFit)["a"] coef(PowFit)["b"] a: 3.94068495559397e-06 b: 2.58504796499038 Predic2PlotPow <- powMod(Lengths,coef(PowFit)["a"],coef(PowFit)["b"]) Next, calculate the predicted line. For this, we will need to extract the coefficient from the model fit object using the coef()command. Now plot the data and the fitted model line: plot(Data2Fit$TotalLength, Data2Fit$BodyWeight) lines(Lengths, Predic2PlotPow, col = 'blue', lwd = 2.5) ### Summary of the fit# Now lets get some stats of this NLLS fit. Having obtained the fit object (PowMod), we can use summary() just like we would for a lm() fit object: summary(PowFit) Formula: BodyWeight ~ powMod(TotalLength, a, b) Parameters: Estimate Std. Error t value Pr(>|t|) a 3.941e-06 2.234e-06 1.764 0.083 . b 2.585e+00 1.348e-01 19.174 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.02807 on 58 degrees of freedom Number of iterations to convergence: 39 Achieved convergence tolerance: 1.49e-08 print(confint(PowFit)) Waiting for profiling to be done... 2.5% 97.5% a 1.171935e-06 1.205273e-05 b 2.318292e+00 2.872287e+00 ### Examine the residuals# As we did before, let’s plot the residuals around the fitted model: hist(residuals(PowFit)) The residuals look OK. Tip Remember, when you write this analysis into a stand-alone R script, you should put all commands for loading packages (library(), require()) at the start of the script. * #### Exercises # (a) Make the same plot as above, fitted line and all, in ggplot, and add (display) the equation you estimated to your new (ggplot) plot. The equation is: $$\text{Weight} = 3.94 \times 10^{-06} \times \text{Length}^{2.59}$$ (b) Try playing with the starting values, and see if you can “break” the model fitting – that is, change the starting values till the NLLS fitting does not converge on a solution. (c) Repeat the model fitting (including a-b above) using the Zygoptera data subset. (d) There is an alternative (and in fact, more commonly-used) approach for fitting the allometric model to data: using Ordinary Least Squares on bi-logarithamically transformed data. That is, if you take a log of both sides of the allometric equation we get, $\log(y) = \log(a) + b \log(x)$ This is a straight line equation of the form $$c = d + b z$$, where $$c = \log(c)$$, $$d = \log(a)$$, $$z = \log(x)$$, and $$b$$ is now the slope parameter. So you can use Ordinary Least Squares and the linear models framework (with lm()) in R to estimate the parameters of the allometric equation. In this exercise, try comparing the NLLS vs OLS methods to see how much difference you get in the parameter estimates between them. For example, see the methods used in this paper by Cohen et al 2012. (e) The allometry between Body weight and Length is not the end of the story. You have a number of other linear morphological measurements (HeadLength, ThoraxLength, AdbdomenLength, ForewingLength, HindwingLength, ForewingArea, and HindwingArea) that can also be investigated. In this exercise, try two lines of investigation (again, repeated separately for Dragonflies and Damselfiles): (i) How do each of these measures allometrically scale with Body length (obtain estimates of scaling constant and exponent)? (Hint: you may want to use the pairs() command in R to get an overview of all the pairs of potential scaling relationships. (ii) Do any of the linear morphological measurements other than body length better predict Body weight? That is, does body weight scale more tightly with a linear morphological measurement other than total body length? You would use model selection here, which we will learn next. But for now, you can just look at and compare the $$R^2$$ values of the models. ## Comparing models# How do we know that there isn’t a better or alternative model that adequately explains the pattern in your dataset? This is important consideration in all data analyses (and more generally, the scientific method!), so you must aim to compare your NLLS model with an one or more alternatives for a more extensive and reliable investigation of the problem. Let’s use model comparison to investigate whether the relationship between body weight and length we found above is indeed allometric. For this, we need an alternative model that can be fitted to the same data. Let’s try a quadratic curve, which is of the form: $y = a + b x + c x^2$ This can also capture curvature in data, and is an alternative model to the allometric equation. Note that this mode is linear in its parameters (a linear model), which you can fit to the simply data using your familiar lm() function: QuaFit <- lm(BodyWeight ~ poly(TotalLength,2), data = Data2Fit) And like before, we obtain the predicted values (but this time using the predict.lm function): Predic2PlotQua <- predict.lm(QuaFit, data.frame(TotalLength = Lengths)) Now let’s plot the two fitted models together: plot(Data2Fit$TotalLength, Data2Fit$BodyWeight) lines(Lengths, Predic2PlotPow, col = 'blue', lwd = 2.5) lines(Lengths, Predic2PlotQua, col = 'red', lwd = 2.5) Very similar fits, except that the quadratic model seems to deviate a bit from the data at the lower end of the data range. Let’s do a proper, formal model comparison now to check which model better-fits the data. First calculate the R$$^2$$ values of the two fitted models: RSS_Pow <- sum(residuals(PowFit)^2) # Residual sum of squares TSS_Pow <- sum((Data2Fit$BodyWeight - mean(Data2Fit$BodyWeight))^2) # Total sum of squares RSq_Pow <- 1 - (RSS_Pow/TSS_Pow) # R-squared value RSS_Qua <- sum(residuals(QuaFit)^2) # Residual sum of squares TSS_Qua <- sum((Data2Fit$BodyWeight - mean(Data2Fit$BodyWeight))^2) # Total sum of squares RSq_Qua <- 1 - (RSS_Qua/TSS_Qua) # R-squared value RSq_Pow RSq_Qua 0.90054752976309 0.900302864503218 Not very useful. In general, R$$^2$$ is a good measure of model fit, but cannot be used for model selection – especially not here, given the tiny difference in R$$^2$$’s. Instead, as explained in the lecture, we can use the Akaike Information Criterion (AIC): n <- nrow(Data2Fit) #set sample size pPow <- length(coef(PowFit)) # get number of parameters in power law model pQua <- length(coef(QuaFit)) # get number of parameters in quadratic model AIC_Pow <- n + 2 + n * log((2 * pi) / n) + n * log(RSS_Pow) + 2 * pPow AIC_Qua <- n + 2 + n * log((2 * pi) / n) + n * log(RSS_Qua) + 2 * pQua AIC_Pow - AIC_Qua -2.14742608125084 Of course, as you might have suspected, we can do this using an in-built function in R! AIC(PowFit) - AIC(QuaFit) -2.1474260812509 • So which model wins? * As we had dicussed in the NLLS lecture, a rule of thumb is that a AIC value difference (typically denoted as $$\Delta$$AIC) > 2 is a acceptable cutoff for calling a winner. So the power law (allometric model) is a better fit here. Read the Johnson & Omland paper for more on model selection in Ecology and Evolution. ### Exercises # (a) Calculate the Bayesian Information Criterion (BIC), also know as the Schwarz Criterion (see your Lecture notes and the Johnson & Omland paper, and use $$\Delta$$BIC to select the better fitting model. (b) Fit a straight line to the same data and compare with the allometric and quadratic models. (c) Repeat the model comparison (incuding 1-2 above) using the Damselflies (Zygoptera) data subset – does the allometric model still win? (d) Repeat exercise (e)(i) and (ii) from the above set, but with model comparison (e.g., again using a quadratic as an alternative model) to establish that the relationships are indeed allometric. (e) Repeat exercise (e)(ii) from the above set, but with model comparison to establish which linear measurement is the best predictor of Body weight. ## Albatross chick growth# Now let’s look at a different trait example: the growth of an individual albatross chick (you can find similar data for vector and non-vector arthropods in VecTraits). First load and plot the data: alb <- read.csv(file="../data/albatross_grow.csv") alb <- subset(x=alb, !is.na(alb$wt))
plot(alb$age, alb$wt, xlab="age (days)", ylab="weight (g)", xlim=c(0, 100))
### Fitting the three models using NLLS#
Let’s fit multiple models to this dataset.
The Von Bertalanffy model is commonly used for modelling the growth of an individual. It’s formulation is:
$W(t) = \rho (L_{\infty}(1-e^{-Kt})+L_0 e^{-Kt})^3$
If we pull out $$L_{\infty}$$ and define $$c=L_0/L_{\infty}$$ and $$W_{\infty}=\rho L_{\infty}^3$$ this equation becomes:
$W(t) = W_{\infty}(1-e^{-Kt}+ c e^{-Kt})^3.$
$$W_{\infty}$$ is interpreted as the mean asymptotic weight, and $$c$$ the ratio between the initial and final lengths. This second equation is the one we will fit.
We will compare this model against the classical Logistic growth equation and a straight line.
The logistic equation is:
$N_t = \frac{N_0 K e^{r t}}{K + N_0 (e^{r t} - 1)}$
Here $$N_t$$ is population size at time $$t$$, $$N_0$$ is initial population size, $$r$$ is maximum growth rate (AKA $$r_\text{max}$$), and $$K$$ is carrying capacity.
First, as we did before, let’s define the R functions for the two models:
logistic1 <- function(t, r, K, N0){
N0 * K * exp(r * t)/(K+N0 * (exp(r * t)-1))
}
vonbert.w <- function(t, Winf, c, K){
Winf * (1 - exp(-K * t) + c * exp(-K * t))^3
}
For the straight line, we use simply use R’s lm() function, as that is a linear least squares problem. Using NLLS will give (approximately) the same answer, of course. Now fit all 3 models using least squares.
We will scale the data before fitting to improve the stability of the estimates:
scale <- 4000
alb.lin <- lm(wt/scale ~ age, data = alb)
alb.log <- nlsLM(wt/scale~logistic1(age, r, K, N0), start=list(K=1, r=0.1, N0=0.1), data=alb)
alb.vb <- nlsLM(wt/scale~vonbert.w(age, Winf, c, K), start=list(Winf=0.75, c=0.01, K=0.01), data=alb)
Next let’s calculate predictions for each of the models across a range of ages.
ages <- seq(0, 100, length=1000)
pred.lin <- predict(alb.lin, newdata = list(age=ages)) * scale
pred.log <- predict(alb.log, newdata = list(age=ages)) * scale
pred.vb <- predict(alb.vb, newdata = list(age=ages)) * scale
And finally plot the data with the fits:
plot(alb$age, alb$wt, xlab="age (days)", ylab="weight (g)", xlim=c(0,100))
lines(ages, pred.lin, col=2, lwd=2)
lines(ages, pred.log, col=3, lwd=2)
lines(ages, pred.vb, col=4, lwd=2)
legend("topleft", legend = c("linear", "logistic", "Von Bert"), lwd=2, lty=1, col=2:4)
Next examine the residuals between the 3 models:
par(mfrow=c(3,1), bty="n")
plot(alb$age, resid(alb.lin), main="LM resids", xlim=c(0,100)) plot(alb$age, resid(alb.log), main="Logisitic resids", xlim=c(0,100))
plot(alb$age, resid(alb.vb), main="VB resids", xlim=c(0,100)) The residuals for all 3 models still exhibit some patterns. In particular, the data seems to go down near the end of the observation period, but none of these models can capture that behavior. Finally, let’s compare the 3 models using a simpler approach than the AIC/BIC one that we used above by calculating adjusted Sums of Squared Errors (SSE’s): n <- length(alb$wt)
list(lin=signif(sum(resid(alb.lin)^2)/(n-2 * 2), 3),
log= signif(sum(resid(alb.log)^2)/(n-2 * 3), 3),
vb= signif(sum(resid(alb.vb)^2)/(n-2 * 3), 3))
$lin 0.00958$log
0.0056
$vb 0.00628 The adjusted SSE accounts for sample size and number of parameters by dividing the RSS by the residual degrees of freedom. Adjusted SSE can also be used for model selection like AIC/BIC (but is less robust than AIC/BIC). The residual degrees of freedom is calculated as the number of response values (sample size, $$n$$) minus 2, times the number of fitted coefficients $$m$$ (= 2 or 3 in this case) estimated. The logistic model has the lowest adjusted SSE, so it’s the best by this measure. It is also, visually, a better fit. ### Exercises # (a) Use AIC/BIC to perform model selection on the Albatross data as we did for the trait allometry example. (b) Write this example as a self-sufficient R script, with ggplot istead of base plotting ## Aedes aegypti fecundity# Now let’s look at a disease vector example. These data measure the reponse of * Aedes aegypti * fecundity to temperature. First load and visualize the data: aedes <- read.csv(file="../data/aedes_fecund.csv") plot(aedes$T, aedes$EFD, xlab="temperature (C)", ylab="Eggs/day") ### The Thermal Performance Curve models# Let’s define some models for Thermal Performance Curves: quad1 <- function(T, T0, Tm, c){ c * (T-T0) * (T-Tm) * as.numeric(T<Tm) * as.numeric(T>T0) } Instead of using the inbuilt quadratic function in R, we we defined our own to make it easier to choose starting values, and so that we can force the function to be equal to zero above and below the minimum and maximum temperature thresholds (more on this below). briere <- function(T, T0, Tm, c){ c * T * (T-T0) * (abs(Tm-T)^(1/2)) * as.numeric(T<Tm) * as.numeric(T>T0) } The Briere function is a commonly used model for temperature dependence of insect traits. See here section for more info. Unlike the original model definition, we have used abs() to allow the NLLS algorithm to explore the full parameter space of $$T_m$$; if we did not do this, the NLLS could fail as soon as a value of $$T_m < T$$ was reached during the optimization, because the square root of a negative number is complex. Another way to deal with this issue is to set parameter bounds on $$T_m$$ so that it is can never be less than T. However, this is a more technical approach that we will not go into here. As in the case of the albatross growth data, we will also compare the above two models with a * straight line * (again, its a linear model, so we can just use lm() without needing to define a function for it). Now fit all 3 models using least squares. Although it’s not as necessary here (as the data don’t have as large values as the albatross example), lets again scale the data first: scale <- 20 aed.lin <- lm(EFD/scale ~ T, data=aedes) aed.quad <- nlsLM(EFD/scale~quad1(T, T0, Tm, c), start=list(T0=10, Tm=40, c=0.01), data=aedes) aed.br <- nlsLM(EFD/scale~briere(T, T0, Tm, c), start=list(T0=10, Tm=40, c=0.1), data=aedes) ### Exercises # (a) Complete the * Aedes * data analysis by fitting the models, calculating predictions and then comparing models. Write a single, self-standing script for it. Which model fits best? By what measure? (b) In this script, use ggplot instead of base plotting. ## Abundance data as an example# Fluctuations in the abundance (density) of single populations may play a crucial role in ecosystem dynamics and emergent functional characteristics, such as rates of carbon fixation or disease transmission. For example, if vector population densities or their traits change at the same or shorter timescales than the rate of disease transmission, then (vector) abundance fluctuations can cause significant fluctuations in disease transmission rates. Indeed, most disease vectors are small ectotherms with short generation times and greater sensitivity to environmental conditions than their (invariably larger, longer-lived, and often, endothermic) hosts. So understanding how populations vary over time, space, and with respect to environmental variables such as temperature and precipitation is key. We will look at fitting models to the growth of a single population here. ## Population growth rates# A population grows exponentially while its abundance is low and resources are not limiting (the Malthusian principle). This growth then slows and eventually stops as resources become limiting. There may also be a time lag before the population growth really takes off at the start. We will focus on microbial (specifically, bacterial) growth rates. Bacterial growth in batch culture follows a distinct set of phases; lag phase, exponential phase and stationary phase. During the lag phase a suite of transcriptional machinery is activated, including genes involved in nutrient uptake and metabolic changes, as bacteria prepare for growth. During the exponential growth phase, bacteria divide at a constant rate, the population doubling with each generation. When the carrying capacity of the media is reached, growth slows and the number of cells in the culture stabilizes, beginning the stationary phase. Traditionally, microbial growth rates were measured by plotting cell numbers or culture density against time on a semi-log graph and fitting a straight line through the exponential growth phase – the slope of the line gives the maximum growth rate ($$r_{max}$$). Models have since been developed which we can use to describe the whole sigmoidal bacterial growth curve (e.g., using NLLS). Here we will take a look at these different approaches, from applying linear models to the exponential phase, through to fitting non-linear models to the full growth curve. Let’s first generate some “data” on the number of bacterial cells as a function of time that we can play with: t <- seq(0, 22, 2) N <- c(32500, 33000, 38000, 105000, 445000, 1430000, 3020000, 4720000, 5670000, 5870000, 5930000, 5940000) set.seed(1234) # To ensure we always get the same random sequence in this example "dataset" data <- data.frame(t, N * (1 + rnorm(length(t), sd = 0.1))) # add some random error names(data) <- c("Time", "N") head(data) A data.frame: 6 × 2 TimeN <dbl><dbl> 1 0 28577.04 2 2 33915.52 3 4 42120.88 4 6 80370.17 5 8 464096.05 6101502365.99 Note how we added some random “sampling” error using N * (1 + rnorm(length(t), sd = .1)). This means that we are adding an error at each time point $$t$$ (let’s call this fluctuation $$\epsilon_t$$) as a percentage of the population ($$N_t$$) at that time point in a vectorized way. That is, we are performing the operation $$N_t \times (1 + \epsilon_t)$$ at all time points at one go. This is important to note because this is often the way that errors appear – proportional to the value being measured. Now let’s plot these data: ggplot(data, aes(x = Time, y = N)) + geom_point(size = 3) + labs(x = "Time (Hours)", y = "Population size (cells)") ### Basic approach# The size of an exponentially growing population ($$N$$) at any given time ($$t$$) is given by: $$N(t) = N_0 e^{rt} ,$$ where $$N_0$$ is the initial population size and $$r$$ is the growth rate. We can re-arrange this to give: $$r = \frac{\log(N(t)) - \log(N_0)}{t} ,$$ That is, in exponential growth at a constant rate, the growth rate can be simply calculated as the difference in the log of two population sizes, over time. We will log-transform the data and estimate by eye where growth looks exponential. data$LogN <- log(data$N) # visualise ggplot(data, aes(x = t, y = LogN)) + geom_point(size = 3) + labs(x = "Time (Hours)", y = "log(cell number)") By eye the logged data looks fairly linear (beyond the initial “lag phase” of growth; see below) between hours 5 and 10, so we’ll use that time-period to calculate the growth rate. (data[data$Time == 10,]$LogN - data[data$Time == 6,]$LogN)/(10-6) 0.732038333517017 This is our first, most basic estimate of $$r$$. Or, we can decide not to eyeball the data, but just pick the maximum observed gradient of the curve. For this, we can use the the diff() function: diff(data$LogN)
1. 0.171269154259665
2. 0.216670872460636
3. 0.646099642770272
4. 1.75344839347772
5. 1.17470494059035
6. 0.639023867964838
7. 0.44952974020198
8. 0.181493481601755
9. -0.000450183952025895
10. 0.0544907101941003
11. -0.0546009242768832
This gives all the (log) population size differences between successive timepoint pairs. The max of this is what we want, divided by the time-step.
max(diff(data$LogN))/2 # 2 is the difference in any successive pair of timepoints 0.87672419673886 ### Using OLS# But we can do better than this. To account for some error in measurement, we shouldn’t really take the data points directly, but fit a linear model through them instead, where the slope gives our growth rate. This is pretty much the “traditional” way of calculating microbial growth rates – draw a straight line through the linear part of the log-transformed data. lm_growth <- lm(LogN ~ Time, data = data[data$Time > 2 & data$Time < 12,]) summary(lm_growth) Call: lm(formula = LogN ~ Time, data = data[data$Time > 2 & data$Time < 12, ]) Residuals: 3 4 5 6 0.21646 -0.38507 0.12076 0.04785 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 7.9366 0.5350 14.835 0.00451 ** Time 0.6238 0.0728 8.569 0.01335 * --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.3256 on 2 degrees of freedom Multiple R-squared: 0.9735, Adjusted R-squared: 0.9602 F-statistic: 73.42 on 1 and 2 DF, p-value: 0.01335 Npw we get $$r \approx 0.62$$, which is probably closer to the “truth”. But this is still not ideal because we only guessed the exponential phase by eye. We could do it better by iterating through different windows of points, comparing the slopes and finding which the highest is to give the maximum growth rate, $$r_{max}$$. This is called a “rolling regression”. Or better still, we can fit a more appropriate mathematical model using NLLS! ### Using NLLS# For starters, a classical, (somewhat) mechanistic model is the logistic equation: (9)#$N_t = \frac{N_0 K e^{r t}}{K + N_0 (e^{r t} - 1)}$ Here $$N_t$$ is population size at time $$t$$, $$N_0$$ is initial population size, $$r$$ is maximum growth rate (AKA $$r_\text{max}$$), and $$K$$ is carrying capacity (maximum possible abundance of the population). Note that this model is actually the solution to the differential equation that defines the classic logistic population growth model (eqn. (13)). Note The derivation of eqn. (9) is covered here. But you don’t need to know the derivation to fit eqn. (9) to data. Let’s fit it to the data. First, we need to define it as a function object: logistic_model <- function(t, r_max, K, N_0){ # The classic logistic equation return(N_0 * K * exp(r_max * t)/(K + N_0 * (exp(r_max * t) - 1))) } Now fit it: # first we need some starting parameters for the model N_0_start <- min(data$N) # lowest population size
K_start <- max(data$N) # highest population size r_max_start <- 0.62 # use our estimate from the OLS fitting from above fit_logistic <- nlsLM(N ~ logistic_model(t = Time, r_max, K, N_0), data, list(r_max=r_max_start, N_0 = N_0_start, K = K_start)) summary(fit_logistic) Formula: N ~ logistic_model(t = Time, r_max, K, N_0) Parameters: Estimate Std. Error t value Pr(>|t|) r_max 6.309e-01 3.791e-02 16.641 4.56e-08 *** N_0 3.317e+03 1.451e+03 2.286 0.0481 * K 5.538e+06 7.192e+04 76.995 5.32e-14 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 119200 on 9 degrees of freedom Number of iterations to convergence: 12 Achieved convergence tolerance: 1.49e-08 We did not pay much attention to what starting values we used in the simpler example of fitting the allometric model because the power-law model is easy to fit using NLLS, and starting far from the optimal parameters does not matter too much. Here, we used the actual data to generate more realistic start values for each of the three parameters (r_max, N_0, K) of the Logistic equation. Now, plot the fit: timepoints <- seq(0, 22, 0.1) logistic_points <- logistic_model(t = timepoints, r_max = coef(fit_logistic)["r_max"], K = coef(fit_logistic)["K"], N_0 = coef(fit_logistic)["N_0"]) df1 <- data.frame(timepoints, logistic_points) df1$model <- "Logistic equation"
names(df1) <- c("Time", "N", "model")
ggplot(data, aes(x = Time, y = N)) +
geom_point(size = 3) +
geom_line(data = df1, aes(x = Time, y = N, col = model), size = 1) +
theme(aspect.ratio=1)+ # make the plot square
labs(x = "Time", y = "Cell number")
That looks nice, and the $$r_{max}$$ estimate we get (0.64) is fairly close to what we got above with OLS fitting.
Note that we’ve done this fitting to the original non transformed data, whilst the linear regressions earlier were on log transformed data. What would this function look like on a log-transformed axis?
ggplot(data, aes(x = Time, y = LogN)) +
geom_point(size = 3) +
geom_line(data = df1, aes(x = Time, y = log(N), col = model), size = 1) +
theme(aspect.ratio=1)+
labs(x = "Time", y = "log(Cell number)")
The model actually diverges from the data at the lower end! This was not visible in the previous plot where you examined the model in linear scale (without taking a log) because the deviation of the model is small, and only becomes clear in the log scale. This is because of the way logarithms work. Let’s have a look at this in our Cell counts “data”:
ggplot(data, aes(x = N, y = LogN)) +
geom_point(size = 3) +
theme(aspect.ratio = 1)+
labs(x = "N", y = "log(N)")
As you can see the logarithm is a strongly nonlinear transformation of any sequence of real numbers, with small numbers close to zero yielding disproportionately large deviations.
Note
You may play with increasing the error (by increasing the value of sd in synthetic data generation step above) and re-evaluating all the subsequent model fitting steps above. However, note that above some values of sd, you will start to get negative values of populations, especially at early time points, which will raise issues with taking a logarithm.
The above seen deviation of the Logistic model from the data is because this model assumes that the population is growing right from the start (Time = 0), while in “reality” (in our synthetic “data”), this is not what’s happening; the population takes a while to grow truly exponentially (i.e., there is a time lag in the population growth). This time lag is seen frequently in the lab, and is also expected in nature, because when bacteria encounter fresh growth media (in the lab) or a new resource/environment (in the field), they take some time to acclimate, activating genes involved in nutrient uptake and metabolic processes, before beginning exponential growth. This is called the lag phase and can be seen in our example data where exponential growth doesn’t properly begin until around the 4th hour.
To capture the lag phase, more complicated bacterial growth models have been designed.
One of these is the modified Gompertz model (Zwietering et. al., 1990), which has been used frequently in the literature to model bacterial growth:
(10)#$\log(N_t) = N_0 + (N_{max} - N_0) e^{-e^{r_{max} \exp(1) \frac{t_{lag} - t}{(N_{max} - N_0) \log(10)} + 1}}$
Here maximum growth rate ($$r_{max}$$) is the tangent to the inflection point, $$t_{lag}$$ is the x-axis intercept to this tangent (duration of the delay before the population starts growing exponentially) and $$\log\left(\frac{N_{max}}{N_0}\right)$$ is the asymptote of the log-transformed population growth trajectory, i.e., the log ratio of maximum population density $$N_{max}$$ (aka “carrying capacity”) and initial cell (Population) $$N_0$$ density.
Note
Note that unlike the Logistic growth model above, the Gompertz model is in the log scale. This is because the model is not derived from a differential equation, but was designed * specifically * to be fitted to log-transformed data.
Now let’s fit and compare the two alternative nonlinear growth models: Logistic and Gompertz.
First, specify the function object for the Gompertz model (we already defined the function for the Logistic model above):
gompertz_model <- function(t, r_max, K, N_0, t_lag){ # Modified gompertz growth model (Zwietering 1990)
return(N_0 + (K - N_0) * exp(-exp(r_max * exp(1) * (t_lag - t)/((K - N_0) * log(10)) + 1)))
}
Again, note that unlike the Logistic growth function above, this function has been written in the log scale.
Now let’s generate some starting values for the NLLS fitting of the Gompertz model.
As we did above for the logistic equation, let’s derive the starting values by using the actual data:
N_0_start <- min(data$LogN) # lowest population size, note log scale K_start <- max(data$LogN) # highest population size, note log scale
r_max_start <- 0.62 # use our previous estimate from the OLS fitting from above
t_lag_start <- data$Time[which.max(diff(diff(data$LogN)))] # find last timepoint of lag phase
• So how did we find a reasonable time lag from the data? *
Let’s break the last command down:
diff(data$LogN) # same as what we did above - get differentials 1. 0.171269154259665 2. 0.216670872460636 3. 0.646099642770272 4. 1.75344839347772 5. 1.17470494059035 6. 0.639023867964838 7. 0.44952974020198 8. 0.181493481601755 9. -0.000450183952025895 10. 0.0544907101941003 11. -0.0546009242768832 diff(diff(data$LogN)) # get the differentials of the differentials (approx 2nd order derivatives)
1. 0.0454017182009707
2. 0.429428770309636
3. 1.10734875070745
4. -0.578743452887371
5. -0.535681072625511
6. -0.189494127762858
7. -0.268036258600224
8. -0.181943665553781
9. 0.0549408941461262
10. -0.109091634470984
which.max(diff(diff(data$LogN))) # find the timepoint where this 2nd order derivative really takes off 3 data$Time[which.max(diff(diff(data$LogN)))] # This then is a good guess for the last timepoint of the lag phase 4 Now fit the model using these start values: fit_gompertz <- nlsLM(LogN ~ gompertz_model(t = Time, r_max, K, N_0, t_lag), data, list(t_lag=t_lag_start, r_max=r_max_start, N_0 = N_0_start, K = K_start)) You might one or more warning(s) that the model fitting iterations generated NaNs during the fitting procedure for these data (because at some point the NLLS fitting algorithm “wandered” to a combination of K and N_0 values that yields a NaN for log(K/N_0)). You can ignore these warning in this case. But not always – sometimes these NaNs mean that the equation is wrongly written, or that it generates NaNs across the whole range of the x-values, in which case the model is inappropriate for these data. Get the model summary: summary(fit_gompertz) Formula: LogN ~ gompertz_model(t = Time, r_max, K, N_0, t_lag) Parameters: Estimate Std. Error t value Pr(>|t|) t_lag 4.80680 0.18433 26.08 5.02e-09 *** r_max 1.86616 0.08749 21.33 2.45e-08 *** N_0 10.39142 0.05998 173.24 1.38e-15 *** K 15.54956 0.05056 307.57 < 2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.09418 on 8 degrees of freedom Number of iterations to convergence: 10 Achieved convergence tolerance: 1.49e-08 And see how the fits of the two nonlinear models compare: timepoints <- seq(0, 24, 0.1) logistic_points <- log(logistic_model(t = timepoints, r_max = coef(fit_logistic)["r_max"], K = coef(fit_logistic)["K"], N_0 = coef(fit_logistic)["N_0"])) gompertz_points <- gompertz_model(t = timepoints, r_max = coef(fit_gompertz)["r_max"], K = coef(fit_gompertz)["K"], N_0 = coef(fit_gompertz)["N_0"], t_lag = coef(fit_gompertz)["t_lag"]) df1 <- data.frame(timepoints, logistic_points) df1$model <- "Logistic model"
names(df1) <- c("Time", "LogN", "model")
df2 <- data.frame(timepoints, gompertz_points)
df2\$model <- "Gompertz model"
names(df2) <- c("Time", "LogN", "model")
model_frame <- rbind(df1, df2)
ggplot(data, aes(x = Time, y = LogN)) +
geom_point(size = 3) +
geom_line(data = model_frame, aes(x = Time, y = LogN, col = model), size = 1) +
theme_bw() + # make the background white
theme(aspect.ratio=1)+ # make the plot square
labs(x = "Time", y = "log(Abundance)")
Clearly, the Gompertz model fits way better than the logistic growth equation in this case! Note also that there is a big difference in the fitted value of $$r_{max}$$ from the two models; the value is much lower from the Logistic model because it ignores the lag phase, including it into the exponential growth phase.
You can now perform model selection like you did above in the allometric scaling example.
### Exercises#
(a) Calculate the confidence intervals on the parameters of each of the two fitted models, and use model selection (using AIC and/or BIC) as you did before to see if you can determine the best-fitting model among the three.
(b) Alternatively, for a different random sequence of fluctuations, one or more of the models may fail to fit (a singular gradiant matrix error). Try repeating the above fitting with a different random seed (change the integers given to the random.seed( ) function), or increase the sampling error by increasing the standard deviation and see if it happens. If/when the NLLS optimization does fail to converge (the RSS minimum was not found), you can try to fix it by changing the starting values.
(c) Repeat the model comparison exercise 1000 times (You will have to write a loop), and determine if/whether one model generally wins more often than the others. Note that each run will generate a slightly different dataset, because we are adding a vector of random errors every time the “data” are generated. This may result in failure of the NLLS fitting to converge, in which case you will need to use the try() or tryCatch functions.
(d) Repeat (b), but increase the error by increasing the standard deviation of the normal error distribution, and see if there are differences in the robustness of the models to sampling/experimental errors. You may also want to try changing the distribution of the errors to some non-normal distribution and see what happens.
## Some tips and tricks for NLLS fitting#
### Starting values#
The main challenge for NLLS fitting is finding starting (initial) values for the parameters, which the algorithm needs to proceed with the fitting/optimization. Inappropriate starting values can result in the algorithm finding parameter combinations represent convergence to a local optimum rather than the (globally) optimal solution. Starting parameter estimates can also result in or complete “divergence”, i.e., the search results in a combination of parameters that cause mathematical “singularity” (e.g., log(0) or division by zero).
#### Obtaining them#
Finding the starting values is a bit of an art. There is no method for finding starting values that works universally (across different types of models).
The one universal rule though, is that finding starting values requires you to understand the meaning of each of the parameters in your model. So, for example, in the population growth rate example, the parameters in both the nonlinear models that we covered (Logistic growth, eqn. (9) , Gompertz model; eqn. (10)) have a clear meaning.
Furthermore, you will typically need to determine starting values specific to each model and each dataset that that you are wanting to fit that model to (e.g., every distinct functional response dataset to be fitted to the Holling Type II model). To do so, understanding how each parameter in the model corresponds to features of the actual data is really necessary.
For example, in the Gompertz population growth rate model (eqn. (10)), your starting values generator function would, for each dataset,
• Calculate a starting value for $$r_{max}$$ by searching for the steepest slope of the growth curve (e.g., with a rolling OLS regression)
• Calculate a starting value of $$t_{lag}$$ by intersecting the fitted line with the x (time)-axis
• Calculate a starting value for the asymptote $$K$$ as the highest data (abundance) value in the dataset.
Tip
Ideally, you should write a separate a function that calculates starting values for the model parameters.
#### Sampling them#
Once you have worked out how to generate starting values for each non-linear model and dataset, a good next step for optimizing the fitting across multiple datasets (and thus maximize how many datasets are successfully fitted to the model) is to rerun fitting attempts multiple times, sampling each of the starting values (simultaneously) randomly (that is, randomly vary the set of starting values a bit each time). This sampling of starting values will increase the likelihood of the NLLS optimization algorithm finding a solution (optimal combination of parameters), and not getting stuck in a combination of parameters somewhere far away from that optimal solution.
In particular,
• You can choose a Gaussian/Normal distribution if you have high confidence in mean value of parameter, or
• You can uniform distribution if you have low confidence in the mean, but higher confidence in the range of values that the parameter can take. In both cases, the mean of the sampling distribution will be the starting value you inferred from the model and the data (previous section).
Furthermore,
• Whichever distribution you choose (gaussian vs uniform), you will need to determine what range of values to restrict each parameter’s samples to. In the case of the normal distribution, this is determined by what standard deviation parameter (you choose), and in the case of the uniform distribution, this is determined by what lower and upper bound (you choose). Generally, a good approach is to set the bound to be some percent (say 5-10%) of the parameter’s (mean) starting value. In both cases the chosen range to restrict the sampling to would typically be some subset of the model’s parameter bounds (next section).
• How many times to re-run the fitting for a single dataset and model?* – this depends on how “difficult” the model is, and how much computational power you have.
Tip
For the sampling of starting values, recall that you learned to generate random numbers from probability distributions in both the R and Python chapters).
You may also try and use a more sophisticated approach such as grid searching for varying your starting values randomly. An example is in the MLE chapter.
### Bounding parameters revisited#
At the start, we looked at an example of NLLS fitting where we bounded the parameters. It can be a good idea to restrict the range of values that the each of the model’s parameters can take during any one fitting/optimization run. To “bound” a parameter in this way means to give it upper and lower limits. By doing so, during one optimization/fitting (e.g., one call to nlsLM, to fit one model, to one dataset), the fitting algorithm does not allow a parameter to go outside some limits. This reduces the chances of the optimization getting stuck too far from the solution, or failing completely due to some mathematical singularity (e.g., log(0)).
The bounds are typically fixed for each parameter of a model at the level of the model (e.g., they do not change based on each dataset). For example, in the Gompertz model for growth rates (eqn. (10)), you can limit the growth rate parameter to never be negative (the bounds would be $$[0,\infty]$$), or restrict it further to be some value between zero and an upper limit (say, 10) that you know organismal growth rates cannot exceed (the bounds would in this case would be $$[0,10]$$).
However, as we saw in the Michaelis-Menten model fitting example, bounding the parameters too much (excessively narrow ranges) can result in poor solutions because the algorithm cannot explore sufficient parameter space.
Tip
The values of the parameter bounds you choose, of course, may depend on the units of measurement of the data. For example, in SI, growth rates in the Logistic or Gompertz models would be in units of s$$^{-1}$$).
Irrespective of which computer language the NLLS fitting algorithm is implemented in (nlsLM in R or lmfit in Python), the fitting command/method will have options for setting the parameter bounds. In particular,
Bounding the parameter values has nothing to do, per se, with sampling the starting values of each parameter, though if you choose to sample starting values (explained in previous section), you need to make sure that the samples don’t exceed the pre-set bounds (explained in this section).
Note
Python’s lmfit has an option to also internally vary the parameter. So by using a sampling approach as described in the previous section, and allowing the parameter to vary (note that vary=True is the default) within lmfit, you will be in essence be imposing sampling twice. This may or may not improve fitting performance – try it out both ways.
|
2023-02-08 13:23:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7005429267883301, "perplexity": 1923.38440246592}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500813.58/warc/CC-MAIN-20230208123621-20230208153621-00591.warc.gz"}
|
https://byjus.com/question-answer/if-p-n-is-statement-such-that-p-3-is-true-assuming-p-k-is/
|
Question
# If $$P(n)$$ is statement such that $$P(3)$$ is true. Assuming P(k) is true $$\Rightarrow$$ $$P(k+1)$$ is true for all $$k$$ $$\geq$$ $$2$$, then $$P(n)$$ is true.
A
For all n
B
For n 3
C
For n 4
D
None of these
Solution
## The correct option is B For $$n$$ $$\geq$$ $$3$$Given $$P (3)$$ is true.Assume $$P(k)$$ is true $$\Rightarrow$$ $$P(k+1)$$ is true means if $$P(3)$$ is true $$\Rightarrow$$ $$P(4)$$ is true $$\Rightarrow$$ $$P(5)$$ is true and so on. So statement is true for all $$n\geq3$$.Mathematics
Suggest Corrections
0
Similar questions
View More
People also searched for
View More
|
2022-01-28 11:17:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6460363268852234, "perplexity": 615.6775605480915}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305494.6/warc/CC-MAIN-20220128104113-20220128134113-00431.warc.gz"}
|
https://brilliant.org/problems/derivative-of-composition-of-functions/
|
Derivative of Composition of Functions
There is a certain function $f$ such that the tangent line at $x=\frac{1}{2}$ is $y=2x+1.$
Given that $g(x)=f(\sin x),$ find the sum of all possible values of $\frac{dg}{dx}$ when evaluated at $x=\frac{\pi}{6}$.
×
|
2021-12-03 16:26:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 12, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5898910164833069, "perplexity": 70.40200130340783}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362891.54/warc/CC-MAIN-20211203151849-20211203181849-00622.warc.gz"}
|
https://stats.stackexchange.com/questions/389705/usage-of-correlated-non-linear-correlation-variables-in-an-experiment-and-stan
|
# Usage of correlated (non-linear correlation) variables in an experiment and standardisation of variables values
The setting (see the dataset at the end of the question)
The setting of the problem is this:
• I ask many multiple choice questions;
• only 1 out of the 3 available answers is correct;
• I ask the same question in two different forms (an EASY form and a COMPLEX form) (i.e. "what color is Hulk?" and "what is the color of the Hulk epidermis surface layer?");
• The correct answer DOES NOT change (i.e. Hulk is always green);
• The form of the available answers DOES NOT change, only the form of the question does;
• Respondents to the questions in the easy form are N, respondents to questions in complex form are n (and n is always less than N)(please assume that there is not any kind of bias, it is not the topic of this question);
My goal is to predict which is the REAL CORRECT answer (TARGET variable in the dataset) based on what respondents choose and in order to do that: (see the dataset at the end of the question)
1. I empirically noticed that the most selected answer in the EASY form tends to be the correct one (Var A of the dataset)
2. I empirically noticed that the most selected answer in the COMPLEX form tends to be the correct one (Var B of the dataset)
3. I empirically noticed that the answer that undergoes the minor reduction in response rate when shifting from EASY to COMPLEX form tends to be the correct one (Var E of the dataset)
Since point 3, is the heart of the question I will be more clear with an example: let's say that we have two possible answers: Answer 1 (A1) and Answer 2 (A2) to the same question (only one is correct), say that A1 has been selected 100 times in the easy form (remember the question is asked either in an easy or complex form, answers stay the same) and 90 times in the complex form, while A2 has been selected 200 times in the easy form while only 40 in the complex form.
Table to sum up:
_____________________
| number of time |
| selected |
–––––––––––––––––––––
|question|question |
|in EASY |in COMPLEX |
|form |form |
|–––––––|––––––––|–––––––––––|
|answer | 100 | 90 |
| 1 | | |
|–––––––|––––––––|–––––––––––|
|–––––––|––––––––|–––––––––––|
|answer | 200 | 40 |
| 2 | | |
|–––––––––––––––––––––––––––––
It is clear that the shift from easy to complex form, has hit way much more A2, and this fact is crucial for me, since indeed the correct answer is A1, exactly for this reason. Mathematically the reduction of A1 is (90-100)/100=-10% While the reduction of A2 is (40-200)/200=-80%
It means that 80% less people selected A2 when the question was asked in a complex form.
Now here comes the question(s) I have for you guys: (see the dataset at the end of the question)
1. Can I use the Var E (the reduction in the selection) even if it is, of course, correlated with variable A and B since it is a non-linear transformation of these? The fact is that I still do want to include in my model both Var A and Var B, since they are informative as well (I probably will do a decision tree model that will also help me understand and rank which is the most relevant variable, in other terms the one with a greater coefficient) The fact is that the first thing I read on statistics book are statement like: "never never never use correlated variables" and stuff like this, and so I'm a bit terrified now
2. As you can see from the dataset down there, I have a problem, because some questions receive way much more answers than other. See for example question "wuv" received 9,000,000 answers in the easy form while question "xyz" only 2,000. Now, the dataset I uploaded here, is fake and with example data, but this really happens in my dataset and so I would like to solve this problem. My fear, is not about a Construct Validity Problem or neither a selection bias problem because the number of respondent to each question is totally random (because some time I submit the question to 1000 people, sometimes to 100000) (and assume this is the only way of doing this experiment please). My fear is that it is pointless to include in a model continuous variables (such as Var A and Var B) that are so diverse. For example I do not have this problem with Var E, because it is a percentage value, so always from 0% to 100%. How can i solve this problem? How can I standardise Var A and Var B in way such to eliminate the huge difference in response between question "wuv" and "xyz"?
The dataset (to see column of Var E move the table to the left)
TARGET var A var B var C Var D Var E
_________________________________________________________________________________________________________|
|answer|question| is |n of people that|n of people that||n of respons|n of respons|(varB-varA)/varA |
| | id |correct|choose it in the|choose it in the||in EASY form|in COMPLEX |{the % reduction of|
| | | (Y/N) |EASY form |COMPLEX form || |form | respondent from |
| | | | | || | | EASY to COMPLEX} |
|——————————————————————————————————————————————————————————————————————————————————————––––––––––––––––––|
|————————————————————————————————————————————————————————————————————–––––––––––———————––––––––––––––––––|
| 1 | xyz | Y | 1000 | 500 || 2000 | 850 | -0.5 (or -50%) |
|————————————————————————————————————————————————————————————————————––––––———————–––––––––––––––––––––––|
| 2 | xyz | N | 800 | 300 || 2000 | 850 | -0.625 |
|————————————————————————————————————————————————————————————————————––––––———————–––––––––––––––––––––––|
| 3 | xyz | N | 200 | 50 || 2000 | 850 | -0,75 |
|————————————————————————————————————————————————————————————————————––––––———————–––––––––––––––––––––––|
|————————————————————————————————————————————————————————————————————––––––———————–––––––––––––––––––––––|
| 1 | abc | N | 6000 | 800 || 10000 | 1400 | -0.8666666 |
|————————————————————————————————————————————————————————————————————––––––———————–––––––––––––––––––––––|
| 2 | abc | Y | 3000 | 500 || 10000 | 1400 | -0.8333333 |
|————————————————————————————————————————————————————————————————————––––––———————–––––––––––––––––––––––|
| 3 | abc | N | 1000 | 100 || 10000 | 1400 | -0.9 |
|————————————————————————————————————————————————————————————————————––––––———————–––––––––––––––––––––––|
|————————————————————————————————————————————————————————————————————––––––———————–––––––––––––––––––––––|
| 1 | wuv | N | 1000000 | 300000 || 9000000 | 800000 | -0.7 |
|—————————————————————————————————————————————–——————————————————————––––––———————–––––––––––––––––––––––|
| 2 | wuv | N | 6000000 | 400000 || 9000000 | 800000 | -0.93333 |
|————————————————————————————————————————————————————————————————————––––––———————–––––––––––––––––––––––|
| 3 | wuv | Y | 2000000 | 100000 || 9000000 | 800000 | -0.5 |
|————————————————————————————————————————————————————————————————————––––––———————–––––––––––––––––––––––|
|————————————————————————————————————————————————————————————————————––––––———————–––––––––––––––––––––––|
| ... | ... | ... | ... | ... || ... | ... | |
Relevant Facts about the variables in the dataset (how they have been calculated)
Var A < Var B
Var C < Var D
|
2019-08-23 04:55:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28447479009628296, "perplexity": 41.24516540821743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317847.79/warc/CC-MAIN-20190823041746-20190823063746-00358.warc.gz"}
|
https://istopdeath.com/find-the-derivative-d-dx-y3ex/
|
# Find the Derivative – d/dx y=3e^x
Since is constant with respect to , the derivative of with respect to is .
Differentiate using the Exponential Rule which states that is where =.
Find the Derivative – d/dx y=3e^x
Scroll to top
|
2022-12-06 08:09:05
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9480265974998474, "perplexity": 1519.6571912434133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711074.68/warc/CC-MAIN-20221206060908-20221206090908-00507.warc.gz"}
|
https://matheducators.stackexchange.com/digest/preview
|
# Mathematics Educators Stack Exchange Community Digest
## Top new questions this week:
### Is there a virtue to learning how to compute by hand?
I have been professionally tutoring a wide range of students (from elementary school through graduate school) for many years. Most of them are from the United States. I generally focus on helping my ...
reference-request primary-education arithmetic-operations arithmetic middle-school
geometry
## Can you answer this question?
### Why students do not notice sign of plus/minus?
My child in grade 3 often needs a reminder because in a task where he needs to do addition he would subtract instead of adding or v.v ignoring the signs given in a task. He would also forget adding ...
primary-education
|
2021-03-06 14:13:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3582267761230469, "perplexity": 2031.3398194145461}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375096.65/warc/CC-MAIN-20210306131539-20210306161539-00484.warc.gz"}
|
https://worldbuilding.stackexchange.com/questions/171350/feasibilty-of-binary-planet-having-life
|
# Feasibilty of binary planet having life
So far I've designed two planets, and would like for them to orbit each other at a distance of roughly 1.25 million km (both planets would easily stay in the habitable zone), the smaller being tidally locked.
One has a radius of 6,371 km and a mass of $$~4.776×10^{24} \ kg$$ (the smaller, tidally locked planet), the other has a radius of 7,645 km and a mass of $$~1.1002×10^{25} \ kg$$ (the larger planet).
The estimated orbital period is 39 days 11 hours. The star is class KV, with 0.625 solar masses, and a luminosity of 0.16386 solar units.
I'm wondering what the tidal forces of each planet would be, if it would be possible for life to develop on the larger planet, and if the smaller planet would be habitable even though it would have a relative day/night length of almost 40 earth days. If any other factors are needed for clarification please tell me and I'll edit the post.
• Both planets seems plausible. Rocky density and razoable gravity aceleration. Depends of atmosphere and orbit around the star. The small one probably haven't one magnetosphere due slow rotation and will lose atmosphere quickly. – Rodolfo Penteado Mar 15 '20 at 4:53
• This is known to be possible. The Earth and moon is a real life example. – Kilisi Mar 15 '20 at 5:29
• Kilisi, while it is possible for the larger of the two planets to sustain life, the question was moreso centered on whether or not the smaller of the two, which would have a day/night cycle equal to that of its orbital period, would be habitable. As well as what the tides compared to earth would be. – Rebel110 Mar 15 '20 at 5:38
• @Rebel110 I have another question. The smaller planet has a radius similar to Earth but lower mass, and the larger planet has roughly twice the mass of Earth. The size of the Hill Sphere of Earth is about 1.5 million kilometers, though I think that a moon will have a stable orbit only within the inner third of the Hill Sphere radius. The larger planet's Hill Sphere will have to be calculated from its mass and the mass and distance of its star. It is possible that the smaller planet won't have a stable orbit at 1.25 million km. – M. A. Golding Mar 15 '20 at 20:26
• @M. A. Golding I hadn't considered this fact, I've found, due to the calculator, that the two planets could orbit each other at roughly 3× the distance between earth and its moon. I didnt put the hill sphere into account, as I was greatly oversimplifying the orbital calculations. Thank you for mentioning this however. – Rebel110 Mar 15 '20 at 21:26
## 2 Answers
What seems to be a bit off is the ratio of the masses of the two bodies. Earth is about 100 times more massive than Moon, while here the big one is just 10 times more massive than the little one.
This means that the co-orbiting around their center of mass would be more noticeable, the COM being at about 70% of their mutual distance, measuring from the center of the smaller planet: 880 thousand kilometers, outside of both bodies.
I am not sure that the above would allow tidal locking with the main star alone. With tidal locking out of the picture, developing life is "just" a matter of having the right conditions: temperature, water and so on.
I found a calculator that suits my needs exactly! Neither body actually needs to be tidally locked. Tides on the major planet are 2.56 solar tide magnitude (1.4 m), and 1.24 lunar tide magnitude (1.3 m). For the smaller planet, the solar tide magnitude is equal, but the lunar tide magnitude increases to 1.29 (1.4 m). I am also sufficiently satisfied with the tidal forces that I will be removing the tidally locked stipulation from the smaller planet. Thanks everyone! Edit; link to the calculator :https://docs.google.com/spreadsheets/u/0/d/1uSjlohnk_dR_WNqFaqebrd2myOw8HrBMupr-5-WBWhU/htmlview
• If you want to self answer your question, at least refer to the calculator you have found. – L.Dutch - Reinstate Monica Mar 15 '20 at 6:40
|
2021-04-16 14:36:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6622046828269958, "perplexity": 733.190638068964}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038066981.0/warc/CC-MAIN-20210416130611-20210416160611-00319.warc.gz"}
|
https://www.physicsforums.com/threads/integration-problem.714986/
|
# Integration Problem
jayanthd
I know ∫(xe$^{ax}$) dx = x (e$^{ax}$ / a) - (1/a) ∫e$^{ax}$ . 1 dx
= x (e$^{ax}$ / a) - (1/a) (e$^{ax}$ / a)
= (e$^{ax}$ / a) (x - 1/a)
i.e, integral of two functions = (first function) (integral of second function) - ∫(integral of second function) (differential of first function)
This is not a homework. I am a working professional and I need help in solving a problem.
The solution I need is for
(1/0.1) ∫20te$^{-10t}$ dt between limits 0 and 20 us. limits can be taken as 0 to t. I don't need numerical solution.
t = $\tau$
dt = d$\tau$
integral becomes
200 ∫$\tau$e$^{-10\tau}$ d$\tau$ between limits 0 and t
it becomes 200 [ $\tau$ (e$^{-10\tau}$ / - 10) + (1/10) ∫e$^{-10\tau}$ . 1 d$\tau$
= 200 [ $\tau$ (e$^{-10\tau}$ / - 10) + (1/10) (e$^{-10\tau}$ / - 10)]
= 200 [ $\tau$ (e$^{-10\tau}$ / - 10) - (1/100) e$^{-10\tau}$]
I know I have to apply limits to the two e$^{-10\tau}$
I want to know should I apply limits also to $\tau$ which is at the beginning of the solution (here... = 200 [ $\tau$ ...) ?
Last edited:
Homework Helper
So you have the indefinite integral
$$\int \tau e^{-10 \tau} \, d\tau = - \tau e^{-10\tau} / 10 - e^{-10\tau} / 100 = -\frac{e^{-10\tau}}{10} \left( \tau - \frac{1}{10} \right)$$
which looks correct to me (I've rewritten it slightly to look a bit better).
Now evaluate that expression at ##\tau = t##, and at ##\tau = 0##, and subtract the result.
Staff Emeritus
Homework Helper
Yes, everywhere there is a τ, you must substitute the limits.
jayanthd
Thank you CompuChip and SteamKing.
CompuChip my question was
I know I have to apply limits to the two (e−10τ / - 10)
I want to know should I apply limits also to τ which is at the beginning of the solution (here... = 200 [ τ ...) ?
Yes, CompuChip the solution you gave is what I have. I was updating my first post to show the solution in the form you gave.
Thank you both of you.
So the limits are applied to whole solution and not just the integrals. Right? i.e., Even though $\tau$ is not integrated or differentiated in the process.
Edit: CompuChip I think you made a mistake.
Is it not
-e$^{-10\tau}$ / 10 ($\tau$ + 1/10)
I get ($\tau$ + 1/10) in the brackets
yours is ($\tau$ - 1/10)
You made a mistake in the sign?
I was referring the book "Electric Circuits 9th edition by Nilsson and Riedel" page no 178.
#### Attachments
• integration.png
45.3 KB · Views: 374
Last edited:
Homework Helper
Edit: CompuChip I think you made a mistake.
Is it not
-e$^{-10\tau}$ / 10 ($\tau$ + 1/10)
I get ($\tau$ + 1/10) in the brackets
yours is ($\tau$ - 1/10)
You made a mistake in the sign?
Yes, you are right. Good catch!
I mean, of course I was just checking if you were paying attention.
|
2022-10-02 03:26:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7538459300994873, "perplexity": 3556.174639775141}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00666.warc.gz"}
|
https://www.iitianacademy.com/ib-physics-topic-2-mechanics-2-4-momentum-and-impulse-study-notes/
|
# IB Physics Topic 2. Mechanics-2.4 Momentum and impulse- study Notes
### 2.4 Momentum and impulse
Essential Idea:
Conservation of momentum is an example of a law that is never violated.
Understandings:
• Newton’s second law expressed in terms of rate of change of momentum
• Impulse and force–time graphs
• Conservation of linear momentum
• Elastic collisions, inelastic collisions and explosions
Applications and Skills:
• Applying conservation of momentum in simple isolated systems including (but not limited to) collisions, explosions, or water jets
• Using Newton’s second law quantitatively and qualitatively in cases where mass is not constant
• Sketching and interpreting force–time graphs
• Determining impulse in various contexts including (but not limited to) car safety and sports
• Qualitatively and quantitatively comparing situations involving elastic collisions, inelastic collisions and explosions
Data booklet reference:
$$p=mv\\ f=\frac{\Delta p}{\Delta t}\\ E_k=\frac{p^2}{2m}\\ impulse=F\Delta t=\Delta p$$
Principle of conservation of energy
“Energy is never created or destroyed, only transformed (e.g. into mass E = mc²), dissipated or transferred.” Energy is measured in J (joules) – energy required to move 1 N through 1m.
• ∆Esystem + ∆Esurroundings = 0
• The energy of system changes as a result of interactions with the surroundings.
Work done (W) by a force
“The work done by a force is: force x distance moved in direction of the force”
• W = Fs cos θ
• The work done by a centripetal force is equal to zero, since the force is always at right angles to movement.
• Graph: Work is also the area under a Force-Distance graph.
Energy (When work is done, energy is transferred)
• Kinetic energy (Ek): energy related to motion – Ek = 1/2mv^2
• Fractional change is the change of Ek divided by the original Ek.
• Raised with constant speed – no net work done.
• Potential energy (Ep): energy stored in a position.
• Gravitational potential energy: energy related to height – Ep = mg ∆h
• Independent of path followed - only ∆h matters ∆h.
• Elastic potential energy: energy stored in a spring – Ep = 1/2kx^2
• In a Force-extension graph, the area is the work done, and the gradient is k.
• Other energies: Electric, Magnetic, Chemical, Nuclear, Thermal, Vibration, Light…
• Dissipation: Energy transformed into thermal energy (internal energy of a body), sound.
Power (P)
• “Power is the rate of energy transfer.” = ∆W/∆t = ∆pv/∆t = Fv
• Measured in W (watts).
Efficiency
• Energy transferred = useful energy + wasted enery (never say lost energy!)
• Efficiency = useful energy out/total energy in = useful power out/total power.
• Efficiency is always smaller than 100% – frictional forces.
### WORK
#### WORK DONE BY A CONSTANT FORCE
Work done (W) by a force in displacing a body through a displacement x is given by
= Fx cos θ
Where θ is the angle between the applied force and displacement.
The S.I. unit of work is joule, CGS unit is erg and its dimensions are [ML2T–2].
1 joule = 107 erg
• When θ = 0° then W = Fx
• When θ is between 0 and π/2 then
W = Fx cos θ = positive
• When θ = π/2 then W = Fx cos 90° = 0 (zero)
Work done by centripetal force is zero as in this case angle θ = 90°
• ∴ When θ is between π/2 and π then
W = Fx cos θ = negative
#### WORK DONE BY A VARIABLE FORCE
When the force is an arbitrary function of position, we need the techniques of calculus to evaluate the work done by it. The figure shows Fx as function of the position x. We begin by replacing the actual variation of the force by a series of small steps.
The area under each segment of the curve is approximately equal to the area of a rectangle. The height of the rectangle is a constant value of force, and its width is a small displacement Δx. Thus, the step involves an amount of work ΔWn = Fn Δxn. The total work done is approximately given by the sum of the areas of the rectangles.
i.e., W ≈ Δxn.
As the size of the steps is reduced, the tops of the rectangle more closely trace the actual curve shown in figure. If the limit Δx → 0, which is equivalent to letting the number of steps tend to infinity, the discrete sum is replaced by a continuous integral.
Thus, the work done by a force Fx from an initial point A to final point B is
The work done by a variable force in displacing a particle from x1 to x2
= area under force displacement graph
CAUTION : When we find work, we should be cautious about the question, work done by which force? Let us take an example to understand this point. Suppose you are moving a body up without acceleration.
Work done by applied force
Work done by gravitational force
### ENERGY
It is the capacity of doing work. Its units and dimensions are same as that of work.
#### POTENTIAL ENERGY
The energy possessed by a body by virtue of its position or configuration is called potential energy. Potential energy is defined only for conservative forces. It does not exist for non-conservative forces.
ELASTIC POTENTIAL ENERGY (POTENTIAL ENERGY OF A SPRING)
Let us consider a spring, its one end is attached to a rigid wall and other is fixed to a mass m. We apply an external force on mass m in the left direction, so that the spring is compressed by a distance x.
If spring constant is k, then energy stored in spring is given by
P.E. of compressed spring = ½kx2
Now if the external force is removed, the mass m is free to move then due to the stored energy in the spring, it starts oscillating
GRAVITATIONAL POTENTIAL ENERGY
When a body is raised to some height, above the ground, it acquires some potential energy, due to its position. The potential energy due to height is called gravitational potential energy. Let us consider a ball B, which is raised by a height h from the ground.
In doing so, we do work against gravity and this work is stored in the ball B in the form of gravitational potential energy and is given by
W = Fapp.h = mgh = gravitational potential energy …(i)
Further if ball B has gravitational P.E. (potential energy) Uo at ground and at height h, Uh, then
Uh–Uo =mgh …(ii)
If we choose Uo= 0 at ground (called reference point) then absolute gravitational P.E of ball at height h is
Uh = mgh …(iii)
In general, if two bodies of masses m1 and m2 are separated by a distance r, then the gravitational potential energy is
#### KINETIC ENERGY
The energy possessed by a body by virtue of its motion is called kinetic energy.
The kinetic energy Ek is given by
Ek = ½ mv2 …(i)
Where m is mass of body, which is moving with velocity v. We know that linear momentum (p) of a body which is moving with a velocity v is given by
p = mv …(ii)
So from eqs. (i) and (ii), we have
…(iii)
This is the relation between momentum and kinetic energy.
The graph between and p is a straight line
The graph between and is a rectangular hyperbola
The graph between Ek and is a rectangular hyperbola
KEEP IN MEMORY
• Work done by the conservative force in moving a body in a closed loop is zero.
Work done by the non-conservative force in moving a body in a closed loop is non-zero.
• If the momenta of two bodies are equal then the kinetic energy of lighter body will be more.
• If the kinetic energies of two bodies are same then the momentum of heavier body will be more.
### WORK-ENERGY THEOREM
Let a number of forces acting on a body of mass m have a resultant force and by acting over a displacement x (in the direction of ), does work on the body, and there by changing its velocity from u (initial velocity) to v (final velocity). Kinetic energy of the body changes.
So, work done by force on the body is equal to the change in kinetic energy of the body.
This expression is called Work energy (W.E.) theorem.
### LAW OF CONSERVATION OF MECHANICAL ENERGY
The sum of the potential energy and the kinetic energy is called the total mechanical energy.
The total mechanical energy of a system remains constant if only conservative forces are acting on a system of particles and the work done by all other forces is zero.
i.e., ΔK + ΔU = 0
or Kf – Ki + Uf – Ui = 0
or Kf + Uf = Ki + Ui = constant
### LAW OF CONSERVATION OF ENERGY
Energy is of many types – mechanical energy, sound energy, heat energy, light energy, chemical energy, atomic energy, nuclear energy etc.
In many processes that occur in nature energy may be transformed from one form to other. Mass can also be transformed into energy and vice-versa. This is according to Einstein’s mass-energy equivalence relation, E = mc2.
In dynamics, we are mainly concerned with purely mechanical energy.
The study of the various forms of energy and of transformation of one kind of energy into another has led to the statement of a very important principle, known as the law of conservation of energy.
“Energy cannot be created or destroyed, it may only be transformed from one form into another. As such the total amount of energy never changes”.
KEEP IN MEMORY
1. Work done against friction on horizontal surface = μ mgx and work done against force of friction on inclined plane = (μmg cosθ) x where μ = coefficient of friction.
2. If a body moving with velocity v comes to rest after covering a distance ‘x’ on a rough surface having coefficient of friction μ, then (from work energy theorem), 2μ gx = v2. Here retardation is
3. Work done by a centripetal force is always zero.
4. Potential energy of a system decreases when a conservative force does work on it.
5. If the speed of a vehicle is increased by n times, then its stopping distance becomes n2 times and if momentum is increased by n times then its kinetic energy increases by n2 times.
6. Stopping distance of the vehicle
7. Two vehicles of masses M1 and M2 are moving with velocities u1 and u2 respectively. When they are stopped by the same force, their stopping distance are in the ratio as follows :
Since the retarding force F is same in stopping both the vehicles. Let x1 and x2 are the stopping distances of vehicles of masses M1 & M2 respectively, then
….(i)
where u1 and u2 are initial velocity of mass M1 & M2 respectively & final velocity of both mass is zero.
….(ii)
Let us apply a retarding force F on M1 & M2, a1 & a2 are the decelerations of M1 & M2 respectively. Then from third equation of motion :
….(iii a)
and ….(iii b )
If t1 & t2 are the stopping time of vehicles of masses
M1 & M2 respectively, then from first equation of motion (v = u+at)
….(iv a)
and ….(iv b)
Then by rearranging equation (i), (iii) & (iv), we get
1. If
2. If
3. If M1u1 = M2u1 ⇒ t1 = t2 and
4. Consider two vehicles of masses M1 & M2 respectively.
If they are moving with same velocities, then the ratio of their stopping distances by the application of same retarding force is given by
and let M2 > M1 then x1 < x2
lighter mass will cover less distance then the heavier mass
And the ratio of their retarding times are as follows :
i.e
1. If kinetic energy of a body is doubled, then its momentum becomes times,
2. If two bodies of masses m1 and m2 have equal kinetic energies, then their velocities are inversely proportional to the square root of the respective masses. i.e.
1. The spring constant of a spring is inversely proportional to the no. of turns i.e.
or kn = const.
Greater the no. of turns in a spring, greater will be the work done i.e. W ∝ n
The greater is the elasticity of the spring, the greater is the spring constant.
1. Spring constant : The spring constant of a spring is inversely proportional to length i.e., or Kl = constant.
1. If a spring is divided into n equal parts, the spring constant of each part = nK.
2. If spring of spring constant K1, K2, K3 ………. are connected in series, then effective force constant
3. If spring of spring constant K1, K2, K3……….. are connected in parallel, then effective spring constant
Keff = K1 + K2 + K3 +………….
### POWER
Power of the body is defined as the time rate of doing work by the body.
The average power Pav over the time interval Δt is defined by
…(i)
And the instantaneous power P is defined by
…(ii)
Power is a scalar quantity
The S.I. unit of power is joule per second
1 joule/sec = 1watt
The dimensions of power are [ML2T–3]
(force is constant over a small time interval)
So instantaneous power (or instantaneous rate of working) of a man depends not only on the force applied to body, but also on the instantaneous velocity of the body.
### Impulse
When two bodies collide, they exert forces on each other while in contact. The momentum of each body is changed due to the force on it exerted by the other. On an ordinary scale, the time duration of this contact is very small and yet the change in momentum is sizeable. This means that the magnitude of the force must be large on an ordinary scale. Such large forces acting for a very short duration are called impulsive forces. The force may not be uniform while the contact lasts.
The change in momentum produced by such an implusive force is
This quantity $$\int_{t_i}^{t_f}\vec{F}dt$$is known as the impluse of the force F during the time interval ti to tf and is equal to the change in the momentum of the body on which it acts. Obviously, it is the area under the F – t curve for ones dimensional motion.
### COLLISION
Collision between two bodies is said to take place if either of two bodies come in physical contact with each other or even when path of one body is affected by the force exerted due to the other.
• Elastic collision : The collision in which both the momentum and kinetic energy of the system remains conserved is called elastic collision.
Forces involved in the interaction of elastic collision are conservative in nature.
• Inelastic collision : The collision in which only the momentum of the system is conserved but kinetic energy is not conserved is called inelastic collision.
Perfectly inelastic collision is one in which the two bodies stick together after the collision.
Forces involved in the interaction of inelastic collision are non-conservative in nature.
#### COEFFICIENT OF RESTITUTION (OR COEFFICIENT OF RESILIENCE)
It is the ratio of velocity of separation after collision to the velocity of approach before collision. i.e., e = | v1 – v2 |/ | u1 – u2 |
Here u1 and u2 are the velocities of two bodies before collision and v1 and v2 are the velocities of two bodies after collision.
• 0 < e < 1 (Inelastic collision)
Collision between two ivory balls, steel balls or quartz ball is nearly elastic collision.
• For perfectly elastic collision, e = 1
• For a perfectly inelastic collision, e = 0
#### OBLIQUE ELASTIC COLLISION
When a body of mass m collides obliquely against a stationary body of same mass then after the collision the angle between these two bodies is always 90°.
#### ELASTIC COLLISION IN ONE DIMENSION (HEAD ON)
Let two bodies of masses M1 and M2 moving with velocities u1 and u2 along the same straight line, collide with each other. Let u1>u2. Suppose v1 and v2 respectively are the velocities after the elastic collision, then:
According to law of conservation of momentum
…(1)
From law of conservation of energy
…(2)
…(3)
Relative velocity of a Relative velocity of a
body before collision body after collision
Solving eqs. (1) and (2) we get,
…(4)
…(5)
From eqns. (4) and (5), it is clear that :
• If M1 = M2 and u2 = 0 then v1 = 0 and v2 = u1. Under this condition the first particle comes to rest and the second particle moves with the velocity of first particle after collision. In this state there occurs maximum transfer of energy.
• If M1>> M2 and (u2=0) then, v1 = u1, v2 = 2u1 under this condition the velocity of first particle remains unchanged and velocity of second particle becomes double that of first.
• If M1 << M2 and (u2 = 0) then v1 = –u1 and v2 = 0 under this condition the second particle remains at rest while the first particle moves with the same velocity in the opposite direction.
• If M1 = M2 = M but u2 ≠0 then v1 = u2 i.e., the particles mutually exchange their velocities.
• If second body is at rest i.e., u2 = 0, then fractional decrease in kinetic energy of mass M1, is given by
#### INELASTIC COLLISION
Let two bodies A and B collide inelastically. Then from law of conservation of linear momentum
M1u1 + M2u2 = M1v1+M2v2 …(i)
e …(ii)
From eqns.(i) and (ii), we have,
…(iii)
…(iv)
Loss in kinetic energy (–ΔEk) = initial K.E. – final K.E
…(v)
Negative sign indicates that the final kinetic energy is less than initial kinetic energy.
#### PERFECTLY INELASTIC COLLISION
In this collision, the individual bodies A and B move with velocities u1 and u2 but after collision move as a one single body with velocity v.
So from law of conservation of linear momentum, we have
M1u1+M2u2=(M1+M2)V …(i)
or …(ii)
And loss in kinetic energy, –ΔEk = total initial K.E – total final K.E
or, …(iii)
#### OBLIQUE COLLISION
This is the case of collision in two dimensions. After the collision, the particles move at different angle.
We will apply the principle of conservation of momentum in the mutually perpendicular direction.
Along x-axis, m1u1 = m1v1 cosθ + m2 v2 cosφ
Along y-axis, 0 = m1v1 sinθ – m2 v2 sinφ
KEEP IN MEMORY
1. Suppose, a body is dropped from a height h0 and it strikes the ground with velocity v0. After the (inelastic) collision let it rise to a height h1. If v1 be the velocity with which the body rebounds, then the coefficient of restitution.
=
1. If after n collisions with the ground, the velocity is vn and the height to which it rises be hn, then
1. When a ball is dropped from a height h on the ground, then after striking the ground n times , it rises to a height hn = e2n ho where e = coefficient of restitution.
2. If a body of mass m moving with velocity v, collides elastically with a rigid ball, then the change in the momentum of the body is 2 m v.
1. If the collision is elastic then we can conserve the energy as
1. If two particles having same mass and moving at right angles to each other collide elastically then after the collision they also move at right angles to each other.
2. If a body A collides elastically with another body of same mass at rest obliquely, then after the collision the two bodies move at right angles to each other, i.e. (θ + φ) =
1. In an elastic collision of two equal masses, their kinetic energies are exchanged.
2. When two bodies collide obliquely, their relative velocity resolved along their common normal after impact is in constant ratio to their relative velocity before impact (resolved along common normal), and is in the opposite direction.
|
2022-07-07 08:00:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6817446351051331, "perplexity": 721.5159677934793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683708.93/warc/CC-MAIN-20220707063442-20220707093442-00105.warc.gz"}
|
http://lammps.sandia.gov/doc/accelerate_gpu.html
|
# 5.3.1. GPU package
The GPU package was developed by Mike Brown at ORNL and his collaborators, particularly Trung Nguyen (ORNL). It provides GPU versions of many pair styles, including the 3-body Stillinger-Weber pair style, and for kspace_style pppm for long-range Coulombics. It has the following general features:
• It is designed to exploit common GPU hardware configurations where one or more GPUs are coupled to many cores of one or more multi-core CPUs, e.g. within a node of a parallel machine.
• Atom-based data (e.g. coordinates, forces) moves back-and-forth between the CPU(s) and GPU every timestep.
• Neighbor lists can be built on the CPU or on the GPU
• The charge assignment and force interpolation portions of PPPM can be run on the GPU. The FFT portion, which requires MPI communication between processors, runs on the CPU.
• Asynchronous force computations can be performed simultaneously on the CPU(s) and GPU.
• It allows for GPU computations to be performed in single or double precision, or in mixed-mode precision, where pairwise forces are computed in single precision, but accumulated into double-precision force vectors.
• LAMMPS-specific code is in the GPU package. It makes calls to a generic GPU library in the lib/gpu directory. This library provides NVIDIA support as well as more general OpenCL support, so that the same functionality can eventually be supported on a variety of GPU hardware.
Here is a quick overview of how to enable and use the GPU package:
• build the library in lib/gpu for your GPU hardware with the desired precision settings
• install the GPU package and build LAMMPS as usual
• use the mpirun command to set the number of MPI tasks/node which determines the number of MPI tasks/GPU
• specify the # of GPUs per node
• use GPU styles in your input script
The latter two steps can be done using the “-pk gpu” and “-sf gpu” command-line switches respectively. Or the effect of the “-pk” or “-sf” switches can be duplicated by adding the package gpu or suffix gpu commands respectively to your input script.
Required hardware/software:
To use this package, you currently need to have an NVIDIA GPU and install the NVIDIA CUDA software on your system:
• Check if you have an NVIDIA GPU: cat /proc/driver/nvidia/gpus/0/information
• Go to http://www.nvidia.com/object/cuda_get.html
• Install a driver and toolkit appropriate for your system (SDK is not necessary)
• Run lammps/lib/gpu/nvc_get_devices (after building the GPU library, see below) to list supported devices and properties
Building LAMMPS with the GPU package:
This requires two steps (a,b): build the GPU library, then build LAMMPS with the GPU package.
You can do both these steps in one line as described in Section 4 of the manual.
Or you can follow these two (a,b) steps:
1. Build the GPU library
The GPU library is in lammps/lib/gpu. Select a Makefile.machine (in lib/gpu) appropriate for your system. You should pay special attention to 3 settings in this makefile.
• CUDA_HOME = needs to be where NVIDIA CUDA software is installed on your system
• CUDA_ARCH = needs to be appropriate to your GPUs
• CUDA_PREC = precision (double, mixed, single) you desire
See lib/gpu/Makefile.linux.double for examples of the ARCH settings for different GPU choices, e.g. Fermi vs Kepler. It also lists the possible precision settings:
CUDA_PREC = -D_SINGLE_SINGLE # single precision for all calculations
CUDA_PREC = -D_DOUBLE_DOUBLE # double precision for all calculations
CUDA_PREC = -D_SINGLE_DOUBLE # accumulation of forces, etc, in double
The last setting is the mixed mode referred to above. Note that your GPU must support double precision to use either the 2nd or 3rd of these settings.
To build the library, type:
make -f Makefile.machine
If successful, it will produce the files libgpu.a and Makefile.lammps.
The latter file has 3 settings that need to be appropriate for the paths and settings for the CUDA system software on your machine. Makefile.lammps is a copy of the file specified by the EXTRAMAKE setting in Makefile.machine. You can change EXTRAMAKE or create your own Makefile.lammps.machine if needed.
Note that to change the precision of the GPU library, you need to re-build the entire library. Do a “clean” first, e.g. “make -f Makefile.linux clean”, followed by the make command above.
1. Build LAMMPS with the GPU package
cd lammps/src
make yes-gpu
make machine
Note that if you change the GPU library precision (discussed above) and rebuild the GPU library, then you also need to re-install the GPU package and re-build LAMMPS, so that all affected files are re-compiled and linked to the new GPU library.
Run with the GPU package from the command line:
The mpirun or mpiexec command sets the total number of MPI tasks used by LAMMPS (one or multiple per compute node) and the number of MPI tasks used per node. E.g. the mpirun command in MPICH does this via its -np and -ppn switches. Ditto for OpenMPI via -np and -npernode.
When using the GPU package, you cannot assign more than one GPU to a single MPI task. However multiple MPI tasks can share the same GPU, and in many cases it will be more efficient to run this way. Likewise it may be more efficient to use less MPI tasks/node than the available # of CPU cores. Assignment of multiple MPI tasks to a GPU will happen automatically if you create more MPI tasks/node than there are GPUs/mode. E.g. with 8 MPI tasks/node and 2 GPUs, each GPU will be shared by 4 MPI tasks.
Use the “-sf gpu” command-line switch, which will automatically append “gpu” to styles that support it. Use the “-pk gpu Ng” command-line switch to set Ng = # of GPUs/node to use.
lmp_machine -sf gpu -pk gpu 1 -in in.script # 1 MPI task uses 1 GPU
mpirun -np 12 lmp_machine -sf gpu -pk gpu 2 -in in.script # 12 MPI tasks share 2 GPUs on a single 16-core (or whatever) node
mpirun -np 48 -ppn 12 lmp_machine -sf gpu -pk gpu 2 -in in.script # ditto on 4 16-core nodes
Note that if the “-sf gpu” switch is used, it also issues a default package gpu 1 command, which sets the number of GPUs/node to 1.
Using the “-pk” switch explicitly allows for setting of the number of GPUs/node to use and additional options. Its syntax is the same as same as the “package gpu” command. See the package command doc page for details, including the default values used for all its options if it is not specified.
Note that the default for the package gpu command is to set the Newton flag to “off” pairwise interactions. It does not affect the setting for bonded interactions (LAMMPS default is “on”). The “off” setting for pairwise interaction is currently required for GPU package pair styles.
Or run with the GPU package by editing an input script:
The discussion above for the mpirun/mpiexec command, MPI tasks/node, and use of multiple MPI tasks/GPU is the same.
Use the suffix gpu command, or you can explicitly add an “gpu” suffix to individual styles in your input script, e.g.
pair_style lj/cut/gpu 2.5
You must also use the package gpu command to enable the GPU package, unless the “-sf gpu” or “-pk gpu” command-line switches were used. It specifies the number of GPUs/node to use, as well as other options.
Speed-ups to expect:
The performance of a GPU versus a multi-core CPU is a function of your hardware, which pair style is used, the number of atoms/GPU, and the precision used on the GPU (double, single, mixed).
See the Benchmark page of the LAMMPS web site for performance of the GPU package on various hardware, including the Titan HPC platform at ORNL.
You should also experiment with how many MPI tasks per GPU to use to give the best performance for your problem and machine. This is also a function of the problem size and the pair style being using. Likewise, you should experiment with the precision setting for the GPU library to see if single or mixed precision will give accurate results, since they will typically be faster.
Guidelines for best performance:
• Using multiple MPI tasks per GPU will often give the best performance, as allowed my most multi-core CPU/GPU configurations.
• If the number of particles per MPI task is small (e.g. 100s of particles), it can be more efficient to run with fewer MPI tasks per GPU, even if you do not use all the cores on the compute node.
• The package gpu command has several options for tuning performance. Neighbor lists can be built on the GPU or CPU. Force calculations can be dynamically balanced across the CPU cores and GPUs. GPU-specific settings can be made which can be optimized for different hardware. See the packakge command doc page for details.
• As described by the package gpu command, GPU accelerated pair styles can perform computations asynchronously with CPU computations. The “Pair” time reported by LAMMPS will be the maximum of the time required to complete the CPU pair style computations and the time required to complete the GPU pair style computations. Any time spent for GPU-enabled pair styles for computations that run simultaneously with bond, angle, dihedral, improper, and long-range calculations will not be included in the “Pair” time.
• When the mode setting for the package gpu command is force/neigh, the time for neighbor list calculations on the GPU will be added into the “Pair” time, not the “Neigh” time. An additional breakdown of the times required for various tasks on the GPU (data copy, neighbor calculations, force computations, etc) are output only with the LAMMPS screen output (not in the log file) at the end of each run. These timings represent total time spent on the GPU for each routine, regardless of asynchronous CPU calculations.
• The output section “GPU Time Info (average)” reports “Max Mem / Proc”. This is the maximum memory used at one time on the GPU for data storage by a single MPI process.
None.
|
2017-10-17 03:52:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23659038543701172, "perplexity": 3850.9368247800157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820700.4/warc/CC-MAIN-20171017033641-20171017053641-00250.warc.gz"}
|
http://mathsci.kaist.ac.kr/home/
|
## Problem of the week
### 2018-23 Game of polynomials
Two players play a game with a polynomial with undetermined coefficients $1 + c_1 x + c_2 x^2 + \dots + c_7 x^7 + x^8.$ Players, in turn, assign a real number to an undetermined coefficient until all coefficients are determined. The first player wins if the polynomial has no real zeros, and the second player wins if the polynomial has at least one real zero. Find who has the winning strategy.
|
2019-01-22 10:16:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28355154395103455, "perplexity": 200.91419767244815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583835626.56/warc/CC-MAIN-20190122095409-20190122121409-00108.warc.gz"}
|
https://www.askiitians.com/forums/Analytical-Geometry/24/60718/problems-in-circle-3.htm
|
# 3. The locus of the foot of the perpendicular from the origin to the line which always passes through a fixed point (h,k) is (a) parabola (b) Ellipse (c) Hyperbola (d) Circle
Jitender Singh IIT Delhi
8 years ago
Ans:(d) Circle
Let the equation of the line:
$y = mx+c$
Let the foot of perpendicular from (0, 0) to the line be (a, b)
$a = \frac{-mc}{1+m^{2}}$
$b = \frac{c}{1+m^{2}}$
$m = \frac{-a}{b}$
Since line passes through (h, k):
$k = mh +c$
$c = k-mh$
$b = \frac{k-mh}{1+m^{2}}$
$b = \frac{k-(\frac{-a}{b})h}{1+(\frac{-a}{b})^{2}}$
$a^{2}+b^{2}-ah-bk=0$
$a\rightarrow x, b\rightarrow y$
$x^{2}+y^{2}-ax-by=0$
It is a equation of circle.
Thanks & Regards
Jitender Singh
IIT Delhi
|
2022-09-27 06:57:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48390719294548035, "perplexity": 4795.133987084185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00648.warc.gz"}
|
http://mathoverflow.net/feeds/user/5640
|
User ed gorcenski - MathOverflow most recent 30 from http://mathoverflow.net 2013-05-24T15:08:08Z http://mathoverflow.net/feeds/user/5640 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/102413/must-read-papers-in-numerical-analysis/102473#102473 Answer by Ed Gorcenski for "Must read" papers in numerical analysis Ed Gorcenski 2012-07-17T18:28:46Z 2012-11-20T07:00:58Z <p>It is not a classic paper, but I would add Xiu, D. and Karniadakis, G.E, "The Wiener-Askey Polynomial Chaos for Stochastic Differential Equations," which can be found <a href="http://www.dam.brown.edu/scicomp/media/report_files/BrownSC-2003-07.pdf" rel="nofollow">here</a>.</p> <p>I mention this paper because Generalized Polynomial Chaos is still underrepresented in many fields, though it has found much use in engineering applications. It is not precisely a magic bullet, but in some cases it can drastically reduce computational demand in uncertainty analysis. The aforementioned paper is a good summary of the method.</p> http://mathoverflow.net/questions/24256/fields-of-mathematics-that-were-dormant-for-a-long-time-until-someone-revitalized/24292#24292 Answer by Ed Gorcenski for Fields of mathematics that were dormant for a long time until someone revitalized them Ed Gorcenski 2010-05-11T21:04:11Z 2010-05-12T00:52:57Z <p>Polynomial Chaos was developed in the late 30s by N. Wiener, but went more or less unnoticed until Ghanem & Spanos picked up on it for use in finite element analysis in the 80s and 90s. In some ways it still may be an under-utilized approach, given the dominance of the Itō and Stratonovich calculi.</p> http://mathoverflow.net/questions/24287/what-is-the-best-algorithm-to-find-the-smallest-nonzero-eigenvalue-of-a-symmetric/24297#24297 Answer by Ed Gorcenski for What is the best algorithm to find the smallest nonzero Eigenvalue of a symmetric matrix? Ed Gorcenski 2010-05-11T21:41:22Z 2010-05-11T21:41:22Z <p>A quick search led me to this paper, which deals specifically with sparse symmetric matrices, although some of its references might be useful.</p> <blockquote> <p>Jang, Ho-Jong, and Lee, Sung-Ho, "NUMERICAL STABILITY OF UPDATE METHOD FOR SYMMETRIC EIGENVALUE PROBLEM," J. Appl. Math. & Computing Vol. 22(2006), No. 1 - 2, pp. 467 - 474.</p> </blockquote> <p>A PDF copy is available here: <a href="http://www.mathnet.or.kr/mathnet/kms_tex/986075.pdf" rel="nofollow">http://www.mathnet.or.kr/mathnet/kms_tex/986075.pdf</a></p> <p>I should also mention that "best" is a difficult superlative to qualify without knowing the structure of your matrices. Probably the best algorithm for a sparse symmetric matrix is not the best algorithm for a symmetric Toeplitz matrix.</p> http://mathoverflow.net/questions/24272/numerical-instability-using-only-heuns-method-on-a-simple-pde/24290#24290 Answer by Ed Gorcenski for Numerical instability using only Heun's method on a simple PDE. Ed Gorcenski 2010-05-11T20:56:09Z 2010-05-11T20:56:09Z <p>There are a couple approaches that I think you could take to avoid this problem.</p> <p>Consider your Euler's method example. With this example, you know that the value at the next time step $W(x_i,t_{j+1})$ will go negative if the delta term, $\frac{c\left[W(x_{i+1},t_{j+1}-W(x_{i-1},t_{j+1})\right]}{2\Delta x}$ is greater than the value at the current time step, $W(x_i,t_j)$. The negative value leads to error inflation, and so on.</p> <p>First, you could try an adaptive solver, which varies the step size to meet certain error tolerances. MATLAB, for example, comes with ode45() which uses a 4th order and 5th order Runge-Kutta solver in conjunction with one another, and adaptively adjusts the step size. </p> <p>A second solution is to use a multi-step method, such as an Adams-Bashforth method. These methods are well-suited for stiff problems, which although your particular issue is not due to stiffness, it does seem to suffer from the same issues as stiff problems -- that is, the method is incapable of approximating the derivative of the function within a desired error tolerance within a neighborhood of a set of points.</p> <p>A third solution is a hybrid approach. Since you know how to evaluate, based on your chosen ODE solver, when the next step will go negative, then you could put in place some conditionals that changes the routine when you encounter these trouble spots either by switching to a different method, or reducing the step size in some ad hoc manner. Alternatively, you could switch to a multi-step method at this point, and use the preceding $m$ steps as the seed for the multi-step method.</p> <p>I haven't put a whole lot of effort into evaluating the region of instability, since I don't know how well the reduced problem applies to your current needs, but if you substituted $e^{(x-ct)^2}$ in your delta term for Euler's method, you could probably determine pretty easily when your solution will dip negative.</p> http://mathoverflow.net/questions/24221/reference-request-for-conceptual-numerical-analysis/24247#24247 Answer by Ed Gorcenski for Reference request for conceptual numerical analysis Ed Gorcenski 2010-05-11T15:06:59Z 2010-05-11T15:06:59Z <p>Are you looking for a reference that links the field of numerical analysis to mathematical concepts moreso than algorithmic concepts? Matrix Computations by Golub and Van Loan is a fairly important book that studies the algebraic structures of matrices and derives algorithms from those properties. If you're looking for an entry-level work, I keep a copy of Michael Heath's book Scientific Computing on my desk. It covers fundamental concepts and algorithms fairly well, in my opinion.</p> <p>Do you have a specific problem domain in mind?</p> http://mathoverflow.net/questions/24132/what-are-examples-of-mathematical-concepts-named-after-the-wrong-people-stigler/24142#24142 Answer by Ed Gorcenski for What are examples of mathematical concepts named after the wrong people? (Stigler's law) Ed Gorcenski 2010-05-10T19:52:00Z 2010-05-10T21:53:16Z <p>If you search for almost any eponymous topic in Wikipedia, you'll find that it was first studied by someone else. For example, the Gaussian distribution (according to Wikipedia) was first studied by de Moivre. It seems that in many cases, naming the body of work was given to the person who first applied its study to some other field (using the earlier example, Gauss used the distribution in astronomy).</p> <p>The common story goes that L'Hôpital bought "the rights" to L'Hôpital's rule, as he was a nobleman and not a mathematician by trade, although I am not sure about the veracity of that story.</p> <p>Although I am no expert on the history of Mathematics, it seems as though ideas or formulae assumed their names from certain mathematicians due either to a.) the more notable application or publication of the theory or b.) attribution by mathematicians of a later generation to pay tribute to (or garner attention from) the work of their predecessors.</p> http://mathoverflow.net/questions/22634/nice-solution-to-repeated-integral "Nice" Solution to repeated integral Ed Gorcenski 2010-04-26T18:49:32Z 2010-04-27T18:40:11Z <p>I have a problem wherein I have defined a function $I_r(t) = \int e^{(2r-1)at} \int e^{(2r-3)at} \cdots \int e^{at} dt\cdots dt$, and $I_r(0) = 0$, for $r = 1,2,3,\ldots$.</p> <p>I find that $e^{-ar^2t} I_r(t) = \left(1-e^{-at}\right)^r q(t)$, where $q(exp(-at))$ is a polynomial of $e^{-at}$. Is there a general technique for evaluating repeated integrals of this type that allows me to write $q$ in a nice clean way?</p> <p>If I took $I^*_r(t) = \int e^{at} \int e^{at}\cdots \int e^{at} dt\cdots dt$ with $I^*_r(0) = 0$, and multiplied by $e^{-art}$, I would get $e^{-art}I^*_r(t) = (1-e^{-at})^r$. I am looking for a nice closed-form solution where I have a quadratic in $r$.</p> <p>This is related to the derivation of a discrete probability distribution where the transition rate function is quadratic w/r.t. the number of events per cell.</p> http://mathoverflow.net/questions/40525/principal-eigen-vector-of-a-matrix Comment by Ed Gorcenski Ed Gorcenski 2010-09-29T20:31:38Z 2010-09-29T20:31:38Z You may wish to seek some references on "Matrix Completion" techniques. Depending on the rank of M, I suppose the answer is "it depends." http://mathoverflow.net/questions/24272/numerical-instability-using-only-heuns-method-on-a-simple-pde/24290#24290 Comment by Ed Gorcenski Ed Gorcenski 2010-05-13T16:22:16Z 2010-05-13T16:22:16Z Another thought... you gain some marginal stability by increasing the order of your solver. This gain is analogous to reducing the step size of a lower order solver (by a very large margin). Does changing \Delta x have any effect on your system? http://mathoverflow.net/questions/24256/fields-of-mathematics-that-were-dormant-for-a-long-time-until-someone-revitalized/24292#24292 Comment by Ed Gorcenski Ed Gorcenski 2010-05-12T02:05:12Z 2010-05-12T02:05:12Z My typo rate for mathematicians' names is 2/2 the last couple days... :( http://mathoverflow.net/questions/24132/what-are-examples-of-mathematical-concepts-named-after-the-wrong-people-stigler/24142#24142 Comment by Ed Gorcenski Ed Gorcenski 2010-05-11T15:01:57Z 2010-05-11T15:01:57Z Oops! Thanks for fixing the typo. Here is the Wikipedia page regarding the relationship between L'Hopital and Bernoulli: <a href="http://en.wikipedia.org/wiki/Johann_Bernoulli#L.27H.C3.B4pital_controversy" rel="nofollow">en.wikipedia.org/wiki/…</a> http://mathoverflow.net/questions/22634/nice-solution-to-repeated-integral/22757#22757 Comment by Ed Gorcenski Ed Gorcenski 2010-04-27T19:01:24Z 2010-04-27T19:01:24Z Indeed, I had hoped for something of the sort. I am trying to derive a probability distribution for a discrete random process wherein the transition rate function is a nonlinear polynomial in terms of the number of events per cell. For instance, if the transition rate function was f(r,t) = c+br, (without writing out all the equations behind it), we could easily derive the negative binomial distribution, which leads to a clustered grouping of points (the existence of k events in a cell has a linearly positive influence on the induction of another event in that cell). http://mathoverflow.net/questions/22634/nice-solution-to-repeated-integral/22757#22757 Comment by Ed Gorcenski Ed Gorcenski 2010-04-27T18:22:59Z 2010-04-27T18:22:59Z Brilliant, I should have known to use the Laplace transform, foolish me. Thank you! http://mathoverflow.net/questions/22634/nice-solution-to-repeated-integral Comment by Ed Gorcenski Ed Gorcenski 2010-04-27T13:12:07Z 2010-04-27T13:12:07Z Yes, I apologize, it was a poor choice of nomenclature! I have not had much luck in determining a general rule for the coefficients, but they are annoyingly close to matching certain sequences. Thank you for your comments. http://mathoverflow.net/questions/22634/nice-solution-to-repeated-integral Comment by Ed Gorcenski Ed Gorcenski 2010-04-26T18:57:01Z 2010-04-26T18:57:01Z I am seeking a solution where I do not have to evaluate $I_{r-1}(t)$ to find $I_r(t)$. Notice that I'_r(t) is linear in r, but for any r I know what the repeated integral evaluates to. I am seeking the same "happy" solution for I_r(t) which is quadratic in r.
|
2013-05-24 15:07:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7508183121681213, "perplexity": 879.5377819799511}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704666482/warc/CC-MAIN-20130516114426-00002-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://gmatclub.com/forum/absolute-fundamentals-110674.html
|
It is currently 21 Mar 2018, 13:44
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# Absolute fundamentals
Author Message
TAGS:
### Hide Tags
Director
Joined: 07 Jun 2004
Posts: 607
Location: PA
### Show Tags
10 Mar 2011, 09:06
Hi
if i have the below equation
| 3 + 2x | > |4 - x |
are there 2 cases o3 3 cases i need to consider to solve for x
3 + 2x > 4 - x
3 + 2x < -4 + x
is there another case i need to consider here ?
_________________
If the Q jogged your mind do Kudos me : )
Director
Status: Impossible is not a fact. It's an opinion. It's a dare. Impossible is nothing.
Affiliations: University of Chicago Booth School of Business
Joined: 03 Feb 2011
Posts: 832
### Show Tags
11 Mar 2011, 11:01
Only two cases have to be considered.
case 1
------
3 + 2x > 4 - x
3x > 1
x > 1/3
case 2
------
3 + 2x > x - 4
x > -7
Taking the most restricted value x > 1/3
rxs0005 wrote:
Hi
if i have the below equation
| 3 + 2x | > |4 - x |
are there 2 cases o3 3 cases i need to consider to solve for x
3 + 2x > 4 - x
3 + 2x < -4 + x
is there another case i need to consider here ?
Math Expert
Joined: 02 Sep 2009
Posts: 44388
### Show Tags
11 Mar 2011, 12:53
2
KUDOS
Expert's post
rxs0005 wrote:
Hi
if i have the below equation
| 3 + 2x | > |4 - x |
are there 2 cases o3 3 cases i need to consider to solve for x
3 + 2x > 4 - x
3 + 2x < -4 + x
is there another case i need to consider here ?
Above solution is not correct.
$$|3+2x|> |4-x|$$"
First you should determine the check points (key points are the values of x for which the expressions in absolute value equal to zero). So the key points are $$-\frac{3}{2}$$ and $$4$$. Hence we'll have three ranges to check:
A. $$x<-\frac{3}{2}$$ --> $$-(3+2x)>{4-x}$$ --> $$x<{-7}$$;
B. $$-\frac{3}{2}\leq{x}\leq{4}$$ --> $$3+2x>4-x$$ --> $$x>\frac{1}{3}$$, as we consider the range $$-\frac{3}{2}\leq{x}\leq{4}$$, then $$\frac{1}{3}<{x}\leq{4}$$;
C. $$x>4$$ --> $$3+2x>-(4-x)$$ --> $$x>{-7}$$, as we consider the range $$x>4$$, then $$x>4$$;
Ranges from A, B and C give us the solution as: $$x<{-7}$$ or $$x>\frac{1}{3}$$ (combined range from B and C).
Similar problem: inequalities-challenging-and-tricky-one-89266.html
_________________
Director
Status: Impossible is not a fact. It's an opinion. It's a dare. Impossible is nothing.
Affiliations: University of Chicago Booth School of Business
Joined: 03 Feb 2011
Posts: 832
### Show Tags
11 Mar 2011, 18:22
Bunuel
I got this thoroughly. Please enlighten me on these statements -
a) For a quadratic equation - the value is positive beyond the roots and negative between the roots. In other words the value of the quadratic alternates +/- in between the root intervals.
b) For an inequality - you must examine the root intervals. "sign" of the inequality MAY flip between the root intervals. To determine the exact sign - use the numberline.
I hope I have absorbed the information flawlessly.
thanks
Bunuel wrote:
rxs0005 wrote:
Hi
if i have the below equation
| 3 + 2x | > |4 - x |
are there 2 cases o3 3 cases i need to consider to solve for x
3 + 2x > 4 - x
3 + 2x < -4 + x
is there another case i need to consider here ?
Above solution is not correct.
$$|3+2x|> |4-x|$$"
First you should determine the check points (key points are the values of x for which the expressions in absolute value equal to zero). So the key points are $$-\frac{3}{2}$$ and $$4$$. Hence we'll have three ranges to check:
A. $$x<-\frac{3}{2}$$ --> $$-(3+2x)>{4-x}$$ --> $$x<{-7}$$;
B. $$-\frac{3}{2}\leq{x}\leq{4}$$ --> $$3+2x>4-x$$ --> $$x>\frac{1}{3}$$, as we consider the range $$-\frac{3}{2}\leq{x}\leq{4}$$, then $$\frac{1}{3}<{x}\leq{4}$$;
C. $$x>4$$ --> $$3+2x>-(4-x)$$ --> $$x>{-7}$$, as we consider the range $$x>4$$, then $$x>4$$;
Ranges from A, B and C give us the solution as: $$x<{-7}$$ or $$x>\frac{1}{3}$$ (combined range from B and C).
Similar problem: inequalities-challenging-and-tricky-one-89266.html
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 7997
Location: Pune, India
### Show Tags
12 Mar 2011, 21:30
4
KUDOS
Expert's post
1
This post was
BOOKMARKED
rxs0005 wrote:
Hi
if i have the below equation
| 3 + 2x | > |4 - x |
are there 2 cases o3 3 cases i need to consider to solve for x
3 + 2x > 4 - x
3 + 2x < -4 + x
is there another case i need to consider here ?
Bunuel has provided you the algebraic solution. Let me add the graphical approach. It may seem daunting at first, but if you take the effort of understanding it, it will seem very straight forward and easy.
Draw the graphs of both the mods
y = | 3 + 2x |
and y = |4 - x |
as shown below.
Attachment:
Ques2.jpg [ 14.42 KiB | Viewed 4545 times ]
Now where is the graph of | 3 + 2x | > the graph of |4 - x | ?
Where is the Red line above the Purple line?
Can I say it's so for the values of x as depicted by the green arrows?
Now all we need to do is find these points.
Point A: 2x+3 = 4-x which implies x = 1/3
Point B: -2x - 3 = 4-x which implies x = -7
So the given condition is satisfied when x > 1/3 or x < -7
Note 1: For more on how to draw graphs of mods, check:
http://www.veritasprep.com/blog/2011/01 ... h-to-mods/
Note 2: If you are wondering how do we decide whether we need to take (2x+3) or (-2x-3), (4-x) or (x-4) while finding points A and B, notice that we need to find the intersection of 2 lines to find these points.
When we make the graph of | 3 + 2x |, one line is (3+2x), and the other line is (-3-2x). When we make graph of |4 - x |, one line is (4-x) and the other is (x-4).
While finding point A, the Red line is going up from left to right so co-efficient of x must be positive hence we use (2x+3). The purple line is going down from left to right so the co-efficient of x must be negative so we use (4-x).
Similarly for point B, the red line is going down from left to right so co-efficient of x must be negative so we use (-2x-3). The purple line is going down from left to right so the co-efficient of x must be negative so we use (4-x).
_________________
Karishma
Veritas Prep | GMAT Instructor
My Blog
Get started with Veritas Prep GMAT On Demand for $199 Veritas Prep Reviews Director Status: Impossible is not a fact. It's an opinion. It's a dare. Impossible is nothing. Affiliations: University of Chicago Booth School of Business Joined: 03 Feb 2011 Posts: 832 Re: Absolute fundamentals [#permalink] ### Show Tags 12 Mar 2011, 23:30 +1 This is epic. I wish you taught me inequalities back in school. Veritas Prep GMAT Instructor Joined: 16 Oct 2010 Posts: 7997 Location: Pune, India Re: Absolute fundamentals [#permalink] ### Show Tags 13 Mar 2011, 08:15 1 This post received KUDOS Expert's post gmat1220 wrote: +1 This is epic. I wish you taught me inequalities back in school. If you like alternative solutions, I suggest you check out: http://www.veritasprep.com/blog/2011/01 ... s-part-ii/ An even shorter and more intuitive approach to these questions. Rephrase the question as: | 2x + 3| - |4 - x | > 0 2| x + 3/2| - |x - 4 | > 0 The '2' outside the mod means 'twice the distance from -3/2'. Also, |4 - x | is the same as |x - 4 |. Then see if you can figure out the answer from the number line. (I must tell you that it takes some effort to figure out the first time... A few people in my batches have done it though so its not terribly difficult...) I could give an explanation later if you want to verify... _________________ Karishma Veritas Prep | GMAT Instructor My Blog Get started with Veritas Prep GMAT On Demand for$199
Veritas Prep Reviews
Senior Manager
Joined: 08 Nov 2010
Posts: 371
### Show Tags
13 Mar 2011, 12:50
Karishma, as usual - u are great. I would be more than happy to learn more about it.
thanks! +1
_________________
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 7997
Location: Pune, India
### Show Tags
14 Mar 2011, 18:36
1
KUDOS
Expert's post
144144 wrote:
Karishma, as usual - u are great. I would be more than happy to learn more about it.
thanks! +1
Check out the inequality posts on my blog. I have discussed these methods there in posts titled 'Bagging the Graphs' and 'Holistic approach to mods'
They will help you get comfortable with the basics... Get back in case you have any doubts... Then try and solve this question.... I will give the solution using the method mentioned in the post if you like then....
_________________
Karishma
Veritas Prep | GMAT Instructor
My Blog
Get started with Veritas Prep GMAT On Demand for $199 Veritas Prep Reviews Director Status: Impossible is not a fact. It's an opinion. It's a dare. Impossible is nothing. Affiliations: University of Chicago Booth School of Business Joined: 03 Feb 2011 Posts: 832 Re: Absolute fundamentals [#permalink] ### Show Tags 14 Mar 2011, 23:09 Karishma, I have read your blog. Pls verify this - I am doing it intuitively. If a - b > 0 this means at some point a = b. Lets determine that point. a = |2x+3| and b = |4-x| Solving - |2x+3| = |4-x| Case 1 ------ 2x + 3 = 4 - x 3x = 1 x = 1/3 Therefore to make a > b, x > 1/3. i.e. move x further right of zero. Case 2 ------- 2x + 3 = x - 4 x + 7 = 0 x = -7 Therefore to make a > b, x < -7. i.e. move x further left of zero. VeritasPrepKarishma wrote: gmat1220 wrote: +1 This is epic. I wish you taught me inequalities back in school. If you like alternative solutions, I suggest you check out: http://www.veritasprep.com/blog/2011/01 ... s-part-ii/ An even shorter and more intuitive approach to these questions. Rephrase the question as: | 2x + 3| - |4 - x | > 0 2| x + 3/2| - |x - 4 | > 0 The '2' outside the mod means 'twice the distance from -3/2'. Also, |4 - x | is the same as |x - 4 |. Then see if you can figure out the answer from the number line. (I must tell you that it takes some effort to figure out the first time... A few people in my batches have done it though so its not terribly difficult...) I could give an explanation later if you want to verify... Veritas Prep GMAT Instructor Joined: 16 Oct 2010 Posts: 7997 Location: Pune, India Re: Absolute fundamentals [#permalink] ### Show Tags 15 Mar 2011, 19:10 2 This post received KUDOS Expert's post gmat1220 wrote: Karishma, I have read your blog. Pls verify this - I am doing it intuitively. If a - b > 0 this means at some point a = b. Lets determine that point. a = |2x+3| and b = |4-x| Solving - |2x+3| = |4-x| Case 1 ------ 2x + 3 = 4 - x 3x = 1 x = 1/3 Therefore to make a > b, x > 1/3. i.e. move x further right of zero. Case 2 ------- 2x + 3 = x - 4 x + 7 = 0 x = -7 Therefore to make a > b, x < -7. i.e. move x further left of zero. This is absolutely fine but I would not worry about equating... I would let common sense guide me... Let me tell you what I have in mind Let's focus on just the x axis i.e. the number line Since you have gone through the post, you know that mod is nothing but distance from 0 on the number line... |x| = 2 means 'x is at a distance of 2 from point 0 on the number line' |x-4| = 6 means 'the distance of x from 4 on the number line is 6' so x must be 10 or -2 These are the basics. Let's take an easier example first. What does the following mean? |x+2| > |x-4| It means the value of x is such that distance from -2 is greater than distance from 4. Attachment: Ques3.jpg [ 3.01 KiB | Viewed 4303 times ] Where is the distance from -2 equal to distance from 4? At point 1 on the number line, right? Red and green arrows will be equal. So if x > 1, red arrow will be longer than green i.e. the distance of the point from -2 will be more than the distance from 4. So x > 1 satisfies this inequality. What about points on the left of -2? Will there be any point such that its distance from -2 is equal to the distance from 4? Obviously not. All points will be closer to -2 than to 4. Hence the only region is x > 1. Now let's take the question at hand: |2x+3| > |4-x| 2|x+3/2| > |x-4| It means the value for x is such that twice the distance from -3/2 is more than the distance from 4. Where will twice the distance from -1.5 be exactly equal to distance from 4? Attachment: Ques2.jpg [ 7.97 KiB | Viewed 4302 times ] We should divide the distance of 5.5 between them in 3 equal parts to get 5.5/3 Now lets go 5.5/3 ahead of -1.5 to get -1.5+5.5/3 = 1/3. This is the point where twice the distance from -3/2 is equal to distance from 4. So you go to the right to make twice the distance from -3/2 greater than the distance from 4. So one solution is x > 1/3 Is there some other point where the same thing will happen? Yes, at x = -7. How do I get it? because distance between -1.5 and 4 is 5.5 When I add this to the left of -1.5, I get point 7 which is where double the distance from -1.5 will be equal to distance from point 4. To the left of -7, twice the distance from -1.5 will be greater than the distance from 4. So another solution is x < -7 It is much more intuitive and all you need to do is draw a number line and then reason it out. Let me warn you, it's not everyday that I come across people who are interested in and appreciate alternative strategies. So when I do get an audience, I tend to get a little out of hand... If it makes sense to you, go ahead and try it out.. let me know if you get stuck with anything.. if it doesn't make sense, ignore it... _________________ Karishma Veritas Prep | GMAT Instructor My Blog Get started with Veritas Prep GMAT On Demand for$199
Veritas Prep Reviews
Manager
Joined: 18 Aug 2010
Posts: 88
### Show Tags
18 Mar 2011, 00:47
Bunuel wrote:
rxs0005 wrote:
Hi
if i have the below equation
| 3 + 2x | > |4 - x |
are there 2 cases o3 3 cases i need to consider to solve for x
3 + 2x > 4 - x
3 + 2x < -4 + x
is there another case i need to consider here ?
Above solution is not correct.
$$|3+2x|> |4-x|$$"
First you should determine the check points (key points are the values of x for which the expressions in absolute value equal to zero). So the key points are $$-\frac{3}{2}$$ and $$4$$. Hence we'll have three ranges to check:
A. $$x<-\frac{3}{2}$$ --> $$-(3+2x)>{4-x}$$ --> $$x<{-7}$$;
B. $$-\frac{3}{2}\leq{x}\leq{4}$$ --> $$3+2x>4-x$$ --> $$x>\frac{1}{3}$$, as we consider the range $$-\frac{3}{2}\leq{x}\leq{4}$$, then $$\frac{1}{3}<{x}\leq{4}$$;
C. $$x>4$$ --> $$3+2x>-(4-x)$$ --> $$x>{-7}$$, as we consider the range $$x>4$$, then $$x>4$$;
Ranges from A, B and C give us the solution as: $$x<{-7}$$ or $$x>\frac{1}{3}$$ (combined range from B and C).
Similar problem: inequalities-challenging-and-tricky-one-89266.html
Hello, here is my solution: testing all possibilities with signs
1) 3+2x>4-x
x>1/3
2) -3-2x>-4+x
x<1/3
3) -3-2x>4-x
-7>x
4)3+2x>4+x
x>-7
but i have read somewhere that we shall only test both positive and positive neg.
so we have solution x>1/3 x>-7 is this correct ?
thx
Manager
Status: One last try =,=
Joined: 11 Jun 2010
Posts: 139
### Show Tags
18 Mar 2011, 01:20
My solution:
Because both $$|3+2x|$$ and $$|4-x| >= 0$$, I can square the two sides:
$$(3+2x)^2 > (4-x)^2$$
$$9+12x+4x^2 > 16-8x+x^2$$
$$3x^2+20x-7 > 0$$
$$x1=\frac{1}{3}$$
$$x2={-7}$$
=> we have two ranges: $$x<{-7}$$ or $$x>\frac{1}{3}$$
_________________
There can be Miracles when you believe
Manager
Status: One last try =,=
Joined: 11 Jun 2010
Posts: 139
### Show Tags
18 Mar 2011, 02:16
I think this table will help illustrate the solution of Bunuel.
1. $$x<-\frac{3}{2}$$
$$|3+2x|= -(3+2x)$$ and $$|4-x|={4-x}$$
2. $$-\frac{3}{2}\leq{x}\leq{4}$$
$$|3+2x|={3+2x}$$ and $$|4-x|={4-x}$$
3. $$x>4$$
$$|3+2x|={3+2x}$$ and $$|4-x|=-(4-x)$$
Attachments
Table.jpg [ 6.54 KiB | Viewed 3881 times ]
_________________
There can be Miracles when you believe
Non-Human User
Joined: 09 Sep 2013
Posts: 6537
### Show Tags
01 Aug 2017, 11:28
Hello from the GMAT Club BumpBot!
Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos).
Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email.
_________________
Re: Absolute fundamentals [#permalink] 01 Aug 2017, 11:28
Display posts from previous: Sort by
|
2018-03-21 20:44:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7149847149848938, "perplexity": 1892.4177769106723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647692.51/warc/CC-MAIN-20180321195830-20180321215830-00354.warc.gz"}
|
http://math.ecnu.edu.cn/academia/swgam2019/abstract.html
|
# 2019 ECNU Summer Workshop
## TBA
Guan, Bo
E-mail:guan@math.ohio-state.edu
Ohio State University, U.S.A.
Abstract: TBA
## Gromov-Hausdorff limits of Kahler manifolds with Ricci curvature lower bound
Liu, Gang
E-mail: gang.liu@northwestern.edu
Northwest University, U.S.A.
Abstract:A fundamental result of Donaldson-Sun states that non-collapsed Gromov-Hausdorff limits of polarized Kahler manifolds, with 2-sided Ricci curvature bounds, are normal projective varieties. We extend their approach to the setting where only a lower bound for the Ricci curvature is assumed. More precisely, we show that non-collapsed Gromov-Hausdorff limits of polarized Kahler manifolds, with Ricci curvature bounded below, are normal projective varieties. In addition the metric singularities are precisely given by a countable union of analytic subvarieties. This is a joint work with Gabor Szekelyhidi.
## Domination results for harmonic maps in higher Teichmüller theory
Li, Qiongling
E-mail:qiongling.li@nankai.edu.cn
Chern Institute of Mathematics, P.R.China
Abstract:In this talk, we study the harmonic maps in higher Teichmüller theory from the viewpoint of the Higgs bundles. Let X=(S,J) be a closed Riemann surface with genus at least 2. The non-abelian Hodge theory gives a correspondence between the moduli space of representations of the fundamental group of a surface S into a Lie group G with the moduli space of G-Higgs bundles over the Riemann surface X. The correspondence is through looking for an equivariant harmonic map from X to the symmetric space associated to G. Hitchin representations are an important class of representations of fundamental groups of closed hyperbolic surfaces into PSL(n,R), at the heart of higher Teichmüller theory. We discover some geometric properties of such harmonic maps for Hitchin representations or more general representations by using Higgs bundles techniques.
## Semi-local simple connectedness of non-collapsing Ricci limit spaces
Pan, Jiayin
E-mail:j_pan@math.ucsb.edu
University of California, Santa Barbara, U.S.A.
Abstract:We prove that any non-collapsing Ricci limit space is semi-locally simply connected. This is joint work with Guofang Wei.
## Heegaard splittings on 3-manifolds: a survey
Qiu, Ruifeng
E-mail:rfqiu@math.ecnu.edu.cn
East China Normal University, P.R.China
Abstract: Let M be a closed, orientable 3-manifold, then there exists a closed surface which cuts M into two handlebodies. This structure on 3-manifold is called Heegaard splitting. In this talk, I will introduce some classical results on Heegaard splitting and its applications.
## Green's function estimates and applications
Sung, Chiung-Jue Anna
E-mail:cjsung@math.nthu.edu.tw
National Tsing Hua University
Abstract: In this talk, we intend to explain some estimates for the Green's function on complete manifolds admitting a weighted Poincare inequality. Applications will also be mentioned. This is a joint work with Ovidiu Munteanu and Jiaping Wang.
## Topology of gradient Ricci solitons
Wang, Jiaping
E-mail:wangx208@umn.edu
University of Minnesota, Twins Cities, U.S.A.
Abstract:The talk mainly concerns the issue of connectedness at infinity for gradient Ricci solitons. Ricci solitons are precisely the self-similar solutions to the Ricci flows. They play an important role in the singularity analysis of Ricci flows and are of interest of themselves. This is joint work with Ovidiu Munteanu.
## Escobar’s conjecture on lower bound for first Steklov eigenvalue
Xia, Chao
E-mail: chaoxia@xmu.edu.cn
Xiamen University, P.R.China
Abstract: It was conjectured by Escobar in 1999 that for a smooth compact Riemannian manifold with boundary, which has nonnegative Ricci curvature and boundary principal curvatures bounded below by some c>0, the first Steklov eigenvalue is greater than or equal to c with equality holding only on isometrically Euclidean balls with radius 1/c. In this talk, we present a resolution to this conjecture in the case of nonnegative sectional curvature. This is a joint work with Changwei Xiong at ANU.
## Solutions to the equations from the conformal geometry
Xu, Lu
E-mail:xulu@hnu.edu.cn
Hunan University, P.R.China
Abstract: We solve the Gursky-Streets equations with uniform $C^{1, 1}$ estimates for $2k\leq n$. An important new ingredient is to show the concavity of the operator which holds for all $k\leq n$. Our proof of the concavity heavily relies on Garding's theory of hyperbolic polynomials and results from the theory of real roots for (interlacing) polynomials. Together with this concavity, we are able to solve the equation with the uniform $C^{1, 1}$ \emph{a priori estimates} for all the cases $n\geq 2k$. Moreover, we establish the uniqueness of the solution to the degenerate equations for the first time. TBA
## A few properties of global solutions of the heat equation on Euclidean space and some manifolds
Zhang, Qi
E-mail:qizhang@math.ucr.edu
University of California,Riverside, U.S.A.
Abstract:We report some recent results on Martin type representation formulas for ancient solutions of the heat equation and dimension estimates of the space of these solutions under some growth assumptions. We will also present a new observation on the time analyticity of solutions of the heat equation under natural growth conditions. One application is a solvability condition of the backward heat equation, i.e. under what condition can one turn back the clock in a diffusion process. Part of the results are joint work with Fanghua Lin and Hongjie Dong.
## Unstability of Kaehler-Ricci flow
Zhu, Xiaohua
E-mail:xhzhu@math.pku.edu.cn
Peking University, P.R.China
Abstract: In this talk, we will show that there exists a Fano manifold with admitting a Kaehler-Ricci soliton on which the Kaehler-Ricci flow is unstable for Kaehler metrics (the complex structure may vary) in the first Chern class. As a consequence, the second variation of Perelman's entropy on this manifold is not stable for Kaehler metrics in the first Chern class . The situation is totally different on Kaehler-Einstein manifolds on which the second variation of Perelman's entropy is always stable.
|
2019-09-16 14:59:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.778469979763031, "perplexity": 521.5535015406919}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572744.7/warc/CC-MAIN-20190916135948-20190916161948-00299.warc.gz"}
|
https://www.physicsforums.com/threads/partial-derivative-concept.231599/
|
# Partial derivative concept
1. Homework Statement
Given the partial derivative df/dx= 3-3(x^2)
what is d^2f/dydx?
I'm not sure if the answer would be 0, since x is held constant, or if it would remain 3-3(x^2) (since df/dx is a function of x now?)
## Answers and Replies
Related Calculus and Beyond Homework Help News on Phys.org
G01
Homework Helper
Gold Member
The answer is one of those choices. Here, think about it like this:
You are given a function:
$$g(x)=3-3x^2$$
You want to find: $$\frac{\partial g}{\partial y}$$
What is that derivative? Now, what if: $$g(x)=\frac{\partial f}{\partial x}$$
Does this change the partial derivative of g with respect to y?
Dick
Homework Helper
You were right the first time. With x held constant the d/dy is just differentiating a constant. It's 0.
HallsofIvy
As is always true, with "nice" functions, the two mixed derivatives are equal. You could find $\partial^2 f/\partial x\partial y$ by differentiating first with respect to x, then with respect to y: first getting -6x and then, since it does not depend on y, 0. Or you could differentiate first with respect to y, then with respect to x: getting 0 immediately and then, of course, the derivative of "0" with respect o x is 0.
|
2019-12-09 05:33:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8768817186355591, "perplexity": 928.8626768469759}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540517557.43/warc/CC-MAIN-20191209041847-20191209065847-00506.warc.gz"}
|
https://www.shaalaa.com/question-bank-solutions/potassium-40-can-decay-three-modes-it-can-decay-emission-b-emission-electron-capture-a-write-equations-showing-end-products-b-find-q-values-atomic-masses-composition-nucleus_70137
|
# Potassium-40 Can Decay in Three Modes. It Can Decay by β−-emission, B*-emission of Electron Capture. (A) Write the Equations Showing the End Products. (B) Find the Q-values - Physics
Sum
Potassium-40 can decay in three modes. It can decay by β-emission, B*-emission of electron capture. (a) Write the equations showing the end products. (b) Find the Q-values in each of the three cases. Atomic masses of ""_18^40Ar , ""_19^40K and ""_20^40Ca are 39.9624 u, 39.9640 u and 39.9626 u respectively.
(Use Mass of proton mp = 1.007276 u, Mass of ""_1^1"H" atom = 1.007825 u, Mass of neutron mn = 1.008665 u, Mass of electron = 0.0005486 u ≈ 511 keV/c2,1 u = 931 MeV/c2.)
#### Solution
(a) Decay of potassium-40 by βemission is given by
""_19"K"^40 → ""_20"Ca"^40 + β^- + bar"v"
Decay of potassium-40 by β+ emission is given by
""_19"K"^40 → ""_18"Ar"^40 + β^+ + "v"
Decay of potassium-40 by electron capture is given by
""_19"K"^40 + e^(-) → ""_18"Ar"^40 + "v"
(b)
Qvalue in the β decay is given by
Qvalue = [m(19K40) − m(20Ca40)]c2
= [39.9640 u − 39.9626 u]c2
= 0.0014 xx 931 MeV
= 1.3034 MeV
Qvalue in the β+ decay is given by
Qvalue = [m(19K40) − m(20Ar40) − 2me]c2
= [39.9640 u − 39.9624 u − 0.0021944 u]c2
= (39.9640 − 39.9624) 931 MeV − 1022 keV
= 1489.96 keV − 1022 keV
= 0.4679 MeV
Qvalue in the electron capture is given by
Qvalue = [ m(19K40) − m(20Ar40)]c2
= (39.9640 − 39.9624)uc2
= 1.4890 = 1.49 MeV
Is there an error in this question or solution?
#### APPEARS IN
HC Verma Class 11, Class 12 Concepts of Physics Vol. 2
Chapter 24 The Nucleus
Q 14 | Page 442
|
2021-04-14 11:18:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5308837294578552, "perplexity": 10992.378757724442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077810.20/warc/CC-MAIN-20210414095300-20210414125300-00452.warc.gz"}
|
https://beta.geogebra.org/m/pPu6Kb3z
|
# Euler spiral (Clothoid)
Clothoid is a curve whose curvature k changes linearly with its curve length (denote s or L). Clothoids are widely used as transition curves in railroad engineering for connecting and transiting the geometry between a tangent and a circular curve. Clothoid has the desirable property that the curvature k is linearly related to the arc length s. Although its defining formulas for coordinates are transcendental functions (Fresnel integrals), the important characteristics can be derived easily from equation k = s/A where A is constant. Some applications avoid working with the transcendental functions by proposing polynomial approximations to the clothoid, e.g. .
Determine the length s of Euler spiral for transition between straight road and circular arc of radius r = 9 m. Solution: Curvature of a bend must be the same as curvature of a clothoid, i.e. and s = 16 m.
## Clothoid k = s/A
Determine an angle length α between the tangent of Euler spiral at s = 16 m and x-axis. Solution: Formula for direction part of a clothoid. and α = 50,92°.
|
2022-09-27 01:45:50
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8921001553535461, "perplexity": 1112.2530703479551}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334974.57/warc/CC-MAIN-20220927002241-20220927032241-00010.warc.gz"}
|
https://hal.inria.fr/hal-01636792
|
# High-dimensional approximate r-nets
2 AROMATH - AlgebRe, geOmetrie, Modelisation et AlgoriTHmes
CRISAM - Inria Sophia Antipolis - Méditerranée , NKUA - National and Kapodistrian University of Athens
Abstract : The construction of r-nets offers a powerful tool in computational and metric geometry. We focus on high-dimensional spaces and present a new randomized algorithm which efficiently computes approximate $r$-nets with respect to Euclidean distance. For any fixed \epsilon>0, the approximation factor is 1+\epsilon and the complexity is polynomial in the dimension and subquadratic in the number of points. The algorithm succeeds with high probability. More specifically, the best previously known LSH-based construction of Eppstein et al. [EHS15] is improved in terms of complexity by reducing the dependence on \epsilon, provided that $\epsilon$ is sufficiently small. Our method does not require LSH but, instead, follows Valiant's [Val15] approach in designing a sequence of reductions of our problem to other problems in different spaces, under Euclidean distance or inner product, for which r-nets are computed efficiently and the error can be controlled. Our result immediately implies efficient solutions to a number of geometric problems in high dimension, such as finding the (1+\epsilon)-approximate k-th nearest neighbor distance in time subquadratic in the size of the input.
Keywords :
Document type :
Conference papers
Domain :
Cited literature [12 references]
https://hal.inria.fr/hal-01636792
Contributor : Ioannis Emiris <>
Submitted on : Wednesday, October 17, 2018 - 5:56:25 PM
Last modification on : Thursday, November 26, 2020 - 3:50:03 PM
Long-term archiving on: : Friday, January 18, 2019 - 3:51:33 PM
### File
EmirisEtalSoda1607.04755.pdf
Files produced by the author(s)
### Identifiers
• HAL Id : hal-01636792, version 1
• ARXIV : 1607.04755
### Citation
Georgia Avarikioti, Ioannis Z. Emiris, Loukas Kavouras, Ioannis Psarros. High-dimensional approximate r-nets. SODA: ACM/SIAM Symposium on Discrete Algorithms, Jan 2017, Barcelone, Spain. ⟨hal-01636792⟩
Record views
|
2021-09-18 02:38:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4969419240951538, "perplexity": 3414.33922013533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056120.36/warc/CC-MAIN-20210918002951-20210918032951-00513.warc.gz"}
|