https://serverfault.com/questions/344047/wait-rate-very-high-due-to-mysql-server-activity
# Wait rate very high due to MySQL server activity

Sometimes the server is very slow and needs a lot of time to serve requests. iotop shows a disk read rate of 1-2 MB/s on average for some minutes (which is not that much, actually); after that the server is very fast again. While it is slow, the wait rate is about 60-90% according to top. All MySQL caches are more than OK according to MySQL tuning primer, so I don't know why the MySQL server does so many disk reads. Is there any way to find out what causes so much read I/O in MySQL? I have to say that it is a virtual server, so could it be that another customer is using all the I/O capacity?

It is good to understand the underlying storage layer you have. Do you have a single physical disk? A RAID1 or RAID5 of a few disks, or a large storage array where the LUNs are made from many (40+) physical drives? Each physical drive can give you approximately 150-200 requests/s (depending on rotational speed). So the MB/s figure in the iostat/sar/dstat output isn't important: for sequential reads/writes, modern drives can do more than 100 MB/s, but for random requests of, say, 8 kB each, a drive will give you only 150 × 8 kB = 1.2 MB/s. Requests from a database server are almost always random.

The best metric to look at is the I/O service time, that is, the time it takes the storage to service your read or write request. You don't need to worry about how many disks you have: if the service time is lower than, say, 15-20 ms (milliseconds), then you know your storage is performing well. On an almost idle server with a battery-backed write cache (BBWC) you should see write service times under 1 ms and read times under 5 ms. This metric is also good for a VPS, as it will show you high service times even when the storage is busy servicing other clients.

• Very informative! Can you recommend any tool to monitor the I/O service time? – Chris Jan 5 '12 at 11:15
• For example, the diskstats plugin in munin.
But I suppose any other widely used monitoring tool has a plugin for it (for example Nagios). – Marki555 Jan 5 '12 at 11:39

Use sar to find out the history of I/O activity on the server. I think sar is already configured and running on that server; if not, set it up. There might be another virtual machine making heavy I/O calls, leaving your MySQL busy waiting for I/O. I would recommend not running MySQL, or any database with even moderate traffic, on a VPS solution. If it is serving real traffic it should be on a physical server, as I/O on a VPS tends to be bad, MySQL with bad I/O performs badly, and the whole point of having a DB is defeated.

• Is it really possible to look at the I/O activity other virtual machines are producing with sar? Yes, you're right, I should change to a dedicated server. – Chris Dec 25 '11 at 18:43
• @chris I didn't mean that sar would give you I/O stats of other VMs. Sar output should give you stats of I/O activity for your own virtual machine for, say, the past 30 days. This should give a fair idea of how good or bad things are. – bagavadhar Dec 26 '11 at 4:01
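For readers who want to compute the service time themselves rather than rely on munin, here is a minimal sketch. It assumes the Linux /proc/diskstats field layout (field 4 = reads completed, field 7 = milliseconds spent reading); the snapshot strings and the numbers in them are made-up illustrations, not real measurements:

```python
def read_service_time_ms(before, after, device):
    """Average read service time (ms) for `device` between two
    /proc/diskstats snapshots: delta(ms spent reading) / delta(reads)."""
    def fields(snapshot):
        for line in snapshot.strip().splitlines():
            parts = line.split()
            if parts[2] == device:
                # parts[3] = reads completed, parts[6] = ms spent reading
                return int(parts[3]), int(parts[6])
        raise ValueError(f"device {device!r} not found")

    reads0, ms0 = fields(before)
    reads1, ms1 = fields(after)
    delta_reads = reads1 - reads0
    return (ms1 - ms0) / delta_reads if delta_reads else 0.0

# Two illustrative snapshots taken some seconds apart (invented numbers):
t0 = "8 0 sda 1000 50 80000 4000 2000 10 16000 9000 0 5000 13000"
t1 = "8 0 sda 1200 55 96000 4900 2100 12 16800 9400 0 5200 14300"

print(read_service_time_ms(t0, t1, "sda"))  # (4900-4000)/(1200-1000) = 4.5 ms
```

In practice you would read /proc/diskstats twice with a sleep in between; a sustained average read service time well above ~20 ms would support the answer's diagnosis of an overloaded or shared storage layer.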
2021-04-22 03:48:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2428482621908188, "perplexity": 1884.2891836882916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039560245.87/warc/CC-MAIN-20210422013104-20210422043104-00382.warc.gz"}
https://www.gradesaver.com/textbooks/math/calculus/calculus-3rd-edition/chapter-6-applications-of-the-integral-6-5-work-and-energy-exercises-page-317/22
# Chapter 6 - Applications of the Integral - 6.5 Work and Energy - Exercises - Page 317: 22

$9800 c \left(\dfrac{ah^2}{3}+\dfrac{bh^2}{6}\right) \ \mathrm{J}$

#### Work Step by Step

A layer of water at height $y$ has width $a+\dfrac{(b-a)y}{h}$, so its weight is: $\mathrm{Force}= \mathrm{Mass} \times \mathrm{gravity} = 9800 c \left(a+\dfrac{(b-a)y}{h}\right) \Delta y \ \mathrm{N}$, and it must be lifted a distance $h-y$. Therefore, the work done can be computed as: $W=\int_{0}^{h} 9800 c \left(a+\dfrac{(b-a)y}{h}\right)(h-y) \ dy \\=9800 c \int_{0}^{h} \left(ah+(b-a) y\right) \ dy -9800 c \int_{0}^{h} \left(ay+\dfrac{(b-a)}{h} y^2\right) \ dy \\= 9800 c \left[ah^2+\dfrac{h^2(b-a)}{2}-\dfrac{ah^2}{2}-\dfrac{(b-a) h^2}{3} \right] \\= 9800 c \left(\dfrac{ah^2}{3}+\dfrac{bh^2}{6}\right) \ \mathrm{J}$
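The closed form can be sanity-checked with a quick midpoint-rule integration; the values of a, b, c and h below are arbitrary test values, not part of the exercise:

```python
def work(a, b, c, h, n=100_000):
    """Midpoint-rule approximation of W = ∫_0^h 9800*c*(a + (b-a)*y/h)*(h - y) dy."""
    dy = h / n
    total = 0.0
    for i in range(n):
        y = (i + 0.5) * dy  # midpoint of the i-th subinterval
        total += 9800 * c * (a + (b - a) * y / h) * (h - y) * dy
    return total

# Arbitrary test values:
a, b, c, h = 2.0, 3.0, 1.5, 4.0
closed_form = 9800 * c * (a * h**2 / 3 + b * h**2 / 6)
print(abs(work(a, b, c, h) - closed_form) < 1e-3 * closed_form)  # True
```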
2020-05-28 16:06:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6928790211677551, "perplexity": 835.1917200997389}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347399820.9/warc/CC-MAIN-20200528135528-20200528165528-00403.warc.gz"}
http://planetmath.org/ProofOfRodriguesRotationFormula
# proof of Rodrigues’ rotation formula

Let $[\mathbf{x},\mathbf{y},\mathbf{z}]$ be a frame of right-handed orthonormal vectors in $\mathbb{R}^{3}$, and let $\mathbf{v}=a\mathbf{x}+b\mathbf{y}+c\mathbf{z}$ (with $a,b,c\in\mathbb{R}$) be any vector to be rotated about the $\mathbf{z}$ axis, by an angle $\theta$ counter-clockwise. The image vector $\mathbf{v}^{\prime}$ is the vector $\mathbf{v}$ with its component in the $\mathbf{x},\mathbf{y}$ plane rotated, so we can write $\mathbf{v}^{\prime}=a\mathbf{x}^{\prime}+b\mathbf{y}^{\prime}+c\mathbf{z}\,,$ where $\mathbf{x}^{\prime}$ and $\mathbf{y}^{\prime}$ are the rotations by angle $\theta$ of the $\mathbf{x}$ and $\mathbf{y}$ vectors in the $\mathbf{x},\mathbf{y}$ plane. By the rotation formula in two dimensions, we have $\displaystyle\mathbf{x}^{\prime}=\cos\theta\,\mathbf{x}+\sin\theta\,\mathbf{y}\,,\quad\mathbf{y}^{\prime}=-\sin\theta\,\mathbf{x}+\cos\theta\,\mathbf{y}\,.$ So $\mathbf{v}^{\prime}=\cos\theta(a\mathbf{x}+b\mathbf{y})+\sin\theta(a\mathbf{y}-b\mathbf{x})+c\mathbf{z}\,.$ The vector $a\mathbf{x}+b\mathbf{y}$ is the projection of $\mathbf{v}$ onto the $\mathbf{x},\mathbf{y}$ plane, and $a\mathbf{y}-b\mathbf{x}$ is its rotation by $90^{\circ}$. So these two vectors form an orthogonal frame in the $\mathbf{x},\mathbf{y}$ plane, although they are not necessarily unit vectors.
Alternate expressions for these vectors are easily derived (especially with the help of the picture): $\displaystyle\mathbf{v}-(\mathbf{v}\cdot\mathbf{z})\mathbf{z}=\mathbf{v}-c\mathbf{z}=a\mathbf{x}+b\mathbf{y}\,,\quad\mathbf{z}\times\mathbf{v}=a(\mathbf{z}\times\mathbf{x})+b(\mathbf{z}\times\mathbf{y})+c(\mathbf{z}\times\mathbf{z})=a\mathbf{y}-b\mathbf{x}\,.$ Substituting these into the expression for $\mathbf{v}^{\prime}$: $\mathbf{v}^{\prime}=\cos\theta(\mathbf{v}-(\mathbf{v}\cdot\mathbf{z})\mathbf{z})+\sin\theta(\mathbf{z}\times\mathbf{v})+c\mathbf{z}\,,$ which could also have been derived directly if we had first considered the frame $[\mathbf{v}-(\mathbf{v}\cdot\mathbf{z})\mathbf{z},\mathbf{z}\times\mathbf{v}]$ instead of $[\mathbf{x},\mathbf{y}]$. We simplify further: $\mathbf{v}^{\prime}=\mathbf{v}+\sin\theta(\mathbf{z}\times\mathbf{v})+(\cos\theta-1)(\mathbf{v}-(\mathbf{v}\cdot\mathbf{z})\mathbf{z})\,.$ Since $\mathbf{z}\times\mathbf{v}$ is linear in $\mathbf{v}$, this transformation is represented by a linear operator $A$. Under a right-handed orthonormal basis, the matrix representation of $A$ is directly computed to be $A\mathbf{v}=\mathbf{z}\times\mathbf{v}=\begin{bmatrix}0&-z_{3}&z_{2}\\ z_{3}&0&-z_{1}\\ -z_{2}&z_{1}&0\end{bmatrix}\begin{bmatrix}v_{1}\\ v_{2}\\ v_{3}\end{bmatrix}\,.$ We also have $\displaystyle-(\mathbf{v}-(\mathbf{v}\cdot\mathbf{z})\mathbf{z})=-a\mathbf{x}-b\mathbf{y}$ (rotate $a\mathbf{x}+b\mathbf{y}$ by $180^{\circ}$) $=\mathbf{z}\times(a\mathbf{y}-b\mathbf{x})$ (rotate $a\mathbf{y}-b\mathbf{x}$ by $90^{\circ}$) $=\mathbf{z}\times(\mathbf{z}\times(a\mathbf{x}+b\mathbf{y}+c\mathbf{z}))=A^{2}\,\mathbf{v}\,.$ So $\mathbf{v}^{\prime}=I\mathbf{v}+\sin\theta\,A\mathbf{v}+(1-\cos\theta)A^{2}\,\mathbf{v}\,,$ proving Rodrigues’ rotation formula.

## Relation with the matrix exponential

Here is a curious fact.
Notice that the matrix $A$ is skew-symmetric. (If we want to be coordinate-free, then in this section “matrix” should be replaced by “linear operator”, and transposes should be replaced by the adjoint operation.) This is not a coincidence: for any skew-symmetric matrix $B$, we have ${(e^{B})}^{\textrm{t}}=e^{{B}^{\textrm{t}}}=e^{-B}=(e^{B})^{-1}$, and $\det e^{B}=e^{\operatorname{tr}B}=e^{0}=1$, so $e^{B}$ is always a rotation. It is in fact the case that $\displaystyle I+\sin\theta\,A+(1-\cos\theta)A^{2}=e^{\theta A}$ for the matrix $A$ we had above! To prove this, observe that powers of $A$ cycle like so: $\displaystyle I,A,A^{2},-A,-A^{2},A,A^{2},-A,-A^{2},\ldots$ Then $\displaystyle\sin\theta\,A=\sum_{k=0}^{\infty}\frac{(-1)^{k}\theta^{2k+1}A}{(2k+1)!}=\sum_{k=0}^{\infty}\frac{\theta^{2k+1}A^{2k+1}}{(2k+1)!}=\sum_{k\textrm{ odd}}\frac{(\theta A)^{k}}{k!}\,,\quad(1-\cos\theta)\,A^{2}=\sum_{k=1}^{\infty}\frac{(-1)^{k-1}\theta^{2k}A^{2}}{(2k)!}=\sum_{k=1}^{\infty}\frac{\theta^{2k}A^{2k}}{(2k)!}=\sum_{k\geq 2\textrm{ even}}\frac{(\theta A)^{k}}{k!}\,.$ Adding $\sin\theta\,A$, $(1-\cos\theta)A^{2}$ and $I$ together, we obtain the power series for $e^{\theta A}$. Second proof: if we regard $\theta$ as time and differentiate the equation $\mathbf{v}^{\prime}=a\mathbf{x}^{\prime}+b\mathbf{y}^{\prime}+c\mathbf{z}$ with respect to $\theta$, we obtain $d\mathbf{v}^{\prime}/d\theta=a\mathbf{y}^{\prime}-b\mathbf{x}^{\prime}=\mathbf{z}\times\mathbf{v}^{\prime}=A\mathbf{v}^{\prime}$, whence the solution to this linear ODE is $\mathbf{v}^{\prime}=e^{\theta A}\mathbf{v}$. Remark: the operator $e^{\theta A}$, as $\theta$ ranges over $\mathbb{R}$, is a one-parameter subgroup of $\mathrm{SO}(3)$. In higher dimensions $n$, every rotation in $\mathrm{SO}(n)$ is of the form $e^{A}$ for a skew-symmetric $A$, and the second proof above can be modified to prove this more general fact.
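The final formula $\mathbf{v}^{\prime}=I\mathbf{v}+\sin\theta\,A\mathbf{v}+(1-\cos\theta)A^{2}\mathbf{v}$ is easy to check numerically. The sketch below (plain Python, with illustrative helper names) applies it via two cross products, using $A\mathbf{v}=\mathbf{z}\times\mathbf{v}$ and $A^{2}\mathbf{v}=\mathbf{z}\times(\mathbf{z}\times\mathbf{v})$:

```python
import math

def cross(u, v):
    """Cross product u x v of two 3-vectors given as tuples."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def rodrigues(v, z, theta):
    """Rotate v about the unit axis z by angle theta, using
    v' = v + sin(theta) (z x v) + (1 - cos(theta)) (z x (z x v))."""
    zxv = cross(z, v)        # A v
    zxzxv = cross(z, zxv)    # A^2 v
    return tuple(v[i] + math.sin(theta) * zxv[i]
                 + (1 - math.cos(theta)) * zxzxv[i] for i in range(3))

# Rotating the x-axis a quarter turn about the z-axis should give the y-axis:
print(tuple(round(c, 6) for c in
            rodrigues((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2)))
# → (0.0, 1.0, 0.0)
```

Note that the component along the axis is untouched, as the derivation above requires.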
Title: proof of Rodrigues’ rotation formula (ProofOfRodriguesRotationFormula)
Date: 2013-03-22 15:23:25
Owner/Author: stevecheng (10074)
Revision: 14
Type: Proof
Classification: msc 51-00, msc 15-00
Related: DimensionOfTheSpecialOrthogonalGroup
2018-03-22 06:26:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 81, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9556524157524109, "perplexity": 170.6084928546086}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647777.59/warc/CC-MAIN-20180322053608-20180322073608-00289.warc.gz"}
https://www.studyadda.com/ncert-solution/data-handling_q15/542/45017
# question_answer 15) The following table shows the number of bicycles manufactured in a factory during the years 1998 to 2002. Illustrate this data using a bar graph. Choose a scale of your choice.

Years | Number of bicycles manufactured
1998 | 800
1999 | 600
2000 | 900
2001 | 1100
2002 | 1200

(a) In which year were the maximum number of bicycles manufactured? (b) In which year were the minimum number of bicycles manufactured?

Taking a scale of 1 unit = 100 bicycles, the bar heights are: 1998: $\frac{800}{100}=8\text{ units}$; 1999: $\frac{600}{100}=6\text{ units}$; 2000: $\frac{900}{100}=9\text{ units}$; 2001: $\frac{1100}{100}=11\text{ units}$; 2002: $\frac{1200}{100}=12\text{ units}$

(a) The maximum number of bicycles (1200) was manufactured in 2002. (b) The minimum number of bicycles (600) was manufactured in 1999.
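The scale arithmetic above (1 unit = 100 bicycles) can be expressed in a few lines of Python; the variable names are illustrative:

```python
# Number of bicycles manufactured per year (from the table above)
bicycles = {1998: 800, 1999: 600, 2000: 900, 2001: 1100, 2002: 1200}

scale = 100  # chosen scale: 1 unit of bar height = 100 bicycles
bars = {year: count // scale for year, count in bicycles.items()}
print(bars)  # {1998: 8, 1999: 6, 2000: 9, 2001: 11, 2002: 12}

# (a) year of maximum production, (b) year of minimum production
print(max(bicycles, key=bicycles.get))  # 2002
print(min(bicycles, key=bicycles.get))  # 1999
```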
2020-09-25 11:30:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4447531998157501, "perplexity": 755.7314273536066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400223922.43/warc/CC-MAIN-20200925084428-20200925114428-00103.warc.gz"}
https://www.acmicpc.net/problem/6255
Time limit: 1 s · Memory limit: 128 MB · Submissions: 4 · Accepted: 4 · Solvers: 4 · Success ratio: 100.000%

## Problem

Mr. Farmer owns a rectangular farm in Newfoundland. He has partitioned his farm into n × m equal-size squares. In each square, he has planted one of two grains: wheat or corn. Through many years of experience, Mr. Farmer has found a simple rule for improving the quality of the grains: if "1" represents wheat and "2" represents corn, then no 2 × 2 block of adjacent squares in the farm may form either of the following "crossing" patterns:

1 2
2 1

2 1
1 2

Mr. Farmer has a combine for harvesting the grains. To harvest each type of grain, the farmer needs to attach a special cutter to his combine. At the beginning, when no cutter is attached to the combine, and also during the course of replacing the combine's cutter, the farmer can move the combine to any place on the farm without harvesting the grains. However, once a cutter is attached, the combine can only move either over squares containing grain compatible with the combine's cutter, or over squares with no grain (namely, those harvested before). Since replacing the cutter is a tedious task, the farmer wishes to hire you to show him the best way to harvest the whole farm using a minimum number of cutter replacements. Your task is to write a program to help the farmer find that minimum number.

## Input

There are multiple test cases in the input. Each test case starts with a line containing two integers n and m (1 ≤ n × m ≤ 10^5) that specify the number of rows and columns in the farm, respectively. The next n lines each contain m characters from the set {1, 2}, representing the type of grain in the corresponding square of the farm. The input terminates with a line containing "0 0".

## Output

For each test case, output a single line containing the minimum number of times the farmer needs to replace the combine's cutter. The first time a cutter is attached to the combine is also counted as a replacement.
## Sample Input 1

```
2 3
112
211
4 4
1212
2211
1112
1222
0 0
```

## Sample Output 1

```
2
3
```
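A full solver is not shown here, but the multi-test-case input format described above (cases terminated by a "0 0" line) can be parsed with a harness like this sketch; the names are illustrative:

```python
def read_farms(lines):
    """Yield (n, m, rows) for each test case; input ends at a '0 0' line."""
    it = iter(lines)
    for header in it:
        n, m = map(int, header.split())
        if n == 0 and m == 0:
            return
        rows = [next(it).strip() for _ in range(n)]
        yield (n, m, rows)

# The sample input from the problem statement:
sample = ["2 3", "112", "211", "4 4", "1212", "2211", "1112", "1222", "0 0"]
for n, m, rows in read_farms(sample):
    print(n, m, rows)  # one line per test case
```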
2022-05-18 02:53:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1733507663011551, "perplexity": 1701.3709689069533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662521041.0/warc/CC-MAIN-20220518021247-20220518051247-00123.warc.gz"}
https://www.physicsforums.com/threads/the-complexity-of-modern-science-comments.851580/
# The Complexity of Modern Science - Comments

mfb (Mentor) submitted a new PF Insights post, The Complexity of Modern Science. Continue reading the Original PF Insights Post.

## Answers and Replies

Gold Member 2022 Award: I think the biggest problem is that people find it SO much easier to watch pop science on TV than to do any actual study of science, and you know how those shows get so much wrong. I think they do sometimes inspire young people to study, but overall I'm not sure whether they do more harm than good, and they certainly give those adults who are not likely to pursue actual science further a very poor view of the kind of actual science you talk about. The producers of the TV shows can't be blamed for this any more than McDonald's can be blamed for serving tasty junk food. People sell what other people buy, and there are lots of buyers for junk food and junk science, especially since they LOOK so tasty, what with all the nifty graphics and tomato sauce and all. The first thing we COULD do (and won't) would be to insist that people who teach science, at any level but particularly below the college level, be required to have at least some idea what they are talking about. Teachers below the high school level, in particular, generally have no idea what science is really all about.

Gold Member: Hi mfb: This is a great topic to discuss. I hope that some useful ideas will emerge, but I confess I am very pessimistic. NOVA is an excellent source for informing the public about science, but I am guessing that for each viewer of NOVA there are more than a hundred viewers of FOX NEWS. Regards, Buzz

I think the biggest problem is that people find it SO much easier to watch pop science on TV than to do any actual study of science, and you know how those shows get so much wrong.
I think they do sometimes inspire young people to study, but overall I'm not sure whether they do more harm than good, and they certainly give those adults who are not likely to pursue actual science further a very poor view of the kind of actual science you talk about. The producers of the TV shows can't be blamed for this any more than McDonald's can be blamed for serving tasty junk food. People sell what other people buy, and there are lots of buyers for junk food and junk science, especially since they LOOK so tasty, what with all the nifty graphics and tomato sauce and all. The first thing we COULD do (and won't) would be to insist that people who teach science, at any level but particularly below the college level, be required to have at least some idea what they are talking about. Teachers below the high school level, in particular, generally have no idea what science is really all about.

So first of all, I find this insight to be very interesting. However, as a crackpot by some definitions, I must disagree with you and phinds on one point. I do study actual science, and the calculus that goes with it. I learned on this forum to stay away from pop-sci, but still, I come up with "theories". No, as a middle schooler I do not believe these theories will go anywhere. What I do is make a theory based on math and what I have learned so far from other sources, and then look for what I did wrong. Sometimes I need to ask a professional about what I did wrong. In fact, I learn better from actually thinking about it and then finding a problem in my thinking and/or math. I do agree that some people need to learn real science before making a statement like "special relativity is wrong", but some people (like me) learn from thinking, challenging the theories, and finding out why they're wrong. Just something to think about. Overall, however, a great insight.

BTW, for some reason part of the quote is not in the quote part of my post.
Mentor: "but some people (like me) learn from thinking, challenging the theories and finding out why I'm wrong." That is perfectly fine, as long as challenging the theories is based on actual knowledge of those theories. I don't see what would be wrong with the quote.

As we've seen in the thread https://www.physicsforums.com/threa...is-not-weird-unless-presented-as-such.850860/, most concepts can't be genuinely described in word language. Susskind also mentioned this in an interview I watched, where the interviewer asked some quantum mechanics question and Susskind basically said he couldn't answer it with words, only in math. Not long ago I debated a man who claimed to understand the concepts of the Big Bang very well and didn't need to know the math. I promptly left the conversation. Fact is that the general public certainly can't handle, or have the patience for, reading research papers, so pop-sci news agencies water the research down into cookie-cutter pieces with catchy headlines which in the end only vaguely resemble what the research really means. This is good for the public's imagination but doesn't do justice to how complex the research is.

"I don't see what would be wrong with the quote." On the actual Insights page it turned out differently.

Gold Member 2022 Award: "So first of all, I find this insight to be very interesting. However, as a crackpot by some definitions, I must disagree with you and phinds on one point. I do study actual science, and the calculus that goes with it. I learned on this forum to stay away from pop-sci, but still, I come up with "theories". No, as a middle schooler I do not believe these theories will go anywhere. What I do is make a theory based on math and what I have learned so far from other sources, and then look for what I did wrong." I do not find that to be at all in disagreement with my statements or beliefs, and in fact I think it's a fine way to forge ahead in science for someone your age.
It keeps you interested, but as long as you stay grounded in the knowledge that your theories are based, at this level of your development, more in ignorance than in knowledge, you are using your process as a learning tool, and that's great.

"I do not find that to be at all in disagreement with my statements or beliefs, and in fact I think it's a fine way to forge ahead in science for someone your age. It keeps you interested, but as long as you stay grounded in the knowledge that your theories are based, at this level of your development, more in ignorance than in knowledge, you are using your process as a learning tool, and that's great." Ok, I just picked up an implication (that may or may not have been there) in both your post and the insight that these theories all come from ignorance and stubbornness to accept mainstream theories, and not from an attempt to learn more about these theories by challenging them (and I am not saying people don't create theories out of ignorance and stubbornness to accept mainstream theories, because they do). Most of what I have learned about relativity has been from challenging it and finding a list of reasons why my challenge was incorrect.

"NOVA is an excellent source for informing the public about science, but I am guessing that for each viewer of NOVA there are more than a hundred viewers of FOX NEWS." And we all know how great Fox News is at explaining/acknowledging proven scientific fact.

Hornbein: A substantial majority of people believe what they want to believe. They disregard information/misinformation that contradicts what they believe. They absorb information/misinformation that reinforces what they believe. What they believe is usually self-flattering. They choose a person or organization in which to place their trust. They believe all information/misinformation that issues from this source. Corollary: sources of flattering information are more likely to earn such trust.
To believe that you have easily come up with a simple insight that thousands of hard-working geniuses have missed is very self-flattering. To believe that you are ignorant and incompetent in some area is anti-self-flattering. Ergo, crackpottery is a basic human tendency. You can coerce a student into learning, but each student is free to disbelieve in, or subsequently forget, what was learned. Once no longer a student, untrusted sources no longer have any power over their learning or beliefs.

-----

I think the educational system is obsolete, ineffective, inflicts pain on the student, and needs a complete overhaul. It has hardly changed since ancient times. Can you blame people for hating it? There is room for improvement.

JakeBrodskyPE: When I was a child, getting into ham radio, I was amazed at the retail price of a radio versus what the parts cost. What I didn't realize is that the radio had a lot of marketing and engineering expenses that needed to be recouped. This may sound crass, but we need MORE marketing. Science invokes a sense of wonder in its practitioners that is only rarely described well. Carl Sagan did that. We don't realize how good that was until watching the remake of Cosmos. Even his masterful and charismatic protege Neil deGrasse Tyson is only a pale reflection of the kind of science marketing that Carl Sagan did. Don't get me wrong, Tyson is brilliant; but Sagan's presentation was the very embodiment of artistry. If you can't be as brilliant as Sagan, you can make up for it with quantity. That's where we need to go. Relying upon public enthusiasm will get you only so far. The biggest successes in business, and in history in general, were accompanied by masterful marketing. We need more.

JorisL: I think we need another Feynman. However, I still have to watch/read some of Sagan's work to fully appreciate his efforts. Even so, I find it hard to believe anyone can match, let alone surpass, Feynman's QED.
He not only simplifies the theory but also mentions why it is difficult to do cutting-edge science. The latter is what is lacking in most popular accounts, but also in education. Which I understand, since most teachers over here haven't gotten close to the "cutting edge". But that last point is for another discussion.

"Ok, I just picked up an implication (that may or may not have been there) in both your post and the insight that these theories all come from ignorance and stubbornness to accept mainstream theories, and not from an attempt to learn more about these theories by challenging them (and I am not saying people don't create theories out of ignorance and stubbornness to accept mainstream theories, because they do). Most of what I have learned about relativity has been from challenging it and finding a list of reasons why my challenge was incorrect." One could argue this is the way to study science, and especially physics. However, it would take ages, while the time we have (in high school) is severely limited. Often people also have the idea that since physics is called an exact science, the approximate models we have are useless. Also, the interplay of theory and experiment in the scientific method is not a one-way street. It's a complicated interplay of ideas and results. (A dance, if you like metaphors.) I'd say keep up the good work and remain critical.

Amrator: Awesome Insight, mfb. I'm going to show it to one of my physics professors if you don't mind.
Monsterboy: One can't expect science TV shows to tell all the intricate details; if they did, the public would stop watching, because not everyone in the public is a scientist, most of it would start going over their heads, and it would scare them away from science. The shows are only aimed at giving a brief idea to the lay public (and motivating them to pursue science), who are completely or largely disconnected from modern science. I think the problem mentioned in this article can be overcome if science documentaries are screened for errors by the actual scientists who work on the topic (either retired or active), and if they make it clear to the public that they are simplifying very complicated science in order for them to understand it, and that real knowledge only comes with hard study.

Gold Member: "I think the biggest problem is that people find it SO much easier to watch pop science on TV than to do any actual study of science, and you know how those shows get so much wrong. I think they do sometimes inspire young people to study, but overall I'm not sure whether they do more harm than good, and they certainly give those adults who are not likely to pursue actual science further a very poor view of the kind of actual science you talk about. The producers of the TV shows can't be blamed for this any more than McDonald's can be blamed for serving tasty junk food. People sell what other people buy, and there are lots of buyers for junk food and junk science, especially since they LOOK so tasty, what with all the nifty graphics and tomato sauce and all. The first thing we COULD do (and won't) would be to insist that people who teach science, at any level but particularly below the college level, be required to have at least some idea what they are talking about. Teachers below the high school level, in particular, generally have no idea what science is really all about."
"So first of all, I find this insight to be very interesting. However, as a crackpot by some definitions, I must disagree with you and phinds on one point. I do study actual science, and the calculus that goes with it. I learned on this forum to stay away from pop-sci, but still, I come up with "theories". No, as a middle schooler I do not believe these theories will go anywhere. What I do is make a theory based on math and what I have learned so far from other sources, and then look for what I did wrong. Sometimes I need to ask a professional about what I did wrong. In fact, I learn better from actually thinking about it and then finding a problem in my thinking and/or math. I do agree that some people need to learn real science before making a statement like "special relativity is wrong", but some people (like me) learn from thinking, challenging the theories and finding out why I'm wrong. Just something to think about. Overall, however, a great insight." Unfortunately, you are using "theory" in the layman's sense. What you are coming up with are at best hypotheses. It is good to think up new ideas, but calling them theories without rigorous testing does an injustice to science. BoB

mrspeedybob: As we've seen in the thread https://www.physicsforums.com/threa...is-not-weird-unless-presented-as-such.850860/, most concepts can't be genuinely described in word language. Susskind also mentioned this in an interview I watched, where the interviewer asked some quantum mechanics question and Susskind basically said he couldn't answer it with words, only in math. Not long ago I debated a man who claimed to understand the concepts of the Big Bang very well and didn't need to know the math. I promptly left the conversation.
Fact is that the general public certainly can't handle, or have the patience for, reading research papers, so pop-sci news agencies water the research down into cookie-cutter pieces with catchy headlines which in the end only vaguely resemble what it really means. This is good for the public's imagination but doesn't do justice to how complex the research is. I think Richard Feynman did a great job of accurately explaining concepts to lay people without losing them in the math.

That's a very difficult thing to do, but that does not mean that it can't be done, or that it should not be attempted. It should. A good recent example was "The Space Doctor's Big Idea": http://www.newyorker.com/tech/elements/the-space-doctors-big-idea-einstein-general-relativity

Mentor: Non-science is a societal problem. It forwards the interests of the idiot fringe and of FUD mongers. Idiot fringe: in the US, there are so-called 'anti-vaxxers', people who are vehemently against vaccinating children. Therein lies the issue with junk science in general. These folks made decisions, based on pulp news sources and magazines, that impact not only their immediate family but others in the community. Measles was essentially unreported by the US CDC for years; there were very, very few cases. Not anymore. Measles can result in death and lifelong medical problems. Example, SSPE: https://www.nlm.nih.gov/medlineplus/ency/article/001419.htm

The same thing happens with a variety of issues that are subjected to *FUD attacks in order to further the economic positions of very powerful companies. Example: the US tobacco lobby's very effective attacks against anti-tobacco legislation. The stupidity did not abate until science made discoveries that were so very plain, smack-you-in-the-face, that even the pop-science idiots could not get them wrong. FUD still lives on with the climate "debate", as newspapers call it. It is not a debate. You cannot legislate scientific observations and results.
*FUD: fear, uncertainty, and doubt.

jerromyjon: I think there will always be two camps: those who try to strictly adhere to the rules of science, which relies mainly on the math, which is enough to communicate; and the laypeople who know the world from their own perspectives, which are numerous, I'm certain. The perspective they have built in their mind to comprehend the reality they are familiar with typically fails, and so they grapple for a mechanism to leave their sole understanding intact. Many don't have the neural plasticity to drop lifelong thought processes and start over, let alone the determination to seek a deeper, more accurate understanding, and the mental ability to assemble intricate models. So pop-sci caters to their inadequacies with "fantastic" phrases like "wave-particle duality", and life goes on.

epistememe: A substantial majority of people believe what they want to believe. They disregard information/misinformation that contradicts what they believe. They absorb information/misinformation that reinforces what they believe. What they believe is usually self-flattering. They choose a person or organization in which to place their trust. They believe all information/misinformation that issues from this source. Corollary: sources of flattering information are more likely to earn such trust. To believe that you have easily come up with a simple insight that thousands of hard-working geniuses have missed is very self-flattering. To believe that you are ignorant and incompetent in some area is anti-self-flattering. Ergo, crackpottery is a basic human tendency.

You can coerce a student into learning, but each student is free to disbelieve in, or subsequently forget, what was learned. Once no longer a student, untrusted sources no longer have any power over their learning or beliefs.

I think the educational system is obsolete, ineffective, inflicts pain on the student, and needs a complete overhaul.
It has hardly changed since ancient times. Can you blame people for hating it?

There is room for improvement. I agree with most of your assessment. Now how about offering solutions?

toumaza: The complexity of any science field happens at the level of our mind, the same as one could say maths is easier than any other science subject. Also, some people remember and understand better when you explain the subject to them down to the minor details, while others (of which I am part) prefer to understand it on their own rather than waiting for lecturers to give them the explanations. Just trying to say that complexity is at the level of individuals.

Quote: Unfortunately you are using theory in the layman's sense. What you are coming up with are at best hypotheses. It is good to think up new ideas but calling them theories without rigorous testing does an injustice to science. BoB

Yes, it is more of a hypothesis than a theory.

Jeff Rosenbury: Physics Forum is dedicated to advancing the standard model. This discussion doesn't seem to be part of that. As I see it, every part of the standard model was once part of fringe science, at least to the extent of not being accepted/proven. While some fringe science is clearly wrong (or not even wrong), some tiny fraction of it will someday work its way into the standard model. A discussion like this opens up the question of what is right or wrong about fringe science. That's a problem for a website that tries to avoid fringe science. I do think anyone attacking the standard model needs to have a good understanding of it before declaring it wrong. This forum helps to provide that understanding. Hopefully the work we do here will one day allow us to replace the standard model with one closer to the truth. (Not that the standard model is off by a lot, but there are still outstanding problems.) Historically, the liberal arts have always ruled science. That is because a good story has always received more funding than being right.
(NSF budget: $7 billion in 2012; Disney: $11 billion)

Greg Bernhardt:
Quote: Physics Forum is dedicated to advancing the standard model.
Is it? If it was, we would not have a forum for relativity, biology, chemistry, homework help, etc.

jerromyjon:
Quote: Is it? If it was, we would not have a forum for relativity, biology, chemistry, homework help, etc.
I think it was intended to say the standards which Physics Forums holds as the standard consensus of mainstream science in each category?

"What do you think? Can science popularisation improve in that aspect, and if yes, how?"

I think one big thing that can help is for scientists themselves to talk more openly about what they do. The scientific community can't rely on the entertainment industry to popularize science and then complain when it gets it wrong. Nor can the scientific community hope that politicians won't spin their work and results to reinforce their agendas. Small things that can help:
• More open-access academic articles. I realize there are complications that come with this. It's not free to produce these things. But when information is locked away in the ivory tower of academia, all the outsiders can do is speculate.
• Authors providing non-technical summaries of their work. I've noticed a few journals in my field now requiring this.
• More non-technical summary presentations, or non-technical components of technical presentations, at major conferences. Mass media are more likely to cover these events and report on what's found when it's easier for the journalists to understand what's happening.
• Scientists volunteering time to go out into the community: giving public lectures, coming into classrooms, and speaking to teachers. I volunteer with a local program called "Scientists and Engineers in the Classroom" because I think programs like this are very important for helping students and teachers learn about how science happens.
• More blogging or popular content coming from scientists about what it is they do.
(And yes, that's a kudos to Insights!) While I agree that more people like Feynman and Sagan can help, I think it would help a lot more if "everyday" scientists were more vocal about what they do on social media.

JorisL:
Quote: Scientists volunteering time to go out into the community: giving public lectures, coming into classrooms, and speaking to teachers. I volunteer with a local program called "Scientists and Engineers in the Classroom" because I think programs like this are very important for helping students and teachers learn about how science happens.
Funny you mention this; after reading it I recalled something from a course called "Historical and Social Aspects of Physics". It was mentioned that at some time in the 20th century, science was part of regular culture in Germany (I believe it was during the Weimar Republic era), meaning people would go to public lectures like we go to plays or movies today. Continuing this train of thought, I figured that universities could easily arrange for such a thing in this day and age. I don't think it is commonplace here (I haven't heard about it, at least). It could be a monthly thing; I'm quite confident you'd find volunteers in every department. A gifted presenter would include news reports of, e.g., the loophole-free Bell test, which over here was basically announced as "Einstein was wrong after all".

Staff Emeritus, Homework Helper:
Quote: So first of all, I find this insight to be very interesting. However, as a crackpot by some definitions, I must disagree with you and phinds on one point. I do study actual science and calculus with that science. I learned on this forum to stay away from popsci, but still, I come up with "theories". No, as a middle schooler I do not believe these theories will go anywhere. What I do is I make a theory based on math and what I have learned so far from other sources, and then I look for what I did wrong. Sometimes I need to ask a professional about what I did wrong.
In fact, I learn better from actually thinking about it and then finding a problem in my thinking and/or math. I do agree that some people do need to learn real science before making a statement like "special relativity is wrong", but some people (like me) learn from thinking, challenging the theories and finding out why I'm wrong. Just something to think about. Overall, however, a great insight.

Using the word theory the way you do is probably what sets off crackpot alarms if you've been accused of being a crackpot. I tend to roll my eyes when someone claims to be developing a new theory. From what you describe, I'd say you're constructing your understanding and knowledge of a topic, not a new theory. One way you do that is to take your current understanding and test it. When you encounter nonsensical or puzzling results, you figure out where you went wrong with your reasoning, or where your understanding of a topic may have been flawed, and try to resolve the inconsistency. That's completely normal. That's how you learn. The hallmark of the crackpot, though, is the refusal to learn from one's mistakes. According to them, their understanding isn't wrong; everyone else's is. This is the mistake you want to avoid. The insight touches on what allows the crackpot to even entertain the idea that they're right and everyone else is wrong. Part of that is not understanding the process of science. When you don't understand all of the work that goes into research and what the research actually says, it becomes easy to dismiss established scientific knowledge and theories as being based on someone's whims.

Quote: Using the word theory the way you do is probably what sets off crackpot alarms if you've been accused of being a crackpot. I tend to roll my eyes when someone claims to be developing a new theory. From what you describe, I'd say you're constructing your understanding and knowledge of a topic, not a new theory.
One way you do that is to take your current understanding and test it. When you encounter non-sensical or puzzling results, you figure out where you went wrong with your reasoning or where your understanding of a topic may have been flawed and try to resolve the inconsistency. That's completely normal. That's how you learn. The hallmark of the crackpot, though, is the refusal to learn from one's mistakes. According to them, their understanding isn't wrong; everyone else's is. This is the mistake you want to avoid. The insight touches on what allows the crackpot to even entertain the idea that they're right and everyone else is wrong. Part of that is not understanding the process of science. When you don't understand all of the work that goes into research and what the research actually says, it becomes easy to dismiss established scientific knowledge and theories as being based on someone's whims. Yes, as I said, it's more of a hypothesis. Jeff Rosenbury "What do you think? Can science popularisation improve in that aspect, and if yes, how?" I think one big thing that can help is for scientists themselves to talk more openly about what they do. The scientific community can't rely on the entertainment industry to popularize science, and then complain when they get it wrong. Nor can the scientific community hope that politicians won't spin their work and results to reinforce their agendas. I think we need a new economic model for an intellectual economy. Our legal and social values were developed for an industrial economy. They reward behaviors that produce real goods. Mass market art (T.V., etc.) is aimed at keeping producing workers healthy and happy. Science (and less banal art) is poorly rewarded unless it supports those now non-functional goals. The idea that scientists should be required to volunteer to act as teachers seems odd to me. The two jobs are quite different (at least on the general public level). 
Would we ask a carpenter to volunteer his time to explain how the joists were laid in a new house before it could be sold? Scientists have historically made good money when they could leverage their high IQs to manipulate the system. But otherwise they tend to lag behind other professions requiring similar levels of learning/talents. To me this indicates a flaw in the economic system that needs fixing, not that scientists should become media manipulators. I don't have a solution, but offering more science prizes seems like a good idea. Paying scientists for learning is also an obvious step. But I'm hardly the first to decry the student loan situation.

Gold Member: I don't get why some people want to overturn SR. Special relativity is one of the most intriguing things I have ever come across. It's where my love of physics began, and it led me directly down the path to GR, which is my favorite scientific subject, period. What do people have against relativity?

Mentor:
Quote: What do people have against relativity?
1. They didn't learn it in high school.
2. Some of the implications (time dilation and length contraction) contradict our everyday observations.

Quote: I think we need a new economic model for an intellectual economy. Our legal and social values were developed for an industrial economy. They reward behaviors that produce real goods. Mass market art (T.V., etc.) is aimed at keeping producing workers healthy and happy. Science (and less banal art) is poorly rewarded unless it supports those now non-functional goals.
I'm not sure I understand this point. You can't expect an economic system to reward you unless you are contributing something that it values. Science is very well rewarded when it produces things like MRI machines, smart phones, vaccines, etc. You can't simply put people with high IQs at the top of an economic pyramid and pay them to do whatever their hearts desire.
Quote: The idea that scientists should be required to volunteer to act as teachers seems odd to me. The two jobs are quite different (at least on the general public level). Would we ask a carpenter to volunteer his time to explain how the joists were laid in a new house before it could be sold?
I didn't say it should be "required." But if you want more people to know what it is that you do, you start by making them aware of what it is that you do, and explaining why it's relevant. In some fields, it's self-evident. But with science, the real-world relevance can lag substantially behind the investment of money, time and resources.

Your carpenter example is not a good one. You don't have to get a skilled tradesman to explain his skill, because the skill needs to be performed to a given standard (building code, for example). If it's not, the consumer has recourse. And beyond that, trades have their work inspected all the time. When I buy a house, I both inspect it thoroughly myself and hire someone who knows the local code to go through it with a fine-tooth comb.

Quote: Scientists have historically made good money when they could leverage their high IQs to manipulate the system. But otherwise they tend to lag behind other professions requiring similar levels of learning/talents. To me this indicates a flaw in the economic system that needs fixing, not that scientists should become media manipulators.
The key point here is that the other fields that require similar levels of training are professions: doctors, lawyers, engineers, etc. If you're a doctor, it's easy to convince someone, or the taxpayers in general, to reimburse you for exercising your skill set on them, because they understand that it's likely to cure whatever is ailing them. If you're a lawyer, you can expect reimbursement because your skill set will help a client to draft a contract that will protect him or her, or navigate a set of problems with very serious consequences.
The professions establish colleges that act to ensure those skills meet a certain standard so that the public doesn't need to evaluate individual practitioners. But science isn't a profession - at least not in that sense. There are no licences or professional standards. It may be embarrassing if you have to retract a journal article, but in most cases no one is going to sue a scientist for making a mistake. StatGuy2000 zoobyshoe I think one big thing that can help is for scientists themselves to talk more openly about what they do. The scientific community can't rely on the entertainment industry to popularize science, and then complain when they get it wrong. Something like this. Specifically what's wrong is that most scientists seem to be suffering a very bad case of "curse of knowledge." Knowing what they know, they can't conceive of a mind that doesn't also know it, and they don't have any idea what that mind needs to hear to understand what science is actually up to. However, it's never been considered part of the job description of a scientist to be able to communicate to lay people, so this isn't a shortcoming. If, though, scientists perceive that being misunderstood is becoming disadvantageous, then it's up to them to figure out how to explain themselves and not leave it to the popular media.
http://mathhelpforum.com/math-challenge-problems/168357-solve-following-integral-print.html
# Solve the following integral:

• Jan 14th 2011, 10:48 AM wonderboy1953
Solve the following integral: $\int (x^2 + 5x + 6)\sqrt{x + 1}\,dx$ by the addition-subtraction method. Moderator edit: Approved Challenge question.

• Jan 14th 2011, 11:35 AM dwsmith
Quote: Originally Posted by wonderboy1953: $\int (x^2 + 5x + 6)\sqrt{x + 1}\,dx$ by the addition-subtraction method.
Perhaps you can elaborate on this method, since a google search on this technique provides nothing useful.

• Jan 14th 2011, 11:45 AM wonderboy1953
Quote: Originally Posted by dwsmith: Perhaps you can elaborate on this method since a google search on this technique provides nothing useful.
I'll refer you to the thread "The addition-subtraction puzzle (calculus)" in the puzzle section.

• Jan 14th 2011, 06:33 PM TheCoffeeMachine
I'm sort of confused here. What are you trying to achieve with this integral? Letting $t = \sqrt{x+1}$ gives:
\begin{aligned} \int \left(x^2+5x+6\right)\sqrt{x+1}\,dx &= \int 2(t^2+1)(t^2+2)t^2\,dt \\ &= \int (2t^6+6t^4+4t^2)\,dt = \frac{2}{7}t^7+\frac{6}{5}t^5+\frac{4}{3}t^3+k \\ &= \frac{2}{7}\sqrt{(x+1)^7}+\frac{6}{5}\sqrt{(x+1)^5}+\frac{4}{3}\sqrt{(x+1)^3}+k. \end{aligned}

• Jan 14th 2011, 06:34 PM dwsmith
Quote: Originally Posted by TheCoffeeMachine (the working above).
I didn't even bother to look it up in the forum.

• Jan 15th 2011, 09:26 AM wonderboy1953
Quote: Originally Posted by TheCoffeeMachine: I'm sort of confused here. What are you trying to achieve with this integral?
Letting $t = \sqrt{x+1}$ gives:
\begin{aligned} \int \left(x^2+5x+6\right)\sqrt{x+1}\,dx &= \int 2(t^2+1)(t^2+2)t^2\,dt \\ &= \int (2t^6+6t^4+4t^2)\,dt = \frac{2}{7}t^7+\frac{6}{5}t^5+\frac{4}{3}t^3+k \\ &= \frac{2}{7}\sqrt{(x+1)^7}+\frac{6}{5}\sqrt{(x+1)^5}+\frac{4}{3}\sqrt{(x+1)^3}+k. \end{aligned}
A no-brainer for you and a few others perhaps (I could have given a much tougher problem). The A-S method is simpler/more efficient to apply to this type of problem, yet I've never seen it used in any math class nor text. A simpler method would help avoid algebraic mistakes (btw, please recheck your algebra as it needs fixing). Since the A-S method hasn't been used to solve this problem, my challenge still stands; you can try again, TheCoffeeMachine, or maybe someone else out there wants to take a whack at this problem. If you want to see details about this method, I'll just refer you to the third post on this thread.

• Jan 15th 2011, 10:11 AM TheCoffeeMachine
Quote: Originally Posted by wonderboy1953: A simpler method would help avoid algebraic mistakes (btw please recheck your algebra as it needs fixing).
Where? I've rechecked and I can't find anything that needs fixing! In fact, it's perfect! (Cool)

• Jan 15th 2011, 10:21 AM wonderboy1953
Quote: Originally Posted by TheCoffeeMachine: Where? I've rechecked and I can't find anything that needs fixing! In fact, it's perfect! (Cool)
Should be $\int (t^2+1)(t^2+2)t\,dt$, not $\int 2(t^2+1)(t^2+2)t^2\,dt$.

• Jan 15th 2011, 10:34 AM chiph588@
Quote: Originally Posted by wonderboy1953: Should be $\int (t^2+1)(t^2+2)t\,dt$, not $\int 2(t^2+1)(t^2+2)t^2\,dt$.
If you differentiate his answer you get the integrand.

• Jan 15th 2011, 10:39 AM TheCoffeeMachine
Quote: Originally Posted by wonderboy1953: Should be $\int (t^2+1)(t^2+2)t\,dt$, not $\int 2(t^2+1)(t^2+2)t^2\,dt$.
Wrong!
Let me show you in slower/detailed steps, my dear wonderboy. We have $t = \sqrt{x+1} \Rightarrow \frac{dt}{dx} = \frac{1}{2\sqrt{x+1}} \Rightarrow dx = 2\sqrt{x+1}\,dt = 2t\,dt$. Similarly, we have $t = \sqrt{x+1} \Rightarrow t^2 = x+1$ and $x^2+5x+6 = (x+2)(x+3) = (t^2+1)(t^2+2)$. Thus $\int \left(x^2+5x+6\right)\sqrt{x+1}\,dx = \int 2(t^2+1)(t^2+2)t^2\,dt$.

• Jan 15th 2011, 10:52 AM wonderboy1953
Step-by-step explanation
Quote: Originally Posted by chiph588@: If you differentiate his answer you get the integrand.
TheCoffeeMachine substituted $t = \sqrt{x + 1}$. This means that $x = t^2 - 1$. When you do the substitution in the starting equation, you have $\int [(t^2 - 1)^2 + 5(t^2 - 1) + 6]\,t\,dt$, which leads to $\int (t^2+1)(t^2+2)t\,dt$.

• Jan 15th 2011, 10:57 AM wonderboy1953
Oops, I overlooked the differential, so TheCoffeeMachine is right.

• Jan 18th 2011, 12:48 PM wonderboy1953
Hint
Quote: Originally Posted by wonderboy1953: $\int (x^2 + 5x + 6)\sqrt{x + 1}\,dx$ by the addition-subtraction method. Moderator edit: Approved Challenge question.
For those who may want to see what I have in mind, you can factor the $(x^2 + 5x + 6)$ portion into $(x + 2)(x + 3)$. With the first factor, you can subtract and add 1 to change it into $[(x + 1) + 1]$ (do you see why?). What would you do with the second factor to go on to complete the problem using the addition-subtraction method?
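As a quick sanity check on the thread's result (this snippet is mine, not part of the original discussion), TheCoffeeMachine's antiderivative can be compared against the integrand with a central-difference derivative, using only the standard library:

```python
import math

def integrand(x):
    # (x^2 + 5x + 6) * sqrt(x + 1), the integrand from the challenge
    return (x**2 + 5*x + 6) * math.sqrt(x + 1)

def antiderivative(x):
    # TheCoffeeMachine's answer, written via t = sqrt(x + 1)
    t = math.sqrt(x + 1)
    return (2/7) * t**7 + (6/5) * t**5 + (4/3) * t**3

# The derivative of the antiderivative should reproduce the integrand;
# check numerically at a few points with a central difference.
h = 1e-6
for x in [0.0, 1.0, 3.0]:
    numeric = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
    assert abs(numeric - integrand(x)) < 1e-5
print("antiderivative verified")
```

Note also that expanding $(x^2+5x+6)\sqrt{x+1}$ as $[(x+1)+1][(x+1)+2]\sqrt{x+1} = (x+1)^{5/2}+3(x+1)^{3/2}+2(x+1)^{1/2}$ and integrating term by term gives the same three terms directly, which appears to be the addition-subtraction route the hint describes.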
http://openstudy.com/updates/4f348ed9e4b0fc0c1a0c141b
## sammy12: A solid candy jawbreaker is cut in half. If its diameter is 4 cm, what is the surface area of a half piece? Please show me the steps.

1. beth1127: [drawing]
2. sammy12: i am getting 8 pi. is it right
3. Tonik: i think it should be 48pi
4. Tonik: oh hell, no.... i thought radius was 4, not diameter... sec
5. sammy12: answer is 12 pi. I don't know how
6. funinabox: 2A = 4pi*r^2; 2A = 4pi*(2)^2; 2A = 16pi; A = 8pi. I set it equal to 2A because we want half the surface area; I found the full SA then divided by 2.
7. sammy12: i am getting 8 pi
8. Tonik: the surface area on the outside is 8pi, you're correct, but you need to add the surface area where the cut was made (basically the area of a circle of radius 2)
9. Tonik: [no text]
10. Tonik: so it's 12pi
11. sammy12: [no text]
12. funinabox: Ahhh yes I forgot about the inside cut. I assumed a hollow sphere. :O stupid me
13. Xishem: It's the surface area of half a sphere plus the surface area of the circle... $\frac{4\pi r^2}{2}+\pi r^2=2\pi r^2+\pi r^2=3\pi r^2=3\pi(2\,cm)^2=12\pi\ cm$
14. Xishem: *$12\pi\ cm^2$
15. sammy12: Thanks
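The arithmetic above can be checked in a few lines (my own sketch, not part of the original thread): the half piece's surface is the curved half of the sphere plus the flat circular face exposed by the cut.

```python
import math

r = 4 / 2  # radius, from the 4 cm diameter

curved = 4 * math.pi * r**2 / 2  # half of the full sphere's surface area
cut_face = math.pi * r**2        # flat circle exposed by the cut
total = curved + cut_face        # 2*pi*r^2 + pi*r^2 = 3*pi*r^2

print(round(total / math.pi, 6))  # 12.0, i.e. the area is 12*pi cm^2
```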
https://puzzling.stackexchange.com/questions/46825/a-puzzling-newspaper-headline/47607
The following clues are not quite your usual crossword fare; they do fit the standard guidelines for cryptic clues, but each of them contains an unusual twist which (hopefully) makes them harder than average. By putting together all four solutions, you will discover a meaningful sentence. 1. The western hemisphere is never xenophobic first. (6) 2. Horse moves right, left, faster, by alternate routes. (5) 3. A most outstanding fate awaits the poor easterner later on. (5) 4. I do it, you are it, so it sounds like you should do it? (4) Solve the cryptic clues and find the final sentence. Hint: 3. What does 'outstanding' mean? 4. "I do it" = verb, "you are it" = noun, "sounds like" = homophone. • I earlier checked for all 4 letter words that can be used as both noun and verb, none made sense though. Working on "sounds like you should do it" – uptoNoGood Dec 22 '16 at 3:55 So, that clue: I do it, you are it, so it sounds like you should do it? (4) PREY: "prey" is both an intransitive verb ("I prey"), and a noun ("you are prey"), and is a homophone of "pray" (which is something you should do!). (Also, as @randal'thor explains in the comments, this also fits semantically, since the prey of someone who is preying should pray that they escape!) So, drawing from @Deusovi's answers to the other three clues, the final sentence is: Sphinx trots after prey • Yay, finally the 4th clue is solved! Note also how all three parts of the clue fit together semantically: if I am a predator, then I intend to prey (v) on you, my prey (n), so you should pray that you escape. – Rand al'Thor Jan 6 '17 at 0:40 • @randal'thor Ha, nice, didn't catch that! Added. – Volatility Jan 6 '17 at 0:45 • Nice! I never did find this; glad you did – Rubio Jan 7 '17 at 9:55 ### Partial Answer (clues 1-3 solved) The western hemisphere is never xenophobic first. (6) SPH(-ere) + I_ N_ X_ (no definition?) Horse moves right, left, faster, by alternate routes. 
(5) _T _R + rOuTeS A most outstanding fate awaits the poor easterner later on. (5) A F_E _R (with T inserted somehow?) → AFTER or possibly... A moved to the outstanding position of FATE → AFTE, then add _R I do it, you are it, so it sounds like you should do it? (4) • Oooops, I forgot to put a definition part into the first clue. I wonder if anyone would mind if I edited the question to add a word or two somewhere ... – Rand al'Thor Dec 18 '16 at 21:49 • 1 and 2: perfect. 3: you've got the right answer but not the full explanation. 4: keep thinking :-) – Rand al'Thor Dec 18 '16 at 21:50 • @randal'thor I don't think anyone would mind, but it would be obvious that was the definition :P – TrojanByAccident Dec 19 '16 at 3:04 • An explanation for rot13 ( ibjf ) … this sound like a wedding, first person says ‘I do’ then it’s your turn and so should also do it (I do)? – Tom Dec 20 '16 at 10:07 • @Tom: That doesn't make sense as a cryptic clue, though. – Deusovi Dec 20 '16 at 10:10 I think I have the complete answer to 1. A most outstanding fate awaits the poor easterner later on. (5) A + most outstanding FaTE + pooR easterner = AFTER = later on. • @deusovi already has that answer. (And neither of you have used "awaits" yet, so there's probably something still missing) – Rubio Dec 17 '16 at 7:50 • But not a full explanation. I take 'awaits' to mean 'needs something added'. – Neil W Dec 17 '16 at 7:52 • That would be ... unusual. :) – Rubio Dec 17 '16 at 7:53 • I don't quite understand your explanation here? – Rand al'Thor Dec 18 '16 at 21:51 • @randal'thor I'm pretty sure they are saying FTE is "fate," mostly (75%) outstanding in the sense that "outstanding" can mean "remaining." At least, that was my guess after seeing Deusovi's answer, but before reading this one. – Will Dec 18 '16 at 22:14 The western hemisphere is never xenophobic first. (6) SPHINX (Sph-Leftmost part of Sphere(Hemi-meaning we take one half), inx-First letters from "is never xenophobic". 
This contains no definition but two wordplays.

Horse moves right, left, faster, by alternate routes. (5)
Not sure about this, it's probably USHER? (H and S from Horse, alternate letters of Routes - RUE. I am not sure about its definition being "Faster".)

A most outstanding fate awaits the poor easterner later on. (5)
No idea. Probably the last letters of something, as indicated by "later on".

I do it, you are it, so it sounds like you should do it? (4)
URGE - ("Sounds like" indicates homophones. "You" and "are" mean U and R. The "you should do it" part is probably the definition.)

• There's only one wordplay for the first clue, not two. And I don't quite see how you get the last one. – Deusovi Dec 17 '16 at 6:52
• "You should do it" is like urging someone to do something... – Sid Dec 17 '16 at 6:54

A simpler explanation for 3...

'A most outstanding' (anagram hint for..) 'fate' (anagram gives AFTE) 'awaits' (before) 'the poor easterner' (rightmost letter of poor - R) 'later on.' (definition - AFTER)

Alternatively... 'A most outstanding' applied to 'fate' means take the letter A from 'fate' and make it stand outside the word (put it first) - AFTE .. just realised this explanation was already suggested!

Another suggestion... 'A' (A) 'most outstanding fate' (tall letters of 'fate' - FT) 'awaits' (before) 'the poor easterner' (rightmost letters of 'the' and 'poor' - ER) 'later on' (definition - AFTER). Although I would suggest 'easterners' would work better in the clue than 'easterner' if this is the case!

A wild stab in the dark at 4...

RITE - 'I do it,' (write - you wrote the qu) 'you are it,' (right - I am, hopefully!) 'so it sounds like you should do it?' (rite - homophone, a ritual one should perform or do according to tradition)

Which gives... SPHINX TROTS AFTER RITE. Which is feasible as an answer, if not wholly likely in this world!

Credit to @Deusovi for previously correct answers for 1-3

• There is another Sphinx for PSE. But yeah, rite seems likely.
– Sid Dec 28 '16 at 16:04
• @Sid.. Yeah, right! ;) – Arth Dec 28 '16 at 16:31
• Your third spoilertag is perfect for #3 - well done! But #4 still isn't r̶i̶t̶e̶ right: "sounds like" only appears once in the clue, so there's only one homophone involved. – Rand al'Thor Dec 28 '16 at 19:24
• @randal'thor Thanks for the feedback.. happy for #4 to be wrong, ('I do it' link felt very tenuous), but I think the one 'sounds like' with two homophones would probably be OK. – Arth Jan 3 '17 at 13:54

Note: uses parts from other answers, but has better explanations (IMO)

1. The western hemisphere is never xenophobic first. (6)
SPHINX
Western (left) hemi (half) of SPHere, Is Never Xenophobic first
No definition

2. Horse moves right, left, faster, by alternate routes. (5)
TROTS
right of 'lefT, fasteR', by (next to) alternates of rOuTeS
trots = horse moves

3. A most outstanding fate awaits the poor easterner later on. (5)
AFTER
A + the most outstanding (non-doubled) letters in 'FaTE' awaits the easterner (right-most) of pooR
after = later on

4. I do it, you are it, so it sounds like you should do it? (4)
No, I didn't really solve it
https://plainmath.net/90350/compose-a-number-expression-and-find-its
# Compose a number expression and find its value

Compose a number expression and find its value:

1. the quotient of the sum of the numbers 4/9 and -5/6 divided by the number -14/27;
2. the difference between the product of the sum and the difference of the numbers -1.5 and 4, and the number 2;
3. the product of the sum and the difference of the numbers -1.9 and 0.9;
4. the cube of the difference between the numbers 6 and 8.

tuzkutimonq4 2022-09-14 Answered

## Answers (1)

Hofpoetb9 Answered 2022-09-15 Author has 17 answers

(4/9 + (-5/6)) : (-14/27) = -7/18 · (-27/14) = 3/4
(-1.5 + 4)(-1.5 - 4) - 2 = 2.25 - 16 - 2 = -15.75
(-1.9 + 0.9)(-1.9 - 0.9) = 3.61 - 0.81 = 2.8
$(6-8)^{3}=(-2)^{3}=-8$

We have step-by-step solutions for your answer!
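The four computations above can be checked exactly with Python's `fractions` module. This is a verification sketch, not part of the original answer:

```python
from fractions import Fraction

# 1) Quotient: (4/9 + (-5/6)) / (-14/27)
q = (Fraction(4, 9) + Fraction(-5, 6)) / Fraction(-14, 27)
print(q)        # 3/4, since 4/9 - 5/6 = -7/18 and (-7/18) * (-27/14) = 3/4

# 2) (-1.5 + 4)(-1.5 - 4) - 2, i.e. (a + b)(a - b) - 2 = a^2 - b^2 - 2
a, b = Fraction(-3, 2), Fraction(4)
print((a + b) * (a - b) - 2)    # -63/4 = -15.75

# 3) (-1.9 + 0.9)(-1.9 - 0.9) = a^2 - b^2
a, b = Fraction(-19, 10), Fraction(9, 10)
print((a + b) * (a - b))        # 14/5 = 2.8

# 4) Cube of the difference of 6 and 8
print((6 - 8) ** 3)             # -8
```

Using `Fraction` avoids any floating-point rounding, so the results match the hand calculation digit for digit.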
https://www.rocketryforum.com/threads/need-help-with-value-of-trees-on-land.6571/
# Need help with value of Trees on land

#### georgegassaway

I'm wondering if any of you may have some knowledge, experience, or tips where to find some info on this.

The scenario: I own a small amount of former farmland that my grandfather left to my mother and me, which I now own alone. We had held onto the land purely as an investment, and had not even gone down there for many years.

Crossing the land is a right of way held by Transco, for four large underground gas pipelines. Transco is going to be adding a 5th pipeline to the land. They will be paying a small amount for the additional right of way to add the 5th line. But also, for the work they will be doing, they will need to clear out a temporary workspace. They will be paying for the temporary use of the workspace. But those are not the issue.

The area they will be clearing out for temporary use will require removing all trees in that space. They will be paying a fair value for the trees they have to remove. That area is overgrown with wild trees. It would be "easy" if the trees were say 5 or 6 oak trees and empty space between them. But it is not. There are not many "big" trees, but there are a LOT of small to medium trees there, sort of like a forest with trees that have mostly grown up wild over the last 35-50 years.

So, I am wondering, does anyone have a ballpark idea what a "stand of trees" would be worth based on square feet or some other basis? I know, many factors are involved: tree size, type, whether useful for timber or not, and so forth. But I have to figure this is not a unique scenario, and that there may be some info out there to give me a practical ballpark idea. I did indeed find out how the value of one individual tree could be determined, but the layout of the land with so many small to medium trees just does not make that practical.
Attached are a few photos showing the area of trees that will be removed, and a satellite view of the area that I've drawn up, showing where the trees are to be cleared.

I need to determine a value to ask for those trees now, not after the fact. The work will not be done for a year or so, but the pipeline company wants to get everything settled with all of the landowners before they start. And I can sure use the money now rather than wait, anyway.

The landowner on the other side, whom I talked to yesterday, had an "easy" way to figure out his. His land is cleared for a pasture, and will only have five individual big trees cut down.

- George Gassaway

#### Pat_B

##### Well-Known Member

George- I'm a real estate appraiser, though I'm not a tree specialist. Nonetheless I'm familiar with some generalities on this topic. Here are some generalities, and I would caution you to get local advice before acting on anything.

First of all, there's a difference between land improved with trees that are intended to be harvested for landscape use and land that has additional value because it is covered with trees for amenity purposes for the benefit of the land owner. Your situation would appear to fall into the latter category. If it was just some large mature trees on a landscaped site then you could hire an arborist appraiser who could provide a value based upon the species of the trees along with their age, circumference, etc. This is typically what happens when a condemning authority acquires a small strip of land from someone and it takes out a few trees.

Your situation is more of a 'forested site' versus an unforested site. That is, what is your site worth now that it is 'less forested'. The best data for that sort of situation is to find comparable sales from similar types of acquisitions. In effect, you would be analyzing these sales themselves to determine value rather than worrying about the value of 'individual' trees.
These comparable sales would likely come from research of similar right-of-way types of acquisitions. Federal and state laws will also have a big influence on how an appraisal for this purpose needs to be done. There could be additional damages, or benefits, to the land owner as a result of the acquisition, and those will need to be carefully analyzed. Also, you'll likely need an attorney who specializes in this sort of work to make sure the paperwork is accurately drafted. Otherwise, your property could be tied up in the future as a result of botched documents.

Last edited:

#### powderburner

##### Well-Known Member

Would it work to find a tree transplanting specialist (the kind that moves BIG trees) and ask them what it will cost to replace them after the pipeline is finished? Or, what would it cost to remove the trees intact and alive with rootballs still attached and set them aside, then come back later and replant them?

#### Pat_B

##### Well-Known Member

The methods to value real estate sometimes seem arcane but they are based upon some good principles. The 'cost' to fix a deficiency (replacing the trees) rarely equals its value. So while you could indeed get a cost estimate of replanting the trees it would not likely (unless extremely coincidental) equal the 'market value' of such a loss. With landscaping it's even more difficult when you are talking about mature trees, where it would generally be unfeasible to plant lots of mature trees. The 'cost' of that would likely far exceed the diminution in value due to not having the trees in situ.

#### powderburner

##### Well-Known Member

You're probably right. Comparing transplant costs might not be directly comparable at all to land value, but I thought it might be a starting point. I was merely trying to paraphrase that time-honored bit of advice to "put things back like you found them." Maybe a real estate agent could give some advice?
#### georgegassaway

If there was some existing actual use for the trees, where there would be a desire or even need to have them back after the work was done, then replacement value could have come into play. But in this case, there is no need for them to be replaced. I may be selling the land in the near future, anyway. But I do want to get some fair value for those trees that will be lost due to the pipeline work.

- George Gassaway

#### DAllen

##### Well-Known Member

If it was just some large mature trees on a landscaped site then you could hire an arborist appraiser who could provide a value based upon the species of the trees along with their age, circumference, etc. This is typically what happens when a condemning authority acquires a small strip of land from someone and it takes out a few trees.

Frankly, I find that method speculative and arbitrary - not that it doesn't happen, and I am certainly NOT criticizing Pat. Usually, when that happens the municipality in, say, a public works project pays a ridiculous price for junk lumber. The opposite can happen just as easily when the price is lower than market. I've personally been involved in a few of these situations and they left a bad taste in my mouth.

Economics 101 says that the value of an object or service is what someone is willing to pay for it. If you want to know the REAL value, call a lumber company and see what they want to pay for taking it down and hauling it out. You could have the land cleared for the gas company and make a little $ on the side. Hope that helps.

-Dave

#### Pat_B

##### Well-Known Member

The value of those few mature trees should be well in excess of their value as lumber. The amenity value of those trees is also taken into account, so the value of a 50-year old Oak tree should be much higher than a newly planted tree.
However, where things go wrong is that too many condemning authorities take advantage of homeowners and give them too little for their trees, and those sales are then used as comparable data for future appraisals. A large scale condemnation project such as a highway widening can basically create its own market as the number of individual takings start to occur. The appraisers are basically forced to use those prior takings as data for future takings and the courts will agree.

That's why it's best in those sorts of situations to band together with all the affected neighbors and try to negotiate as a group at one time before anyone accepts any individual offer. Otherwise, if a few neighbors let their properties go at low prices then they've set the value for everyone else.

Now the same thing happens with the valuation of individual trees. The data that the arborist appraiser uses comes from negotiated sales between homeowners and the condemning authorities. Far too often those homeowners will settle for too little, and then that becomes the data for future appraisals.

Last edited:

#### blackjack2564

##### Crazy Jim's Gone Banana's

TRF Supporter

For what it's worth. Back in the 90's I had a beautiful lot in the country on deep water in South Georgia. The land was owned by Georgia Pacific and had been tree farmed forever. They decided to sell off the property on the rivers and marsh area as they could net much higher dollars than continued farming. It had mostly 30-40 year old stock and a few giant 80-90 yr old trees and looked generally like your photos.

[Why this reply] When the neighboring lot was built on and the power lines went in, the electric company mistakenly ran the power from a pole on the front far corner of my [completely treed, undeveloped] lot. Across my lot at a diagonal to the opposite side, then over to their house. Basically a diagonal across my lot, front to back, removing a swath of trees that = 1/3 acre.
After informing them of the mistake [which was quite understandable at this time, no road, first house in, etc.], the line was removed. I was given 650.00 for the trees and free hook up when the time came for me to build, but this obviously was for the damage they did. I asked for underground hook up, worth 2500.00 at the time. Of course I wanted more, they wanted to pay less, and somehow the tree folks stepped in and set the value to which we both agreed. This would have been much less for inland land, but I got the premium for being on the water.

Anyhow, by looking at your plot, which is .27 acres, and my similar situation, maybe that will at least give you something concrete to start at.

I asked a tree harvesting crew at the time what clearing was worth... anywhere from 250-500 an acre depending on the density, age, and type of trees, for pulp or timber products, non lumber. Also taking into account how large an area is timbered factors in the equipment, men, fuel, and cost of hauling to a mill. This can make the difference in worth. Obviously a small tract is not even worth the time to cut. This is in the South and we're talking pine. But I bet it's similar everywhere unless we are talking high quality oak, walnut, maple, etc. And this did happen 17 years ago, so factor in inflation.

I'm gonna guess you'd be lucky to get 1000.00 plus for those trees because they are on the right of way with no such thing as a "good view", but only their intrinsic value, whatever that turns out to be......... unless there is some special circumstance. Please let us know how you do.

Last edited:

#### georgegassaway

##### Lifetime Supporter

TRF Lifetime Supporter

Jim, that is exactly what I was hoping for by asking here. That someone might have had (or know of) a very similar situation, and some ballpark dollar amount for a given area that I could extrapolate from (given the area vs. inflation, vs. noise level of not being exactly the same thing). Thanks very much!
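Jim's data point can be scaled to George's situation in a few lines. This is a rough sketch, not an appraisal: the 1.5× inflation factor is an assumed round number for ~17 years, not actual CPI data, and it ignores Jim's waterfront premium.

```python
# Scale Blackjack's ~$650 payment for ~1/3 acre of similar trees (early 90s)
# to George's 0.27-acre strip. The inflation factor is an assumption, not CPI.
paid_then = 650.00        # dollars received for the cleared swath
acres_then = 1.0 / 3.0    # ~1/3 acre swath
acres_now = 0.27          # area to be cleared per the satellite view
inflation = 1.5           # assumed cumulative inflation over ~17 years

per_acre_then = paid_then / acres_then            # ~$1950 per acre
estimate = per_acre_then * acres_now * inflation
print(f"~${estimate:.0f}")                        # on the order of $800
```

That lands in the same ballpark as Jim's "lucky to get 1000.00 plus" guess.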
- George Gassaway

#### Pat_B

##### Well-Known Member

I can't emphasize enough that valuing individual trees, or a swath of trees, on a replacement basis is NOT the proper way to value something like this. What everyone is talking about here is what is known as the cost approach in appraisal terminology. The market value (aka 'sales approach') is what is used in takings such as this. Not only is it incorrect, but it almost always results in the land owner getting way too little money. And, all of this is based upon local land values in your area, which are likely to be different from other areas.

I've worked on assignments where the taking of a small corner of property rendered the site useless because the total area of the site was now below the minimum required by the municipality to build a house. That taking on a per square foot basis was extremely valuable. In actuality the homeowner received over $50K for a 40 square foot corner. On the other hand, a corner taken away from crop land might only affect the farmer's yield by 1% and be valued accordingly.

I had another situation where the site sat on a corner between two counties. The taking rendered the original driveway unusable, so the driveway had to be switched to the other side of the site, which changed his address to the other county! Suddenly, the homeowner had to contend with all sorts of different building code requirements whenever he made any changes to his house.

We're talking about how it affects the utility of the land-- we're not talking about valuing the trees or dirt as personal property. In takings such as this the land should be valued on a 'before taken' (forested) and 'after taken' (partially forested) basis. Comparable properties will be researched and chosen from BOTH categories. The value of your land 'before the taking' and 'after the taking' will be subtracted from each other, and the difference will be the value of the taking itself.
There are MANY other compensable factors that can come into play that could warrant additional payments to the land owner. For example, the accessibility of your site needs to be considered. If their easement makes your site unbuildable then the value of that easement could equal as much as the entire site. Or, if your site after the taking is still buildable but limited to a smaller house, then the site could also be less valuable.

There are sometimes issues of ingress and egress. You'll want to make sure that the taking doesn't affect how you can access your site to get to any potential future house. Please note that all of this still applies even if you personally never have any interest in building a house on your site. Doesn't matter- the highest and best use of your site will be determined, all of this revolves around what the HBU of your site should be, and the damages payable to you are based upon that. You'll also want to make sure you get compensated for your legal fees.

We're talking about real property here-- not personal property, which is what everyone is talking about when valuing the trees by themselves. The trees by themselves would only be valued when a small scale taking is involved, like in someone's front yard. In a situation like that, the legal and appraisal fees would likely be too much for anyone to want to pay, so the parties usually stipulate to more of a simple formula for valuing the taking on a 'per tree' basis, and that's when an arborist gets involved.

Most of this is covered on a statutory basis based upon state and federal laws. What is, and what is not, compensable is very important. I guarantee you that the utility company will want to oversimplify this and pay you a token fee for their taking. You might find out years later that your site is unbuildable as a result of their taking. Or, perhaps their taking occurred in the best portion of your site, leaving you with a flood plain portion.
All of this needs to be researched to really know how much compensation you are owed. In our area, underground gas pipelines have a significant negative effect on market value. There are signage requirements that require the utility company to post very noticeable signs every so-many feet above ground. These signs are quite unsightly and visible to any potential purchaser. Furthermore, the information on the signs themselves can be quite frightening to any potential purchaser, as they warn about the possibility of gas leaking or an explosion if you dig in that area. Anyone owning this type of land is really at a disadvantage as compared to the non-encumbered land. Again, the effect on value of these unsightly signs is usually a compensable claim.

Last edited:

#### shrox

##### Well-Known Member

Trees play a vital role in the environment, they shield the earth from falling rockets and kites.

#### bobkrech

##### Well-Known Member

George

A co-worker inherited 38 acres of woods in the middle of Massachusetts, on the top of a ridge on a nice lake. It was the family farm 2 generations ago, and the land he grew up on. The last farming occurred 70 years ago, so the trees are between 70 and 100 years old. He is also a master wood worker and plans to build a retirement home on the top of the ridge overlooking the lake and a woodworking shop on the bottom near the road, and harvest enough trees every year to supply his wood working needs.

Last winter he hired a forester who performed an extremely detailed tree survey of the property and made an estimate of the usability of each tree, and their yield. This winter he is planning to remove a stand of 70-100 year old mature white pines, which cover an acre and will begin to die off over the next decade, and thin out the defective hardwoods so that the remaining hardwoods grow straight.
He is also going to clear ~1+ acres near the top for his house and a 40 foot wide winding path for the 1000' driveway from the street to the house location, however most of the land will remain forested.

Wood is basically divided into 2 main categories: hardwood and softwood. There are at least 2 dozen species of hardwoods in various sizes from 6" to a few 20"-24" trees on the property that will be removed. The larger, straight trees will be milled for timber and board, the defective trees down to around 8" diameter will be turned into pallet wood, and the smaller trees chopped into firewood.

Hardwoods have significantly more value than softwood, but the prices are pretty low. Standing trees for timber and boards are worth about $150 per MBF (thousand board feet), or 15 cents per board foot. Standing firewood is worth $10 a cord, and the chipped hardwoods are used to make pellets for stoves. The softwoods, principally pines, hemlocks and cedars, have little value. The larger ones, probably in the 8" and above range, have some value for timber if they are straight, and cedar for garden chips, but they frequently are used for paper pulp. The chips have some limited use for industrial kilns, flake board, other composites and paper.

The cost of a sawyer cutting the wood is approximately equivalent to the standing price, or ~$150 per MBF. The transportation cost is dependent on the price of fuel, but is probably running at $250-$300 per MBF. Storage at the lumber yard, kiln drying, and the interest to dry the wood for a year or two is around $300 per MBF. This brings the cost of the wood before milling to ~$900 per MBF. After milling there is typically a 25% loss, so the mill price is somewhere around $1100-$1150 per MBF of medium quality lumber. After the markup by the retailer, it's going to cost between $1700 and $2000 per MBF for medium quality lumber. Better quality hardwood and small quantities are higher.
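Bob's per-MBF cost chain can be sketched as a quick calculation. The dollar figures come from his post; the transport figure uses the midpoint of his $250-$300 range, and the 1.6× retail markup is an assumed number chosen only to land inside his quoted $1700-$2000 retail range:

```python
# Sketch of the per-MBF lumber cost chain (MBF = thousand board feet).
standing = 150.0     # standing (stumpage) price per MBF
sawyer = 150.0       # cutting cost, roughly equal to the standing price
transport = 275.0    # midpoint of the $250-$300 per MBF estimate
drying = 300.0       # yard storage, kiln drying, and interest

pre_mill = standing + sawyer + transport + drying    # ~$875 per MBF
milling_loss = 0.25                                  # ~25% of the wood is lost in milling
mill_price = pre_mill / (1 - milling_loss)           # ~$1167 per MBF

retail_markup = 1.6                                  # assumed retailer markup
retail_price = mill_price * retail_markup            # ~$1867 per MBF

print(f"pre-mill ~${pre_mill:.0f}, mill ~${mill_price:.0f}, retail ~${retail_price:.0f} per MBF")
```

Dividing by `(1 - milling_loss)` rather than multiplying reflects that the costs were incurred on wood that is partly lost, so the surviving lumber must carry the full cost.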
On average the ratio of retail price to standing price runs between 7 to 8, and as high as 10, depending on supply and demand and the price of fuel. Cord wood has an even higher multiplier; in New England it is 24. Standing price is $10 a cord, then it is trucked to the processor where it is aged, cut and split. Then it is broken into smaller lots and delivered to the retailer or end user. Again, if firewood is plentiful because of high winds and storms, the standing price plunges. When the price of fuel increases so does the retail price. Right now demand is low and there is plenty of wood in the supply chain, so the standing price is actually lower than it was 20 years ago. The fuel prices are relatively high, so the prices are relatively high because wood is perceived to be a good, cost effective, alternative fuel. In reality it is only if you have a good supply of it and cut and split it yourself.

I believe the forestry plan for my coworker's property involves about 2600 trees IIRC, with a 60/40 hardwood to softwood split. After all is said and done, he figures he'll net a few thousand from it. The sawyers get paid by selling the wood to the mills, but he has to pay the forester, who actually did an incredibly detailed yield evaluation of every single tree that will be cut down and supervises the cutting to minimize collateral damage, and is worth every penny he gets paid! My friend will spend all the proceeds, and probably a bit more, to grade and gravel the driveway.

I'm afraid that you'll probably get only a few hundred dollars for the wood based on your description of the property, since most of the trees are small (trees below 8" aren't worth much), and you have only a few large hardwoods. Since the amount of land to be cleared is small you could get a diameter tape and measure each tree. There are generic formulas for trunk diameter vs MBF yield for each species.
You could get the data in less than a day, and sort it in a spreadsheet to determine the MBF yield and the yield of timber and lumber, cord wood, pulpwood and chips, but in all probability your free time is worth more than the extra hundred you might be able to negotiate based on a detailed survey.

Bob

Last edited:

#### Pat_B

##### Well-Known Member

Bob- that's simply not how it is done in a taking. I would agree that the timber value is probably next to useless. That doesn't make the taking useless. It is figured in a much different way (explained above.)

#### bobkrech

##### Well-Known Member

Pat

I would agree with you that in a taking, with a good lawyer, a person can do very well, however there already appears to be a preexisting right-of-way on the property as evidenced by the 4 existing pipelines. If the new pipeline does not go outside the preexisting easement, the terms of the preexisting easement would apply, and it should be recorded at the local registry of deeds. If the trees are outside the existing easement, then I would agree that one could clearly ask for more than the price of the wood. The present owner is required to allow access to the existing easement, however if the easement is accessible without cutting trees, the land owner is under no obligation to agree to permit the tree cutting.

Unfortunately for the landowners, many power companies prefer to ask forgiveness rather than permission. Another friend has acreage in Maine on a rural road with a powerline, and they have permission by law to trim trees that can interfere with their lines. When he went up after they trimmed, he was appalled that they clear cut 30' into his property so they wouldn't have to come back for many years, and there wasn't much he could do about it.

George asked for the price of the wood and my description is correct. If he wants more than that, then he'll probably need a lawyer whom he has to pay.
This isn't an urban area, so the land isn't worth what it would be worth in an urban area. A buildable 1/4 acre lot in the metro Boston area is worth up to $400K in the "better" communities, probably only a few thousand in a rural farm community.

Bob

Last edited:

#### georgegassaway

Thanks again for the info on this. Due to the situation, the only thing "extra" I can get out of this is the value of the trees (not replacement value), and also due to the situation it is not something practical to involve a lawyer with. The way the land is grown up, it was not very practical to attempt to go into that area and try to measure and count tree by tree.

I got the info I was looking for, which was to have some really rough ballpark idea of what might be practical for me to get for the trees that will be removed. Blackjack's message of Oct 10th was the closest thing to this. I have put in an "estimated value" as the pipeline company requested. I'll find out at some point, maybe the next few weeks, if they'll go with that or pay something less than that.

I know the way things go with "thread drift", and that you guys might want to discuss this further regarding principles, legalities, and so forth, in general. But I do want to make clear that as of this point I've gotten the info that I need for this particular situation (especially since I've already submitted the estimated value).

- George Gassaway

Last edited:

#### Pat_B

##### Well-Known Member

George- do you have any sort of agreement under the previous easement that allows them to expand it according to agreed upon terms? Here's a good link to your state law regarding the condemnation procedure.

http://records.mobile-county.net/judicial/Condemnation.aspx

It's very similar to laws in many other states. The before and after taking values are compared, and the difference is the value of the taking itself. Please note no procedure for the valuing of personal property (timber).
It's just not done that way in compliance with the law unless we're talking about commercial crop land. Even then, there will still be two appraisals of the before and after taking that take into account the decreased yield of the timber operation as a result of the taking.

The advantage to this method is that one of the appraisals will give you the market value of your property before the taking, so that you are getting a 'free' appraisal of your land for what it's worth today. In a large scale condemnation the condemning party will already have completed those appraisals. Be sure to get a copy of those-- typically, the condemning party would present you with those appraisals simultaneously with their offer. It shouldn't cost you anything to get a copy of those appraisals.

#### Len B

##### Old Member

TRF Supporter

Hi George,

My wife works for the Canadian Department of Justice. She is involved with native land claims. Often, trees are an issue to be considered for a particular piece of land. They enlist the services of tree appraisers. They really are out there. In your case, this may be overkill and also may cut into the value you get for the trees. I just wanted to point out that they exist and may be found in your area.
https://mathhelpboards.com/threads/f-not-lipschitz-at-0-oo.9119/
f not Lipschitz on [0,oo)!

evinda Well-known member MHB Site Helper

Hi!!! I also have another question... Could you explain to me why $f(x)=\sqrt{x},\ x\geq 0$ is not Lipschitz on $[0,\infty)$? How can I show this? Do I have to use the condition $|f(x)-f(y)| \leq M|x-y|,\ M>0$, to show this?

ZaidAlyafey Well-known member MHB Math Helper

Re: f not Lipschitz at [0,oo]!!

Hi!!! I also have another question... Could you explain to me why $f(x)=\sqrt{x},\ x\geq 0$ is not Lipschitz on $[0,\infty)$? How can I show this? Do I have to use the condition $|f(x)-f(y)| \leq M|x-y|,\ M>0$, to show this?

Hint: choose $y=0$.

ThePerfectHacker Well-known member

Re: f not Lipschitz at [0,oo]!!

Hi!!! I also have another question... Could you explain to me why $f(x)=\sqrt{x},\ x\geq 0$ is not Lipschitz on $[0,\infty)$? How can I show this? Do I have to use the condition $|f(x)-f(y)| \leq M|x-y|,\ M>0$, to show this?

If $f:[0,\infty)\to \mathbb{R}$ is Lipschitz it would mean that there is a positive $M>0$ so that for all $x,y\in [0,\infty)$ we have $|f(x)-f(y)|\leq M|x-y|$. Write it out in logic symbols: $$\exists M>0 ~ \forall x,y\in [0,\infty), ~ |f(x)-f(y)|\leq M|x-y|$$ When you negate this statement you get, $$\forall M>0, \exists x,y\in [0,\infty), ~ |f(x)-f(y)| > M|x-y|$$ You want to show the negated version as you are claiming $f$ is not Lipschitz. Thus, given any $M>0$ you therefore need to find two non-negative numbers $x$ and $y$ so that $|\sqrt{x} - \sqrt{y}| > M|x-y|$.

chisigma Well-known member

Re: f not Lipschitz at [0,oo]!!

Hi!!! I also have another question... Could you explain to me why $f(x)=\sqrt{x},\ x\geq 0$ is not Lipschitz on $[0,\infty)$? How can I show this? Do I have to use the condition $|f(x)-f(y)| \leq M|x-y|,\ M>0$, to show this?

We say that the function $f(x)$ satisfies the Lipschitz condition on the interval $[a,b]$ if there is a constant $K$, independent of $x_{1}$ and $x_{2}$, such that for all $x_{1}$ and $x_{2}$ in $[a,b]$ with $x_{1} \ne x_{2}$ we have...
$\displaystyle |f(x_{1}) - f(x_{2})| < K\ |x_{1} - x_{2}|\ (1)$

The function $\displaystyle f(x) = \sqrt{x}$ has an unbounded derivative at $x=0$, so that it doesn't satisfy the Lipschitz condition on $[0,\infty)$...

Kind regards $\chi$ $\sigma$

evinda Well-known member MHB Site Helper

Re: f not Lipschitz at [0,oo]!!

If $f:[0,\infty)\to \mathbb{R}$ is Lipschitz it would mean that there is a positive $M>0$ so that for all $x,y\in [0,\infty)$ we have $|f(x)-f(y)|\leq M|x-y|$. Write it out in logic symbols: $$\exists M>0 ~ \forall x,y\in [0,\infty), ~ |f(x)-f(y)|\leq M|x-y|$$ When you negate this statement you get, $$\forall M>0, \exists x,y\in [0,\infty), ~ |f(x)-f(y)| > M|x-y|$$ You want to show the negated version as you are claiming $f$ is not Lipschitz. Thus, given any $M>0$ you therefore need to find two non-negative numbers $x$ and $y$ so that $|\sqrt{x} - \sqrt{y}| > M|x-y|$.

I understand... But... is this relation $|\sqrt{x} - \sqrt{y}| > M|x-y|$ always satisfied?

ThePerfectHacker Well-known member

Re: f not Lipschitz at [0,oo]!!

I understand... But... is this relation $|\sqrt{x} - \sqrt{y}| > M|x-y|$ always satisfied?

No. It is not always satisfied. For example, $x=0,y=0$ will make it false. You need to show it is sometimes satisfied. You need to find some $x$ and $y$ that will do it for you.

evinda Well-known member MHB Site Helper

Re: f not Lipschitz at [0,oo]!!

No. It is not always satisfied. For example, $x=0,y=0$ will make it false. You need to show it is sometimes satisfied. You need to find some $x$ and $y$ that will do it for you.

Isn't this relation just satisfied for $x,y \in (0,1)$?

ThePerfectHacker Well-known member

Re: f not Lipschitz at [0,oo]!!

Isn't this relation just satisfied for $x,y \in (0,1)$?

No. Say $M=2$, so we are saying $|\sqrt{x}-\sqrt{y}| > 2|x-y|$ for all $x,y\in(0,1)$. But that is not true; just pick $x=y=1/2$.

evinda Well-known member MHB Site Helper

Re: f not Lipschitz at [0,oo]!!

No.
Say $M=2$, so we are saying $|\sqrt{x}-\sqrt{y}| > 2|x-y|$ for all $x,y\in(0,1)$. But that is not true; just pick $x=y=1/2$.

Aha! But... to show the negated version, don't we have to find a condition that is satisfied for each $x,y$? Or am I wrong?

Klaas van Aarsen MHB Seeker Staff member

Re: f not Lipschitz at [0,oo]!!

The typical way to disprove it is a proof by contradiction. First assume it is Lipschitz. That is, there is some $M$ such that the inequality holds for every $x$ and $y$. And then find an $x$ and $y$ such that the Lipschitz condition is not satisfied after all. (Hint: pick one of the 2 as zero and the other "small enough" depending on $M$.)

ThePerfectHacker Well-known member

Re: f not Lipschitz at [0,oo]!!

Aha! But... to show the negated version, don't we have to find a condition that is satisfied for each $x,y$? Or am I wrong?

No. You just need to find one $x$ and one $y$. That is all.

evinda Well-known member MHB Site Helper

Re: f not Lipschitz at [0,oo]!!

No. You just need to find one $x$ and one $y$. That is all.

So, I could pick for example $x=\frac{1}{2},\ y=0$ and $M=1$... Right?

- - - Updated - - -

The typical way to disprove it is a proof by contradiction. First assume it is Lipschitz. That is, there is some $M$ such that the inequality holds for every $x$ and $y$. And then find an $x$ and $y$ such that the Lipschitz condition is not satisfied after all. (Hint: pick one of the 2 as zero and the other "small enough" depending on $M$.)

I picked $y=0,\ x=\frac{1}{M}$ and I got $M \geq 1$. Would this be a contradiction?

Klaas van Aarsen MHB Seeker Staff member

Re: f not Lipschitz at [0,oo]!!

- - - Updated - - -

I picked $y=0,\ x=\frac{1}{M}$ and I got $M \geq 1$. Would this be a contradiction?

Sorry, but no, that is not a contradiction. Let's see what we have.
The Lipschitz condition is: $$|f(x)-f(y)| \le M|x-y|$$ With $f(x)=\sqrt x$ and with $y=0$ this becomes: $$|\sqrt x - \sqrt 0| \le M|x-0|$$ Since the domain is restricted to $x \ge 0$, we can simplify this to: $$\sqrt x \le Mx$$ Can you solve it for $x$? And if so, can you also find an $x$ for which it is not true?

evinda Well-known member MHB Site Helper

Re: f not Lipschitz at [0,oo]!!

Let's see what we have. The Lipschitz condition is: $$|f(x)-f(y)| \le M|x-y|$$ With $f(x)=\sqrt x$ and with $y=0$ this becomes: $$|\sqrt x - \sqrt 0| \le M|x-0|$$ Since the domain is restricted to $x \ge 0$, we can simplify this to: $$\sqrt x \le Mx$$ Can you solve it for $x$? And if so, can you also find an $x$ for which it is not true?

Can we square this: $\sqrt x \le Mx$? If yes, then we have: $x \le M^2x^2$, and for $x=\frac{1}{4}$ we get: $\frac{1}{4} \le \frac{M^2}{16}$. This relation does not hold for $M<2$. Could we say it like that?

ThePerfectHacker Well-known member

Remember what you need to show. Given an $M>0$ you can find $x,y\geq 0$ such that $|\sqrt{x}-\sqrt{y}| > M|x-y|$. I will do it for $M=1$. Choose $x=\tfrac{1}{4}$ and $y=0$, then we get, $$\left| \sqrt{\tfrac{1}{4}} - \sqrt{0} \right| > 1\cdot \left| \tfrac{1}{4} - 0 \right|$$ Which is true. Now say that $M=2$, how would you choose $x$ and $y$?

Klaas van Aarsen MHB Seeker Staff member

Re: f not Lipschitz at [0,oo]!!

Can we square this: $\sqrt x \le Mx$? If yes, then we have: $x \le M^2x^2$, and for $x=\frac{1}{4}$ we get: $\frac{1}{4} \le \frac{M^2}{16}$. This relation does not hold for $M<2$. Could we say it like that?

Yes, you can square this, since both sides are $\ge 0$. Well... you still didn't solve for $x$, did you... can you?

evinda Well-known member MHB Site Helper

Remember what you need to show. Given an $M>0$ you can find $x,y\geq 0$ such that $|\sqrt{x}-\sqrt{y}| > M|x-y|$. I will do it for $M=1$.
Choose $x=\tfrac{1}{4}$ and $y=0$, then we get, $$\left| \sqrt{\tfrac{1}{4}} - \sqrt{0} \right| > 1\cdot \left| \tfrac{1}{4} - 0 \right|$$ Which is true. Now say that $M=2$, how would you choose $x$ and $y$?

For $M=2$ I would pick $x=\frac{1}{5},\ y=0$...

- - - Updated - - -

Yes, you can square this, since both sides are $\ge 0$. Well... you still didn't solve for $x$, did you... can you?

$x^2M^2-x \ge 0 \Rightarrow x(xM^2-1) \ge 0$

Klaas van Aarsen MHB Seeker Staff member

- - - Updated - - -

$x^2M^2-x \ge 0 \Rightarrow x(xM^2-1) \ge 0$

Let's try it like this: $$\begin{array}{rcl} x &\le& M^2x^2 \\ \frac 1 {M^2} &\le& x \\ x &\ge& \frac 1 {M^2} \end{array}$$ So let's pick $$\displaystyle x = \frac 1 {M^3},$$ which is smaller than $\frac 1 {M^2}$ whenever $M>1$.

evinda Well-known member MHB Site Helper

Let's try it like this: $$\begin{array}{rcl} x &\le& M^2x^2 \\ \frac 1 {M^2} &\le& x \\ x &\ge& \frac 1 {M^2} \end{array}$$ So let's pick $$\displaystyle x = \frac 1 {M^3},$$ which is smaller than $\frac 1 {M^2}$ whenever $M>1$.

Oh yes!!! This is a contradiction!!!
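A quick numerical check (a sketch in Python, not part of the thread) confirms the conclusion: with $y=0$ the difference quotient $|\sqrt{x}-\sqrt{0}|/|x-0| = 1/\sqrt{x}$ is unbounded as $x \to 0^+$, so the choice $x = 1/M^3$ defeats any proposed Lipschitz constant $M > 1$:

```python
import math

# For each candidate Lipschitz constant M, the point x = 1/M^3 (with y = 0)
# violates sqrt(x) <= M*x, because 1/M^3 < 1/M^2 when M > 1.
for M in (2, 10, 100):
    x = 1 / M**3
    lhs = math.sqrt(x)   # |f(x) - f(0)|
    rhs = M * x          # M * |x - 0|
    print(M, lhs > rhs)  # True every time: the Lipschitz bound fails
```

The same loop with any larger M behaves identically, which is exactly the negated statement proved in the thread.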
https://mathematica.stackexchange.com/posts/13259/revisions
I'm currently looking at a simplified problem that approximates another problem I'm looking into. In this simplified problem I at least have an analytic integrand and can easily provide all info on here. Given the definitions

v = 0.6;
g = 1/Sqrt[1 - v^2];
ig[tau1_?NumericQ, tau2_?NumericQ, ωω_?NumericQ, ll_?IntegerQ] :=
  (((2 ll + 1) ωω)/(4 Pi^2)) *
   SphericalBesselJ[ll, ωω v g tau1] *
   SphericalBesselJ[ll, ωω v g tau2] *
   Exp[-I ωω g (tau1 - tau2)];

I would like to numerically integrate this:

f[ω_, l_, pg_, wp_] :=
 2 Re[NIntegrate[Exp[-I 1 s] ig[100, 100 - s, ω, l], {s, 0, 40},
   PrecisionGoal -> pg, WorkingPrecision -> wp, MaxRecursion -> 20]]

But for example f[2, 1, 15, 25] etc. gives a host of errors like ::slwcon and ::einc. I was wondering if I could use "LevinRule" here and, if so, what would the options be? I believe the problem gets worse at large ω, maybe 100 or more, where even putting the WorkingPrecision up to 40 and the PrecisionGoal down to 10, things still scream if I generate a table of these from 1 to 100 in ω.

asked Oct 18 '12 at 12:21 by fpghost
https://mathoverflow.net/questions/234234/etale-localization-reference-request
# etale localization reference request I'm looking for a reference for the following statement: Let $P$ be a property of morphisms of schemes local on the target in the etale topology. Let $f : X\rightarrow Y$ be a morphism of schemes which is locally of finite presentation, and such that for all points $y\in Y$, the restriction of $f$ to the strict henselian local ring $\mathcal{O}_{Y,y}^{sh}$ has property $P$. Then $f$ has property $P$. (Is the LoFP condition necessary?) This seems to follow from various results in the stacks project, but I can't find it explicitly stated anywhere except for the specific case where P = "flat" (tag 05VL). I need this result for an appendix in my thesis. Currently I've stitched a proof together using various results of the stacks project, but it seems more elegant to cite a reference that proves this explicitly if possible. • You could contribute it to the stacks project. Mar 22, 2016 at 6:32 • There exist people who do not like the stacks project as a reference because it has not been refereed. I am not suggesting that I am one of those people, but refereeing is part of the academic process -- although probably far more so in other sciences. The argument is that "I am written by a smart guy" should not be enough -- look at the ABC fiasco for example. – znt Mar 22, 2016 at 7:53 That is not true without further hypotheses, but it is true with one additional hypothesis. First, here is a counterexample. Let $P$ be the property that the morphism $f$ is quasi-compact. This property is local for the étale topology, and even for the fpqc topology, cf. http://stacks.math.columbia.edu/tag/02KQ for instance. Let $Y$ be $\text{Spec}\ \mathbb{Z}$, and let $X$ be the disjoint union over all primes $p$ of $\text{Spec}\ \mathbb{Z}/p\mathbb{Z}$. There is a unique morphism $f:X\to Y$. This morphism is not quasi-compact since $X$ is not quasi-compact. 
The morphism $f$ is locally of finite presentation since every point of $X$ has an open neighborhood that is isomorphic to $\text{Spec}\ \mathbb{Z}/p\mathbb{Z}$ for some prime $p$. On the other hand, for every closed point $y=\langle p\rangle$ of $Y$ the base change of $f$ over the local ring $\mathcal{O}_{Y,y} = \mathbb{Z}_{\langle p \rangle}$ is the quasi-compact, even affine, morphism corresponding to the ring homomorphism $\mathbb{Z}_{\langle p \rangle} \to \mathbb{Z}_{\langle p \rangle}/p\mathbb{Z}_{\langle p \rangle}$. Thus the base change of $f$ to every strict Henselization is also quasi-compact. The simplest additional hypothesis guaranteeing your result is "compatible with filtered limits of schemes / filtered colimits of rings": for every affine scheme $Y=\text{Spec}\ A$ that is a filtered colimit $A = \varinjlim A_\lambda$ of rings $(A_\lambda)_{\lambda\in I}$, for every compatible family of morphisms $$f_\lambda:X_\lambda \to \text{Spec}\ A_\lambda, \ \ \phi_{\mu,\lambda}:X_\mu \xrightarrow{\cong} X_{\lambda}\otimes_{A_\lambda} A_\mu,$$ with filtered limit $f:X\to \text{Spec}\ A$, then $f$ has property $P$ if and only if there exists some $\lambda$ in $I$ such that for all $\mu>\lambda$, $f_\mu$ has property $P$. There are many examples of such properties in EGA $\textrm{IV}_3$, Section 8.10, pp. 36-41. (Please note, in EGA, there is a standing hypothesis that every $f_\lambda$ is finitely presented, i.e., locally finitely presented and quasi-compact. Also I have stated "compatible with limits" slightly incorrectly because I am only considering the case that $Y$ is affine. The correct formulation is in EGA. If your property is étale local on the base, then you can always reduce to the affine case.) 
Since the strict henselization $A$ of a Noetherian ring $B$ at a prime $\mathfrak{p}$ is a filtered colimit of finitely presented, étale $B$-algebras, if your property is compatible with limits and is also étale local, then it holds on $\text{Spec}\ B$ if and only if it holds after base change to each strict Henselization $\text{Spec}\ B_{\mathfrak{p}}^{sh}$.

• The definition of "finitely presented" requires "quasi-separated" too (as J.S. knows). Also, it is worth noting that EGA has many more instances of such properties P later on, such as flatness (11.2.6), smooth, etale, unramified (IV$_4$, 17.7.8), and differential smoothness (17.12.6); overall EGA remains the best source on such limit results. But for the original question with P = "flat" none of this discussion (nor LFP) is needed, since the OP's hypotheses (with all $y\in Y$) make it an elementary exercise with flatness because each $O_y \rightarrow O_y^{\rm{sh}}$ is faithfully flat. Mar 22, 2016 at 14:57

• The stacks project so far has P = "affine", "finite", "unramified", "closed immersion", "separated", "flat", "finite locally free", "smooth", "etale", "isomorphism", "monomorphism", "surjective", "syntomic", "proper", "quasi-finite", and "at-worst-nodal of relative dimension 1". Most can be found in stacks.math.columbia.edu/tag/081C and it is usually straightforward to add new ones. Mar 22, 2016 at 16:00

• @CountDracula: Is the proof for flatness in the Stacks Project simpler than (or much different from) the one in EGA? (There are very many back-references to chase down in the SP version, so I can't tell at a glance how it compares with Raynaud's proof in EGA.) Mar 22, 2016 at 16:08

• @nfdc23 I think it is about the same complexity and essentially the same. I think in both the key ingredient is openness of flatness for fp ring maps, but I read the proof in EGA a really long time ago so I am not completely sure.
Still the graph stacks.math.columbia.edu/tag/02JO/graph/force of logical implication does not look too big! Mar 22, 2016 at 16:54 • @wyc: No, you definitely need a finite presentation assumption to spread-out from local rings to ambient affine base schemes, for example; it is not just a device to bootstrap beyond the affine case. Jason Starr gives you an example of failure of spreading-out from stalks on $Y$ with $Y$ affine. Mar 22, 2016 at 23:46
https://ssconlineexam.com/forum/4524/When-a-body-is-earth-conncected,-electrons-from-th
# When a body is earth connected, electrons from the earth flow into the body. This means the body is

[ A ]    charged negatively [ B ]    an insulator [ C ]    uncharged [ D ]    charged positively

Answer : Option D

Explanation : Electrons carry negative charge and flow from the earth into a body only to make up a deficiency of electrons, so the body must have been positively charged.
https://www.zora.uzh.ch/id/eprint/160193/
# Measurement of the associated production of a single top quark and a Z boson in pp collisions at $\sqrt{s}$ = 13 TeV

## Abstract

A measurement is presented of the associated production of a single top quark and a Z boson. The study uses data from proton–proton collisions at $\sqrt{s} = 13\ \text{TeV}$ recorded by the CMS experiment, corresponding to an integrated luminosity of $35.9\ \text{fb}^{-1}$. Using final states with three leptons (electrons or muons), the $tZq$ production cross section is measured to be $\sigma(pp\rightarrow tZq \rightarrow Wb\,\ell^+\ell^-q) = 123^{+33}_{-31}\,(\text{stat})\,^{+29}_{-23}\,(\text{syst})\ \text{fb}$, where $\ell$ stands for electrons, muons, or $\tau$ leptons, with observed and expected significances of 3.7 and 3.1 standard deviations, respectively.

## Statistics

### Citations

Dimensions.ai Metrics 22 citations in Web of Science® 30 citations in Scopus®
http://ams.org/news/math-in-the-media/math-in-the-media
### Tony Phillips' Take Blog on Math Blogs

"Fractal Monarchs," by Doug Dunham and John Shier (University of Minnesota, Duluth) was awarded Best photograph, painting, or print at the 2017 Mathematical Art Exhibition held at the 2017 Joint Mathematics Meetings. See how this image was created and more beautiful works from the exhibition on AMS Mathematical Imagery.

# Tony Phillips' Take on Math in the Media

A monthly survey of math news

## A virtual trisectrix in space?

Orbital resonances are simple numerical relationships between the periods of nearby planets or satellites. A salient example in the solar system is given by three of Jupiter's moons: during every single revolution by Ganymede, Europa makes two and Io four (nice animation here). Recently predicted and even more recently observed is a retrograde 1:1 resonance. This occurs with Jupiter and the asteroid 2015 BZ509, as reported by Paul Wiegert, Martin Connors and Christian Veillet in Nature, March 30, 2017. The planet and the asteroid have the same period, but go around the Sun in opposite directions. The path of BZ, as seen from Jupiter (the analogue of the epicyclic path of Mars as seen from Earth) is characterized as a "trisectrix" in the News & Views commentary in that same issue of Nature. (Those authors, Helena Morais and Fathi Namouni, were the first to work out the possibility of this type of resonance, back in 2013). Strictly speaking there is only one trisectrix, the planar curve with polar equation $r=1+2\cos\theta$, related in fact to angle trisection (nice explanation at 2000clicks.com). The virtual path of 2015 BZ509 is not even planar.

That was the title for a report by Christopher Mele in the New York Times (May 15, 2017; as the Times put it elsewhere, "By Counting His Chicks, Texas Teenager Tops Pecking Order in Math Contest"). "A 13-year-old boy from Texas won a national math competition on Monday with an answer rooted in probabilities -- and a dash of farming.
The boy, Luke Robitaille, took less than a second to buzz in at the Raytheon Mathcounts National Competition with the correct answer." Mele gives us the chicks problem and a couple more from past competitions. (Answers below.)

• In a barn, 100 chicks sit peacefully in a circle. Suddenly, each chick randomly pecks the chick immediately to its left or right. What is the expected number of unpecked chicks?
• The smallest integer of a set of consecutive integers is $-32$. If the sum of these integers is 67, how many integers are in the set?
• A bag of coins contains only pennies, nickels and dimes with at least five of each. How many different combined values are possible if five coins are selected at random?

"'You get to think about things and move logically toward solving problems,' said Luke, who came in second place at last year's competition." The Raytheon competition, Mele tells us, is "a way to promote skills in science, technology, engineering and mathematics -- known as the STEM fields." He reminds us of the current national lack of STEM professionals and how, according to the President's Council of Advisors on Science and Technology, to maintain our "supremacy in science and technology" we will have to "increas[e] the number of students earning STEM degrees by about 34 percent annually." Mele also spoke with Lou DiGioia, executive director of Mathcounts, who said the contestants' achievements were the results of endless hours of practice and coaching and not necessarily innate math abilities. "These are not natural prodigies," he said. "Nobody watches a basketball game and says, 'Oh, LeBron James was born that way.'" (Answers: 25, 67, 21.)

Tony Phillips Stony Brook University tony at math.sunysb.edu

Archive of Reviews: Books, plays and films about mathematics

Citations for reviews of books, plays, movies and television shows that are related to mathematics (but are not aimed solely at the professional mathematician).
The alphabetical list includes links to the sources of reviews posted online, and covers reviews published in magazines, science journals and newspapers since 1996.
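Returning to the chicks puzzle above: by linearity of expectation, a chick is unpecked exactly when both of its neighbours peck away from it, which happens with probability (1/2)(1/2) = 1/4, so the expected count is n/4 (25 for n = 100). A brute-force enumeration over a small circle (my own sketch, not from the article) confirms this:

```python
from itertools import product

def expected_unpecked(n):
    """Average number of unpecked chicks over all 2^n peck choices
    in a circle of n chicks, each pecking left (-1) or right (+1)."""
    total = 0
    for pecks in product((-1, 1), repeat=n):
        # chick i is unpecked iff its left neighbour pecks left (away)
        # and its right neighbour pecks right (away)
        total += sum(
            1 for i in range(n)
            if pecks[(i - 1) % n] == -1 and pecks[(i + 1) % n] == 1
        )
    return total / 2 ** n

# Each chick is unpecked with probability 1/4, so the answer is n/4:
print(expected_unpecked(8))  # -> 2.0
```

The enumeration is exponential in n, so it is only a check for small circles; the linearity argument gives 25 for n = 100 directly.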
http://mathhelpforum.com/advanced-algebra/194899-change-co-ordinates.html
1. ## Change of co-ordinates

I have the following: let B := [p_0, p_1, p_2] denote the natural ordered basis for P_2(R), and C := [h_1, h_2, h_3], where h_1(x) = 7x^2 + 2x + 3, h_2(x) = 4x^2 + 5x + 7 and h_3(x) = -9x^2 + x + 1. Construct the change of coordinate matrix A which converts C-coordinates to B-coordinates.

Now my first thought was that the rows in C are given by the columns in B, but this ran into trouble when going through the calculations to construct f_c in a later calculation. Do I have to find an expression for p_0, p_1 and p_2 using the equations?

2. ## Re: Change of co-ordinates

presumably, you mean $B = \{1,x,x^2\}$, that is, $p_0(x) = 1, p_1(x) = x, p_2(x) = x^2$. suppose we have a vector in $P_2(\mathbb{R})$ given in C-coordinates, that is: $v = c_1h_1 + c_2h_2 + c_3h_3$ or $v = [c_1,c_2,c_3]_C$. in other words, $h_1 = [1,0,0]_C, h_2 = [0,1,0]_C, h_3 = [0,0,1]_C$.

now if we had such a matrix A, then $A([h_i]_C)$ would be its i-th column. but that would just be the i-th basis vector of C in B-coordinates. so, for example, $A([h_1]_C) = [h_1]_B = [3,2,7]_B$, since: $h_1(x) = 3 + 2x + 7x^2 = 3p_0(x) + 2p_1(x) + 7p_2(x)$.

convince yourself that a linear combination of the basis vectors in C is taken to the same linear combination of the images of the basis vectors in B-coordinates, because a matrix is a linear mapping.

3. ## Re: Change of co-ordinates

Another approach: by a well known theorem, if $B=\{u_1,\ldots,u_n\}$ and $B'=\{u'_1,\ldots,u'_n\}$ are bases of a vector space $V$ and

$$\begin{cases} u'_1=a_{11}u_1+\ldots +a_{1n}u_n\\ \vdots\\ u'_n=a_{n1}u_1+\ldots +a_{nn}u_n\end{cases}$$

then $[x]_B=P\,[x]_{B'}$, where

$$P=\begin{bmatrix} a_{11} & \ldots & a_{n1}\\ \vdots & & \vdots \\ a_{1n} & \ldots & a_{nn}\end{bmatrix}$$

In our case, we immediately get

$$[x]_B=\begin{bmatrix} 3 & 7 & \;\;1 \\ 2 & 5 & \;\;1 \\ 7 & 4 & -9 \end{bmatrix}[x]_C$$

4. ## Re: Change of co-ordinates

Originally Posted by Deveno
presumably, you mean $B = \{1,x,x^2\}$, that is, $p_0(x) = 1, p_1(x) = x, p_2(x) = x^2$.
suppose we have a vector in $P_2(\mathbb{R})$ given in C-coordinates, that is: $v = c_1h_1 + c_2h_2 + c_3h_3$ or $v = [c_1,c_2,c_3]_C$. in other words, $h_1 = [1,0,0]_C, h_2 = [0,1,0]_C, h_3 = [0,0,1]_C$. now if we had such a matrix A, then $A([h_i]_C)$ would be its i-th column. but that would just be the i-th basis vector of C in B-coordinates. so, for example, $A([h_1]_C) = [h_1]_B = [3,2,7]_B$, since: $h_1(x) = 3 + 2x + 7x^2 = 3p_0(x) + 2p_1(x) + 7p_2(x)$. convince yourself that a linear combination of the basis vectors in C is taken to the same linear combination of the images of the basis vectors in B-coordinates, because a matrix is a linear mapping.

Yes, $A([h_1]_C) = [h_1]_B = [3,2,7]_B$, $A([h_2]_C) = [h_2]_B = [7,5,4]_B$ and $A([h_3]_C) = [h_3]_B = [1,1,-9]_B$, so can we say that M = A (inverse), where

$$M=\begin{bmatrix} 3 & 7 & 1 \\ 2 & 5 & 1 \\ 7 & 4 & -9 \end{bmatrix}$$

and just find the inverse of M to find our matrix A?

5. ## Re: Change of co-ordinates

no, the matrix

$$\begin{bmatrix} 3 & 7 & 1 \\ 2 & 5 & 1 \\ 7 & 4 & -9 \end{bmatrix}$$

is the matrix A; it changes C-coordinates into B-coordinates. if you have a non-standard basis, and you want to change coordinates from the non-standard basis to the standard basis, you write down the matrix whose columns are the non-standard basis vectors' coordinates in the standard basis; that is the "change of coordinate" matrix. if you want to change from B-coordinates to C-coordinates, THEN you would take the inverse of A.
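The matrix in the accepted answer is easy to check numerically. The sketch below (plain Python; the function name is mine) builds A with the B-coordinates of h_1, h_2, h_3 as its columns and verifies that it converts C-coordinates to B-coordinates:

```python
# Columns are the B-coordinates (constant, x, x^2) of h1, h2, h3.
A = [[3, 7,  1],
     [2, 5,  1],
     [7, 4, -9]]

def to_B(c):
    """Convert C-coordinates c = (c1, c2, c3) to B-coordinates via A."""
    return [sum(A[i][j] * c[j] for j in range(3)) for i in range(3)]

# h1 = [1,0,0]_C should come out as [3,2,7]_B, i.e. 3 + 2x + 7x^2:
print(to_B([1, 0, 0]))  # -> [3, 2, 7]

# A linear combination: h1 + 2*h3 = -11x^2 + 4x + 5 in B-coordinates:
print(to_B([1, 0, 2]))  # -> [5, 4, -11]
```

Going the other way (B-coordinates to C-coordinates) would use the inverse of A, exactly as the final reply says.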
https://environmentalcomputing.net/statistics/meta-analysis/meta-analysis-1/
# Introduction

### Background

What is a meta-analysis? A meta-analysis is a quantitative summary of studies on the same topic.

Why do we want to perform a meta-analysis?

1. Finding generalities
2. Increasing power and precision
3. Exploring differences between studies
4. Settling controversies from conflicting studies (testing hypotheses)
5. Generating new hypotheses

The process of meta-analysis

How many steps are involved in a meta-analysis? One answer is five:

1. Formulating questions & hypotheses, or finding a topic
2. Literature search & paper collection
3. Data extraction & coding
4. Meta-analysis & publication bias tests
5. Reporting & publication

We only consider step 4 in this tutorial. You will need to learn the other steps elsewhere. To get you started, we recently wrote an overview paper which divides the process of meta-analysis into 10 questions (Nakagawa et al. 2017). The 10 questions will guide you through judging the quality of a meta-analysis.

1. Is the search systematic and transparently documented?
2. What question and what effect size?
3. Is non-independence taken into account?
4. Which meta-analytic model?
5. Is the level of consistency among studies reported?
6. Are the causes of variation among studies investigated?
7. Are effects interpreted in terms of biological importance?
8. Has publication bias been considered?
9. Are results really robust and unbiased?
10. Is the current state (and lack) of knowledge summarized?

### Metafor for meta-analysis

I think the R package metafor (Viechtbauer 2010) is the most comprehensive meta-analytic software, and the author, Wolfgang Viechtbauer (who, I have to say, has the coolest name among my friends), is still actively developing it.

First, install and load the metafor package.

    library(metafor)

Have a look at the data set named dat.curtis1998 included in the package. If you want to see the other data sets included in this package, try help(package = "metafor").
    dat <- metafor::dat.curtis1998
    str(dat)

    ## 'data.frame':    102 obs. of  20 variables:
    ##  $ id      : int  21 22 27 32 35 38 44 63 86 87 ...
    ##  $ paper   : int  44 44 121 121 121 121 159 183 209 209 ...
    ##  $ genus   : chr  "ALNUS" "ALNUS" "ACER" "QUERCUS" ...
    ##  $ species : chr  "RUBRA" "RUBRA" "RUBRUM" "PRINUS" ...
    ##  $ fungrp  : chr  "N2FIX" "N2FIX" "ANGIO" "ANGIO" ...
    ##  $ co2.ambi: num  350 350 350 350 350 350 350 395 350 350 ...
    ##  $ co2.elev: num  650 650 700 700 700 700 700 795 700 700 ...
    ##  $ units   : chr  "ul/l" "ul/l" "ppm" "ppm" ...
    ##  $ time    : int  47 47 59 70 64 50 730 365 365 365 ...
    ##  $ pot     : chr  "0.5" "0.5" "2.6" "2.6" ...
    ##  $ method  : chr  "GC" "GC" "GH" "GH" ...
    ##  $ stock   : chr  "SEED" "SEED" "SEED" "SEED" ...
    ##  $ xtrt    : chr  "FERT" "FERT" "NONE" "NONE" ...
    ##  $ level   : chr  "HIGH" "CONTROL" "." "." ...
    ##  $ m1i     : num  6.82 2.6 2.99 5.91 4.61 ...
    ##  $ sd1i    : num  1.77 0.667 0.856 1.742 1.407 ...
    ##  $ n1i     : int  3 5 5 5 4 5 3 3 20 16 ...
    ##  $ m2i     : num  3.94 2.25 1.93 6.62 4.1 ...
    ##  $ sd2i    : num  1.116 0.328 0.552 1.631 1.257 ...
    ##  $ n2i     : int  5 5 5 5 4 3 3 3 20 16 ...

This data set is from the paper by Curtis and Wang (1998). They looked at the effect of increased CO$_2$ on plant traits (mainly changes in biomass). So we have information on control group (1) and experimental group (2) (m = mean, sd = standard deviation) along with species information and experimental details. In meta-analysis, these variables are often referred to as 'moderators' (we will get to this a bit later).

### Calculating 'standardized' effect sizes

To compare the effect of increased CO$_2$ across multiple studies, we first need to calculate an effect size for each study - a metric that quantifies the difference between our control and experimental groups. There are several 'standardized' effect sizes, which are unit-less. When we have two groups to compare, we use two types of effect size statistics.
The first one is the standardized mean difference (SMD, also known as Cohen's $d$ or Hedges' $d$ or $g$; there are some subtle differences between them, but we will not worry about those for now):

$$\mathrm{SMD}=\frac{\bar{x}_{E}-\bar{x}_{C}}{\sqrt{\frac{(n_{C}-1)sd^2_{C}+(n_{E}-1)sd^2_{E}}{n_{C}+n_{E}-2}}}$$

where $\bar{x}_{C}$ and $\bar{x}_{E}$ are the means of the control and experimental group, respectively, $sd$ is the sample standard deviation ($sd^2$ is the sample variance) and $n$ is the sample size. And its sampling error variance is:

$$se^2_{\mathrm{SMD}}= \frac{n_{C}+n_{E}}{n_{C}n_{E}}+\frac{\mathrm{SMD}^2}{2(n_{C}+n_{E})}$$

The square root of this is referred to as the 'standard error' (or the standard deviation of the estimate -- confused?). The inverse of this ($1/se^2$) is used as the 'weight', but things are a bit more complicated than this, as we will find out below.

Another common index is called the 'response ratio', which is usually presented in its natural logarithm form (lnRR):

$$\mathrm{lnRR}=\ln\left(\frac{\bar{x}_{E}}{\bar{x}_{C}}\right)$$

And its sampling error variance is:

$$se^2_{\mathrm{lnRR}}=\frac{sd^2_{C}}{n_{C}\bar{x}^2_{C}}+\frac{sd^2_{E}}{n_{E}\bar{x}^2_{E}}$$

Let's get these using the function escalc in metafor. To obtain the standardised mean difference, we use:

    # SMD
    SMD <- escalc(measure = "SMD", n1i = dat$n1i, n2i = dat$n2i,
                  m1i = dat$m1i, m2i = dat$m2i,
                  sd1i = dat$sd1i, sd2i = dat$sd2i)

where n1i and n2i are the sample sizes, m1i and m2i are the means, and sd1i and sd2i the standard deviations from each study.
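For readers without R at hand, the two formulas above can be reproduced in a few lines of plain Python. This is a sketch with made-up numbers, not part of the tutorial; note also that metafor's measure = "SMD" additionally applies Hedges' small-sample bias correction, which is omitted here:

```python
import math

def smd(mE, sdE, nE, mC, sdC, nC):
    """Standardized mean difference and its sampling variance
    (no small-sample correction)."""
    sd_pooled = math.sqrt(((nC - 1) * sdC**2 + (nE - 1) * sdE**2)
                          / (nC + nE - 2))
    d = (mE - mC) / sd_pooled
    v = (nC + nE) / (nC * nE) + d**2 / (2 * (nC + nE))
    return d, v

def lnrr(mE, sdE, nE, mC, sdC, nC):
    """Log response ratio and its sampling variance."""
    yi = math.log(mE / mC)
    vi = sdC**2 / (nC * mC**2) + sdE**2 / (nE * mE**2)
    return yi, vi

# Hypothetical two-group study: experimental mean 12 (sd 3, n 5),
# control mean 10 (sd 2, n 5).
print(lnrr(12, 3, 5, 10, 2, 5))  # yi = ln(1.2) ~ 0.182, vi = 0.0205
```

Working through one study by hand like this is a useful check that you have the experimental and control groups the right way around before running escalc on a whole data set.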
The object created now has an effect size (yi) and its sampling variance (vi) for each study:

    ##        yi     vi
    ## 1  1.8222 0.7408
    ## 2  0.5922 0.4175
    ## 3  1.3286 0.4883
    ## 4 -0.3798 0.4072
    ## 5  0.3321 0.5069
    ## 6  2.5137 0.9282

To obtain the response ratio (log-transformed ratio of means), we would use:

    lnRR <- escalc(measure = "ROM", n1i = dat$n1i, n2i = dat$n2i,
                   m1i = dat$m1i, m2i = dat$m2i,
                   sd1i = dat$sd1i, sd2i = dat$sd2i)

The original paper used lnRR, so we will use it, but you may want to repeat the analysis below using SMD to see whether the results are consistent.

Add the effect sizes to the original data set with cbind or bind_cols from the package dplyr:

    library(dplyr)
    dat <- bind_cols(dat, lnRR)

You should see that yi (effect size) and vi (sampling variance) have been added.

    ## 'data.frame':    102 obs. of  22 variables:
    ##  $ id      : int  21 22 27 32 35 38 44 63 86 87 ...
    ##  $ paper   : int  44 44 121 121 121 121 159 183 209 209 ...
    ##  $ genus   : chr  "ALNUS" "ALNUS" "ACER" "QUERCUS" ...
    ##  $ species : chr  "RUBRA" "RUBRA" "RUBRUM" "PRINUS" ...
    ##  $ fungrp  : chr  "N2FIX" "N2FIX" "ANGIO" "ANGIO" ...
    ##  $ co2.ambi: num  350 350 350 350 350 350 350 395 350 350 ...
    ##  $ co2.elev: num  650 650 700 700 700 700 700 795 700 700 ...
    ##  $ units   : chr  "ul/l" "ul/l" "ppm" "ppm" ...
    ##  $ time    : int  47 47 59 70 64 50 730 365 365 365 ...
    ##  $ pot     : chr  "0.5" "0.5" "2.6" "2.6" ...
    ##  $ method  : chr  "GC" "GC" "GH" "GH" ...
    ##  $ stock   : chr  "SEED" "SEED" "SEED" "SEED" ...
    ##  $ xtrt    : chr  "FERT" "FERT" "NONE" "NONE" ...
    ##  $ level   : chr  "HIGH" "CONTROL" "." "." ...
    ##  $ m1i     : num  6.82 2.6 2.99 5.91 4.61 ...
    ##  $ sd1i    : num  1.77 0.667 0.856 1.742 1.407 ...
    ##  $ n1i     : int  3 5 5 5 4 5 3 3 20 16 ...
    ##  $ m2i     : num  3.94 2.25 1.93 6.62 4.1 ...
    ##  $ sd2i    : num  1.116 0.328 0.552 1.631 1.257 ...
    ##  $ n2i     : int  5 5 5 5 4 3 3 3 20 16 ...
    ##  $ yi      : num  0.547 0.143 0.438 -0.113 0.117 ...
    ##   ..- attr(*, "ni")= int [1:102] 8 10 10 10 8 8 6 6 40 32 ...
    ##   ..- attr(*, "measure")= chr "ROM"
    ##  $ vi      : num  0.0385 0.0175 0.0328 0.0295 0.0468 ...

### Visualising effect sizes
We can visualize the point estimates (effect sizes) and their 95% confidence intervals (CIs, based on the sampling error variances) by using the forest function, which draws a forest plot for us.

    forest(dat$yi, dat$vi)

The problem you see is that when there are many studies, a forest plot does not really work (unless you have a very large screen!). Let's look at just the first 12 studies.

    forest(dat$yi[1:12], dat$vi[1:12])

We can calculate many different kinds of effect sizes with escalc; other common effect size statistics include $Zr$ (Fisher's z-transformed correlation). By the way, along with my colleagues, I have proposed a new standardized effect size called lnCVR (the log of the coefficient of variation ratio -- mouthful!), which compares the variability of two groups rather than their means. See whether you can calculate it with these data. Actually, the development version of metafor (see its GitHub page) lets you do this with escalc; lnCVR is called "CVR" there. Actually, if you re-analyse these data with lnCVR, you may be able to publish a paper! Nobody has done it yet. Do it tonight!

Once you have calculated effect sizes, move on to the next page: Meta-analysis 2: fixed-effect and random-effect models

### Further help (references)

Go to the metafor package's website. There you will find many worked examples.

Curtis, P. S., and X. Z. Wang. 1998. A meta-analysis of elevated CO2 effects on woody plant mass, form, and physiology. Oecologia 113:299-313.

Nakagawa, S., R. Poulin, K. Mengersen, K. Reinhold, L. Engqvist, M. Lagisz, and A. M. Senior. 2015. Meta-analysis of variation: ecological and evolutionary applications and beyond. Methods in Ecology and Evolution 6:143-152.

Viechtbauer, W. 2010. Conducting meta-analyses in R with the metafor package. Journal of Statistical Software 36:1-48.

Authors: Shinichi Nakagawa and Malgorzata (Losia) Lagisz
Year: 2016
Last updated: Feb 2022
https://de.zxc.wiki/wiki/Spezifische_Absorptionsrate
# Specific absorption rate

SAR is the abbreviation for the specific absorption rate, a measure of the absorption of electromagnetic fields in a material. The absorbed energy always heats the material. The specific absorption rate is expressed as power per mass, in units of W/kg.

## Determination

The specific absorption rate can be calculated from

1. the field strength in the tissue: $\mathrm{SAR} = \frac{1}{2}\frac{\sigma |\vec{E}|^2}{\rho}$

2. the current density in the tissue: $\mathrm{SAR} = \frac{1}{2}\frac{J^2}{\rho\sigma}$

3. the temperature increase in the tissue: $\mathrm{SAR} = c_i \frac{dT}{dt}$

where:

E = electric field strength ([E] = V/m) at a point inside the body to be measured (e.g. a measuring phantom)

J = current density ([J] = A/m²), caused by the magnetic and electric fields (the limit value of the current density is, for example, 10 mA/m² for occupationally exposed workers and 2 mA/m² for the general public)

ρ = density of the tissue ([ρ] = kg/m³)

σ = electrical conductivity of the tissue ([σ] = S/m)

c_i = specific heat capacity of the tissue ([c_i] = J/(kg·K))

dT/dt = time derivative of the tissue temperature ([dT/dt] = K/s)

## Mobile phones

A SAR value is often given for cell phones. With modern devices, this is approximately between 0.10 and 1.99 W/kg. The lower the SAR value, the less the tissue is heated by the radiation. The recommended upper limit of the World Health Organization is 2.0 W/kg. In the past, the SAR value for mobile phones was determined by manufacturers under inconsistent conditions, which made it unreliable. A European standard (EN 50361) has existed since autumn 2001, which precisely defines the measurement conditions. In March 2007, EN 50361 was replaced by EN 62209-1.
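The determination formulas above are straightforward to evaluate. The snippet below is a sketch with illustrative values; the tissue constants are assumptions for demonstration, not measured data:

```python
def sar_from_field(sigma, E, rho):
    """SAR = (1/2) * sigma * |E|^2 / rho, in W/kg.

    sigma: conductivity [S/m], E: field strength [V/m], rho: density [kg/m^3].
    """
    return 0.5 * sigma * E**2 / rho

def sar_from_heating(c_i, dT_dt):
    """SAR = c_i * dT/dt, in W/kg.

    c_i: specific heat capacity [J/(kg*K)], dT_dt: warming rate [K/s].
    """
    return c_i * dT_dt

# Assumed, roughly tissue-like values: sigma = 1 S/m, rho = 1000 kg/m^3,
# and an internal field of 50 V/m:
print(sar_from_field(1.0, 50.0, 1000.0))   # -> 1.25 (W/kg)

# A tissue with c_i = 3500 J/(kg*K) warming at 0.0004 K/s:
print(sar_from_heating(3500.0, 0.0004))    # -> 1.4 (W/kg)
```

Both illustrative results land near the 2.0 W/kg upper limit mentioned below, which is why measurement standards pin down the phantom and field conditions so precisely.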
A maximum SAR value of 0.5 W/kg when used at the ear and 1.0 W/kg when used on the body is permitted for labeling a mobile phone with the Blue Angel. A maximum SAR value of 0.8 W/kg is permitted for labeling with the TCO '01 Mobile Phone logo.

The SAR value for mobile phones is given for the maximum transmission power. Due to power regulation, however, a smaller SAR value usually occurs during operation, depending on the respective cellular network. In well-developed networks with a high density of transmission masts, the mobile phone usually transmits with significantly lower power than in poorly developed networks. As a result, exposure when making calls is also lower in areas with a high density of transmission towers than in areas with a low density. Staying in vehicles and buildings (especially those made of reinforced concrete), on the other hand, increases the required transmission power. The power levels of standard GSM mobile phones are between 1 and 2000 mW, depending on the reception situation, and generate permissible SAR values within this range.

## Other technically relevant areas
http://tex.stackexchange.com/questions/173820/how-to-put-a-matrix-of-images
# How to put a matrix of images?

I'm trying to put 4 images as a 2 x 2 matrix. I'm having trouble with the first row... What am I doing wrong? (I have read that the subcaption package, or one with a similar name, was a good choice.)

    \documentclass[12pt,a4paper]{article}
    \usepackage{mwe}
    \usepackage{graphicx}
    \begin{document}
    \begin{tabular}{|c|c|}
    \hline
    % after \\: \hline or \cline{col1-col2} \cline{col3-col4} ...
    \includegraphics[width=60mm]{simu.jpg} &
    \includegraphics[width=60mm]{simu.jpg} \\
    {\small ``iteraciones máximas de BT''=20} & \\
    {\small ``Periodo de Tenencia en Lista Tabú''=2} & \\
    \hline
    \end{tabular}
    \end{document}

What do you recommend? What's the best practice/way in these cases? The simu.jpg is a normal image (SIMU.JPG). With this technique I obtained this a few months ago, without any problem.

-

## 3 Answers

There is a small problem in that the image touches the upper line. Some PDF readers do not show the upper line above the images, depending on the view scale settings. The following example defines a macro \addheight that increases the height of the image box (adds some white space above the image). The amount can be configured via the optional argument.

    \documentclass[12pt,a4paper]{article}
    \usepackage{graphicx}
    \newcommand*{\addheight}[2][.5ex]{%
      \raisebox{0pt}[\dimexpr\height+(#1)\relax]{#2}%
    }
    \begin{document}
    \noindent
    \begin{tabular}{|c|c|}
    \hline
    \addheight{\includegraphics[width=60mm]{simu.jpg}} &
    \addheight{\includegraphics[width=60mm]{simu.jpg}} \\
    \small ``row 2, column 1'' & ``row 2, column 2'' \\
    \hline
    \end{tabular}
    \end{document}

-

I tried to compile your code, but I obtain the same message as with my code... sensa.square7.ch/gdfgdfg.jpg – Mika Ike Apr 27 at 20:04

@MikaIke: Compile with pdflatex instead of latex. For latex/dvips/ps2pdf the .jpg files should be converted to .eps (e.g. jpeg2eps). – Heiko Oberdiek Apr 27 at 20:28

Thank you! Yes, you're right, with pdflatex my initial purpose also runs well!! but...
I need to use latex because in the same document I have pstricks figures. What's the solution? – Mika Ike Apr 27 at 20:39

@MikaIke: There are many ways: convert .png and .jpeg to .eps, use XeLaTeX, or auto-pst-pdf with pdflatex, ... too many to explain in a comment. – Heiko Oberdiek Apr 27 at 20:47

So long as the images all have the same dimensions, you can do this with a tabular; I defined an auxiliary \subf command for the picture and the subcaption, but just for keeping things properly segregated. You can add any number of \\ commands in the caption.

    \documentclass[12pt,a4paper]{article}
    \usepackage[T1]{fontenc}
    \usepackage[utf8]{inputenc}
    \usepackage[spanish]{babel}
    \usepackage{graphicx}
    \newcommand{\subf}[2]{%
      {\small\begin{tabular}[t]{@{}c@{}}
      #1\\#2
      \end{tabular}}%
    }
    \begin{document}
    \begin{figure}
    \centering
    \begin{tabular}{|c|c|}
    \hline
    \subf{\includegraphics[width=60mm]{example-image-4x3.pdf}}
         {``iteraciones máximas \\ de BT''$=20$}
    &
    \subf{\includegraphics[width=60mm]{example-image-4x3.pdf}}
         {``Periodo de Tenencia \\ en Lista Tabú''$=2$}
    \\ \hline
    \subf{\includegraphics[width=60mm]{example-image-4x3.pdf}}
         {``iteraciones máximas \\ de BT''$=20$}
    &
    \subf{\includegraphics[width=60mm]{example-image-4x3.pdf}}
         {``Periodo de Tenencia \\ en Lista Tabú''$=2$}
    \\ \hline
    \end{tabular}
    \end{figure}
    \end{document}

If some space is wanted between the tabular rule and the image, here's a possible way: create a fake first line and back up vertically. The code is exactly the same as before, but the definition of \subf changes into

    \newcommand{\subf}[2]{%
      {\small\begin{tabular}[t]{@{}c@{}}
      \mbox{}\\[-\ht\strutbox]
      #1\\#2
      \end{tabular}}%
    }

-

I like your answer but, as Heiko Oberdiek pointed out, the overlap of the edges of the graphic element and the edge of the table creates a rather awkward end result. Isn't there a (more robust) way to inherently add space between a graphic element within a tabular environment?
– 1010011010 Apr 27 at 19:44

@Euryris Of course there is; I'll add it. – egreg Apr 27 at 19:49

@egreg I tried your code (changing the source ...pdf for the simu.jpg) and reach... this error (the same with my purpose and Heiko's purpose): sensa.square7.ch/gdfgdfg45.jpg – Mika Ike Apr 27 at 20:12

@MikaIke You're compiling with latex instead of pdflatex; the former can't deal with JPG. – egreg Apr 27 at 20:25

@egreg Yes, you're right, Heiko told me the same. The problem is that in the same document I need to use latex, because I have pstricks pictures. What's the solution? – Mika Ike Apr 27 at 20:41

Part 1 (figures): Have you tried something like:

    \documentclass{whatever}
    ...
    \newsavebox{\<boxname>}
    \setbox\<boxname>=\hbox{%
      \includegraphics[<options>]{<filename>}%
    }
    \begin{document}
    ...
    \begin{tabular}{...}
    ... {\usebox{\<boxname>}} ...
    \end{tabular}
    ...
    \end{document}

You can resize boxes using \resizebox or manually with the arguments of \includegraphics.

Part 2 (text): You can also use \hbox (this is in the standard LaTeX engine, no package required), or \pbox{<size of box>}{<text, argument, figure, you name it>} (this will require \usepackage{pbox} to function, and perhaps a small update of your TeX/package distribution). \pbox also allows you to use line breaks within the environment:

    \pbox{<size of box>}{<text, argument, figure, you name it> \\
      <more text, arguments, figures, etc.>
    }

(If you decide this is prettier.) These are pretty robust and practical solutions to put figures (option 1) and text (option 2) into floats (floats are figures, tikzpictures, and I'm pretty sure this works too for tables).

MWE

\documentclass[12pt,a4paper]{article}
\usepackage{mwe}
\usepackage{graphicx}
\usepackage{pbox}
\begin{document}
\begin{tabular}{|c|c|}
\hline
% after \\: \hline or \cline{col1-col2} \cline{col3-col4} ...
\pbox{6cm}{\vspace{5ex} \includegraphics[width=60mm]{simu.jpg}} &
\pbox{6cm}{\vspace{5ex} \includegraphics[width=60mm]{simu.jpg}} \\
{\pbox{6cm}{\vspace{.25ex}\small ``iteraciones máximas de BT''=20\vspace{5ex}}} &
{\pbox{6cm}{\vspace{5ex}\small ``Periodo de Tenencia en Lista Tabú''=2\vspace{5ex}}} \\
& \\
\hline
\pbox{6cm}{\vspace{5ex} \includegraphics[width=60mm]{simu.jpg}} &
\pbox{6cm}{\vspace{5ex} \includegraphics[width=60mm]{simu.jpg}} \\
{\pbox{6cm}{\vspace{5ex}\small ``iteraciones máximas de BT''=20\vspace{5ex}}} &
{\pbox{6cm}{\vspace{.25ex}\small ``Periodo de Tenencia en Lista Tabú''=2\vspace{5ex}}} \\
\hline
\end{tabular}
\end{document}

Here you can see that you can control the height to whatever length you want (using \vspace) and the width, using \pbox. Is this what you want?

-

But... I have used that with other, smaller graphics and... good results!! ... with width I would have to control the width of the image. – Mika Ike Apr 27 at 18:52

There's no need to use a figure environment for \includegraphics. – egreg Apr 27 at 19:00

I just used figure as an example. I'll edit to avoid confusion. Thanks. – 1010011010 Apr 27 at 19:03

Is a minimal example possible? I think I have seen in some post something similar to what I want, but... I can't find it. – Mika Ike Apr 27 at 19:07
https://lexique.netmath.ca/en/ratio-of-similarity/
# Ratio of Similarity

The coefficient $k$ of proportionality between the lengths in the image of an initial geometric figure and the corresponding lengths in the initial figure under a similarity.

• To determine the ratio of similarity, all that is required is two points A and B and their images, A′ and B′. The ratio of similarity is then: $k = \dfrac{\textrm{m}\overline{\textrm{A'B'}}}{\textrm{m}\overline{\textrm{AB}}}$.
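As a quick numerical illustration (a sketch of mine, not part of the original definition), the ratio can be computed from the coordinates of A, B and their images:

```python
import math

def similarity_ratio(A, B, A_img, B_img):
    """k = m(A'B') / m(AB): image segment length over original length."""
    return math.dist(A_img, B_img) / math.dist(A, B)

# A segment of length 5 mapped to one of length 10 gives k = 2:
print(similarity_ratio((0, 0), (3, 4), (1, 1), (7, 9)))  # -> 2.0
```

Under a true similarity, any pair of points and their images yields the same k, so two points suffice.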
https://www.orbiter-forum.com/threads/ssu-development-thread-4-0-to-5-0.33858/page-72
# SSU Development thread (4.0 to 5.0)

#### GLS

Well, a good model is easy to find. Getting one with a suitable license for our work is the problem. :facepalm:

We still have to talk about that... I think I'd like it to be GPL, as it is in the code header, but on SourceForge it is LGPL... Also the link in the code header doesn't work, and I don't have enough access to see if there's a way to have a (new) page for the license or whatever. :shrug:

#### Urwumpe
##### Not funny anymore
Donator

We still have to talk about that... I think I'd like it to be GPL, as it is in the code header, but on SourceForge it is LGPL... Also the link in the code header doesn't work, and I don't have enough access to see if there's a way to have a (new) page for the license or whatever. :shrug:

Damn, there was something I wanted to do when I am back home. Which I am. Fixed it, we are GPL v2 now.

In other news: it looks like I am the last halfway active admin for the project; the others are kwan3218 and SiameseCat. Should we get some redundancy there again? If yes, who should be promoted?

#### GLS

Damn, there was something I wanted to do when I am back home. Which I am. Fixed it, we are GPL v2 now.

Thanks!

In other news: it looks like I am the last halfway active admin for the project; the others are kwan3218 and SiameseCat. Should we get some redundancy there again? If yes, who should be promoted?

I'm always up for a pay raise... :lol:

#### GLS

Well, we also have the GPL v2 in the Docs folder, it would be enough to refer to this.

Ok, after I finish the runways I'll change the headers to point people in the direction of that file.

#### Urwumpe
##### Not funny anymore
Donator

Ok, after I finish the runways I'll change the headers to point people in the direction of that file.

If needed. I would point to the file and the URL as well, just for backup. Also, we should maybe include a text file that contains our definition of a system library.
#### GLS

Also, we should maybe include a text file that contains our definition of a system library.

#### Urwumpe ##### Not funny anymore Donator

Well, we consider the Orbiter SDK to be a system library in our context, thus linking to a non-open-source library is no issue. While it is easy to understand why, it's better to make it explicit.

#### GLS

As SiameseCat isn't around, could the mods add a line at the start of the description pointing people to the latest version?

---------- Post added at 07:48 PM ---------- Previous post was at 05:58 PM ----------

I'm finally done with the runways! Still plenty left to be done, but IMO we now have good global coverage and I'd like to move on to other things. Boxes still to tick before we can release:
1) AerojetDAP altitude in TAEM
2) new (or at least resized) VC
3) Crawler
4) correct EDW lakebed runways to Earth's curvature (waiting for Martin's input) (and maybe change White Sands to mesh too)
5) bare-bones SSUW?

---------- Post added at 10:48 PM ---------- Previous post was at 07:48 PM ----------

Just so I don't have to change 222 files more than once, is it ok to replace the last line of the header with this?

Code:
See https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html or file Doc\Space Shuttle Ultra\GPL.txt for more details.

#### Urwumpe ##### Not funny anymore Donator

Just so I don't have to change 222 files more than once, is it ok to replace the last line of the header with this?

Code:
See https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html or file Doc\Space Shuttle Ultra\GPL.txt for more details.

Well, the FSF worldview suggests copying the full preamble of their license in every file. As you can expect, this will get nasty if you have to update them all without functional changes to the sources. But as I see it: without a copyleft preamble in the sources, removing a single source file from our project would automatically make it MORE copyright protected, not less.
Just like copying a source code snippet from it without copying the preamble does not mean it's suddenly public domain.

My suggestion would be just checking if the header is up to date before every commit. Since we are the master repository of these sources, we should actually not commit without checking if the files are of proper quality anyway (conform to the styleguide we lack!)

#### GLS

Well, the FSF worldview suggests copying the full preamble of their license in every file. As you can expect, this will get nasty if you have to update them all without functional changes to the sources. But as I see it: without a copyleft preamble in the sources, removing a single source file from our project would automatically make it MORE copyright protected, not less. Just like copying a source code snippet from it without copying the preamble does not mean it's suddenly public domain. My suggestion would be just checking if the header is up to date before every commit. Since we are the master repository of these sources, we should actually not commit without checking if the files are of proper quality anyway (conform to the styleguide we lack!)

Ok, I'm not understanding if putting whatever is needed on all the files at once is bad because it's boring and/or a ton of work, or if it's (somehow) "legally bad". I can take the boring :lol:, and it would be a one-time thing so I have no problems in doing it "like a band-aid". :shrug:

On what should be in the header, here is what we currently have:

Code:
/****************************************************************************
This file is part of Space Shuttle Ultra

Translational Hand Controller Subsystem Operating Program definition

Space Shuttle Ultra is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of
the License, or (at your option) any later version.
Space Shuttle Ultra is distributed in the hope that it will be
useful, but WITHOUT ANY WARRANTY; without even the implied warranty
of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with Space Shuttle Ultra; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
**************************************************************************/

(the last line would be replaced with the updated link and file location)

Currently almost all .h files have this, and most .cpp files don't. :shifty: If the whole "contract" must be used, then we replace the last line with it or even the whole thing.

#### Urwumpe ##### Not funny anymore Donator

Ok, I'm not understanding if putting whatever is needed on all the files at once is bad because it's boring and/or a ton of work, or if it's (somehow) "legally bad". I can take the boring :lol:, and it would be a one-time thing so I have no problems in doing it "like a band-aid". :shrug: Currently almost all .h files have this, and most .cpp files don't. :shifty: If the whole "contract" must be used, then we replace the last line with it or even the whole thing.

The preamble with the correct link or location should be enough, not the whole license. It's just bad because it makes differential debugging impossible - if you change all source files because of a formality that does not change the functionality, how could we tell where something went wrong in the worst case? I would do this shortly before release, after code freeze and test phase, when we are getting ready to tag the new version.

#### GLS

The preamble with the correct link or location should be enough, not the whole license.
It's just bad because it makes differential debugging impossible - if you change all source files because of a formality that does not change the functionality, how could we tell where something went wrong in the worst case? I would do this shortly before release, after code freeze and test phase, when we are getting ready to tag the new version.

a) why would something go wrong?
b) any "bug hunting" would not be affected IMO, as SVN keeps track of all the previous changes, so this change would just not be on the code... :shrug:

#### Urwumpe ##### Not funny anymore Donator

a) why would something go wrong? b) any "bug hunting" would not be affected IMO, as SVN keeps track of all the previous changes, so this change would just not be on the code... :shrug:

If you try to limit the code changes to those done in a selected period of time by version management, you will get MANY files that way that had been changed. It's just as nuclear as letting a source code formatter run on the sources. And we need no reasons for letting something go wrong, we only need opportunities.

#### Wolf ##### Donator Donator

Is it normal to see RWY22/04 at EDW like this (no markings) or is something wrong on my end?

#### DaveS ##### Space Shuttle Ultra Project co-developer Donator Beta Tester

Is it normal to see RWY22/04 at EDW like this (no markings) or is something wrong on my end?

You need to add EDWRunway to your Base.cfg file for the texture to load.

#### Wolf ##### Donator Donator

You need to add EDWRunway to your Base.cfg file for the texture to load.

Can you please confirm the exact text to add and the folder? I added "EDWRunway" in the Orbiter/config/Base.cfg file but nothing has changed, still a blank vanilla RWY maybe SSU\EDW0422tex ?

#### DaveS ##### Space Shuttle Ultra Project co-developer Donator Beta Tester

Can you please confirm the exact text to add and the folder?
I added "EDWRunway" in the Orbiter/config/Base.cfg file but nothing has changed, still a blank vanilla RWY

Correction: It should be SSU\EDWRunway (I forgot to add the SSU subfolder to the path).

#### GLS

If you try to limit the code changes to those done in a selected period of time by version management, you will get MANY files that way that had been changed. It's just as nuclear as letting a source code formatter run on the sources. And we need no reasons for letting something go wrong, we only need opportunities.

I see your point, but letting this drag on will only allow for files to "slip thru the cracks". Some files will probably not change in the foreseeable future, and thus will easily be forgotten.

About the number of files: let's say this requires 500 files to be touched (the .h files and let's say we want the .cpp files as well). Well, I was doing about 200 per revision (for several revs) when working the terrain, so for a one-time thing, 500 doesn't bother me particularly.
https://www.amplifiedparts.com/products/books_dvds_software?sort=Author
# Books

Electronic Projects for Musicians, A Comprehensive Guide, by Craig Anderton
A comprehensive guide by Craig Anderton. Shows you how to build your own preamp, compressor/limiter, ring modulator, phase shifter, talk box, plus 22 other inexpensive electronic accessories. Written in clear language, with hundreds of helpful diagrams and illustrations and simple step-by-step instructions. Many effects builders have cited Electronic Projects for Musicians as the book that got them started. This book lays out the basics of effects building, detailing the tools, parts and techniques needed. It contains 27 different projects including fuzz pedals, a ring modulator, a phase shifter and a compressor, complete with diagrams, illustrations and simple step-by-step instructions. For those wanting to build their own effects, this is your starting point.
$30.95

Guitar Player Repair Guide, 3rd Edition with DVD, by Dan Erlewine
This revised edition is a step-by-step manual to maintaining and repairing electric or acoustic guitars and basses. Players will master the basic maintenance and setup of a guitar. An advanced craftsperson will discover this to be a valuable reference source for complex procedures such as replacing nuts and frets. This third edition incorporates new techniques and tools, plus detailed repair information for specific models.
$28.95

Saga of the Vacuum Tube - a History, by Gerald F.J. Tyne
The most comprehensive history of the thermionic vacuum tube. Contains hundreds of photos and essential facts to assist in identifying tubes made prior to 1930, and traces the evolution of tube development with histories of the manufacturers. Truly a classic! 494 pages, 1.4 lbs., 5.5" x 8.5"
$8.95

Tube Guitar Amplifier Essentials, by Gerald Weber
Gerald Weber, of Kendrick amplification fame, is the author of A Desktop Reference of Hip Vintage Guitar Amps and Tube Amp Talk for the Guitarist and Tech. This is his third book, and it works well as a companion to his previous books or on its own. 528 pages, softbound.
$29.89

Tube Amp Talk for the Guitarist and Tech, by Gerald Weber
Several years ago, Gerald Weber started a fine tradition in the pages of Vintage Guitar magazine - that of freely sharing with others his vast knowledge of "everything amplifier." Gerald was the first of many fine tech writers to offer their hard-earned tips and tricks on the pages of Vintage Guitar. In fact, he has so much to offer, he writes two columns each month. Gerald continued that tradition with his popular first book, A Desktop Reference of Hip Vintage Guitar Amps, and follows up with this book, Tube Amp Talk for the Guitarist and Tech. An added bonus of this edition is the inclusion of the "Trainwreck Pages" by noted amp expert Ken Fischer. Whether you want to maintain your amp to keep the tone you love, or modify it for an exciting new sound, Gerald Weber can help. Maybe you need that little bit extra to get what you want.
$29.95

All About Vacuum Tube Guitar Amplifiers, by Gerald Weber
"Over 500 pages of all new material not found in my other books or anywhere else for that matter. Tons of empirical data that de-mystify the inner workings of tube amps to help you get the most from your amps!"
$29.95

Sound Advice from Gerald Weber, by Gerald Weber
Everything you wanted to ask about vacuum tube guitar amplifiers. Foreword by Aspen Pittman. As an advice columnist for over 20 years in three major guitar magazines, Gerald Weber has probably answered more tube amp questions than anyone. In Gerald's 5th book about vacuum tube amplifiers, he answers over 300 useful questions…
$29.95

A Desktop Reference of Hip Vintage Guitar Amps, by Gerald Weber
Gerald Weber answers questions about vintage guitar amplifiers on topics such as biasing for output tubes and easy modifications. He includes an analysis of Fender® classics such as the Tweed Champ, Tweed Deluxe, Deluxe Reverb, Super Reverb and others. Also included are schematics for Ampeg®, Fender®, Gibson®, Hiwatt, Magnatone, Marshall®, Rickenbacker, Silvertone and Vox® amps. 500 pages, 1.5 lbs, 6" x 9"
$29.95

Electric Guitar Amplifier Handbook, fourth edition, by Jack Darr
New reprint of the 4th edition of this book, originally published in 1973. Discusses signal amplification theory and how each part functions in an amplifier, and contains many schematics and amps popular at the time. Softbound, 382 pages. One of today's most popular instruments is the electric guitar. Proper operation of the amplifier associated with the guitar is essential to the amplification and enhancement of the guitar sound. The ultimate test of a guitar amplifier is the quality of the musical sound that comes from the speaker. Many electronic service shops hesitate to service guitar amplifiers because the service literature is sometimes difficult to acquire through standard channels. Electric Guitar Amplifier Handbook has been written to dispel any qualms the serviceman may have about servicing these amplifiers. While explaining in detail the functions of all parts of the guitar, the author relates them to familiar electronic circuitry.
$36.95

DIY Speaker Cabinets for Musical Instrument Applications, by Kevin O'Connor
Speaker cabinet design is mysterious to most people, but it doesn't have to be - especially when it comes to speakers for musical instruments. Cabinet evolution is described, culminating in Kevin's own designs built by London Power, based on 20+ years of experience and input from expert speaker designers. Resonant and non-resonant cabinets are explored, along with the choice of building materials, finishes, woodworking techniques, installing grill cloth, and the use of wheels and handles. Driver selection and mixing is detailed, covering driver impedance, power ratings and wiring.
$22.00

Tonnes of Tone, Electronic Projects for Guitar and Bass, by Kevin O'Connor
A collection of projects for electric guitar and bass players. Save money by building your own acoustic preamp; build a two-channel tube preamp for guitar, and more. Construct a 15W tube power amp or a true class-A solid-state amp. The "experimenter's power supply" is excellent for tinkering with tube circuits, containing separate AC heater, DC heater, plate and bias supplies. Build a wide range of tremolo circuits and an all-tube reverb unit. Circuit operation, construction, and layout guidelines are provided. Every project uses easy-to-find parts.
$37.00

The Ultimate Tone, Volume 1, Modifying & Building Tube Amps, by Kevin O'Connor
This is the book that jump-started the boutique tube guitar amp business. Learn the truth about tube preamp design and modification; see how tube power amps work and can be made more reliable; see how reverbs and effects loops work and better ways to configure them; learn why some amp brands are easier to service and to mod than others; see how simple switching circuits are but how capable they can be. Tube data for the common types used in guitar and bass amps is provided, along with a discussion of stage set-up and player ergonomics. Spiral bound, 395 figures. 368 pages, 2.5 lbs, 8.5" x 11"
$85.00

Principles of Power, A Practical Guide to Tube Amplifier Design, by Kevin O'Connor
This book offers a simplified method for designing tube power amps for audio. Tube operation is explained, as are circuit principles. Class-A or B, cathode and fixed bias, and push-pull and single-ended designs are covered, as are many front-end circuits. Includes designs using off-the-shelf transformers and current-production tubes. Power outputs of 15 to 400W are depicted using pentodes and true triodes. Also covers power supplies, selected tube data, and information on Hammond and Plitron transformers. Guitar amp builders please refer to "The Ultimate Tone Vol. 2".
$52.00

The Ultimate Tone, Volume 6, Timeless Tone Built for the Future, by Kevin O'Connor
The evolution of guitar amp circuits becomes a revolution! The Ultimate Tone, Volume 6 takes you through advanced DC Power Scaling methods; reveals the secrets of Dumble amps; explores the truth about high-gain preamps; demonstrates how to build Z-B-X push-pull power amps with a single power tube; simplifies the design of 400W+ tube amps - methods that can be applied to any power level; presents class-G tube amps where 100W of output has the same waste heat as a 25W amp; explores modern mains control and sustain circuitry.
$85.00

The Ultimate Tone, Volume 3, Generations of Tone, by Kevin O'Connor
Have you ever wondered if there is a better way to build a Bassman, Champ, Plexi, an 800, AC-30, Bulldog or Portaflex? Or you wanted to build an SVT with off-the-shelf parts? How about a master-volume amp that doesn't change tone with the master setting? Everything you need to know is right here, including proper grounding techniques, wiring methods, and mechanical considerations. Eighteen chapters cover the "iconic" amps everyone knows and loves, with schematics and layouts for each, along with the technical history of the product.
$85.00 (On Backorder)

The Ultimate Tone, Volume 2, Systems Approach to Stage Sound, by Kevin O'Connor
The achievement of great, live sound is the premise used to explore modern high-gain guitar preamps, proper treatment of acoustic instruments, tube vocal PAs, dynamic bass gear, all types of tube, semi, hybrid and alternative power amplifier design, power supply mods that are "toneful", simple switching methods, speaker cabinet design, getting the most from effects loops and chains, and modular stage monitoring systems that work!
$60.00

The Ultimate Tone, Volume 4, Advanced Methods for Amp Design, by Kevin O'Connor
Outlines, in schematic form, many approaches that achieve the goal of power scaling, super scaling and power management. A complete history and discourse on the technology outlines how to implement multiple power scale controls, alternative circuit forms and system impacts. Further tutorials outline the methods for super scaling, with both fixed and variable boost ratios, using both solid-state and tube methods.
$130.00

The Ultimate Tone, Volume 5, Tone Capture, by Kevin O'Connor
Every part of your system contributes to your ultimate tone, so every part is important. The Ultimate Tone, Volume 5 delineates, in clear language, the function of each electronic component in a circuit, and how alterations to the value of each part affect the final sound. Gain stage voicing, interstage attenuator optimization, sustain restoration for clean and distorted tones, power amp and preamp "tone generators", simple boost pedals, amp selectors and power conditioners are all covered, with designs using off-the-shelf parts. Like TUT3, this is a "project book" presenting design challenges and solutions, with a diversity of tube and solid-state circuits for bass, guitar and keyboards.
$79.95

Ready Set Go! An Electronic Reference for the Everyman, by Kevin O'Connor
A handy reference for the novice electronic tinkerer and experienced craftsman alike. Informative and easy to read, the book:
• explains how to read schematic symbols
• explains standard notation, S.I. symbols and tolerances
• reveals the resistor color code
• lists standard resistor values for 1%, 5%, 10% and 20% components
• lists standard capacitor values and explains the colour coding of vintage caps
• explains current, voltage, power, resistance, capacitance and inductance
• explains Ohm's Law and how to figure out series/parallel resistance
• clearly explains transformers and impedance
Examples are worked out throughout the book to illustrate how truly simple basic electronics is!
$33.00 (On Backorder)

Tube Lore II, A Reference for Users and Collectors, 2nd Edition, by Ludwell Sibley
The classic Ludwell Sibley book is now available in this wonderful second edition. Tube Lore II contains a wealth of knowledge and information on a wide variety of tube types. Any tube collector or radio aficionado will find the contents of this book to be an invaluable reference.
$34.95

Guitar Amplifier Electronics: Circuit Simulation, by Richard Kuehnel
This book is a step-by-step tutorial for simulating vacuum tube guitar amplifiers using LTspice, a free SPICE-based electronic circuit simulator for Windows and macOS. The book walks you through the basics of developing a schematic, setting up a simulation, integrating vacuum tube models, and interpreting the results. From there it gradually ramps up to advanced topics, like verifying the sonic performance of a guitar amplifier as a complete system, analyzing the dynamics of overdrive, and writing new SPICE models for triodes and power tubes.
$29.95

The Fender® Bassman 5F6-A, 3rd Edition, by Richard Kuehnel
This book examines this famous amplifier by studying its circuit design in great detail. It starts by breaking the amplifier into its major components: the 12AY7 preamp, the 12AX7 voltage amp, the 12AX7 cathode follower, the tone stack, the long-tailed-pair phase splitter, the push-pull power amplifier, and the power supply. Each component is analyzed to determine how it works and to derive the design formulas needed to predict its performance. The results are then compared to bench tests of the actual circuit. Finally, all of the components are put together to analyze total system behavior and to discover how and where the amp transitions into distortion.
$45.95

Fundamentals of Guitar Amplifier System Design, by Richard Kuehnel
Books about guitar amplifier electronics usually focus on individual circuits: preamp stages, tone stacks, phase inverters, the power amp, and power supply. Fundamentals of Guitar Amplifier System Design presents a structured, top-down approach to crafting a complete guitar amplifier.
$29.95

Guitar Amplifier Electronics: Basic Theory, by Richard Kuehnel
This book presents vacuum tube amplifier theory using modern, web-based design tools and computer visualizations to eliminate the usual litany of mathematical formulas. The book begins with the fundamentals of electronics and vacuum tubes. It then proceeds through each stage of a modern guitar amplifier, including preamp voltage amplification stages, cathode followers, tone stacks, power amps, phase inverters, and power supplies. The final chapter is devoted to the craft of controlling overdrive dynamics and harmonic distortion.
$29.95

Don't see what you're looking for? Send us your product suggestions!
https://www.physicsforums.com/threads/distribution-of-2-matrices-with-the-same-eigenvalues.844947/
# Distribution of 2 matrices with the same eigenvalues

1. Nov 24, 2015 ### nikozm

Hi, I was wondering if two matrices with the same eigenvalues share the same PDF. Any ideas and/or references would be helpful.

2. Nov 24, 2015 ### andrewkirk

Matrices in general don't have anything that is widely referred to as a 'PDF'. The only PDF I know is 'probability density function' in probability theory. Is that what you mean? If so, how do you want to relate it to a matrix? Matrices are used in probability theory and statistics in numerous different ways.

3. Nov 25, 2015 ### nikozm

Hello, assume that $H$ is an $n \times m$ matrix with i.i.d. complex Gaussian entries, each with zero mean and variance $\sigma$. Also, let $n \geq m$. I'm interested in the relation between the distributions of $H^HH$ and $HH^H$, where $^H$ stands for Hermitian transposition. I anticipate that both follow the complex Wishart distribution with the same parameters (since they share the same nonzero eigenvalues), but I'm not sure about this. Any ideas? Thanks in advance.

4. Nov 25, 2015 ### andrewkirk

The matrix $H^HH$ will be $m\times m$ while $HH^H$ will be $n\times n$. They will have different numbers of eigenvalues. Why do you think the nonzero ones will be the same?

5. Nov 25, 2015 ### nikozm

Indeed, they have different dimensions. However, their non-zero eigenvalues are the same. This is a fact. If you hold reservations about the latter, just try it in Matlab and compare their eigenvalues with the command eig. My question is: do they also have the same probability density function?
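For anyone who wants to check the shared-spectrum claim without MATLAB, here is a quick NumPy sketch (the dimensions n = 5, m = 3 and the seed are arbitrary demo choices): the nonzero eigenvalues of $HH^H$ coincide with those of $H^HH$, and the remaining $n-m$ eigenvalues of $HH^H$ are numerically zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3  # n >= m, chosen arbitrarily for the demo

# n x m matrix with i.i.d. complex Gaussian entries
H = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))

A = H.conj().T @ H  # H^H H, an m x m Hermitian matrix
B = H @ H.conj().T  # H H^H, an n x n Hermitian matrix

eig_A = np.sort(np.linalg.eigvalsh(A))  # m eigenvalues, ascending
eig_B = np.sort(np.linalg.eigvalsh(B))  # n eigenvalues, ascending

# Both matrices are positive semi-definite, so the n - m extra
# eigenvalues of B are the smallest ones and are (numerically) zero,
# while the top m eigenvalues of B match the spectrum of A.
print(np.allclose(eig_B[-m:], eig_A))            # True
print(np.allclose(eig_B[:n - m], 0, atol=1e-9))  # True
```

This follows directly from the singular value decomposition: the nonzero eigenvalues of both products are the squared singular values of $H$.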
https://gateoverflow.in/301857/aptitute
+1 vote

# The no. of ways to distribute n items among r people where each gets zero or more items.

Please give the detailed formula for this or any good example.

Answer: $^{n+r-1}C_{r-1}$ (stars and bars; this counts distributions of $n$ identical items).

Comment: the number of ways to distribute $n$ distinct items among $r$ people is $r^n$.
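The stars-and-bars formula is easy to sanity-check by brute force for small values; here is a Python sketch (the (n, r) pairs are small demo values):

```python
from itertools import product
from math import comb

def count_distributions(n, r):
    """Count the ways to give n identical items to r people,
    each person receiving zero or more, by enumerating all
    r-tuples of non-negative counts that sum to n."""
    return sum(1 for c in product(range(n + 1), repeat=r) if sum(c) == n)

# Brute force agrees with the formula C(n + r - 1, r - 1):
for n, r in [(5, 3), (4, 2), (6, 4)]:
    assert count_distributions(n, r) == comb(n + r - 1, r - 1)

print(count_distributions(5, 3))  # 21, i.e. C(7, 2)
```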
http://tex.stackexchange.com/questions/72815/how-to-upload-c-code-into-latex-appendix-section
# How to upload C code into latex appendix section [duplicate]

Possible Duplicate:

I am trying to include the code in my LaTeX document. Please let me know how I can do that. My code is as below.

``````
\appendix
%My C code goes here
``````

You can add it with the package `listings`, for example. Do you know this package? If not, please try `texdoc listings` in your command line. – Kurt Sep 16 '12 at 20:16

I am supposed to use only the appendix. Is it possible only with the \appendix tag? – Vutukuri Sep 16 '12 at 20:17

@Vutukuri It is not clear what you want. In the appendix you use the usual sectional unit commands for creating entries in the appendix: \chapter, \section, ... (but they get numbered differently). To include a listing, you must provide the LaTeX commands. @Kurt recommended the `listings` package to do this. (I'd recommend the `lstlisting` environment.) – Marc van Dongen Sep 16 '12 at 20:20

## marked as duplicate by lockstep, Marco Daniel, percusse, doncherry, Lev Bishop Sep 21 '12 at 6:48

You can try using the `listings` package for proper source-code formatting, or you can just use the `verbatim` environment.
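To make the accepted advice concrete, here is a minimal sketch using the `listings` package inside an appendix (the file name `mycode.c` is a placeholder):

```latex
\documentclass{article}
\usepackage{listings}

% Global defaults for all listings
\lstset{language=C, basicstyle=\ttfamily\small, numbers=left, frame=single}

\begin{document}

\appendix
\section{Source code}

\begin{lstlisting}[caption={A small C example}]
#include <stdio.h>

int main(void)
{
    printf("Hello, world!\n");
    return 0;
}
\end{lstlisting}

% Or pull the code in directly from a file:
% \lstinputlisting[caption={My program}]{mycode.c}

\end{document}
```

With the plain `verbatim` approach, replace the `lstlisting` environment by `\begin{verbatim} ... \end{verbatim}`; no package is needed, but you get no syntax highlighting or captions.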
https://proofwiki.org/wiki/Criterion_for_Ring_with_Unity_to_be_Topological_Ring
# Criterion for Ring with Unity to be Topological Ring

## Theorem

Let $\struct {R, +, \circ}$ be a ring with unity.

Let $\tau$ be a topology over $R$.

Suppose that $+$ and $\circ$ are $\tau$-continuous mappings.

Then $\struct {R, +, \circ, \tau}$ is a topological ring.

## Proof

As we presume $\circ$ to be continuous, we need only prove that $\struct {R, +, \tau}$ is a topological group.

As we presume $+$ to be continuous, we need only show that negation is continuous.

As $\struct {R, \circ}$ is a semigroup and $\circ$ is continuous, $\struct {R, \circ, \tau}$ is a topological semigroup.

From Identity Mapping is Homeomorphism, the identity mapping $I_R : \struct {R, \tau} \to \struct {R, \tau}$ is continuous.

From Multiple Rule for Continuous Mappings to Topological Semigroup, the mapping $\paren {-1_R} \circ I_R : R \to R$ defined by:

$\forall b \in R : \map {\paren {\paren {-1_R} \circ I_R} } b = \paren {-1_R} \circ b$

is continuous.

From Product with Ring Negative, for each $b \in R : -b = \paren {-1_R} \circ b$.

Hence negation is continuous.

$\blacksquare$
http://ringofbrodgar.com/wiki/Ruffe
World 13: as of 2 April 2021
HnH Topic: World 13

# Ruffe

Vital statistics
Size: 1 x 1
Skill(s) Required: Fishing
Object(s) Required: Bushcraft Fishingpole or Primitive Casting-Rod
Produced By: River or Lake
Required By: Filet of Ruffe, Spitroast Ruffe

A small yellow fish.

### How to Acquire

You must have the Fishing skill and a fully equipped Bushcraft Fishingpole or Primitive Casting-Rod in order to catch fish. Use the menu action to cast your line into a body of water.

### How to Use

Ruffe can be prepared several ways, including spitroasting, drying, or cooking over an open fire.

Once caught and butchered, a Ruffe produces 1 Raw Filet.

Use the menu action Craft > Food > Roast Meat to cook a Filet of Ruffe over a fire, which creates Roast Filet of Ruffe. (The FEP table for this item did not survive extraction.)

Drying a Filet of Ruffe on a Drying Frame takes around 66 RL hours and produces Dried Filet of Ruffe. (FEP table lost in extraction.)

Spitroasting a whole Ruffe over a fire using a Roasting Spit creates 2 pieces of Spitroast Ruffe. (FEP table lost in extraction.)

### Quality

• The quality of the caught fish is determined primarily by map data. The Q on your fishing gear can only reduce fish quality if it is lower than the fish's.
• The quality of roasted fish is softcapped by $\sqrt{Perception \times Survival}$.
• The quality of dried fish is equal to the quality of the fish and is not affected by the character or the drying frame.
• The quality of spitroast fish is equal to $\frac{q_{RoastingSpit} + q_{Fish}}{2}$ and is softcapped by the Survival of the character who places the fish on the spit.
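A minimal sketch of the quality arithmetic above. The page does not spell out the softcap rule itself, so the common assumption that a softcap pulls the result down to the geometric mean of value and cap is used here; treat `softcap` as illustrative, not as the game's exact formula:

```python
import math

def softcap(value, cap):
    # Assumed softcap rule: if the capping stat is below the value,
    # pull the result down to the geometric mean of the two.
    return value if cap >= value else math.sqrt(value * cap)

def spitroast_quality(q_spit, q_fish, survival):
    # Spitroast quality: average of spit and fish quality, softcapped
    # by the Survival of the character placing the fish on the spit.
    return softcap((q_spit + q_fish) / 2, survival)

def roast_quality(q_fish, perception, survival):
    # Roasted fish is softcapped by sqrt(Perception * Survival).
    return softcap(q_fish, math.sqrt(perception * survival))
```

With these definitions, a q20 spit and q10 fish give spitroast quality 15 when Survival is high enough, and the cap only matters when it falls below the averaged quality.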
https://cs.meta.stackexchange.com/questions/190/the-computer-architecture-tag
# The computer-architecture tag

I would use it for questions about designing a processor or other electronic circuit, and possibly for questions about the semantics of machine instructions. I would not use it for questions about the influence of processor differences on the behavior of algorithms, e.g. on questions about cache use (1 2).

Thoughts?

• -1, disagree. See comment on Kaveh's answer for reasoning. – Patrick87 Mar 26 '12 at 12:47

I think the tag is OK. Questions about optimizing programs for a specific computer architecture are also part of computer architecture (e.g. see this). More generally, the goal of tags, as I understand it, is to help categorize questions and help users in finding/searching them. So if a user with knowledge of an area is likely to search for or answer the question, then, IMHO, it is OK to use that tag for the question, particularly for high-level subject classification tags like: discrete-mathematics, combinatorics, complexity-theory, algorithms, data-structures, formal-languages (and automata theory), logic, computability, information-theory, numerical-analysis, symbolic-computation, cryptography, security, artificial-intelligence, machine-learning, computer-vision, computational-linguistics, natural-language-processing, knowledge-representation-reasoning, robotics, computational-geometry, computational-engineering (and science), computational-finance, databases, information-retrieval, distributed-computing, parallel-computing, neural-computing, evolutionary-computing, algorithmic-game-theory, computer-graphics, computer-architecture (or maybe hardware-architecture), computer-networks (and internet architecture), operating-systems, information-networks (and social-networks), human-computer-interaction, multimedia, sound, programming-languages, software-engineering, ...
• Those two questions are only very tangentially related to computer architecture, and cpu-cache already embodies the connection. I wouldn't think to search for these questions under computer-architecture. Not every question needs to have every subject classification tag that may be relevant (we'd need far more than 5 tags if this was the intent). – Gilles 'SO- stop being evil' Mar 26 '12 at 10:48 • +1, agree with Kaveh. @Gilles While it's true that a question needn't be tagged with every applicable subject descriptor, cache design is one of the bread-and-butter areas of computer architecture... and the design of caches is inherently, fundamentally and inextricably linked to the memory access patterns of algorithms which access cache. I, for one, don't see any harm in allowing users to use the computer-architecture tag here; naturally, if users prefer the cpu-cache tag, let them use that. I say we see what the community decides, rather than prescribe behavior in this case. – Patrick87 Mar 26 '12 at 12:42 • As an addendum: while it's true that the design of algorithms assumes a certain underlying memory system and performance model, it is equally true that answering a question about the design of an algorithm for such a model requires an understanding of the model, hence, of computer-architecture. When a user asks a question about how to compute the complexity of Kruskal's MST-finding algorithm, should we disallow the tag graph-algorithms? You don't need to know anything about graph algorithms to answer that, provided that enough explanation is given in the question. – Patrick87 Mar 26 '12 at 12:47 • @Patrick87 Would you say an assumption like "The $k$ last accessed values are available at time cost $c_1$ while all others cost $c_2$." relates to computer architecture? I don't, and I don't need more to do a cache-aware runtime analysis. – Raphael Mod Mar 26 '12 at 12:51 • @Raphael Of course I would say that the statement you provide is related to computer architecture.
It implies a certain (perhaps idealized) design of a cache-and-memory system, the performance of which influences application performance. Cache is one of the bread-and-butter areas of computer architecture; if answering a question about an algorithm's performance mentions caches, pipelines, I/O, etc. at all, it is a question related to computer architecture (about how computer architecture affects performance). You don't need to know about graph algorithms to analyze Kruskal's method, either... – Patrick87 Mar 26 '12 at 12:58 • @Patrick87: Well, I think that view is too broad. Tags should signify where the principal concerns of a question belong. Otherwise we can put any tag on any question, as most topics are related via not too many hops. – Raphael Mod Mar 26 '12 at 13:22 • @Patrick87 “I say we see what the community decides, rather than prescribe behavior in this case.” The community is in dispute; that's what we're here to resolve. I disagree with Kaveh's edits to add the tag on those two questions (one of them mine). – Gilles 'SO- stop being evil' Mar 26 '12 at 16:23 • @Gilles Yes, the community is in dispute, but my concern is in limiting the scope of the dispute from one which artificially limits the scope of the computer-architecture tag to one which involves a specific, disputed instance of retagging. I find the former scope much too broad, while I find the latter acceptable. Regarding the latter, I still tend to agree with Kaveh; if the question asks about cache, pipelines or I/O, hardware-software interaction, system-dependent performance, etc., I see nothing wrong with adding a computer-architecture tag. – Patrick87 Mar 26 '12 at 17:49 • @Gilles Frankly, I find Kaveh's adding the programming-languages tag much more objectionable than his adding the computer-architecture tag. – Patrick87 Mar 26 '12 at 17:50 • @Gilles, cpu-cache is too specific to be one of the area tags. It is a topic in computer-architecture and operating-systems.
The goal of category tags is to have a small number of tags that will cover all questions so people can use them to filter their searches effectively. This is also what is enforced in many other places, like arXiv: every paper has to have one of the area tags. IMO, if you consider the cpu-cache tag fine for the question, then the computer-architecture tag should also be fine, since it is a superset of it. – Kaveh Mar 26 '12 at 17:56 • @Patrick87, I added that because of the reference to garbage collectors; garbage collectors are typically studied in the programming-languages area, and the reasoning is similar to the one I posted in the comment above. – Kaveh Mar 26 '12 at 17:57 • @Raphael: "Tags should signify where the principal concerns of a question belong", I agree, but it doesn't rule out other uses for tags. If you mean that the tags should be used only in that way then I have to disagree. IMO, the category tags (which are not too many) should also be used to categorize the question. The tags are not for the OP (who already knows them and can explain in the question) but mainly for others, as a way of filtering/searching for questions they are interested in. This will become clearer later when the number of users – Kaveh Mar 26 '12 at 18:02 • [cont.] reaches tens of thousands (hopefully) and the number of questions/answers per day will be too many for people to read all of them and a question will not stay on the main page for more than an hour. At that point an effective way of filtering questions will be needed, and having these category tags, where each question belongs to at least one of them, will help a lot. Take a look at Mathematics to get a feeling for a site where a question doesn't stay on the front page for more than 2 hours. – Kaveh Mar 26 '12 at 18:06 • [cont.] I am hoping and expecting that Computer Science will be even larger; most people will want to read questions only in specific topics.
Having a reasonable and short list of category tags covering all questions will help a lot. – Kaveh Mar 26 '12 at 18:13 • @Kaveh A tag can be appropriate without a more general tag being appropriate. Tags indicate what a question is about; a question about caches is not necessarily about computer architecture (if you like, it's about direct relationships, not the transitive closure thereof). We only have 5 tags here, remember; often there will be more than 5 applicable category tags by your reasoning, so it can't work. – Gilles 'SO- stop being evil' Mar 26 '12 at 19:15 When I asked my question, Research on evaluating the performance of cache-obliviousness in practice, I was initially looking for a "cache-oblivious" tag. Such a tag exists on CSTheory, for example. I'd have to agree with Raphael's answer. For example, cache-obliviousness is not about computer architectures. It is about analyzing algorithms and data structures on a hypothetical, idealized model of computation. In fact, one of the main points about being cache-oblivious is that you don't care or need to know the details of the underlying memory hierarchy. The sizes of the caches etc. can be whatever; they do not affect our asymptotic analysis there. In this sense, my question is not really about real caches or architectures, which are something more concrete. In this way I also agree with the OP. Maybe a more fitting tag for my question would be one about benchmarking. I only wonder if such a tag exists. • I looked at the second paper; it is categorized under experimental-algorithms on the ACM DL. I still think that computer-architecture is appropriate and can help people who might be interested in the question find it, but since you are the author, feel free to remove it. – Kaveh Mar 26 '12 at 19:54 • @Kaveh True, you have a good point. I guess at least for the time being my question could also be tagged with computer-architecture.
– Juho Mar 26 '12 at 20:57 I agree with Gilles' sentiment. Algorithm analysis uses models of memory hierarchies and does (usually) not refer to explicit computer architectures. Therefore, computer-architecture should not be used for questions on algorithms regarding cache efficiency. Regarding searches, if I clicked on computer-architecture I would not expect to find questions on algorithms. I want questions that discuss (dis)advantages of computer architecture decisions, not algorithms. • -1, disagree. See comment on Kaveh's answer for reasoning. – Patrick87 Mar 26 '12 at 12:47 • Raphael, optimizing algorithms to use the features of specific hardware is part of computer-architecture, and cache is one such feature. IMHO, the view expressed in the answer is too narrow. On the other hand, people working in algorithms are also interested in this topic. A question can be in both computer-architecture and algorithms; they are not disjoint topics. – Kaveh Mar 26 '12 at 17:50 • The point is that you do not have to consider specific hardware (architecture) in order to analyse an algorithm w.r.t. caches; all you need is the assumption that there is some kind of limited-capacity short-term memory. – Raphael Mod Mar 26 '12 at 17:52 • @Raphael, it can be the case, but the question in its current form is not phrased that way; it explicitly refers to cpu-caches. If it were phrased as a theoretical question then I would probably agree with you, but at the moment it is asking about the performance in practice. In any case, I think it is very likely that a person working in computer architecture would be interested and could answer the question. – Kaveh Mar 26 '12 at 18:29
http://www.uvm.edu/~cbcafier/uvm/cs166/
I am teaching assistant for CS 166 — Principles of Cybersecurity — fall semester 2019, under Prof J. Eddy. My office hours for CS 166 are • Monday 1:00 PM to 2:00 PM • Tuesday 9:00 AM to 10:00 AM or by appointment. My office is E332 Innovation Hall, but office hours will be in E338, the common area immediately outside E332. Please feel free to email with questions to clayton dot cafiero at (you know the rest)! Topics - This course builds a strong foundation in the principles of cybersecurity. Topics include an introduction to cybersecurity, fundamental security design principles, programming flaws, malicious code, web and database security; as well as common cryptography algorithms and hashing functions. The course concludes with an overview of computer networks and common network threat vectors. • ## How to read a specification Here’s what Prof Eddy wrote. 1) The program should display a welcome message and prompt the user for a username. Create a simulated buffer overflow condition by allowing a user to input more data than the size of the allocated memory (causing the program to crash). 2) Implement input validation to mitigate the simulated overflow vulnerability. Check that the username entered has a minimum length of 8 characters and a maximum of 25 characters. If the user enters a username length outside those limits, return an error message and prompt the user to re-enter the username. Also, recall that Prof Eddy supplied a video accompanying the spec, which gave a clear example of one way to simulate the error. ### Program #1 • Your program should display a welcome message and prompt for a username. Simple. • There’s no mention of password or any other input. Just username. No need to prompt for multiple usernames, passwords, mother’s maiden name, last four digits of your SSN, color of the sky, or anything else. • If input exceeds size of allocated memory, simulate a buffer overflow error! 
(I know, this is a little kludgey, but it is what it is, and we should all understand it is to illustrate a point.) There are many ways to do this; the video shows you a very simple one. Some of you got creative (in a positive way) and that’s OK; others didn’t simulate an error at all (insert sad-face emoji here). • The specification isn’t as explicit as it might be in this case, but for best comparison between vulnerable and fixed code, generally it’s best to keep as many things constant as possible. So since you were asked to permit usernames of length 8-25 characters, this should have (implicitly) been a constraint for program #1, with usernames with fewer than eight characters rejected normally, and usernames with more than 25 characters generating an error. ### Program #2 • Ideally, the preallocated structure that should have been used in program #1 should have been left in place, and input validation should have ensured that nothing bigger than that structure would be accepted as input. (I did not deduct points if that preallocated structure was removed in program #2.) In summary, please read assignment specifications carefully, and think how best to demonstrate that you did indeed fix something that was broken! While most folks did OK, some folks didn’t satisfy basic requirements, and some folks made extra work for themselves. As we gain more experience, I will be a little more strict about how assignments are graded viz. with regard to specification. • I work (now part-time) for a company that has designed and built an equity research publishing and distribution platform used by numerous equity research firms (usually smaller “boutique” shops of between 40 and 200 analysts). This platform is web-based, and we provide customized environments for each client. Our clients sell their research, commonly on a subscription model, to hedge funds, equity funds, and institutional investors. Security is of paramount importance. 
Bad actors might seek to access research for which they have not paid, and then trade on this information. On Tuesday, 17 September, one of our clients was subject to a few dozen attempts at SQL injection attack. The attacks failed because the application is hardened against such attacks, but our monitoring system detected the attempts and created alerts. Here’s a screenshot from our monitoring system: You can see in the screenshot that the perpetrator tried something very similar to what’s explained in this course: '"2 AND "x"="x' The idea behind the attack is that the string ‘“2 AND “x”=”x’ is appended to the SQL query, where a search term might be. The attacker is trying to take a query like SELECT * FROM table_foo WHERE record_id = 2 and get it to execute something like this: SELECT * FROM table_foo WHERE record_id = 2 AND "x"="x" Since the last clause always evaluates to True, such a query might return all records. We harden our web applications in many ways so this kind of attack won’t ever work. In lab you will harden a demo application by “sanitizing” input from the user — making sure that everything you use to build a SQL query is safe. There were other attempts as well, all using the same basic idea. See if you can understand what’s going on in each. Imagine these strings appended to a query template ending in “WHERE some_column =”. '2 AND 1=1' '299999" union select unhex(hex(version())) -- "x"="x' "2' or (1,2)=(select*from(select name_const(CHAR(111,108,111,108,111,115,104,101,114),1), name_const(CHAR(111,108,111,108,111,115,104,101,114),1))a) -- 'x'='x" Anyhow, it was a simple matter to track the attacks to their origin. This turned out to be from a managed hosting service in Singapore. The offending IP address was reported and blocked. Please feel free to come by during office hours if you find this interesting, or visit the OWASP website’s page on SQL injection. 
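The hardening the lab asks for boils down to never building SQL by pasting user input into the query string. The platform's actual stack isn't stated here, so as a stand-in, a minimal sketch with Python's standard-library `sqlite3` and its `?` placeholder (table and column names invented for illustration):

```python
import sqlite3

# In-memory demo database with a table shaped like the queries above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_foo (record_id INTEGER, payload TEXT)")
conn.executemany("INSERT INTO table_foo VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "c")])

def lookup(user_input):
    # The '?' placeholder binds user_input as a single value, so a payload
    # such as '2 AND "x"="x"' is compared literally instead of being parsed
    # as SQL -- unlike string concatenation, which is what the attack relies on.
    return conn.execute(
        "SELECT * FROM table_foo WHERE record_id = ?", (user_input,)
    ).fetchall()
```

Here `lookup(2)` returns only the matching row, while `lookup('2 AND "x"="x"')` matches nothing, because the entire attack string is bound as one value rather than spliced into the query text.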
And here’s a little something on-topic from Randall Munroe’s XKCD: • Most of you did a pretty good job with the functional requirements of the application. There were more than a few points taken off for not following the assignment instructions. I implore you: please read the instructions carefully and if you have questions, see me, Prof Eddy, or one of the other TAs. This is a silly way to lose points. Some other general feedback: • While many used the with open idiom to open your data file, many did not. This idiom is pretty standard and I encourage its use. • Some of you elected not to use the CSV module – which is part of the Python standard library. That’s OK if that’s your decision, but I encourage you to use standard library modules where appropriate. It’s good practice, it helps you write cleaner code, and you don’t have to reinvent the wheel. (N.B. I refer only to standard library modules and not third-party modules.) The standard library is substantial (even after many years programming in Python I couldn’t claim to be familiar with every module.) The point is: there’s a lot out there at your disposal. Use it! • Apropos of the above, if you find yourself writing a parser then you’ve missed an opportunity (unless, of course, your task is to write a parser). • If you find yourself repeating blocks of code, or blocks of very similar code, chances are near 100% that you’ve made an error in design. Take a step back and assess. See: Code Smells on Wikipedia and notice that duplicated code is the first item on the list for application-level smells! • Generally, I frown on mixing file I/O and logic, though I understand your reasons for doing so in this assignment and did not deduct points. Even in an environment where you’re using a database and not the file system for storing your data, it’s good practice to avoid having to go back to the well any more than is necessary. • Give thought to appropriate data structures for your application. 
This can save you much heartache and frustration. • Always provide instructions for testing your code in a separate file. • Finally, since this is a course on cybersecurity, be watchful. Some of you wrote code that introduced vulnerabilities in your application. Always be on the lookout. HTH • Coding Style - Rule #1: Don't invent your own coding style. With Python, there are common guidelines and best practices for just about everything. I encourage you to read PEP 8 - Style Guide for Python Code. You might also consider checking your code using pylint or pep8, both of which are installable via `pip`. Most likely, I won't take off points for not following PEP 8 unless it interferes with readability; nevertheless, please consider this suggestion. By the way, you'll notice right near the start of PEP 8 a section entitled "A Foolish Consistency is the Hobgoblin of Little Minds" (perhaps not ironically, this quotation, from Ralph Waldo Emerson's famous essay "Self-Reliance", is not properly attributed to its author). While this section provides good advice to the experienced pythonista, it's best while learning to stick to the standards to the greatest reasonable extent. (I've been coding in Python for almost 20 years now, and I stick to these standards, deviating only in specific cases with solid justification and not on a whim.)
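The earlier bullets recommend the `with open` idiom and the standard-library `csv` module together; a minimal sketch (the file name and columns are invented for illustration — the assignment's actual data layout isn't shown here):

```python
import csv

def load_users(path):
    # 'with' guarantees the file is closed even if an exception is raised,
    # and csv.DictReader handles quoting and embedded commas, so there is
    # no need to hand-roll a parser.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))
```

Each row comes back as a dict keyed by the header line, which also pushes you toward an appropriate data structure instead of index-juggling lists.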
http://www.csam.or.kr/journal/view.html?uid=1787&&vmd=Full
A note on the test for the covariance matrix under normality

Hyo-Il Park
Department of Statistics, Cheongju University, Korea
Correspondence to: Department of Statistics, Cheongju University, 298 Dae-Sung Ro, Cheongwon-gu, Cheongju-si, Chungcheongbuk-do 28503, Korea. E-mail: hipark@cju.ac.kr
Received October 11, 2017; Revised December 4, 2017; Accepted December 5, 2017.

Abstract

In this study, we consider the likelihood ratio test for the covariance matrix of multivariate normal data. For this, we propose a method for obtaining null distributions of the likelihood ratio statistics by a Monte-Carlo approach when it is difficult to derive the exact null distributions theoretically. We then compare the performance and precision of the distributions obtained by asymptotic normality and by the Monte-Carlo method through a simulation study. Finally, we discuss some interesting features related to the likelihood ratio test for the covariance matrix and the Monte-Carlo method for obtaining null distributions of the likelihood ratio statistics.

Keywords: asymptotic normality, likelihood ratio test, Monte-Carlo method, multivariate data

1. Introduction

Inference about the scale parameter, or variance, in the univariate case has a long list of results in the literature. In particular, the likelihood ratio (LR) procedure for testing hypotheses about the variance is fully developed, and its efficiency and uniqueness under normality have been verified. For example, the LR test statistics follow chi-square distributions exactly under the null hypothesis, and the LR tests themselves are optimal in the sense of power. For the multivariate case under multivariate normality, however, only asymptotic procedures are available, even though statistics for the covariance matrix have been derived by applying the LR principle.
The reason for this phenomenon may come from the fact that the distributional theories for the matrix-valued statistics have not been fully investigated. Any prospect for the theoretic development would also not be seen in any near future because of complexity or non-existence of distributions for matrix-valued statistics. For this reason, several modifications with high dimensional cases have been reported (Bai et al., 2009; Cai and Ma, 2013; Gupta and Bodnar, 2014) or applications of the bootstrap method, which is a re-sampling method, have been applied (Beran and Srivastava, 1985). Pinto and Mingoti (2015) also performed a comparison study for the asymptotic LR test with the VMAX proposed by Costa and Machado (2008). For the test procedure of the covariance matrix, the LR functions have been mainly expressed with the corresponding eigenvalues of the sample covariance matrix. Even though the eigenvalues of a sample covariance matrix may consist of a vector instead of a matrix, discussions of the distributions and their properties for the LR functions or related statistics have not been fully investigated or justified in a theoretic manner. All the results up to this date, have been confined only to limiting distributions based on log likelihood arguments. Therefore we may question how close the limiting distributions are to the exact ones if they are obtainable or whether the conclusions based on the limiting distributions would be reliable when the p-values are too close to a significance level, say 0.05. For those reasons, it would be necessary to achieve a sensible and reasonable method to obtain the null distributions of LR functions or related statistics. In the multivariate analysis under normality, the distributions of LR statistics have been fully studied and tabulated systematically for many cases. 
However when it would be difficult to derive the exact distributions theoretically, one may consider deriving the limiting distributions asymptotically which may be obtained using log likelihood arguments. However, one may obtain null distributions using one of the popular re-sampling methods such as bootstrap or permutation methods that are heavily dependent on the computer power and its facilities. Along with this, one may also obtain the null distribution of an LR statistic, LR, in the following idea and rationale. For this discussion, let E0(LR) be the expectation of LR under the null hypothesis. Since we can generate pseudo random vectors by the scenario of the null hypothesis, we may consider the computed quantity of LR with the generated pseudo random vectors an unbiased estimator of E0(LR) under the null hypothesis. If we iterate this process many times, say, M times, then we may consider having a sample of size M from a population with mean, E0(LR) but unknown distribution function, G, say, under the null hypothesis. From this sample, one may construct an empirical distribution function, ĜM, which can be considered an estimator of G. Since ĜM is a consistent estimator of G from the Glivenko-Cantelli lemma (Chung, 2001), one can estimate consistently a quantile or critical value for any given probability or significance level. We will call this process to obtain null distribution of an LR statistic, the Monte-Carlo (MC) method. In this research, we consider obtaining null distributions of the LR functions for testing the covariance matrices under the multivariate normal distribution and compare them with the limiting distributions. For this purpose, the rest of this paper will be organized in the following order. In the next section, we review the LR tests with limiting distributions in some detail and propose the MC method to obtain quantiles as critical values for the LR functions. 
Then we illustrate the use of the distributions with numerical examples on deciding the structure of covariance matrices, and compare the precision of the two methods by obtaining empirical powers through a simulation study in Section 3. In Section 4, we discuss some interesting features of the LR functions and the MC method.

2. Likelihood ratio test for the covariance matrix

Let X1, …, Xn be a random sample of q-variate column vectors of size n from a q-variate normal distribution with mean vector μ and covariance matrix Σ. It is of interest to test

$$H_0: \Sigma = \Sigma_0,$$

where the mean vector μ is unknown but Σ0 is a pre-specified positive definite q × q matrix. We note that without loss of generality we may assume Σ0 = I, since $X\Sigma_0^{-1/2}$ has I as its covariance matrix under H0 : Σ = Σ0, where I is the q × q identity matrix. It is well-known that

$$S_n = \frac{1}{n}\sum_{i=1}^{n}(X_i-\bar{X})(X_i-\bar{X})^T$$

is the maximum likelihood estimator of Σ, where X̄ is the sample mean vector, the maximum likelihood estimator of μ, and (·)T denotes the transpose of a vector or matrix. We also assume that Sn is positive definite for each n. To discuss the LR function L(Σ; X1, …, Xn) for testing H0 : Σ = I, let Λjn be the jth eigenvalue of Sn, j = 1, …, q. Then, writing |A| and TR(A) for the determinant and trace of a matrix A, the LR function for testing H0 : Σ = I against H1 : Σ ≠ I can be expressed as

$$L(\Sigma;X_1,\ldots,X_n) = \frac{\sup\left\{\prod_{i=1}^{n} f(X_i;\mu,\Sigma \mid H_0)\right\}}{\sup\left\{\prod_{i=1}^{n} f(X_i;\mu,\Sigma \mid H_0\cup H_1)\right\}} = |S_n|^{\frac{n}{2}}\exp\left[-\frac{n}{2}\mathrm{TR}(S_n)+\frac{qn}{2}\right] = \left(\prod_{j=1}^{q}\Lambda_{jn}\right)^{\frac{n}{2}}\exp\left[-\frac{n}{2}\sum_{j=1}^{q}\Lambda_{jn}+\frac{qn}{2}\right] = \prod_{j=1}^{q}\left\{\Lambda_{jn}^{\frac{n}{2}}\exp\left[-\frac{n}{2}\Lambda_{jn}+\frac{n}{2}\right]\right\}, \tag{2.1}$$

where f denotes the q-variate normal probability density function. The testing rule is then to reject H0 : Σ = I in favor of H1 : Σ ≠ I for small positive values of L(Σ; X1, …, Xn), in light of the LR principle.
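From the product form above, the statistic −2 log L reduces to n(TR(Sn) − log|Sn| − q). A minimal numerical sketch of this computation (Python with NumPy; not the SAS/IML code used in the paper):

```python
import numpy as np

def lr_statistic(X):
    """Compute l = -2 log L for H0: Sigma = I from an (n, q) data matrix X.

    Using |S_n| = prod(Lambda_jn) and TR(S_n) = sum(Lambda_jn), the
    statistic reduces to n * (TR(S_n) - log|S_n| - q).
    """
    n, q = X.shape
    Xc = X - X.mean(axis=0)              # center at the sample mean X-bar
    S = Xc.T @ Xc / n                    # MLE of Sigma (divisor n, not n - 1)
    _, logdet = np.linalg.slogdet(S)     # log|S_n|, numerically stable
    return n * (np.trace(S) - logdet - q)
```

For a general Σ0, one would first whiten the data by post-multiplying X with Σ0^(−1/2), following the reduction to Σ0 = I described in the text.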
To complete the multivariate test of H0 : Σ = I, we need a null distribution for the LR function in any of the forms listed above. However, no exact null distribution for any form of the LR statistic has been reported, and only an asymptotic result based on log likelihood arguments is available. In the following, we state a limiting distribution for H0 : Σ = I; for the proof, refer to Silvey (1975) or Mardia et al. (1979).

### Lemma 1

The distribution of

$$l = -2\log L(\Sigma;X_1,\ldots,X_n)$$

is asymptotically chi-square with q(q + 1)/2 degrees of freedom (df) under H0 : Σ = I.

The testing rule based on l is then to reject H0 : Σ = I for large values of l. One may thus complete the test asymptotically by invoking a table of the chi-square distributions. One may also take the MC approach to obtaining the null distribution of l, in the following order.

- (I) Generate pseudo random normal vectors of size n from Nq(0, I).
- (II) Compute the LR statistic l.
- (III) Iterate (I) and (II) M times and order the M values of l.
- (IV) From the ordered values of l, obtain (or estimate) the pth quantile for any given probability p.
- (V) Repeat (I) to (IV) K times, obtain K pth quantiles, and average them.

One can then carry out the LR test by obtaining critical values for any given significance levels, or p-values, using the procedure (I) to (V). To investigate the behavior of quantiles obtained from the MC method and compare them with quantiles of the chi-square distributions, which are the limiting distributions, we have obtained quantiles (or critical values) of l, the log likelihood ratio statistic, for the sample sizes 10, 15, 20, 25, and 30 and the probabilities (or significance levels) 0.01, 0.05, 0.1, 0.9, 0.95, and 0.99, choosing M = 100,000 and K = 2,000, for N2(0, I) and N3(0, I). The results are tabulated in Tables 1 and 2 for N2(0, I) and N3(0, I), respectively.
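Steps (I) to (V) can be sketched as follows. This is an illustrative Python version, not the SAS/IML code used in the paper, and the small M and K below are chosen only so the demonstration runs quickly (the paper uses M = 100,000 and K = 2,000):

```python
import numpy as np

def lr_statistic(X):
    # l = -2 log L for H0: Sigma = I (see the form of L above)
    n, q = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n
    _, logdet = np.linalg.slogdet(S)
    return n * (np.trace(S) - logdet - q)

def mc_quantile(n, q, p, M=2000, K=3, seed=0):
    """Steps (I)-(V): average K Monte-Carlo estimates of the pth quantile of l."""
    rng = np.random.default_rng(seed)
    quantiles = []
    for _ in range(K):                                    # step (V)
        ls = [lr_statistic(rng.standard_normal((n, q)))   # steps (I)-(II)
              for _ in range(M)]                          # step (III)
        quantiles.append(np.quantile(ls, p))              # step (IV)
    return float(np.mean(quantiles))
```

With larger M and K, mc_quantile(10, 2, 0.95) settles near the 9.7171 reported in Table 1, noticeably above the limiting chi-square quantile 7.8147.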
We also include the quantiles $\chi_p^2(3)$ and $\chi_p^2(6)$ of the chi-square distributions with 3 and 6 dfs to compare them with the quantiles obtained by the MC method. We note that the quantiles of l obtained from the MC method approach $\chi_p^2(3)$ and $\chi_p^2(6)$ as the sample size increases. One may therefore conclude that the quantiles obtained from the MC method should be used especially when sample sizes are small; the simulation study will confirm this observation later. We also note that as the dimension q increases, the difference between the quantiles from the MC method and from the chi-square distributions tends to widen. All computations were conducted using the PC version of SAS/IML. We may then finish the test of H0 : Σ = I by obtaining a critical value for l from the MC method at the given significance level. It is of interest to compare the performance and precision of the two LR tests; this is accomplished in the next section with a simulation study. In the tables of the next section, MC denotes the LR test based on the Monte-Carlo method and AS denotes the LR test applying the chi-square distribution asymptotically. We begin the next section with some numerical examples.

3. Examples and a simulation study

We first illustrate the two tests, MC and AS, with the head data of brothers (Frets, 1921) summarized in Mardia et al. (1979) and the turtles data of Jolicoeur and Mosimann (1960). The brothers data set is bivariate with sample size 25; the turtles data set is tri-variate with sample size 24. We have therefore used the chi-square distributions with 3 and 6 dfs for the AS tests. For the brothers data, one could use the null distribution of l obtained from the MC method in Table 1 for n = 25; however, we applied the MC method again to obtain the p-values for the comparison between the two LR tests in both cases. Mardia et al.
(1979) were interested in investigating the structure of the covariance matrix Σ for the head lengths of first and second sons through a testing approach for whether they are independent. However, their conclusions were tentative, since the p-values they could obtain were asymptotic rather than exact. In this study we also investigate the structure of the covariance matrix Σ, using the two LR tests for the following two hypotheses:

$$H_{01}: \begin{pmatrix} \sigma_1^2 & \sigma_{12} \\ \sigma_{21} & \sigma_2^2 \end{pmatrix} = \begin{pmatrix} 100 & 50 \\ 50 & 100 \end{pmatrix} \quad\text{and}\quad H_{02}: \begin{pmatrix} \sigma_1^2 & \sigma_{12} \\ \sigma_{21} & \sigma_2^2 \end{pmatrix} = \begin{pmatrix} 100 & 0 \\ 0 & 100 \end{pmatrix}.$$

We obtained the p-values of the MC and AS tests for the two null hypotheses H01 and H02 and tabulated them in Tables 3 and 4. The two tests, MC and AS, show similar patterns in the p-values for each case. One may therefore choose H01 for the covariance matrix Σ. As another example, 24 turtles were collected, and for each turtle the carapace dimensions were measured in three mutually perpendicular directions of space: length, maximum width, and height. For more detailed definitions and explanations of these measurements, refer to Jolicoeur and Mosimann (1960). Each specimen is therefore represented in this study by a set of three measurements. Originally, Jolicoeur and Mosimann (1960) were interested in a principal component analysis of the three variables. In this study, however, we are interested in detecting the structure of the covariance matrix, as described by the two null hypotheses H03 and H04:

$$H_{03}: \begin{pmatrix} \sigma_1^2 & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_2^2 & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_3^2 \end{pmatrix} = \begin{pmatrix} 140 & 75 & 35 \\ 75 & 50 & 20 \\ 35 & 20 & 10 \end{pmatrix} \quad\text{and}\quad H_{04}: \begin{pmatrix} \sigma_1^2 & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_2^2 & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_3^2 \end{pmatrix} = \begin{pmatrix} 140 & 0 & 0 \\ 0 & 50 & 0 \\ 0 & 0 & 10 \end{pmatrix}.$$

We obtained 6.2096584 and 110.33072 as the values of l for H03 and H04, respectively. The corresponding p-values, obtained by the two methods, are summarized in Tables 5 and 6. The tables show strong evidence for H03 as the covariance matrix.
We now compare the performance and precision of the two LR tests, MC and AS, by obtaining empirical powers through a simulation study under several scenarios for the bivariate normal case. We generated bivariate normal pseudo-random vectors with a zero mean vector, varying the components of the covariance matrix, for six sample sizes, 5, 10, 15, 20, 25, and 30, in order to inspect the behavior of the two tests for small samples. In the tables, (1, 1, 0) means that $\sigma_1^2 = \sigma_2^2 = 1$ and $\sigma_{12} = 0$, which is I. We applied the chi-square distribution with 3 df for the AS test, since we deal with the bivariate case. In Table 7 we vary only $\sigma_{12}$, with $\sigma_1^2 = \sigma_2^2 = 1$. In Table 8 we vary $\sigma_2^2$, with $\sigma_1^2 = 1$ and $\sigma_{12} = 0$. Finally, in Table 9 we vary both variances, both independently ($\sigma_{12} = 0$) and dependently ($\sigma_{12} = 0.5$). We chose 0.05 as the nominal significance level in all cases, with 10,000 iterations for each simulation. All computations were conducted with the PC version of SAS/IML. We first note from Table 7 that the results are almost symmetric when the covariance takes values of opposite sign; for this reason, we consider only positive covariances in Tables 8 and 9. From Table 7, the MC test achieves its nominal significance level well, while the AS test always exceeds the nominal significance level, in all cases. The reason may be that the quantiles of the chi-square distribution with 3 df are lower than those from the MC method for all sample sizes, as observed in Table 1. As the sample size increases, however, the empirical significance levels of the AS test approach the nominal one. It is therefore recommended to use quantiles obtained from the MC method for small samples.
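The power comparison just described can be sketched as follows. This is an illustrative Python approximation of the study, not the SAS/IML code actually used; the AS cutoff is the limiting quantile 7.8147 of the chi-square distribution with 3 df, and 8.3793 is the MC quantile for n = 30 from Table 1:

```python
import numpy as np

def lr_statistic(X):
    # l = -2 log L for H0: Sigma = I
    n, q = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n
    _, logdet = np.linalg.slogdet(S)
    return n * (np.trace(S) - logdet - q)

def empirical_power(n, cov, crit, reps=2000, seed=0):
    """Fraction of simulated samples from N2(0, cov) with l above crit."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(cov)          # transform N(0, I) draws to N(0, cov)
    hits = sum(lr_statistic(rng.standard_normal((n, 2)) @ L.T) > crit
               for _ in range(reps))
    return hits / reps

# Alternative (sigma1^2, sigma2^2, sigma12) = (1, 1, 0.8) with n = 30, cf. Table 7
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
power_as = empirical_power(30, cov, 7.8147)   # chi-square cutoff
power_mc = empirical_power(30, cov, 8.3793)   # MC cutoff for n = 30
```

Because the MC cutoff is the larger of the two, the MC test rejects a subset of the cases the AS test rejects, reproducing the ordering of the empirical powers observed in the tables.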
In Table 8, one may observe a reversal phenomenon in the empirical powers of the AS test: even as the sample size increases, the empirical power decreases for some cases. In all the tables, as the difference between the two variances increases and/or the covariance approaches 1, the empirical powers increase. Finally, we note that the empirical powers of MC are all lower than those of AS, as expected, since the quantiles of l approach $\chi_{0.05}^2(3)$ from above.

4. Concluding remarks

In (2.1), we expressed the LR function as a product of q LR functions of the individual eigenvalues of the sample covariance matrix Sn, and noted that (2.1) is a product over the Λjn's, which are independent and distributed as chi-square with n − 1 df. Using this expression, we tried to obtain a reasonable test procedure for the covariance matrices based on the Λjn's, but simply failed. The failure may stem from the fact that the relation between a matrix and its eigenvalues has not been fully worked out, in the sense of which eigenvalue corresponds to which component of the matrix. It would therefore be salient to have a precise or reasonable relation between a matrix and its eigenvalues. In Tables 1 and 2, we have already noticed that as the sample size increases, the quantiles obtained from the MC method approach the limiting quantiles of the chi-square distribution. This is a standard phenomenon confirming large sample approximation theory in general. We therefore recommend applying the MC method when sample sizes are small; indeed, even for larger sample sizes, the computation time needed to obtain a distribution with the MC method is negligible. The MC method can be applied to statistics other than the LR statistics for the test of the covariance matrix, provided the null hypothesis is unambiguous and well-defined.
For example, Park (2017) used the MC method to obtain a null distribution of LR statistics for the multivariate simultaneous test. It would, however, be difficult to apply the MC method to a nonparametric test, since the class of population distributions under the null hypothesis is too broad to choose a specific one. One may also note that Kim and Cheon (2013) applied the MC method to estimate the posterior distribution in a Bayesian analysis. Finally, we note that the quantiles from the chi-square distribution in Table 1 are lower than those obtained by the MC method in all cases; this is why the empirical powers of the AS test in Tables 7 to 9 are higher than those of the MC test in all cases. One may also wonder why the empirical powers of the MC test are not exactly 0.0500 under H0 : Σ = I, despite using the same MC method to obtain the quantiles of l. The reason is that we used different seed numbers to generate the pseudo random vectors for each case.

TABLES

### Table 1

Quantiles (or critical values) of l for some selected probabilities (or significance levels) and sample sizes, for N2(0, I)

| Quantile | $\chi_p^2(3)$ | n = 10 | n = 15 | n = 20 | n = 25 | n = 30 |
|---|---|---|---|---|---|---|
| q0.01 | 0.1148 | 0.1438 | 0.1328 | 0.1279 | 0.1251 | 0.1232 |
| q0.05 | 0.3518 | 0.4402 | 0.4067 | 0.3917 | 0.3829 | 0.3775 |
| q0.10 | 0.5844 | 0.7310 | 0.6754 | 0.6505 | 0.6361 | 0.6270 |
| q0.90 | 6.2514 | 7.7819 | 7.2094 | 6.9489 | 6.8005 | 6.6736 |
| q0.95 | 7.8147 | 9.7171 | 9.0078 | 8.6839 | 8.4984 | 8.3793 |
| q0.99 | 11.3449 | 14.0750 | 13.0628 | 12.5967 | 12.3305 | 12.1624 |

### Table 2

Quantiles (or critical values) of l for some selected probabilities (or significance levels) and sample sizes, for N3(0, I)

| Quantile | $\chi_p^2(6)$ | n = 10 | n = 15 | n = 20 | n = 25 | n = 30 |
|---|---|---|---|---|---|---|
| q0.01 | 0.8721 | 1.1344 | 1.0315 | 0.9865 | 0.9611 | 0.9453 |
| q0.05 | 1.6354 | 2.1273 | 1.9342 | 1.8503 | 1.8028 | 1.7728 |
| q0.10 | 2.2041 | 2.8672 | 2.6068 | 2.4939 | 2.4295 | 2.3893 |
| q0.90 | 10.6446 | 13.8392 | 12.5819 | 12.0369 | 11.7311 | 11.5359 |
| q0.95 | 12.5916 | 16.3709 | 14.8826 | 14.2378 | 13.8763 | 13.6454 |
| q0.99 | 16.8119 | 21.8668 | 19.8689 | 19.0038 | 18.5257 | 18.2169 |

### Table 3

p-values for H01

| Test | p-value |
|---|---|
| MC | 0.2722 |
| AS | 0.2349 |

### Table 4

p-values for H02

| Test | p-value |
|---|---|
| MC | 0.0005 |
| AS | 0.0042 |

### Table 5

p-values for H03

| Test | p-value |
|---|---|
| MC | 0.4683 |
| AS | 0.4001 |

### Table 6

p-values for H04
| Test | p-value |
|---|---|
| MC | 0.0000 |
| AS | 0.0000 |

### Table 7

Empirical powers by varying covariance only; columns give ($\sigma_1^2, \sigma_2^2, \sigma_{12}$)

| Test | n | (1, 1, 0) | (1, 1, 0.2) | (1, 1, −0.2) | (1, 1, 0.5) | (1, 1, −0.5) | (1, 1, 0.8) | (1, 1, −0.8) |
|---|---|---|---|---|---|---|---|---|
| MC | 5 | 0.0637 | 0.0751 | 0.0736 | 0.1324 | 0.1350 | 0.3304 | 0.3341 |
| | 10 | 0.0503 | 0.0743 | 0.0711 | 0.2282 | 0.2168 | 0.7265 | 0.7283 |
| | 15 | 0.0493 | 0.0813 | 0.0824 | 0.3452 | 0.3419 | 0.9419 | 0.9383 |
| | 20 | 0.0500 | 0.0970 | 0.0906 | 0.4583 | 0.4556 | 0.9921 | 0.9924 |
| | 25 | 0.0510 | 0.1152 | 0.1120 | 0.5944 | 0.5842 | 0.9999 | 0.9996 |
| | 30 | 0.0490 | 0.1214 | 0.1229 | 0.6886 | 0.6810 | 1.0000 | 1.0000 |
| AS | 5 | 0.1839 | 0.1969 | 0.2000 | 0.2973 | 0.2924 | 0.6144 | 0.6158 |
| | 10 | 0.0994 | 0.1291 | 0.1276 | 0.3286 | 0.3201 | 0.8790 | 0.8821 |
| | 15 | 0.0806 | 0.1201 | 0.1209 | 0.4340 | 0.4287 | 0.9835 | 0.9823 |
| | 20 | 0.0723 | 0.1286 | 0.1255 | 0.5384 | 0.5359 | 0.9983 | 0.9975 |
| | 25 | 0.0673 | 0.1377 | 0.1356 | 0.6427 | 0.6282 | 0.9999 | 0.9999 |
| | 30 | 0.0626 | 0.1457 | 0.1457 | 0.7299 | 0.7237 | 1.0000 | 1.0000 |

### Table 8

Empirical powers by varying only one variance with 0 covariance; columns give ($\sigma_1^2, \sigma_2^2, \sigma_{12}$)

| Test | n | (1, 1.2, 0) | (1, 0.8, 0) | (1, 1.5, 0) | (1, 0.5, 0) | (1, 1.8, 0) | (1, 0.2, 0) |
|---|---|---|---|---|---|---|---|
| MC | 5 | 0.0766 | 0.0609 | 0.1104 | 0.0841 | 0.1537 | 0.2022 |
| | 10 | 0.0700 | 0.0550 | 0.1306 | 0.1200 | 0.2148 | 0.5617 |
| | 15 | 0.0716 | 0.0573 | 0.1547 | 0.1879 | 0.2830 | 0.8632 |
| | 20 | 0.0726 | 0.0617 | 0.1842 | 0.2598 | 0.3550 | 0.9722 |
| | 25 | 0.0741 | 0.0629 | 0.1935 | 0.4602 | 0.3615 | 0.9990 |
| | 30 | 0.0767 | 0.0711 | 0.2100 | 0.5412 | 0.4354 | 0.9998 |
| AS | 5 | 0.1749 | 0.2111 | 0.1794 | 0.2955 | 0.2038 | 0.5930 |
| | 10 | 0.0974 | 0.1298 | 0.1338 | 0.2812 | 0.2001 | 0.8439 |
| | 15 | 0.0869 | 0.1172 | 0.1480 | 0.3490 | 0.2603 | 0.9683 |
| | 20 | 0.0805 | 0.1149 | 0.1730 | 0.4311 | 0.3300 | 0.9963 |
| | 25 | 0.0843 | 0.1159 | 0.2028 | 0.5108 | 0.3965 | 0.9995 |
| | 30 | 0.0844 | 0.1202 | 0.2406 | 0.5912 | 0.4708 | 1.0000 |

### Table 9

Empirical powers by varying both variances and covariance; columns give ($\sigma_1^2, \sigma_2^2, \sigma_{12}$)

| Test | n | (1.2, 0.8, 0) | (1.2, 0.8, 0.5) | (1.5, 0.5, 0) | (1.5, 0.5, 0.5) | (1.8, 0.2, 0) | (1.8, 0.2, 0.5) |
|---|---|---|---|---|---|---|---|
| MC | 5 | 0.0734 | 0.1443 | 0.1343 | 0.2416 | 0.3315 | 0.7114 |
| | 10 | 0.0739 | 0.2647 | 0.2256 | 0.5155 | 0.7313 | 0.9990 |
| | 15 | 0.0830 | 0.4046 | 0.3414 | 0.7595 | 0.9421 | 1.0000 |
| | 20 | 0.0994 | 0.5433 | 0.4616 | 0.9021 | 0.9906 | 1.0000 |
| | 25 | 0.1137 | 0.6919 | 0.5996 | 0.9777 | 0.9994 | 1.0000 |
| | 30 | 0.1249 | 0.7814 | 0.6880 | 0.9935 | 1.0000 | 1.0000 |
| AS | 5 | 0.1996 | 0.3171 | 0.2957 | 0.4701 | 0.6155 | 0.9713 |
| | 10 | 0.1266 | 0.3780 | 0.3264 | 0.6765 | 0.8812 | 1.0000 |
| | 15 | 0.1208 | 0.5023 | 0.4310 | 0.8597 | 0.9809 | 1.0000 |
| | 20 | 0.1282 | 0.6213 | 0.5397 | 0.9501 | 0.9973 | 1.0000 |
| | 25 | 0.1363 | 0.7308 | 0.6427 | 0.9840 | 0.9997 | 1.0000 |
| | 30 | 0.1496 | 0.8153 | 0.7254 | 0.9958 | 1.0000 | 1.0000 |

References

1. Bai, Z, Jiang, D, Yao, JF, and Zheng, S (2009). Corrections to LRT on large-dimensional covariance matrix by RMT. The Annals of Statistics. 37, 3822-3840.
2. Beran, R, and Srivastava, MS (1985). Bootstrap tests and confidence regions for functions of a covariance matrix. The Annals of Statistics. 13, 95-115.
3. Cai, TT, and Ma, Z (2013). Optimal hypothesis testing for high dimensional covariance matrices. Bernoulli. 19, 2359-2388.
4. Chung, KL (2001). A Course in Probability Theory. New York: Academic Press.
5. Costa, AFB, and Machado, MAG (2008). A new chart for monitoring the covariance matrix of bivariate processes. Communications in Statistics - Simulation and Computation. 37, 1453-1465.
6. Frets, GP (1921). Heredity of headform in man. Genetica. 3, 193-384.
7. Gupta, AK, and Bodnar, T (2014). An exact test about the covariance matrix. Journal of Multivariate Analysis. 125, 176-189.
8. Jolicoeur, P, and Mosimann, JE (1960). Size and shape variation in the painted turtle: a principal component analysis. Growth. 24, 339-354.
9. Kim, J, and Cheon, S (2013). Bayesian multiple change-point estimation and segmentation. Communications for Statistical Applications and Methods. 20, 439-454.
10. Mardia, KV, Kent, JT, and Bibby, JM (1979). Multivariate Analysis. New York: Academic Press.
11. Park, HI (2017). A simultaneous inference for the multivariate data. Journal of the Korean Data Analysis Society. 19, 557-564.
12. Pinto, LP, and Mingoti, SA (2015). On hypothesis tests for covariance matrices under multivariate normality. Pesquisa Operacional. 35, 123-142.
13. Silvey, SD (1975). Statistical Inference. London: Chapman and Hall.
https://stats.stackexchange.com/questions/509447/whats-the-pros-and-cons-between-huber-and-pseudo-huber-loss-functions
# What are the pros and cons between Huber and Pseudo-Huber loss functions? The Huber loss is: $$huber = \begin{cases} \frac{1}{2} t^2 & \quad\text{if}\quad |t|\le \beta \\ \beta\left(|t| - \frac{1}{2}\beta\right) &\quad\text{else} \end{cases}$$ (the linear branch carries the $-\frac{1}{2}\beta^2$ offset so the loss is continuous at $|t| = \beta$). The pseudo-Huber loss is: $$pseudo = \delta^2\left(\sqrt{1+\left(\frac{t}{\delta}\right)^2}-1\right)$$ What are the pros and cons of using pseudo-Huber over Huber? I don't really see much research using pseudo-Huber, so I wonder why. To me, pseudo-Huber loss allows you to control the smoothness and therefore you can specifically decide how much you penalise outliers by, whereas Huber loss is either MSE or MAE. Also, the Huber loss does not have a continuous second derivative. So, what exactly are the cons of pseudo-Huber, if any? 1. You don't have to choose a $$\delta$$. (Of course you may like the freedom to "control" that comes with such a choice, but some would like to avoid choices without having some clear information and guidance how to make it.) • Thanks, although I would say that 1 and 3 are not really advantages, i.e. we can make $\delta$ so it has the same curvature as MSE. And for point 2, is this applicable for loss functions in neural networks? I'm not sure Feb 14 at 16:59
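For concreteness, here is a small numerical sketch of the two losses (using the continuous form of Huber and a single threshold name δ for both):

```python
import math

def huber(t, delta=1.0):
    # Quadratic near zero; linear beyond delta, with matched value and slope.
    if abs(t) <= delta:
        return 0.5 * t * t
    return delta * (abs(t) - 0.5 * delta)

def pseudo_huber(t, delta=1.0):
    # Smooth everywhere: behaves like t^2/2 for small t and like
    # delta*|t| - delta^2 for large |t|.
    return delta * delta * (math.sqrt(1.0 + (t / delta) ** 2) - 1.0)
```

Far from zero, the pseudo-Huber value sits roughly δ²/2 below the Huber value while sharing the same asymptotic slope δ, which illustrates that the two differ mainly in smoothness near the threshold, not in how they treat large residuals.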
http://www.askiitians.com/forums/Analytical-Geometry/24/44541/triangle.htm
If the medians of a triangle are 5 cm, 6 cm, and 7 cm, find the area of the triangle.

Ans: Let the median lengths be m1, m2, and m3. The area A of the triangle is

$$A = \frac{4}{3}\sqrt{S(S-m_{1})(S-m_{2})(S-m_{3})}, \qquad S = \frac{m_{1}+m_{2}+m_{3}}{2}.$$

Here

$$S = \frac{5+6+7}{2} = 9,$$

so

$$A = \frac{4}{3}\sqrt{9(9-5)(9-6)(9-7)} = \frac{4}{3}\sqrt{9 \cdot 4 \cdot 3 \cdot 2} = \frac{4}{3}\sqrt{216} = 8\sqrt{6} \text{ cm}^2.$$

Thanks & Regards
Jitender Singh
IIT Delhi
askIITians Faculty
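The same Heron-like formula works for any median lengths; a quick numerical check:

```python
import math

def area_from_medians(m1, m2, m3):
    # A = (4/3) * sqrt(S(S - m1)(S - m2)(S - m3)),
    # where S is the semi-sum of the three medians.
    S = (m1 + m2 + m3) / 2
    return (4.0 / 3.0) * math.sqrt(S * (S - m1) * (S - m2) * (S - m3))
```

For medians 5, 6, 7 this returns 8√6 ≈ 19.60 cm², matching the hand computation above.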
https://pypi.org/project/filtercoffee/0.3/
A simple WSGI Middleware for compiling CoffeeScript to JavaScript on the fly

## FilterCoffee

FilterCoffee is a simple WSGI middleware for compiling CoffeeScript to JavaScript on the fly. It is intended for use in the development of WSGI applications, but for deployed applications you should use some other strategy for delivering your compiled CoffeeScript (e.g., write a script to compile all your CoffeeScripts to JavaScript). FilterCoffee caches the compiled CoffeeScripts in memory but will recompile scripts when they are modified. A CoffeeScript compilation error results in the request returning a 500 error containing the CoffeeScript error message in the body. Error messages are also output to the wsgi.error stream so that they will show up in your console or in your server's error log.

### Installation

FilterCoffee depends on CoffeeScript and in turn node.js. FilterCoffee expects the coffee command to be available on the current PATH. See the installation instructions for CoffeeScript for more information: http://coffeescript.org/#installation

There are a number of different ways to install FilterCoffee:

#### Using PIP

This is the preferred method. Run:

    pip install filtercoffee

#### For an Individual Application

Copy filtercoffee.py into an appropriate place in your WSGI application's code. Run:

    python setup.py install

### Basic Usage

You can wrap your WSGI application in the FilterCoffee middleware like so (assuming that the variable app contains your WSGI application and the variable debug is only set when the application is in development mode):

    if debug:
        import filtercoffee
        app = filtercoffee.FilterCoffee(
            app, static_dir='/path/to/static/files')

FilterCoffee will now intercept any request that ends in .js and check if a corresponding .coffee file exists. If a .coffee file exists, it will be compiled and the compiled output will be returned in the response (the compiled output is also cached such that recompilation only occurs if the .coffee file changes).
If no .coffee file exists, the original application is called to handle the request. FilterCoffee has flexible support for deciding what it should consider a CoffeeScript or JavaScript file; check the arguments to FilterCoffee's __init__ method.

## Changelog

0.3 Feb 01, 2012

- Include README in source distribution

0.2 Jan 25, 2012

- PyPI

0.1 Jan 17, 2012

- Initial release
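The interception behavior described under Basic Usage can be sketched as a generic piece of WSGI middleware. This is a hypothetical approximation, not FilterCoffee's actual code: the real package shells out to the coffee compiler and caches results, whereas the compile_coffee helper below is a stand-in so the sketch stays self-contained.

```python
import os

def compile_coffee(path):
    # Stand-in for invoking the `coffee` compiler; here we just wrap the
    # source so the sketch stays runnable without node.js installed.
    with open(path) as f:
        return '/* compiled */\n' + f.read()

class CoffeeMiddleware:
    def __init__(self, app, static_dir):
        self.app = app
        self.static_dir = static_dir

    def __call__(self, environ, start_response):
        path = environ.get('PATH_INFO', '')
        if path.endswith('.js'):
            # Map /foo/bar.js onto static_dir/foo/bar.coffee
            coffee = os.path.join(
                self.static_dir, path.lstrip('/')[:-3] + '.coffee')
            if os.path.exists(coffee):
                body = compile_coffee(coffee).encode('utf-8')
                start_response('200 OK',
                               [('Content-Type', 'application/javascript'),
                                ('Content-Length', str(len(body)))])
                return [body]
        # Fall through to the wrapped application for everything else.
        return self.app(environ, start_response)
```

The real FilterCoffee constructor also takes arguments controlling which paths count as Coffee- or JavaScript, as noted above.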
https://www.physicsforums.com/threads/hybridization-and-sigma-and-pi-bonds.446377/
# Hybridization and Sigma and Pi bonds ## Main Question or Discussion Point Say we have sp2 hybridization of nitrogen, can the unhybridized p orbital form sigma bonds too or can it only form pi bonds? So in the case of nitrogen, it would be able to form molecules with 2 sigma bonds and one pi bond, correct? Thanks ## Answers and Replies Correct, I think - isn't this just what we have in e.g. oximes, or imines? DrDu
https://installwiki.blogspot.com/2018/04/user-talkcacyclewikedarchive-011.html

## User talk:Cacycle/wikEd/Archive 011

## Inserting/deleting newlines spuriously
## User talk:Cacycle/wikEd/Archive 011 src: bloximages.newyork1.vip.townnews.com ## Inserting/deleting newlines spuriously It appears that wikEd sometimes inserts or deletes a newline in the edit window. Typical scenario is as follows: 1. you copy and paste something in the edit window 2. you see that there is e.g. one empty line in the edit window 3. after you save the changes and edit the article again, you find out that there are two empty lines (or no empty lines) where there used to be one This happens in Google Chrome, never in Firefox. Do you know what might be the reason? I'm suspecting a Webkit bug... GregorB (talk) 22:38, 18 January 2009 (UTC) Funny, it happened without a copy/paste, just as I was writing this. See the diff to the previous edit. GregorB (talk) 22:40, 18 January 2009 (UTC) When pasting rich text it is not always easy to see the number of line breaks. Push the or button to remove the original formatting of your pasted text. I could not reproduce your problem in Chrome, please provide a detailed how-to so that I can reproduce it. Thanks, Cacycle (talk) 00:26, 19 January 2009 (UTC) Here's what "works" for me: 2. Type #<Enter>#<Enter>#<Enter> 3. Click on "Preview" (or on ). 4. "#" characters from the step 2 are now separated by empty lines The same happened with the numbered list items from my edit as I was writing it. :) GregorB (talk) 17:01, 19 January 2009 (UTC) Confirmed -- this also reproduces the bug on Safari. Viktor Haag (talk) 13:29, 22 January 2009 (UTC) I also echo the suspicion that this may be a WebKit issue: this behaviour also happens to me when using wikEd on Safari (Safari 3.2.1/5525.27.1 running on Mac OSX 10.5.6 on a Dual2 PPC G5). This is not limited to pasting rich text for me. This happens to me during the normal course of editing a page, and usually seems to be clustered around lines with headings and/or templates in them. Unfortunately, it doesn't seem to me to be entirely reproducible or predictable. 
Viktor Haag (talk) 13:54, 19 January 2009 (UTC)

Maybe it is a Mac-only problem. GregorB: please could you fill out the bug report form on the top for your system information. Thanks, Cacycle (talk) 14:46, 22 January 2009 (UTC)

Here it is:

• WikEd version: 0.9.68a
• Browser id: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.19 (KHTML, like Gecko) Chrome/1.0.154.43 Safari/525.19
• Console errors: none (hope I'm getting it right: Ctrl-Shift-J window)
• User scripts: none
• OS: Windows XP SP3
• Problem and steps to reproduce: as described above

Apparently not Mac-related... GregorB (talk) 15:29, 22 January 2009 (UTC)

Thanks, I can now reproduce this and will try to fix this as soon as I find the time. Cacycle (talk) 02:55, 26 January 2009 (UTC)

Safari and Chrome use <div>...</div> tags for linebreaks instead of <br>. I have no idea why they do this. It seriously messes things up and there is no easy workaround for it other than using browser detection as far as I can see :-S Cacycle (talk) 04:29, 27 January 2009 (UTC)

Divs are inserted indeed; this behavior can be seen at e.g. http://www.kevinroth.com/rte/demo.htm. I could not find this reported at http://code.google.com/p/chromium/issues/list - which is a bit odd, since one would expect this to hurt other rich text editors too. It's probably some sort of WebKit misfeature. I'm currently doing 25% of my editing in Chrome, 75% in Firefox. This could force me back to 100% Firefox - no big deal perhaps, but still... GregorB (talk) 09:36, 27 January 2009 (UTC)

I have probably fixed this with browser detection in 0.9.71. Please report back if you still experience problems related to this. Thanks, Cacycle (talk) 03:15, 2 February 2009 (UTC)

Just found this thread. This is still happening to me (Feb 4, Chrome/WinXP). First it deletes the breaks[1], then inserts too many.[2] I've turned off wikEd and it works fine. Incidentally, in Chrome misspelled words are only highlighted when wikEd is turned off... don't know if you're aware of this or not. NJGW (talk) 07:08, 5 February 2009 (UTC)

It works fine for me with Chrome >= 1.0.154.43 / WinXP. Please could you fill out the full bug report form on top of the page so that I can reproduce your problems. Missing spellcheck is a browser problem not related to wikEd. Thanks in advance, Cacycle (talk) 04:53, 19 February 2009 (UTC)

Unfortunately, the problem persists for me in 0.9.73a (i.e. I can still reproduce it from the steps given above). My version of Google Chrome is now 1.0.154.48, and the rest is the same (Win XP SP3, etc.). GregorB (talk) 09:07, 19 February 2009 (UTC)

I still have the problem here as well. Using Safari 4 5528.16, with default user agent (which I assume is Safari Public Beta - Mac), with 0.9.76b (built Mar 29/09). If I insert a new line in front of a paragraph (on the same line just before its first character), the newline seems to stick around. If I try to insert a newline by adding it to the end of the preceding paragraph, it appears to get deleted. Viktor Haag (talk) 15:54, 30 March 2009 (UTC)

The problem still seems to exist with the same version of Safari as my last report, and with 0.9.78G (build April 26/2009). Viktor Haag (talk) 16:31, 28 April 2009 (UTC)

Please be as specific as possible so that I can reproduce your problem - what do you mean by "sticking around" and when does it "appear to get deleted"? Do you also experience the Safari bug that turns the text blue after pushing the [T] button? Cacycle (talk) 20:53, 28 April 2009 (UTC)

The first thing I tried in the new Google Chrome (2.0.172.28) was the four-step process to reproduce the problem described above. WikEd is 0.9.78 at the moment. Unfortunately, the behavior is the same... GregorB (talk) 09:52, 22 May 2009 (UTC)

I too have tried 0.9.79c with the four-step process above with Safari (v4 Public Beta on Mac), and the behaviour is still the same, as with Google Chrome.

Here is a process to describe the problem I'm seeing on Safari in addition to the test listed above:

1. Open e.g. this page in a new tab.
2. Type "== This is a heading ==" on the top line of the text box (line one).
3. Type "This is some plain text following the heading." on the very next line (line two).
4. Click "Preview": the text in the editing window changes so that there's a newline inserted above the heading text and between the heading text and the line of plain text.

This is what I mean by "spurious newlines". Viktor Haag (talk) 14:20, 27 May 2009 (UTC)

With WikEd 0.9.80 G, running on Safari Version 4.0 (5530.17), this behaviour is partially fixed; from the immediately preceding example, now there's a newline inserted between the heading text and the line of plain text, but no extra line inserted above the heading text. Viktor Haag (talk) 19:32, 15 June 2009 (UTC)

Ditto for Chrome 2.0.172.33 (Win XP), wikEd 0.9.83a. Unfortunately this is still enough for me to refrain from using Chrome. GregorB (talk) 09:22, 29 June 2009 (UTC)

With WikEd 0.9.85d G, running on Safari Version 4.0.2 (5530.19), the immediately preceding test still inserts a spurious newline between the heading text and the line of plain text. I have noticed anecdotally that, when creating table markup, WikEd inserts newlines between "rows" of table text as well. This bug is not a critical fault, but it is a major pain for usability. Viktor Haag (talk) 14:59, 6 August 2009 (UTC)

I am currently working on a new highlighting engine, let's see if it persists after the move. If not, I will try to fix it then, promised. Cacycle (talk) 17:23, 6 August 2009 (UTC)

This is good news; it would be useful to know when the target date for this new engine is; I'm using WikEd 0.9.88b G with Safari Version 4.0.3 (5531.9), and the problems described above in this thread are still present.
Viktor Haag (talk) 13:55, 1 October 2009 (UTC)

I'm now using 0.9.90f G with Safari 4.0.4 and the example above still has problems. Viktor Haag (talk) 14:37, 17 February 2010 (UTC)

I came here to report that another user and I have this problem too at the Hungarian Wikipedia. We use Chrome; personally I use the 6.0.427.0 beta. I really love wikEd, I didn't want to turn it off, but this bug is very annoying... I hope the fix comes soon (either from the Chromium devs or wikEd's). ~ Boro (talk) 18:29, 10 August 2010 (UTC)

Please could you file a bug report (see top of this page) with wikEd and Chrome versions at the bottom of this page and a link to this section? Thanks, Cacycle (talk) 21:59, 10 August 2010 (UTC)

Please see below, should be fixed in 0.9.91k. Cacycle (talk) 21:10, 16 August 2010 (UTC)
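As an aside for readers of this thread: the WebKit quirk described above (Safari and Chrome wrapping each line of an editable area in <div>...</div> instead of using <br>) is why a browser-detection step was needed. The sketch below is purely illustrative - it is not wikEd's actual code, the function name editorHtmlToText is invented, and the HTML shapes are simplified.

```javascript
// Illustrative sketch only (not wikEd's real code): WebKit-based
// browsers (Safari, Chrome) represent each line of an editable frame
// as "<div>line</div>" (an empty line as "<div><br></div>"), while
// Gecko uses "line<br>". Converting the editor HTML back to wikitext
// therefore needs a browser-specific step, or newlines get doubled
// or dropped.
function editorHtmlToText(html, isWebKit) {
    var text = html;
    if (isWebKit) {
        // Empty lines: "<div><br></div>" stands for one blank line.
        text = text.replace(/<div>(?:<br\s*\/?>)?<\/div>/g, '\n');
        // Normal lines: unwrap the <div> and terminate with "\n".
        text = text.replace(/<div>(.*?)<\/div>/g, '$1\n');
    }
    // Gecko and others: explicit <br> tags mark the line breaks.
    text = text.replace(/<br\s*\/?>/g, '\n');
    return text.replace(/\n$/, '');
}
```

Without the WebKit branch, the "#<Enter>#<Enter>#" test case from this thread round-trips with the wrong number of blank lines, which is consistent with the symptom reported here.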
## Incremental find in Google Chrome

Incremental find does not work right in Google Chrome (wikEd 0.9.75, Chrome 1.0.154.48). Here's how to reproduce:

2. In the wikEd search box, type letter by letter: wiked
3.
Notice that with each keypress the focus jumps to the next word matching the entered string, instead of staying on the same word. So, typing "wiked" gets you to the fifth instance of the word in the edit window, instead of the first. This works as expected in Firefox 3. GregorB (talk) 14:13, 11 March 2009 (UTC)

Thanks for reporting, I will check into this (might be a tricky one...) 23:36, 12 March 2009 (UTC)

Works in wikEd 0.9.91o. Cacycle (talk) 19:24, 24 August 2010 (UTC)

## Basic fixes breaks CATSORT

When I use the single check button, it removes the space following the pipe in a Category link on an article that is supposed to be at the top of its category (e.g., "[[Category:History of Louisville, Kentucky| ]]" --> "[[Category:History of Louisville, Kentucky|]]"). It took me a while to realize this as the diff before saving is nearly identical, but if the space isn't reinserted before saving, the link is saved with the page title inserted after the pipe. This screws up the sorting in the category and it would be helpful if you could get the script to skip fixing spaces following a pipe in category links. I realize it must be the wikimedia software adding the pagename after the pipe when saving--causing the discrepancy between diffs before and after saving--but tweaking the script would prevent accidental changes to the sorting. Thanks. --Ost (talk) 18:39, 14 July 2009 (UTC)

Would you happen to know more about the reason for this behavior? It's really annoying to repair similar miscategorizations at a later stage. Thanks. --Preceding unsigned comment added by 217.227.116.77 (talk) 08:09, 5 June 2010 (UTC)

Fixed in 0.9.91i. Cacycle (talk) 20:44, 22 July 2010 (UTC)

## Possible Firefox 3.6 bug?

I've been using Firefox 3.6 for a while, and I've had a problem with the search function. When I type a letter in the search box, my cursor jumps to the first occurrence of that letter.
When I click again in the search box and type a second letter, the cursor jumps to the first occurrence of that two-letter combination; and so on. I don't see this behavior in Firefox 3.5.7, but I'm not 100% positive every extension and setting is the same between the two installations. Any ideas? -- Malik Shabazz Talk/Stalk 17:44, 20 January 2010 (UTC)

I will check into this next week :-) Thanks for reporting, Cacycle (talk) 18:50, 20 January 2010 (UTC)

Thanks. -- Malik Shabazz Talk/Stalk 18:52, 20 January 2010 (UTC)

Same here (Firefox 3.6, Windows XP). Also, initially there is no cursor in the edit box. It appears only when you go to the search box, then return to the edit box. GregorB (talk) 21:43, 21 January 2010 (UTC)

On a more positive note, this appears to be fixed in 3.6. GregorB (talk) 22:12, 21 January 2010 (UTC)

I can confirm both bugs on FF 3.6 (Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6) (Leopard 10.5.8). Search & replace are useless this way, unless a user types data into the main edit box and/or drag-selects, then drags/drops into the search/replace fields where appropriate. Very cumbersome workaround, for fast typists anyway. ---Schweiwikist (talk) 07:15, 23 January 2010 (UTC)

Followup: Bugs gone under: (Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5) ---Schweiwikist (talk) 07:23, 23 January 2010 (UTC)

Editbox cursor bug still present under Windows Vista and 7. Can be corrected on an edit-by-edit basis by turning off WikiEd (top right of the page) and turning it back on. --King Öomie 15:10, 25 January 2010 (UTC)

Fixed with a workaround in 0.9.90c and filed the Mozilla bug report 543204. Cacycle (talk) 13:55, 30 January 2010 (UTC)

## wikEd adds 2 return characters at the top of the page

wikEd has always done some weird doubling of return characters when typing a return character or pasting something in the page, but I would just turn off wikEd for a second, paste it in, and turn it back on - no problem. But recently, I don't know exactly when, wikEd started to add two return characters at the top of the page. This is extremely annoying, as the only way to get rid of them, since they don't show initially, is to do a full preview (not inline) which then reveals them, so I can see them; if I just delete them there, they're not gone. Removing them, deactivating wikEd, and reactivating it just makes them pop up again. The only way is to remove them, preview the page again and then it's only one, and then I can remove that one too. Editing a page and saving it in one turn is out of the question at this point. This is, as you can imagine, extremely annoying as it adds a blank space at the top of the page. I use Mac OS X 10.6, Safari 4.0.4 (6531.32.10), no changes on my end as far as I can remember. Xeworlebi (toc) 14:09, 25 January 2010 (UTC)

This is a known problem with WebKit-based browsers (Safari and Chrome). See this. GregorB (talk) 17:02, 25 January 2010 (UTC)

Didn't see that, apologies. Looking through it, it seems a pretty old problem. I never had that much of a problem with it, but recently with the hidden double return at the top of the page it is getting to the point that I have wikEd turned off more than on, which is a shame. Xeworlebi (toc) 00:58, 26 January 2010 (UTC)

This drives me away from using Google Chrome. Still, Firefox is fine, so it's not that bad in my case. GregorB (talk) 14:38, 27 January 2010 (UTC)

It is a WebKit (Safari, Chrome) bug, I have filed the WebKit bug report 34377. Cacycle (talk) 19:26, 30 January 2010 (UTC)

## Bug with preview of <source lang="xxx"> ... </source> construct

Before I get to that bug, WikiEd refuses to load when I click on the New Section tab on the discussion page. So I will create this section and then edit it. ("Loading error - wikiEd 0.9.90a G (January 15, 2010) Click to disable" in browser Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.12) Gecko/20080201 Firefox/2.0.0.12) Hgrosser (talk) 23:49, 26 January 2010 (UTC)

When the <source lang="xxx"> ... </source> construct is used, wikiEd does not show the syntax coloring in its preview unless Wikipedia's built-in Preview function is used to preview the page first (and this preview contains a <source> block in the same language as the wikiEd preview). Here's a sample (from Rm (Unix)#User-Proofing): Hgrosser (talk) 00:11, 27 January 2010 (UTC)

I have added GeSHi support for local previews in 0.9.90d. The new section issue has already been fixed in a previous release. Thanks for the suggestion, Cacycle (talk) 22:08, 30 January 2010 (UTC)

## Firefox 3.6

When I use WikEd on my laptop it turns off the cursor. Anything one can do about this as I prefer the cursor on? Doc James (talk · contribs · email) 14:23, 27 January 2010 (UTC)

That seems to be a Firefox 3.6 bug. The cursor appears after pushing wikEd buttons, e.g. . Cacycle (talk) 23:46, 27 January 2010 (UTC)

I have filed Firefox bug report 542727. Cacycle (talk) 08:48, 28 January 2010 (UTC)

Thanks, hopefully they will fix it soon. Makes it a little hard to edit. Doc James (talk · contribs · email) 08:51, 28 January 2010 (UTC)

Fixed with a workaround in 0.9.90c and filed the Mozilla bug report 542727. Cacycle (talk) 13:53, 30 January 2010 (UTC)

Awesome job, thanks for the fix. Glycerine102 (talk) 18:39, 31 January 2010 (UTC)

The workaround does not work all the time :-( Cacycle (talk) 23:36, 31 January 2010 (UTC)

## Edit summary

I am using WikEd with Safari and I have a problem with the edit summary field.
When I go to edit a page the editor comes up with WikEd; however, this particular field is off the screen to the far left, requiring me to scroll over to type a summary and then scroll back to the right to click save. What can I do? Supertouch (talk) 14:35, 27 January 2010 (UTC)

I do not see this with wikEd 0.9.90b as a Gadget and Safari 4.0.4. Please could you fill out the bug form (top of this page)? Thanks, Cacycle (talk) 23:52, 27 January 2010 (UTC)

## Edit-window view of italics within links

(wikEd version: 0.9.90b G; browser id: Mozilla/5.0 (Windows; U; Windows NT 5.1; el; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7)

Although they work just fine when previewing or saving a page, italics within links do not show properly within the edit window. For example, writing [[Italics (film)|''Italics'' (film)]] ought to show as [[Italics (film)|''Italics'' (film)]], but instead shows as [[Italics (film)|''Italics'' (film)]]. It is a rather odd reversal.

Note: although the above text is admittedly rather more complicated than regular syntax, it also displays erroneously: it interprets the (real) closing apostrophes for the second example link's Italics as opening apostrophes for the rest of my message. It seems to be the same problem: a confusion between opening and closing apostrophes. Waltham, The Duke of 18:47, 29 January 2010 (UTC)

The upcoming release of wikEd has a new highlighter that should be able to handle this. Thanks for reporting, Cacycle (talk) 21:11, 29 January 2010 (UTC)

Yes, the new wikEdbeta3 handles that correctly, see the top of the page for how to test that version :-) Cacycle (talk) 23:35, 31 January 2010 (UTC)

## wikEd options don't work with the beta

I tried the wikEd beta. My wikEd options don't work with it, apparently. Here they are if you want to take a look. They do indeed work with the current wikEd, though. Also, regarding wikEd beta, the buttons that appear to represent hidden templates and references on a Mac in Firefox have text that is a bit too small to read. It looks like they are using <input>, as they look just like how buttons are formatted in Firefox on a Mac. Gary King (talk) 22:47, 31 January 2010 (UTC)

The new wikEd version has a completely new highlighting system, therefore several of the old css styles no longer work. The buttons are not real input elements; they are actually not real html elements at all, they are completely realized in css :-) Maybe they are too small to read because of your custom setting? BTW, I have just published beta3, please update with Shift-Reload. Cacycle (talk) 23:31, 31 January 2010 (UTC)

No, the buttons aren't affected by my custom settings because as I said, my custom settings don't even work anyway. Gary King (talk) 19:36, 1 February 2010 (UTC)

Usability Initiative beta uses a new skin called vector. User scripts for that skin are on User:Gary King/vector.js, not on User:Gary King/monobook.js. Cacycle (talk) 08:45, 15 February 2010 (UTC)

## Diff takes 3-4 minutes to load

I've got the latest version of Firefox, not using any odd skins, Core 2 Duo laptop. When I pull up this month's diff (with WikEd diff enabled, WikEd otherwise disabled) of WP:MOS ("220 intermediate revisions not shown"), I get "Firefox not responding". It looks like the system responds again after 3 to 4 minutes, and the diff appears to be correct. Thought you'd want to know; I don't know whether there's something odd about this month's WP:MOS diff (this has never happened before, and I've used WikEd's diff button a lot) or whether something has changed in WikEd. - Dank (push to talk) 00:25, 1 February 2010 (UTC)

P.S. If it matters, I don't have a slow machine: 4 Gigs of RAM, Windows 7, Dell.
- Dank (push to talk) 01:18, 1 February 2010 (UTC)

Please could you provide the diff link for two specified revisions from the history page where it takes so long (e.g. http://en.wikipedia.org/w/index.php?title=Wikipedia%3AManual_of_Style&action=historysubmit&diff=341159534&oldid=334949000). BTW, with that link I do not see the problem. It could be a server / connection problem as wikEdDiff has to load both revisions from the server in the background in order to create the diff. Cacycle (talk) 08:21, 1 February 2010 (UTC)

http://en.wikipedia.org/w/index.php?title=Wikipedia:Manual_of_Style&limit=250&action=history, click on 11:26 Dec 31, click on any version around the end of January (I tried several), push the button. - Dank (push to talk) 13:23, 1 February 2010 (UTC)

Out of curiosity, I tried this. Takes c. 15-20 seconds or so on my 2006 laptop. GregorB (talk) 15:29, 1 February 2010 (UTC)

That would be this link: http://en.wikipedia.org/w/index.php?title=Wikipedia%3AManual_of_Style&action=historysubmit&diff=341159534&oldid=335090526 which takes less than 2 s for me (including the background article text loading). Looks like a server/connection problem on your side to me... Cacycle (talk) 23:22, 1 February 2010 (UTC)

wikEd 0.9.90d (January 30, 2010) - on Firefox 3.6 and Google Chrome 5.0.307.1. Neither adding a script nor switching it on and off works... - Kochas (talk) 03:14, 5 February 2010 (UTC)

I get the same error, both with Firefox 3.0.15 [Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.15) Gecko/2009101601 Firefox/3.0.15 (.NET CLR 3.5.30729)] and 3.6 [Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6 (.NET CLR 3.5.30729)]: "Loading error - wikEd 0.9.90d G (January 30, 2010) Click to disable" - Hgrosser (talk) 04:28, 5 February 2010 (UTC)

Still not working for me. --Aaroncrick (talk) 06:25, 5 February 2010 (UTC)

Same problem here. Waltham, The Duke of 07:45, 5 February 2010 (UTC)

And for me too. Is it related somehow to Wikipedia:Help desk#Lost toolbar and Wikipedia:VPT#Edit box & monospace style changes? - ukexpat (talk) 15:05, 5 February 2010 (UTC)

OK I got it working again by disabling "Enable navigable table of contents" in the edit tab of Special:Preferences. - ukexpat (talk) 15:08, 5 February 2010 (UTC)

I had the same problem. Had to disable "Enable enhanced editing toolbar" as well before wikEd worked again. No experimental features for wikEd right now, I suppose. OdinFK (talk) 15:49, 5 February 2010 (UTC)

That worked for me, too, Odin. Thanks. Waltham, The Duke of 17:52, 5 February 2010 (UTC)

Thanks for the hint guys, it worked. It seems the problem appeared after I played with the settings mentioned. But can any of you tell me what I miss with the two of those unchecked? - Kochas (talk) 00:27, 7 February 2010 (UTC)

It is related to the newest Usability Initiative release, I have already filed the bug 22400 and am working on a temporary workaround / fix. 00:38, 6 February 2010 (UTC)

## Would you consider making the duplicate edit notices optional?

I know you explained further up the page (October) that you do it on purpose so that people will see the edit notices even though the focus moves to the main edit box, but getting it to line up right is pretty hit or miss, and for pages with long editnotices it means scrolling down through two or three more screens to get to the edit box. Obviously it's just a minor annoyance, nothing major, but since this is a behavior you added with a particular piece of code I would think it would be easy to disable. -- Soap Talk/Contributions 13:32, 5 February 2010 (UTC)

Thanks for your suggestion, in wikEd 0.9.90e you can now add var wikEdDoCloneWarnings = false; to your monobook.js/vector.js. Cacycle (talk) 08:41, 15 February 2010 (UTC)

Thanks! --Soap-- 22:29, 15 February 2010 (UTC)

Thanks for the option.
However, I did notice that when wikEd duplicated edit notices, it actually showed TWO copies of the edit notice below the preview. Please fix this, thanks. Gary King (talk) 18:37, 21 March 2010 (UTC)

It doesn't seem to have a problem on most pages. It does have a problem here though. Gary King (talk) 19:33, 21 March 2010 (UTC)

That is because the text between the two notices, i.e. the {{Statustop}} template, has been positioned out of the normal text flow to the top of the page. Cacycle (talk) 22:58, 21 March 2010 (UTC)

So why does that cause wikEd to create a second edit notice below the preview? Gary King (talk) 00:24, 22 March 2010 (UTC)

The two edit notices are intended. If you have another problem you need to explain it in detail as requested above. Cacycle (talk) 07:42, 22 March 2010 (UTC)

On the link that I gave above (and in this screenshot), there is one edit notice that appears above the preview, as is expected. And then wikEd creates a second edit notice below the preview, as is expected. But then wikEd creates a THIRD edit notice, which appears below the preview (I call it the "second" edit notice to appear below the preview because there are two that appear below the preview now). Gary King (talk) 16:49, 22 March 2010 (UTC)

I see, thanks for mentioning this, I will check into it. In the future, would you mind being more elaborate with your bug reports? It is always a bit frustrating and time consuming to have to squeeze the facts out of you - I cannot read your mind :-) Cacycle (talk) 08:12, 23 March 2010 (UTC)

## wikiEd on wikia (3)

Wikia made this change, and now there's another <ul> element inside #wikia_header, making the wikEd icon go inside there and not being shown (because of the special style that ul element has). The logo should go inside #userData, but the way it's done you can't define #userData for the Monaco skin, because that's the ID of the <ul> element, and your code gets the <ul> with a getElementsByTagName, not picking the current element. You should do that instead: Also, at the next line you evaluate if (list != null ), and Firefox accessing a getElementsByTagName( ... )[0] when there's an empty array evaluates it as undefined and not null. It probably should never occur, though. Here you can view the DOM node tree of the wikia skin, where the wikEd icon is currently loaded and where it should be placed (at the end of the bottom list) --Ciencia Al Poder (talk) 17:25, 5 February 2010 (UTC)

I have added your changes to wikEd 0.9.90e, please could you check if it works? Cacycle (talk) 08:39, 15 February 2010 (UTC)

Yes! Now it works like a charm! Many thanks! --Ciencia Al Poder (talk) 21:08, 15 February 2010 (UTC)

## Extra spaces inserted

Hi, lately I'm experiencing that WikEd is inserting extra spaces both in the editbox and summary line, except those spaces are not regular spaces. Example in summary line: "Copy-over from [[some wiki page with spaces]]"; next time you try to re-use that text, the link will be shown as red in preview-mode of the summary, while the first time it was a correct link.

• WikEd used: v0.9.90d (30 January 2010)
• Browser used: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6 (.NET CLR 3.5.30729)
• WikEd is remote loaded by personal skin-js at FoM-Wiki

<=?©TriMoon(TM) Talk @ 10:54, 4 February 2010 (UTC)

My experience also. Happens when I copy and paste something within the editbox. I made a test edit here: I wanted to duplicate the line, but extra whitespace was somehow introduced in the process. This was done with wikEd 0.9.90g, Firefox 3.6 (Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6 (.NET CLR 3.5.30729)).
Apparently works fine with wikEd 0.9.90g and Firefox 3.5.8 (Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8). GregorB (talk) 23:24, 13 March 2010 (UTC)

Also, extra spaces are visible only upon saving or previewing the page. Clicking on "[T]" apparently does not have any effect. GregorB (talk) 15:40, 15 March 2010 (UTC)

Hopefully fixed in 0.9.90h. Please could you start a new topic at the bottom of this page if you still experience this problem. Thanks, Cacycle (talk) 18:18, 20 March 2010 (UTC)

## References segregator vs. wikEd beta ref hiding

I have written an alternative script to hide refs, since wikEd beta doesn't work on Firefox 3.6 yet. It doesn't really hide refs completely; rather, it takes a simpler approach that works with plain text boxes: it replaces the first occurrence of a ref with the short code, as if it were already used. Then it puts the ref's old code in a box below the main edit box. I have tested the final script on several featured articles and Comparison of Windows and Linux, and with no changes to the textboxes it does not affect the page (it doesn't change the citation style on other editors unnecessarily). For more details (including how it handles unnamed refs), you will have to read the documentation and the script itself (the latter both includes informative comments and passes JSLint). The script is limited to just looking at the first ref for the contents, but this limitation should not impact its usefulness to remove the clutter of a hundred unnamed refs, for example. Despite its limitations, would you find such a script useful, since citation templates can be quite lengthy? If so, maybe the MediaWiki software itself should incorporate this idea (of course simplifying the ref format so that the content is automatically in the first ref).
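The first-occurrence replacement described above can be sketched in a few lines. This is my own simplified illustration, not the actual script: the function name and the minimal `<ref name="...">` regex are assumptions, and the real script also handles unnamed refs and other edge cases.

```javascript
// Simplified sketch of the refs-segregator idea: replace each full named
// <ref> with its short reuse form, and collect the extracted contents so
// they could be shown in a separate box below the edit area.
function segregateRefs(wikitext) {
  var extracted = {};
  var text = wikitext.replace(
    /<ref name="([^"]+)"\s*>([\s\S]*?)<\/ref>/g,
    function (whole, name, body) {
      if (!(name in extracted)) {
        extracted[name] = body; // remember the citation for the box below
      }
      return '<ref name="' + name + '" />'; // short form stays in the text
    }
  );
  return { text: text, refs: extracted };
}

var sample = 'Fact.<ref name="smith">{{cite web |url=http://example.com}}</ref> ' +
             'More.<ref name="smith">{{cite web |url=http://example.com}}</ref>';
var result = segregateRefs(sample);
// result.text keeps only <ref name="smith" /> tags;
// result.refs holds the citation template keyed by ref name
```

Reversing the transformation (re-expanding the first short ref back into its full form on save) is the part that keeps the citation style unchanged for other editors.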
My script, however, doesn't work with the Wikipedia Beta editor, which seems to be some strange code that actually removes the textarea and replaces it with an iframe (according to a quick glance at Firebug). How can I make my script Beta-compatible (like wikEd is supposed to be)? And had you thought of the idea of moving refs into a separate box when you designed wikEd's reference hiding? PleaseStand (talk) 05:39, 6 February 2010 (UTC)

I found the ref-hiding feature in the stable version (the "R" button at the upper right). However, as noted at User talk:SlimVirgin/templates, it can be cumbersome to use. I still support my idea of replacing refs with short codes for editing purposes, but believe that the ref format should be simplified for automatic parsing. So I'd like to see this in wikEd, but preferably I'd like to see this in MediaWiki so that it is available to all editors. PleaseStand (talk) 23:53, 14 February 2010 (UTC)

The only way to make your script compatible with both the standard textarea and the Usability Initiative beta iframe is to have separate parts of your code for each... Their iframe solution will break all existing textarea-related user scripts. I am not planning a separate ref editing window; I think the current ref hiding in wikEd beta (will be made compatible with UI-beta and Firefox 3.6 today or tomorrow) is a more intuitive and easier-to-edit solution. Cacycle (talk) 08:35, 15 February 2010 (UTC)

## WikEd beta doesn't work in FF 3.6?

Huh? Damn. What else can I say...? Three days ago I installed the beta, since for several days WikEd had not been working, and the default editor or whatever that thing is which I keep seeing instead is rather painful. The beta didn't work either. I've tried a number of things to try to better understand it. Just tonight I disabled all my addons, theme, etc. and the result was rather underwhelming.
Then, just now, coming here to post a report and ask for help, I read the comment about the beta not working with FF 3.6 - in the text of the post immediately above. Incredible. That's NOT where I should be learning about this. Could someone put a notice up in VERY plain sight so that someone else doesn't have the experience I just had? I can't believe that this problem exists, yet at the top of this page people are still being invited to install the beta. I appreciate WikEd, and all the time that surely must have been invested in it, but people really shouldn't be installing code known to be bad, yes? Tom Cloyd (talk) 05:01, 7 February 2010 (UTC)

Also, it seems that under Firefox 3.6, the stable version of wikEd is adding extra spaces into wikitext, as if it is trying to fully justify the wikitext. This is hard to find (spaces aren't red colored on diffs, of course) and doesn't make a difference in the page as seen in a Web browser, but nevertheless the issue seems to exist. (wikEd has had other Firefox 3.6 issues: the text cursor used to not show in Firefox 3.6.) I agree, a big notice should be posted that wikEd (neither stable nor beta) does not work properly in Firefox 3.6 and users should use Firefox 3.5 instead, and wikEd should then be fixed. PleaseStand (talk) 20:04, 7 February 2010 (UTC)

A diff and/or a test case would be really helpful to reproduce the issue and try to fix it. --Ciencia Al Poder (talk) 19:07, 8 February 2010 (UTC)

"diff"? Don't know what that is. As for a test case - well, ANYTHING on Wikipedia fails, for me. I'm running Kubuntu Linux 9.10 and Firefox 3.6. Is that enough for a replication? Tom Cloyd (talk) 22:06, 8 February 2010 (UTC)

I have diffs, here are the links: [5][6]. Note that the text isn't the same in both diff links, but the common link is that I copied wikitext out of wikEd, then pasted it back in (with or without wikEd active). I'm running wikEd stable, on Firefox 3.6 on Windows 7.
PleaseStand (talk) 22:10, 8 February 2010 (UTC)

I should add, I ran a binary compare using frhed (it compares values of corresponding byte locations, not like a text diff), and the first difference I found between the raw text downloads [7] and [8] is that {{cite web| changed to {{cite web |. And the issue only occurs when the text is copied from wikEd, not when copied from the plain text editor, which has me suspecting that the browser is not copying correctly from wikEd's iframe. Bug in wikEd or in Firefox? I don't know. Cutting/pasting text is so common to reorganize pages; perhaps that's why I noticed. PleaseStand (talk) 22:30, 8 February 2010 (UTC)

Yeah, cut and paste is a huge problem. EOLs get inserted, and when text is pasted they are still there and have to be manually removed. A large pain. I've removed that importScript('User:Cacycle/wikEd_dev.js'); line from my skin.js page, and reloaded all pages in Firefox. It didn't fix the problem. A real mess, as I'm doing editing for hours, right now, and those renegade EOLs are a significant problem. Tom Cloyd (talk) 06:54, 9 February 2010 (UTC)

It is difficult to describe how awful this problem has become! I simply cannot cut and paste in the default editor without having huge problems with randomly inserted EOLs. I do NOT know what is causing this, and certainly am not capable of figuring it out. Can someone please attempt to replicate this? In Firefox (ver. 3.6), remove all traces of wikEd, and see if the default editor is inserting these EOLs for you as well. We need to figure this out. My thanks to anyone who can help. Tom Cloyd (talk) 10:43, 9 February 2010 (UTC)

Yes, I have already tried that. There is no change in the page when copying and pasting an article (tested in my sandbox #2). No diff for you though, since if there's no difference in the saved text, MediaWiki doesn't save the page; there is no change. This is using the Monobook skin with the plain text (not experimental/Usability) editor.
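The frhed byte comparison mentioned above can be approximated in a few lines. This is an illustrative sketch of my own (frhed itself compares raw bytes; this compares string positions):

```javascript
// Report the first index at which two strings differ, to locate where
// copy-paste introduced stray whitespace. Returns -1 if identical.
function firstDifference(a, b) {
  var limit = Math.min(a.length, b.length);
  for (var i = 0; i < limit; i++) {
    if (a.charAt(i) !== b.charAt(i)) return i;
  }
  return a.length === b.length ? -1 : limit;
}

var original = 'text {{cite web| url=x}}';
var pasted   = 'text {{cite web | url=x}}';
console.log(firstDifference(original, pasted)); // index of the extra space
```

Running this over a saved revision and its round-tripped copy pinpoints exactly where the editor injected whitespace, without eyeballing a hex dump.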
PleaseStand (talk) 22:04, 9 February 2010 (UTC)

To add, I just tested the "Babaco" editor (the latest version of the Usability/Beta editor with the section jump links on the right side). You are certainly correct - it's far worse than wikEd stable in that it is actually breaking the formatting with copy-paste. Again this is on FF 3.6, and here's the diff from cutting all then pasting: [9]. You might want to note that on the watchlist page they are warning not to use "experimental features", including the new Beta text editor. PleaseStand (talk) 22:15, 9 February 2010 (UTC)

The standard wikEd has been made compatible with the Usability Initiative beta in version 0.9.90e. I will update wikEd beta in the next few days so that it will become compatible with the Usability Initiative beta. I will also check for the EOL problem. Cacycle (talk) 08:18, 15 February 2010 (UTC)

wikEd beta has been updated to version 0.9.91beta3.2 and is now compatible with the Usability Initiative beta changes. Please update with Shift-Reload. Cacycle (talk) 21:18, 15 February 2010 (UTC)

!!!! It's working! Yahooo! So nice to have it back. Thanks for all your work. Tom Cloyd (talk) 19:42, 22 February 2010 (UTC)

## Erratic cursor movement

I have serious problems with erratic cursor movements. When I run the cursor to the end of the line, it often jumps to a location much farther ahead than the beginning of the next line. Scrolling back past the beginning of a line moves the cursor to a place forward in the text. I am using Windows 7 and Chrome 4.0.249.78. It may be related to pasting. It can be reproduced as follows. Put "one two three four five six" in the edit buffer. Go to an empty edit screen. Paste several times to fill a few lines. Now put the cursor in the first line and scroll right past the end of it. The cursor skips the remainder of the pasted string and moves forward to the first following "one". Now scroll backwards past the beginning of the line reached.
The cursor does not go up one line, but reverts to the same "one" forward. After pressing [w], I see several nested "div" blocks. Now scrolling to the right skips all remaining text. P.S. I also experience the seemingly random insertion/deletion of empty lines mentioned by many in earlier sections. -Woodstone (talk) 05:23, 11 February 2010 (UTC)

This is a WebKit/Chrome bug that has already been reported (bug 34377: https://bugs.webkit.org/show_bug.cgi?id=34377). Cacycle (talk) 08:16, 15 February 2010 (UTC)

## WikiEd is not working with Wikipedia's beta features

A few days ago, WikiEd worked with Wikipedia's beta features. But now it's not working with them. So I turned WP beta off, and WikiEd works well. What should I do? -- JSH-alive talk o cont o mail 07:13, 11 February 2010 (UTC)

I am working on it, please give me a few more days :-) Cacycle (talk) 13:46, 12 February 2010 (UTC)

Fixed in version 0.9.90e. Cacycle (talk) 08:12, 15 February 2010 (UTC)

## wikEd beta Safari 4

I have the following error: HIERARCHY_REQUEST_ERR: DOM Exception 3: A Node was inserted somewhere it doesn't belong. The line has: "wikEdCaptchaWrapper.appendChild(node);" after the comment "fill captcha wrapper with elements between form and textarea (table)" --TheDJ (talk o contribs) 15:32, 12 February 2010 (UTC)

The standard wikEd has been made compatible with the Usability Initiative beta in version 0.9.90e. I will update wikEd beta in the next few days so that it will become compatible with the Usability Initiative beta. Cacycle (talk) 08:11, 15 February 2010 (UTC)

Fixed in wikEd 0.9.91beta3.2. Cacycle (talk) 08:10, 16 February 2010 (UTC)

I'm trying to use 0.9.91beta3.2 with Safari 4.0.4, and when I try to edit a page WikEd seems to load the toolbar, but the little icon in the monobook title bar at the top shows a red X and says "load error"; Safari's error console reports "ReferenceError: Can't find variable: regExpComments".
I'm loading WikEd from my "monobook.js" page with this code:

```javascript
var wikEdUseLocalImages = true;
var wikEdImagePathLocal = 'http://hhappsweb/wiki/images/wikedimg/';
document.write('<script type="text/javascript" src="'
    + 'http://en.wikipedia.org/w/index.php?title=User:Cacycle/wikEd_dev.js'
    + '&action=raw&ctype=text/javascript"></' + 'script>');
```

I'm not sure if this is a bug with WikEd itself, or a problem with the way I'm attempting to load it. Viktor Haag (talk) 14:54, 17 February 2010 (UTC)

## wikEd is disabling signature button

Started today. With wikEd enabled, neither the signature button nor the insert JavaScript at the bottom of the edit box works for inserting four tildes. All other buttons and pieces of insert JavaScript appear to work. Reproducible - turn off wikEd and the sig button works. Turn it back on and it doesn't. This has affected both myself and ukexpat. Any more info needed, just ask --Elen of the Roads (talk) 19:43, 15 February 2010 (UTC)

For me, wikiEd disabled the JavaScript wiki markup insertions below the edit box (all the ones I tried) except the paired tags; the insertion point disappears when I click on a symbol to insert. This happened around the same time that wikiEd stopped working with MediaWiki's beta interface, and the JavaScript still does not work even after I stopped using the beta interface. I enabled wikiEd from the gadgets in my preferences. I'm curious what broke wikiEd, which quickly became indispensable to me as soon as I started using it a month or two ago.--Finell 21:32, 15 February 2010 (UTC)

Just fixed with 0.9.90f, please update with Shift-Reload. Cacycle (talk) 21:49, 15 February 2010 (UTC)

Confirm it's fixed for me, thanks. - ukexpat (talk) 21:52, 15 February 2010 (UTC)

Fixed for me too. Thanks so much, fantastic piece of kit. Elen of the Roads (talk) 22:43, 15 February 2010 (UTC)

Thanks, it's also fixed for me.
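For reference on the signature-button thread above: MediaWiki's classic insert helpers essentially splice text around the current textarea selection. The sketch below is my own plain-string approximation of that behavior (the function name is mine, not MediaWiki's or wikEd's actual code); the part wikEd has to reimplement is doing this inside its rich-text iframe rather than a textarea.

```javascript
// String-level approximation of an insertTags-style helper: wrap the
// selection (or a sample text when nothing is selected) with open/close
// tags at the given selection offsets.
function insertAtSelection(text, selStart, selEnd, tagOpen, tagClose, sampleText) {
  var selected = text.slice(selStart, selEnd) || sampleText;
  return text.slice(0, selStart) + tagOpen + selected + tagClose + text.slice(selEnd);
}

// Signing at the caret (no selection) at the end of the text:
var line = 'My comment. ';
console.log(insertAtSelection(line, line.length, line.length, '~~~~', '', ''));
// -> "My comment. ~~~~"

// Wrapping a selection in bold markup:
console.log(insertAtSelection('bold', 0, 4, "'''", "'''", ''));
// -> "'''bold'''"
```

When a gadget replaces the textarea, the selection offsets this relies on are no longer where the page's stock insert code expects them, which is roughly the failure mode reported in this thread.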
I'm still curious as to what happened, if you don't mind a very brief explanation.--Finell 01:36, 16 February 2010 (UTC)

Check the diff, it was just a random bug, nothing special :-) (the .html instead of the correct .plain crashed the custom insert function). No need for more confirmations, but it is nice to hear that people actually use and like wikEd :-) Cacycle (talk) 08:00, 16 February 2010 (UTC)

Thanks. And, thanks very much for wikEd!--Finell 05:04, 18 February 2010 (UTC)

## wikEd trouble with regular expressions and jumping insertion point in find box

I had the same problem as above with JavaScript insertions until I restarted Firefox today, which fixed it, but now regular expressions in the find box don't work, and the insertion point erratically jumps to the start of the find box. I'm dead in the water. Chris the speller (talk) 17:04, 17 February 2010 (UTC)

If there is a way for me to use the version of February 15, or even a few days before that, I'd love to hear about it. Chris the speller (talk) 17:21, 17 February 2010 (UTC)

Fixed in 0.9.90g, please update with Shift-Reload. Thanks for reporting, Cacycle (talk) 18:28, 17 February 2010 (UTC)

Magnificent! Chris the speller (talk) 03:43, 18 February 2010 (UTC)

But now I can't use recently invoked expressions from the drop-down list in the find box; the down arrow key just flashes right through the entries without any real chance of selecting one of them. For the record, I upgraded to Firefox 3.6 yesterday, about the same time this started. This isn't a show-stopper for me, since I keep a list of useful regular expressions in an external file, so I can cut and paste into the find box; it just slows me down from warp 7 to warp 3. Chris the speller (talk) 22:53, 20 February 2010 (UTC)

Please disregard that complaint. I had to restart Windows, and now the problem cannot be reproduced. Go back to sleep.
Chris the speller (talk) 15:59, 22 February 2010 (UTC)

## Customization question

Hi Cacycle, I have a (probably stupid) question for you. I have WikEd installed as a gadget via my preferences menu. Can I still customize WikEd features by pasting the codes specified on the customization page into my Monobook.js file? Or do I have to do it another way? Thanks, --Eastlaw talk / contribs 01:14, 21 February 2010 (UTC)

Cacycle appears to be taking a well-deserved rest, so I will say that I added the sample customization code for custom buttons to my monobook.js page, and it works with wikEd selected as a gadget in my preferences page. See User:Chris the speller/monobook.js Chris the speller (talk) 18:13, 21 February 2010 (UTC)

Thanks, I took your advice and it appears to be working just fine. --Eastlaw talk / contribs 07:12, 22 February 2010 (UTC)

UPDATE: It was working yesterday, but now it isn't! WTF?! --Eastlaw talk / contribs 18:36, 22 February 2010 (UTC)

UPDATE #2: I got it working again... the .js code is really persnickety. --Eastlaw talk / contribs 23:13, 22 February 2010 (UTC)

Most gadgets cannot be configured via your monobook.js / vector.js, but wikEd is an exception. Check the error console of your browser for errors in your code. Cacycle (talk) 08:02, 24 February 2010 (UTC)

## Any plans to support WikEd in Opera

As above - I'd just like to know if I should hope or give up. Thanks. Dougweller (talk) 13:25, 22 February 2010 (UTC)

Opera has had notoriously poor support for web standards (JavaScript, CSS), and scripts run rather slowly. I am currently working to make wikEd compatible with their next release, Opera 10.50, which is still in beta. Looks doable so far :-) Cacycle (talk) 18:02, 26 February 2010 (UTC)

In theory, Opera's upcoming 10.50 version should be compatible with wikEd, BUT the currently available Opera 10.50 Beta (3273) has some serious bugs (e.g. related to insertHTML and parentNode) and I have given up on finding workarounds.
I have reported these bugs, but it is impossible to know if they will fix them, due to their completely opaque, closed bug tracking system, which is a BIG nuisance for web developers. I will monitor their upcoming releases. Sorry, Cacycle (talk) 17:57, 20 March 2010 (UTC)

## Edit summary

(Moved here from User_talk:Cacycle/wikEd help) I am using WikEd with Safari and I have a problem with the edit summary field. When I go to edit a page, the editor comes up with WikEd; however, this particular field is off the screen to the far left, requiring me to scroll over to type a summary and then scroll back to the right to click save. What can I do? Supertouch (talk) 14:34, 27 January 2010 (UTC)

Please could you fill out a bug report (see the top of this page), as I cannot reproduce that. Thanks, Cacycle (talk) 18:02, 26 February 2010 (UTC)

(Moved here from User_talk:Cacycle/wikEd help) WikEd is not loading properly. I have a flame in the little box on the top right. Any suggestions as to the solution? 15:52, 7 February 2010 (UTC)

I've gone through the documentation, and finally clicked the Beta off. It's working fine now, but it used to work with Beta. Auntieruth55 (talk) 16:19, 7 February 2010 (UTC)

wikEd has been updated and should now be running fine under beta. Cacycle (talk) 18:02, 26 February 2010 (UTC)

Yes, for me it is again working. Many thanks! Tom Cloyd (talk) 13:43, 4 March 2010 (UTC)

## wikiEd on wikia (4)

Hi! Wikia today has changed part of the design of the monaco skin. Now the #userData element is no longer a list, but a div with each link inside a span. This change makes the wikEd icon not appear. I've debugged it, and the solution is to change the second parameter of the wikEdMediaWikiSkinIds object from true to false (line 1414) so it simply appends the icon instead of trying to find a UL element. I've tested it myself and it works. Thanks in advance!
--Ciencia Al Poder (talk) 20:12, 17 March 2010 (UTC)

PS: I mean the monaco skin --Ciencia Al Poder (talk) 17:01, 19 March 2010 (UTC)

Fixed in 0.9.90h. Thanks for reporting, Cacycle (talk) 17:44, 20 March 2010 (UTC)

I'm afraid the change wasn't pushed (see diff), although you mentioned it in the edit summary. --Ciencia Al Poder (talk) 13:11, 22 March 2010 (UTC)

Any follow-up on this? It's a simple change: only swap true to false --Ciencia Al Poder (talk) 17:11, 26 March 2010 (UTC)

Finally fixed in the current version 0.9.90l, thanks for reporting this :-) Cacycle (talk) 10:12, 28 March 2010 (UTC)

Many thanks. --Ciencia Al Poder (talk) 17:28, 28 March 2010 (UTC)

## Newlines in copypasting

There seems to be some problem with copy-pasting in wikEd, which causes newlines to appear: [10], [11]. Some other users have the same problem. I use Firefox 3.6 on Mac OS X 10.5. Ucucha 03:56, 21 March 2010 (UTC)

I'm having the same problem.[12] I use FF 3.6 on Windows Vista. -- Malik Shabazz Talk/Stalk 05:36, 21 March 2010 (UTC)

Hopefully fixed in 0.9.90i, please push Shift-Reload to update. Cacycle (talk) 12:25, 21 March 2010 (UTC)

Yes, no problems now. Thanks! Ucucha 12:26, 21 March 2010 (UTC)

## Extra spaces inserted - part II

Unfortunately the problem with extra spaces is made even worse with 0.9.90h - it now inserts line breaks (example). I believe this is limited to Firefox 3.6; 3.5.x is not affected. GregorB (talk) 10:45, 21 March 2010 (UTC)

Hopefully fixed in 0.9.90i, please push Shift-Reload to update. Cacycle (talk) 12:24, 21 March 2010 (UTC)

I see now it has been reported already... Anyway: looks good to me too, thanks! GregorB (talk) 19:30, 21 March 2010 (UTC)

I've just lived with this glitch since it appeared, perhaps more than a year ago. Since spaces don't affect the rendering (AFAICT) I let it go, and watched for "buggy linefeeds", fixing them as they turned up. I will update this entry if I notice that the plague has come to an end.
Schweiwikist (talk) 11:55, 28 March 2010 (UTC)

## highlight HTML entities

Could HTML entities be highlighted in some fashion, please? There is a discussion here which tangentially covers the reasons for this request. Thank you. -- V = IR (Talk o Contribs) 09:08, 23 March 2010 (UTC)

Good idea, I will add this. Cacycle (talk) 07:44, 25 March 2010 (UTC)

## Proposed integration of parts of wikiEd into the Usability Initiative

I've proposed the addition of parts of wikiEd into the beta prototypes at the Usability Wiki. Just letting you know... ManishEarthTalk o Stalk 09:15, 23 March 2010 (UTC)

## WikEd doubling bug

As reported here, WikEd seems to be causing double uploads of files and doubling of editnotices. I can confirm the editnotice effect (in FF 3.6.2). Algebraist 13:43, 25 March 2010 (UTC)

wikEd is purposely doubling the edit notice below the preview, just in case you missed it when wikEd jumps you to the edit box. At least you aren't getting triple editnotices, heh. Gary King (talk) 14:54, 25 March 2010 (UTC)

There is no preview. I just click "edit" (on WP:HD, for example) and get an edit page with two editnotices (with me jumped to the top of the second), with the "This page is 80 kilobytes long." note between them. Anyway, the real problem is the upload doubling. Algebraist 14:58, 25 March 2010 (UTC)

Okay; I don't know if wikEd still creates the second editnotice or not if there hasn't been a preview yet. Personally, I've got it set up so that I see a preview when I first click edit. Gary King (talk) 15:07, 25 March 2010 (UTC)

I can now confirm that wikiEd is the gadget that causes the double upload error. I've just uploaded this logo after disabling the wikiEd option (using FF 3.6.2). Arteyu ? Blame it on me ! 15:18, 25 March 2010 (UTC)

Well, I've tried another method, by disabling wikiEd and enabling the "Firefogg" option (under User interface gadgets) upon uploading this image.
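On the entity-highlighting request above: one plausible approach (an assumption on my part, not wikEd's actual implementation; the class name is made up) is a regex pass over the displayed text that wraps each entity in a styleable span.

```javascript
// Match named (&nbsp;), decimal (&#160;) and hexadecimal (&#xA0;) entities
// and wrap each in a span that highlighting CSS could then color.
var entityRegExp = /&(?:[a-zA-Z][a-zA-Z0-9]*|#\d+|#x[0-9a-fA-F]+);/g;

function highlightEntities(text) {
  return text.replace(entityRegExp, function (entity) {
    return '<span class="wikEdHtmlEntity">' + entity + '</span>';
  });
}

console.log(highlightEntities('A&nbsp;B, &amp;, &#160;, &#xA0;'));
```

A bare ampersand with no trailing `;` is deliberately left alone, so ordinary "this & that" text is not wrapped.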
Found that the Firefogg option by itself doesn't upload the image twice, but it adds a "== Summary ==" heading to the comment. Arteyu ? Blame it on me ! 15:43, 25 March 2010 (UTC)

The tripling has been fixed in 0.9.90j; the doubling is intentional, so that you see the notices after autoscrolling to the edit field. Cacycle (talk) 08:18, 26 March 2010 (UTC)

So how about the doubling of images? As per the examples above, the image doubling bug occurs only when the user enables both Firefogg and wikiEd together, and I don't think that was intentional; I also think that it has something to do with the doubling of editnotices. Arteyu ? Blame it on me ! 11:05, 26 March 2010 (UTC)

When you do a test upload with only wikEd enabled, do you still see double uploads of images? Cacycle (talk) 13:27, 26 March 2010 (UTC)

Nope, I've tried it just now. See here. Imho, the double upload only happens when the user enables both gadgets. Arteyu ? Blame it on me ! 14:26, 26 March 2010 (UTC)

Fixed in the current version 0.9.90l, thanks for reporting this :-) Cacycle (talk) 10:11, 28 March 2010 (UTC)

Wow! Is it? Thank you very much Arteyu ? Blame it on me ! 19:35, 1 April 2010 (UTC)

## Funny cursor movement

Here's how to reproduce:

1. Go e.g. here
2. Position the cursor at the end of the first line
3. Press Arrow down repeatedly until the cursor reaches the penultimate line
4. Press Arrow down once again
5. The cursor jumps to the third line, instead of the last line

This is reproducible on both Firefox 3.5.8 (Win2k machine) and 3.6.2 (WinXP machine), but - interestingly enough - it works fine in Google Chrome 4.1.249.1042. I'll supply more details if necessary. Backspace also works funny:

1. Edit any page
2. Go to the end of the last line and press Enter
3. Type: 123
4. Press Backspace three times
5. All digits should be deleted, but "1" remains

Again: broken in Firefox 3.5 and 3.6, works fine in Chrome. GregorB (talk) 20:56, 28 March 2010 (UTC)

Problem #1: Confirmed on Firefox 3.6 on a Mac.
Problem #2: Confirmed. However, step #5 is a bit more complicated than that. Type "12345", then hit backspace once: "5" is deleted. Hit backspace once more, however, and nothing happens; so, for some reason, you need to hit backspace twice to delete the penultimate character in that line. Gary King (talk) 21:04, 28 March 2010 (UTC)

Whoa, that was quick... Yes, that's what I'm seeing too. There is also a problem when one tries to merge two lines by going to the beginning of a line and pressing Backspace. It's as if some invisible characters are being deleted, and nothing happens after the first keypress. Probably the same as problem #2. GregorB (talk) 21:34, 28 March 2010 (UTC)

Fixed in 0.9.90m, please Shift-Reload to update. Thanks for reporting this; it was a bug in the new feature that prevents highlighted code from "bleeding out" if you start typing right before or after a colored/highlighted block. Cacycle (talk) 23:09, 28 March 2010 (UTC)

Looks good - could not reproduce any of the above problems in Firefox 3.6.2. Thanks! GregorB (talk) 08:54, 29 March 2010 (UTC)

## Bug with inserting the 4 tildes

When I'm editing a talk page and I click on the link below the editing window labeled "Sign your posts on talk pages: ~~~~", which links to javascript:insertTags('~~~~','','') ("Insert" selected from the popup menu), the 4 tildes get inserted at the beginning of the edit text even if the insertion point is at the end (Firefox 3.6) Hgrosser (talk) 07:00, 30 March 2010 (UTC)

Fixed in 0.9.90n, please update with Shift-Reload. Thanks for reporting this! Cacycle (talk) 20:57, 30 March 2010 (UTC)

Don't know if you still want to support Firefox 2, but in that version, which is the one on library computers at UC Berkeley and required for Macs running Mac OS X 10.3, the 4 tildes (or any other character) insertion doesn't work (does nothing) when the insertion point is at the end of a line (which is where you'd usually insert them). This is both in 0.9.90q and the beta version.
You can get Firefox 2 at ftp://archive.mozilla.org/pub/firefox/releases/2.0.0.20 . As for the beta version, I like the image preview, but it's really annoying to have the wikitext right over the image. Can you make the text wrap around the image? Thanks. Hgrosser (talk) 02:42, 29 April 2010 (UTC)

## Losing content of edit window upon navigating away from page and returning

Is there any way to prevent one from losing the content of the edit window when one navigates away from the page and comes back? When WikEd is disabled, this doesn't happen, in Firefox anyway. Tisane (talk) 08:37, 2 April 2010 (UTC)

It is a bug with the Usability Initiative Beta, see bug 22680. It looks as if they did not have the time to check into it :-( Cacycle (talk) 12:03, 6 April 2010 (UTC)

## Examples of regular expressions

I haven't seen any place to share knowledge about regular expressions for searching and replacing text in Wikipedia using WikEd. I can offer a few examples on User:Chris the speller/regular for those who are new to the subject. If there is another place to find examples, please let me know. I will also accept requests to produce expressions for those who would like help. Chris the speller (talk) 23:00, 2 April 2010 (UTC)

## More funny cursor movements

WikEd is 0.9.90n, tried in Firefox 3.5.9 (likely the same in 3.6 - can't test at the moment).

Problem #1:

1. Click e.g. here
2. Position the cursor at the first character of the first line
3. Press Enter
4. Go to the beginning of the first (now empty) line
5. Type in any character
6. It is displayed in the second line, not in the first, as would be expected

Problem #2 (possibly related):

1. Click e.g. here
2. Go to the end of the last item in the first numbered list ("The cursor jumps to the", etc.)
3. Press Enter
4. Go back to the same position (i.e. end of the last item)
5. Type any character
6. It is displayed in the following line, not in the same line, as would be expected

GregorB (talk) 16:51, 8 April 2010 (UTC)

Thanks, will check :-) Cacycle (talk) 21:25, 8 April 2010 (UTC)

Tried in Firefox 3.6.3, indeed the same behavior. Also, it works fine in Google Chrome 4.1.249.1045, just like the original funny cursor movements. :-) GregorB (talk) 14:11, 9 April 2010 (UTC)

Still having the same problem - any news on this one? GregorB (talk) 19:27, 25 May 2010 (UTC)

It's on the list :-) Cacycle (talk) 19:42, 25 May 2010 (UTC)

Works fine in 0.9.91i, thanks! GregorB (talk) 07:57, 22 July 2010 (UTC)

## wikEd and beta toolbar interference

Hello, have you ever seen (or fixed) this problem of incompatibility shown in the pic? --Perhelion (talk) 20:53, 7 April 2010 (UTC)

Fixed in the next release (0.9.90o), thanks for reporting, Cacycle (talk) 22:27, 14 April 2010 (UTC)

## Problem in French Version

wikEd breaks the French Wikipedia for me, but only in Firefox. This is what it shows me:

Bad Request
Your browser sent a request that this server could not understand.
Size of a request header field exceeds server limit.
Cookie: wikEdAutoUpdate=Thu%2C%2015%20Apr%202010%2016%3A23%3A27%20GMT; wikEdFindHistory=%25E2%2597%258A%25E2%2597%258A%2520l'athl%25C3%25A8te%2520%255B%255Balg%25C3%25A9rie%255D%255Dnne%2520la%2520plus%2520titr%25C3%25A9e%2520est%2520%255B%255BHassiba%2520Boulmerka%255D%255D%2520%2520est%2520devenu%2520la%2520premi%25C3%25A8re%2520femme%2520africaine%2520%25C3%25A0%2520gagner%2520un%2520titre%2520mondial%2520en%2520%255B%255BAthl%25C3%25A9tisme%255D%255D%252C%2520et%2520la%2520premi%25C3%25A8re%2520%255B%255Balg%25C3%25A9rie%255D%255Dnne%2520%25C3%25A0%2520gagner%2520un%2520titre%2520olympique%2520%253F%257C%250A%25E2%2597%258A%25E2%2597%258A%2520AS%2520A%25C3%25AFn%2520Melila%250A%25E2%2597%258A%25E2%2597%258A%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%250A%25E2%2597%258A%25E2%2597%258A%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%250A%25E2%2597%258A%25E2%2597%258A%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%250A%25E2%2597%258A%25E2%2597%258A%25201998-1999%250A%25E2%2597%258A%25E2%2597%258A%2520---%2520align%253D%2522left%2522%250A%25E2%2597%258A%25E2%2597%258A%2520%257C-----%2520align%253D%2522left%2522%2520%2520%250A%25E2%2597%258A%25E2%2597%258A%2520%257C-----%2520align%253D%2522center%2522%2520%2520%250A%25E2%2597%258A%25E2%2597%258A%2520%257C%257C%2520%257B%257B; 
wikEdReplaceHistory=%25E2%2597%258A%25E2%2597%258A%2520l'athl%25C3%25A8te%2520%255B%255Balg%25C3%25A9rie%255D%255Dnne%2520la%2520plus%2520titr%25C3%25A9e%2520est%2520%255B%255BHassiba%2520Boulmerka%255D%255D%2520%2520est%2520devenu%2520la%2520premi%25C3%25A8re%2520%255B%255Balg%25C3%25A9rie%255D%255Dnne%2520%25C3%25A0%2520gagner%2520un%2520titre%2520olympique%2520%253F%257C%250A%25E2%2597%258A%25E2%2597%258A%2520AS%2520Ain%2520M'lila%250A%25E2%2597%258A%25E2%2597%258A%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%250A%25E2%2597%258A%25E2%2597%258A%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%2520%250A%25E2%2597%258A%25E2%2597%258A%25201997-1998%250A%25E2%2597%258A%25E2%2597%258A%2520---%2520align%253D%2522center%2522%250A%25E2%2597%258A%25E2%2597%258A%2520%257C-----%2520align%253D%2522center%2522%2520%2520%250A%25E2%2597%258A%25E2%2597%258A%2520%257C-----%2520align%253D%2522left%2522%2520%2520%250A%25E2%2597%258A%25E2%2597%258A%2520%257C%257C%2520align%253D%2522center%2522%2520%257C%257B%257B%250A%25E2%2597%258A%25E2%2597%258A%2520%2520align%253D%2522center%2522%2520; wikEdSummaryHistory=mise%2520en%2520page%250Aalg%25C3%25A9rie%2520pas%2520tunisie%250AEffectif%2520en%2520Mod%25C3%25A8le%2520!%250A%252BSmain%2520Ibrir%250AModification%2520de%2520la%2520cat%25C3%25A9gorie%2520%255B%255BCat%25C3%25A9gorie%253ANaissance%2520%25C3%25A0%2520Alg%25C3%25A9rie%255D%255D%2520%25E2%2586%2592%2520%255B%255BCat%25C3%25A9gorie%253ANaissance%2520en%2520Alg%25C3%25A9rie%255D%255D%2520(avec%2520%255B%255BMediaWiki%253AGadget-HotCats.js%257CHotCats%255D%255D)%250A%252B%2520H.Bouchache%250A%255B%255Ben%253ACar%2520classification%255D%255D%250Aproposition%2520de%2520cessesion%2520des%2520articles%250Addn%2520defnoun%250A%252Blien%2520Defnoun; wikEdButtonBarFindHidden=0; wikEdRefHide=0; wikEdUseClassic=1; frwikiUserID=585589; frwikiUserName=Clapsus; 
botsDeluxeHistory=ABotSupreme||AHbot||AStarBot||Aca-bot||Adlerbot||AdrilleBot||Aibot||AkhtaBot||Albambot||Alecs.bot||Alexbot||AlleborgoBot||Almabot||AlmabotJunior||AlnoktaBOT||AmaraBot||Amirobot||Analphabot||ArthurBot||AsgardBot||AstaBOTh15||AttoBot||BOT-Superzerocool||BOTarate||BOTijo||Badmood||BenjiBot||BenoniBot||BenzolBot||BetBot||Bocianski.bot||BodhisattvaBot||BokimBot||Bot de Sept Lieues||Bot de paille||BotMultichill||BotSottile||BotdeSki||Botozor||Broadbot||Bub's wikibot||CaBot||CarsracBot||ChenzwBot||Chicobot||Chlewbot||Chobot||CommonsDelinker||D'ohBot||DSisyphBot||DaBot||DanBot||Darkicebot||DeepBot||DirlBot||DixonDBot||DodekBot||DorganBot||Dr Bot||DrFO.Tn.Bot||DragonBot||DroopigBot||DumZiBoT||EivindBot||ElMeBot||EleferenBot||EmausBot||EpopBot||Escalabot||Escarbot||Estirabot||Eybot||FANSTARbot||Ficbot||FiriBot||FlaBot||Gerakibot||GhalyBot||GnawnBot||Gpvosbot||GrouchoBot||GrrrrBot||H92Bot||HAL||HRoestBot||HariBot||HasharBot||HerculeBot||Hexabot||Hxhbot||HyuBoT||Idioma-bot||Ir4ubot||JAnDbot||Jbot||Je suis trop bot||Jotterbot||Jujubot||Kal-El-Bot||KelBot||Ken123BOT||KhanBot||Korribot||Kwjbot||Kyle the bot||LSG1-Bot||LaaknorBot||Lait ribot||Le Pied-bot||Le plus bot||LinkFA-Bot||Liquid-aim-bot||Lockalbot||LordAnubisBOT||Louperibot||Loveless||LucienBOT||Luckas-bot||Ludo Thécaire||MMBot||MSBOT||MagnusA.Bot||Maksim-bot||MastiBot||MauritsBot||MediaWiki default||Melancholie For information, the site was buggy when I made a replacement of words by another on the WikEd Clapsus (talk) 22:02, 15 April 2010 (UTC) As a quick fix please push the [X] button in the top right button bar to delete the cookie. I will have a closer look at it when I find some time. 
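The cookie dump above is the whole story: wikEd persisted its find/replace/summary histories and a long bot list as cookies, and once the combined Cookie header grew past the server's per-field limit (typically around 8 kB; Apache's LimitRequestFieldSize defaults to 8190 bytes), every request was answered with 400 Bad Request. A minimal sketch of detecting and expiring such oversized cookies, roughly what the [X] button accomplishes; the helper names and the 8192-byte threshold are illustrative, not wikEd's actual code:

```javascript
// Return the names of wikEd cookies whose encoded value exceeds maxBytes.
// Written as a pure function over a raw cookie string so it can be tested
// outside a browser; in a browser you would pass document.cookie.
function oversizedWikEdCookies(cookieHeader, maxBytes) {
  return cookieHeader
    .split(/;\s*/)
    .filter(function (pair) { return pair.indexOf('wikEd') === 0; })
    .filter(function (pair) {
      return pair.slice(pair.indexOf('=') + 1).length > maxBytes;
    })
    .map(function (pair) { return pair.slice(0, pair.indexOf('=')); });
}

// Deleting a cookie means re-setting it with an expiry date in the past.
// The setter is injected so the function stays testable; in a browser you
// would pass function (s) { document.cookie = s; }.
function expireCookie(name, setCookie) {
  setCookie(name + '=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/');
}
```

Later wikEd releases moved away from unbounded cookie growth; until then, expiring the history cookies was the only way to get past the 400 error.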
Thanks for reporting, Cacycle (talk) 20:11, 18 April 2010 (UTC) ## WikEd breaks RevisionDelete diff message If RevisionDelete is enabled, you have the revisiondelete right, and you view a diff where one of the revisions has been hidden, a message appears inside the diff table, and WikEdDiffLinkify breaks links in that message. The structure looks like this: <table class="diff"> <tr> <td class="diff-otitle" colspan="2"><div id="mw-diff-otitle1"> [navigation links, left side] </div></td> <td class="diff-ntitle" colspan="2"><div id="mw-diff-ntitle1"> [navigation links, right side] </div></td> </tr> <tr> <td colspan="4"><div class="mw-warning plainlinks"> [RevisionDelete warning message with links which are broken by WikEdDiffLinkify] </div></td> </tr> <tr> [usual diff structure with diff-lineno, diff-marker, diff-context etc. classes] ... Maybe diff linkification could be limited to the diff-context class. WikEd version: 0.9.90n (disabled) Browser: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729) skin: Vector --Tgr (talk) 16:21, 17 April 2010 (UTC) I have fixed wikEdDiff.js as you suggested. Please could you check and report back if it works (Shift-Reload)? Thanks, Cacycle (talk) 21:02, 20 April 2010 (UTC) You also disabled diff linkification outside the diff-context for people without revisiondelete, like me, which is rather annoying as the links were very useful. Any chance they're coming back? Ucucha 20:34, 22 April 2010 (UTC) Has already been fixed yesterday :-) (Shift-Reload this link to update). Cacycle (talk) 06:54, 23 April 2010 (UTC) Thanks. Yes, it's working now. Ucucha 11:09, 23 April 2010 (UTC) ## AfD bug See thread at Wikipedia:Administrators'_noticeboard/Incidents#Someone_has_broken_AfD ... I am really stumped on this one. My browser is Firefox 3.0 (kind of behind the times, I know) and I'm using monobook (ditto).
It says there's a "loading error" when editing any AfD page (but every other page seems to be fine) and the problem disappears when I disable WikEd (and refresh the cache). --Soap-- 21:13, 17 April 2010 (UTC) I'm using Modern and Firefox 3.6.3. I don't get an error message, but the editbox comes up blank, and any edits I make don't appear. Disappears when WikEd is disabled. Might be related: on any page that has a message on the edit page (as ANI has), the edit message appears in between the editbox toolbar and the WikEd toolbar. Hadn't noticed it doing this before, but perhaps I just missed it. Elen of the Roads (talk) 21:40, 17 April 2010 (UTC) I have reverted wikEd back to the previous version, please push Shift-Reload to update. Sorry for that, Cacycle (talk) 00:00, 18 April 2010 (UTC) ## wikEd fails on some articles; script busy or stopped responding Script: http://en.wikipedia.org/w/index.php?title=User:Cacycle/wikEd.js&action=raw&ctype=text/javascript:11085 It fails every time on West Ham United F.C. with Firefox 3.6.3. Chris the speller (talk) 19:46, 25 April 2010 (UTC) Works fine for me - please could you post a complete bug report (see top of this page), maybe it is some kind of incompatibility. Thanks, Cacycle (talk) 20:05, 25 April 2010 (UTC) I removed custom button code from my monobook.js, and now I can edit this and other pages, such as Racine Unified School District, that gave me trouble. You may look at the February 23 version of my custom code if you like, to see if I did something exceptionally stupid. The strange thing is that the custom code caused me no problems for two months, and then suddenly I hit 4 or 5 articles in the last couple of days. I'm OK now, so don't hit your head against a wall, unless you want to satisfy your curiosity. I'm on Windows Vista, and the only Firefox add-ons are Linky 3.0.0, Java console 6.020, .NET Framework Assistant 1.1. and Norton IPS 1.0. If you want me to dig up any other bug report info, I can do that.
Chris the speller (talk) 18:44, 26 April 2010 (UTC) Even works with your code for me... Cacycle (talk) 22:08, 26 April 2010 (UTC) I only use that custom button code once in a thousand articles. It seems like delving further into this would be a great waste of your talent. Thanks anyway, and happy editing! Chris the speller (talk) 03:50, 27 April 2010 (UTC) ## Beta version I absolutely love the new HTML character entity hiding, as well as the click-to-hold-open feature! You should change the rollover text for the button so that it mentions char. ent. hiding as well as [REF] and [TEMPL]. I see now that it also gets rid of text over image preview -- fantastic! One bug, though: when I apply <sup>...</sup> to something within the ref block using your toolbar's button, it completely messes up wikEd's conception of where the boundaries of the ref block are, and I have to use the textify button to fix it. In the info block on the beta version, you got rid of the instructions on how to install it. Perhaps you could restore these, as well as mentioning that one could switch between versions easier by just changing the name of the .js file in one's Monobook.js, rather than re-enabling it as a gadget. Hgrosser (talk) 03:38, 29 April 2010 (UTC) Oh, and the image preview doesn't work when it is referenced by some templates w/o image tag, such as Phi Beta Kappa Society Hgrosser (talk) 03:55, 29 April 2010 (UTC) ## Requested 'bug log' • Your wikEd version: 0.9.90n GM • Your browser id: "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3" • Error console errors: way too many to list... is there something specific I should be looking for?
no errors, but a lot of warnings • Which browser add-ons have you installed: Adblock Plus, BitDefender QuickScanner, Firebug, Greasemonkey, .NET Framework Assistant, Web Developer • Which Wikipedia skin do you use: Monaco • Are you using the experimental Wikipedia Beta user interface: no • Which user scripts have you installed on your Special:Mypage/skin.js: none • Which operating system do you use: XP SP 2 • Describe the problem, please be as specific as possible about what is wrong: detailed at [13]; same problem as before. Won't allow me to edit in the edit box once the mouse interacts with the box. Moving around via tab and arrow keys works fine. • Steps to reproduce: Click edit and move mouse over edit field and click or scroll mouse wheel. • What exactly happens if you follow these steps: edit box renders itself useless. -- 68.101.94.165 (talk) 06:33, 12 May 2010 (UTC) What are your Wikia settings, userscripts, and gadgets? Have you tried to uncheck Wikia's editing extensions and additions? Cacycle (talk) 20:27, 24 May 2010 (UTC) ## Gadget installation still uses old version When I tried to install wikEd as a Gadget, the old version, wikEd version 0.9.90 (forgot the letter, but the date is Apr. 19, 2010) appears. Both wikEd.js and wikEd_dev.js added to my .js file give the same new version. Hgrosser (talk) 01:00, 14 May 2010 (UTC) There were some delays with the new version (0.9.91x). Seems to be working now, but I will give it a few days more beta-testing... Cacycle (talk) 20:24, 24 May 2010 (UTC) ## Edit Notices • Version 0.9.90Q G (April 19 2010) • Firefox/3.6.3 (.NET CLR 3.5.30729) • No Beta (unless it's default) • Both script pages listed below "Here" is the quick version of my problem. I think this is enough information to explain it. Here are some links for research purposes.
• This is what I see (assuming your version is the same) HERE • Here is my edit notice Template for my User Pages: User Page Template • Same for my Talk Pages: Talk Page Template Here are my .js Pages (I might have the wrong script) • Monobook: Monobook • Vector: Vector I sure hope there's a fix for this, I love wikEd. Please, I need help with this; if there's no fix, there needs to be a patch! Thank you very much. If you need to make any changes to any of my pages, please do; of course I need to know what was done so I can work with it in the future... Mlpearc pull my chain Trib's 19:11, 24 May 2010 (UTC) • I just read this doubling bug. I'm using the "gadget" form do I need to switch to a java script ? If so could you please leave a link to the page ? Or is something else ? You probably have to remove the "<source lang="JavaScript">...</source>" tags as these are not valid javascript and are probably breaking your code (check your browser's error console for JavaScript errors). I would not suggest disabling the doubling because you might miss important messages that are sitting on top of the page. Cacycle (talk) 20:21, 24 May 2010 (UTC) No, removing "<source lang="JavaScript">...</source>" tags did not work. But there must be a patch for this override ? Mlpearc pull my chain 'Tribs 17:49, 25 May 2010 (UTC) Did you push Shift-Reload to update to the new skin.js versions? Cacycle (talk) 19:47, 25 May 2010 (UTC) Yes, after each change with Monobook and Vector. Mlpearc pull my chain 'Tribs 20:55, 25 May 2010 (UTC) • Could a patch be written ? Mlpearc pull my chain 'Tribs 04:40, 26 May 2010 (UTC) You have an error in your vector.js (missing "]"), please check your browser's error console and fix your code. Again, I suggest not to use that override; better, do not (mis)use an edit notice for your talk page. Cacycle (talk) 06:56, 26 May 2010 (UTC) ## WikEd completely broken! I don't know what happened, but all of a sudden I can't edit with WikEd at all.
I'm using Firefox 3.0 (old, I know, I'll test with other browsers when I get a chance), and it seems to happen on all skins that support WikEd. What happens is that the little toolbar above the edit window is magnified to the size of an entire screen, a horizontal scrollbar appears at the bottom of the window, even if it isn't needed, pressing [tab] to get to the Edit summary field doesn't work, and all edits fail to go through even if it looks like they did. (Previewing an edit shows no changes.) This all happened suddenly around 22:00 GMT on May 24. I appreciate all the work you've done on WikEd, and if this problem is just isolated to me (which I imagine it might be, since if it was global I'd expect a flood of comments here), I will try to find out the source of the problem and do what I have to do to get it fixed, even if it means upgrading/switching to a new browser. Thank you. --Soap-- 12:10, 25 May 2010 (UTC) Comment: Problem not appearing on another computer w/ very similar browser (FF 3.0.15). Still checking other stuff. 3 ¢ soap Talk/Contributions 12:36, 25 May 2010 (UTC) It seems to happen only on Firefox 3.0 for me. Again, the skin I use doesn't seem to matter, so I don't think this is a Vector/Monobook thing. Unless a bunch of other people start complaining I wouldn't worry about this. I'll try upgrading FF and see if it helps. --Soap-- 13:09, 25 May 2010 (UTC) Hmm, upgrading the browser didn't help. It only seems to happen on this user account (not 3centsoap) so it might be something to do with the cache, but it doesn't seem possible to clear the cache. I'll just have to stop using WikEd. --Soap-- 13:51, 25 May 2010 (UTC) Okay, this is the last message I hope. I restored functionality of WikEd by unclicking in the Prefs page under editing "Widen the edit box to fill the entire screen" and "Enable enhanced editing toolbar", "Enable dialogs for inserting tables, links, and more", and "Enable navigable table of contents".
Yet, none of those features caused any problems on another computer, even with the same browser, skin, css, js, and every other conceivable setting the same. --Soap-- 14:20, 25 May 2010 (UTC) ## Unresponsiveness with certain articles... ...such as D8 (Croatia). Attempting to edit at various places in the article leads to general unresponsiveness and high (or at least higher-than-normal) CPU usage. Can be reproduced with Firefox 3.6.3 on both Windows 7 and Windows XP SP3, but seems to work fine with Google Chrome 5.0.375.55 on Windows 7. GregorB (talk) 13:59, 4 June 2010 (UTC) I cannot reproduce this any more on either the June 4 or the current version of the article with Firefox 3.5.11, so it is apparently fixed now. GregorB (talk) 21:36, 23 August 2010 (UTC) ## Category sort Could you please fix this bug? It's a really annoying one. --Schnark (talk) 09:02, 14 June 2010 (UTC) Fixed in the next release. Cacycle (talk) 21:12, 21 June 2010 (UTC) Fixed in 0.9.91i. Cacycle (talk) 20:43, 22 July 2010 (UTC) ## Bug: Editor locks on Programmer's Wiki mainpage wikEd 0.9.90q (April 19, 2010) Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.3) Gecko/20091020 Ubuntu/9.10 (karmic) Firefox/3.5.3 No errors were displayed. My only add-on is Ubuntu Firefox Modifications (unless you're counting plug-ins as well as extensions). The skin in use is Monaco. I am not using Wikipedia Beta. Your script is the only one installed as my userscript, and I believe there are no custom scripts used wiki-wide. I am running Ubuntu 9.10 "Karmic Koala." Manual editing and manual scrolling within the editor are completely disabled. Undo/redo and the buttons, both the standard ones and the special wikEd ones, still work, and effect scrolling when the text they affect is outside the current editor viewport. I managed to accidentally highlight a character while playing with the buttons and undo, but otherwise this is not possible and I can't manually change the highlighted text anyway.
Fortunately I can disable/enable the script while on the page in question, a very wise feature. The editing page also displays a notice of anti-anon protection twice. To reproduce the problem, create a Wikia account, paste your script here (or skip this step and use Greasemonkey), and visit the problem page's editor. I also noticed the behavior on a subpage as well as the original with only a redirect in it, but not a duplicate page. The name of the main page happens to also be the alias for the Project: namespace. When you open the editor for the page in question with wikEd enabled, you should experience the same issues as I described above. --Jesdisciple (talk) 16:38, 14 June 2010 (UTC) It seems to work for me with the newest wikEd version (0.9.91j) using Greasemonkey as well as monaco.js installation. Do you still have problems? Cacycle (talk) 20:42, 22 July 2010 (UTC) ## Not working For a private (non-web) wiki on my computer, I've tried to install the latest version of WikEd following the steps of the procedure 'Wikis without internet connection', but I can't get it to work. I'm using: Windows XP Professional, MediaWiki 1.15.4, IIS 5.1, PHP 5.3.2, MySQL 5.1 Essentials, phpMyAdmin 3.3.3 and Mozilla Firefox 3.6.3. The memory limit is ok; I added '$wgUseSiteJs = true;' to the local settings (and also tried '$wgAllowUserJs = true;'); created the required pages ('wikEd.js', 'wikEd current version', etc.) as well as the optional 'AutoWikiBrowser typos', but not the translation page; manually uploaded the 88 images; copied the installation code at 'MediaWiki:Common.js', replacing the relevant lines with http://localhost/mediawiki and http://localhost/mediawiki/images; and protected the .js pages. After restarting the IIS server and refreshing the Firefox cache, no WikEd logo or WikEd buttons appear in the editing box. 
The JavaScript error report says: Error: syntax error Source file: http://localhost/mediawiki/index.php?title=-&action=raw&smaxage=0&gen=js&useskin=monobook Line: 28, Column: 113 Source code: var wikEdAutoUpdateUrl = 'http://localhost/mediawiki/index.php?title=wikEd_current_version&action=raw&maxage=0'; } Any thoughts? Cavila (talk) 10:58, 15 June 2010 (UTC) Incidentally, I've also tried setting wikEdAutoUpdate to 'false', but that gives us another error: Error: syntax error Source file: http://localhost/mediawiki/index.php?title=-&action=raw&smaxage=0&gen=js&useskin=monobook Line: 31, Column: 106 Source code: var wikEdRegExTypoFixURL = 'http://localhost/mediawiki/index.php?title=AutoWikiBrowser_typos&action=raw'; } Cavila (talk) 12:36, 15 June 2010 (UTC) Maybe it is the "}" at the end of the lines? Cacycle (talk) 19:55, 15 June 2010 (UTC) Odd as that may sound, it is! I copied the text just as I found it at User:Cacycle/wikEd installation#Wikis without internet connection. Do you think that those braces were intended to be at the line directly below or did javascript change its preferences (I'm only guessing)? The extra images do not (yet) appear on my screen, possibly because the uploads were automatically stored in subfolders of /images/, but I'm happy to have short descriptions instead, as I have now (the reason being that it works better with my display preferences). Thanks! Cavila (talk) 21:24, 15 June 2010 (UTC) I have fixed the code, sorry for that. Also make sure to read the customization options at the top of the wikEd code. For the image page you have to use the newest release 0.9.90r. Cacycle (talk) 21:35, 15 June 2010 (UTC) ## how to call a function in onkeypress event Thanks for the great tool. I wish to create a transliteration/language editor, but I still don't know how to call a function on this editor's keypress event. I have the js files and want to call a function (addCharKeyPress(thisobj, keypress,engToTam)). Please help me.
Mahir78 (talk) 12:53, 29 June 2010 (UTC) Please could you elaborate on what exactly you want to accomplish? Thanks, Cacycle (talk) 20:55, 29 June 2010 (UTC) ## wikEd is only working sometimes Right now, in this post, it is not working at all. However, I just opened Wikipedia talk:WikiProject Professional wrestling and it was fine. Before that, I was editing Wikipedia:Administrator's Noticeboard and it wasn't working either. I thought it was only on certain pages, but I have now noticed it is completely random. Help? Feedback ? 03:28, 3 July 2010 (UTC) ## variable misnaming, I think. In the order asked: The balloon that pops up for that widget next to "log out" reads "Loading error - wikEd 0.9.90r G (June 14, 2010) Click to disable" (I was able to get the properties of it and copy/paste that text in FF) Not that it would matter much, but my one browser says it's "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.20) Gecko/20081217 Firefox/2.0.0.20", and the other "Opera/9.80 (X11; Linux i686; U; en) Presto/2.6.30 Version/10.60" (wikiEd does not work in either/both) ...and now the key: the FF error console says: "Error: tag is not defined Source File: http://en.wikipedia.org/w/index.php?title=User:Cacycle/wikEd.js&action=raw&ctype=text/javascript Line: 12178" From what I saw, you define a variable "tags" and then reference tag[i]; don't know if that's what you wanted, and somehow I don't think so. I think you wanted "tags[i]", not "tag[i]". Y'know, considering what I found from FF's error console, I didn't try disabling add-ons. Sorry 'bout that, but I didn't think it was relevant. I use the Monobook skin. I actually tried putting in a simple alert() call into monobook.js, but I never saw an alert. Hmmmm...dunno why not. What happens is, I no longer see the wikiEd toolbars, just the pretty much standard ones (bolding, italic, link, advanced, special characters, help, and so on.)
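The "tag is not defined" error reported in this thread is a plain JavaScript undeclared-identifier bug: `tags` is declared, but the loop body reads `tag[i]`, and a ReferenceError only fires when that code path actually runs. A hypothetical reduction (not wikEd's real code) showing the failure mode and the fix:

```javascript
// Buggy shape: `tags` is the declared parameter, but the body reads `tag`,
// which was never declared anywhere, so the first call down this path
// throws "ReferenceError: tag is not defined" at run time.
function firstTagBuggy(tags) {
  for (var i = 0; i < tags.length; i++) {
    return tag[i]; // ReferenceError the moment this executes
  }
  return null;
}

// Fixed shape: reference the declared array.
function firstTag(tags) {
  for (var i = 0; i < tags.length; i++) {
    return tags[i];
  }
  return null;
}
```

Because the error is only raised on execution, the script loads cleanly and dies later, which matches the symptom above: the toolbar simply stops appearing once the broken branch is hit.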
I might also add that for several months now, using the case-changing widget (at least in FF) loses the selection. In other words, after selecting some text and clicking the case-change widget, it would for example change all the highlighted text to upper case, but then the text just operated upon would become deselected. Therefore I couldn't step to the next "case case", for example all lower, without reselecting that text. It didn't use to do that; the text would remain selected and highlighted in the textarea, up until a few months ago anyway. • hmmm....on further inspection...Opera is not supported. OK. Fair enough. But as far as the deselection problem: "Error: [Exception... "Component returned failure code: 0x80070057 (NS_ERROR_ILLEGAL_VALUE) [nsIDOMRange.setStart]" nsresult: "0x80070057 (NS_ERROR_ILLEGAL_VALUE)" location: "JS frame :: http://en.wikipedia.org/w/index.php?title=User:Cacycle/wikEd.js&action=raw&ctype=text/javascript :: anonymous :: line 6447" data: no] Source File: http://en.wikipedia.org/w/index.php?title=User:Cacycle/wikEd.js&action=raw&ctype=text/javascript Line: 6447 This could very well be a FF bug that was fixed in later versions. As you have seen, this is a rather old version. That error looks nasty. Still, it could have been an addon. This line looks like it's frobbing the range object. Oh well. Oddly enough, I started experimenting with that a little bit, by going in and disabling wikiEd in my prefs, and reenabling it, then editing my user page. Wikipedia showed the wikiEd toolbar once, but now it pretty much doesn't show it anymore. Odd indeed that it seemed to work once. • It also appears I may have toggled the little beastie on and off with that widget near the top right of the page. Sometimes I click on it and the wikiEd toolbar will appear. Hmmm...dunno what causes that "load error" stuff though. Hope that helps. -- Joe (talk) 23:07, 5 July 2010 (UTC) Thanks, I have fixed that typo in the next release.
The selection code will change in the next release. You probably want to update your browser anyway for security reasons. Cacycle (talk) 21:22, 7 July 2010 (UTC) Fixed in 0.9.91i. Cacycle (talk) 20:32, 22 July 2010 (UTC) ## Error Vector on Windows XP, no clue what version my WikEd is: I tried the Search and Repla{{red|[[user:ce feature, {{red|[[user: replacing {{red|[[user: with [[user: and I got this: hidividedby5 (talk) 22:19, 6 July 2010 (UTC) Fixed in next release, thanks for reporting. Cacycle (talk) 21:35, 7 July 2010 (UTC) Fixed in 0.9.91i. Cacycle (talk) 20:34, 22 July 2010 (UTC) ## Bug: Find and replace, code highlighting not working on wikia.com Apologies, it was my error - managed to get the functionality back by removing a global javascript setting RandomTime 00:36, 9 August 2010 (UTC) ## Syntaxhighlight tag Hello, thanks for this great tool. I have seen in the wikEd source code that it takes this tag into account for preview display. But when I use the "<>" button to check HTML, the WikifyHTML function removes the syntaxhighlight tag. This tag is not in the list of allowed wiki tags (it's after this line //<> remove not allowed tags in the source code of wikEd). Can you add this tag, please? Instead I can use the source tag which is used by the SyntaxHighlight_GeSHi extension. But another issue is that source allows writing HTML tags like this: So wikEd removes the tag myTags. But all tags nested in a source tag or syntaxhighlight tag are transformed into text by the SyntaxHighlight_GeSHi extension, so I think they should not be deleted by wikEd. Do you have an idea for this issue? Thanks (and sorry for my English) --Gobygoba (talk) 22:17, 10 October 2010 (UTC) I have just added the syntaxhighlight tag to the next release of wikEd. I am not sure what you mean by myTags. Visible text will not be deleted by wikEd, only invisible formatting. Cacycle (talk) 21:54, 11 October 2010 (UTC) (Added to 0.9.95b, Cacycle (talk) 21:32, 14 October 2010 (UTC)) Thanks.
I just want to say that the tag myTags is deleted by the "<>" button. For example, if you edit this section and use the "<>" button, then the tag myTags is deleted. Yet this tag is not an HTML tag; it's simple text, because it's in between source tags (so it is parsed by the syntaxhighlight extension). So I think that it should not be deleted by wikEd. But it may be complex to take into account. --Gobygoba (talk) 21:19, 17 October 2010 (UTC) Ah, I see. The [<>] button uses the same logic as the [W]ikify button. Unknown tags are stripped during wikification and therefore also during html fixing. This is done with text pattern matching, not with real parsing, and nested tags are not handled differently. Cacycle (talk) 21:03, 19 October 2010 (UTC) ## Magic words Is it possible to create magic words with wikEd? A Word Of Advice From A Beast: Don't Be Silly, Wrap Your Willy! 17:37, 14 October 2010 (UTC) I'm not sure what you mean, please could you elaborate? Thanks, Cacycle (talk) 21:31, 14 October 2010 (UTC) ## Doubled banners and categories Sorry for my English, I'm French and this is my first question here. When modifying an article in the editor with three windows, but without the wikEd preference, when I preview or publish, the banners and infoboxes of the top edit window also appear at the beginning of the main edit window; and the portals and categories appear at the end of the main edit window and in the bottom edit window. That causes doubled text when one previews or saves. Of course, one can temporarily empty the top and bottom windows to benefit from this editor, but correct behavior would be better. Thanks in advance. --Ricima (talk) 14:20, 16 October 2010 (UTC) One small point: in wikEd the summary field is too short, perhaps "width:500px;" or "right:50px;" would be better. --Ricima (talk) 14:20, 16 October 2010 (UTC) For the main question: I am not exactly sure what you mean.
Is it about duplication of warning boxes from the top of the page above the edit box? This is a feature, as you would not notice them otherwise with wikEd's autoscrolling to the edit box. As for the summary field: its size is always the maximum length (it is dynamically scaled). Which browser and OS are you using? Cacycle (talk) 20:58, 19 October 2010 (UTC) The bug exists only when the internal editor has three windows, that is to say: • in the internal editor, not in wikEd, • one edit area for banners and infoboxes, one for the main text, one for categories, • on fr.wikipedia, not on en.wikipedia, • in the main namespace and not when editing a user page, • when editing the whole page and not only a section. • I don't know the version of the internal editor now on the French WP. • I use the latest Firefox 3.6.10 on the latest MacOSX 10.6.4. (Today, with wikEd 0.9.95b G (13 Oct 2010), the summary is OK.) --Ricima (talk) 12:44, 20 October 2010 (UTC) Sorry, but I have no idea what the first problem is. Where can I find edit pages with three windows? Is the problem caused by wikEd? Maybe somebody from the French Wikipedia could jump in and help translating the problem. Cacycle (talk) 18:51, 30 October 2010 (UTC) By email I explained to you how to edit with three windows. I changed my English account for SUL from Ricima to : --Rical (talk) 08:35, 31 October 2010 (UTC) Modify your French preferences (gadgets, without wikEd), then edit a French page with banners and categories; the edit page has three windows. Then preview two or three times and the banners are multiplied. --Rical (talk) 09:53, 2 November 2010 (UTC) The problem was the less edit clutter gadget. wikEd now stops loading and displays an error message. Cacycle (talk) 22:01, 8 December 2010 (UTC) Sorry, but when I just tried, the issue continues with my account, and the French page gives the three lines above. --Rical (talk) 22:54, 11 December 2010 (UTC) That is because you have selected the less edit clutter gadget under your preferences. This has nothing to do with wikEd.
Cacycle (talk) 12:15, 12 December 2010 (UTC) It's OK. Thanks. --Rical (talk) 07:19, 13 December 2010 (UTC) • wikEd.programVersion = '0.9.95b' • Browser: Chrome 6 • Error message: Uncaught TypeError: Cannot call method 'replace' of undefined • line : 7216 This happens only in the case of templates; title is undefined. Please fix this.--Frozen-mikan (talk) 17:11, 17 October 2010 (UTC) Fixed in wikEd 0.9.95c and wikEdDiff 0.9.13b. Thanks, Cacycle (talk) 22:55, 1 November 2010 (UTC) ## [REF] and [TEMPL] hiding It seems only templates whose name is more than one word are hidden. Is there any way to hide all templates?--Netheril96 (talk) 14:44, 18 October 2010 (UTC) Templates are not hidden if they do not have parameters or are shorter than wikEdConfig.templNoHideLength characters. If you set var wikEdConfig = {}; wikEdConfig.templNoHideLength = 0; you would at least hide all templates with parameters. See also User:Cacycle/wikEd customization. There is currently no switch to always hide even short no-parameter templates. Cacycle (talk) 20:36, 19 October 2010 (UTC) With the newest release 0.9.95c all templates are hidden, independent of their length and number of parameters. Cacycle (talk) 22:54, 1 November 2010 (UTC) • wikEd.programVersion = '0.9.95b'; Now the URL gives bad links, and the title is an encoded string. --Frozen-mikan (talk) 09:37, 20 October 2010 (UTC) Thanks, I have added this to wikEd 0.9.95c and wikEdDiff 0.9.13b. Cacycle (talk) 22:52, 1 November 2010 (UTC) ## Preview needs two clicks on fr.wikipedia • In wikEd, when I modify a page, one click on Preview reloads the page, but from before the modification. Another click reloads and shows the modification. • I tried emptying the cache, without success. • This has happened, for about a week, only on fr.wikipedia, not on en.wikipedia, and not on fr.wikibooks and others, • in Firefox 3.6.11 and 3.6.12 and Safari 5.0.2, on an iMac with MacOSX 10.6.4 --Rical (talk) 11:00, 30 October 2010 (UTC) Fixed in wikEd 0.9.95c, now waiting for the Wikipedia software fix to get live.
In the meantime, please disable "Use live preview (requires JavaScript) (experimental)" in your Wikipedia edit preferences. Cacycle (talk) 22:51, 1 November 2010 (UTC)

wikEd or not wikEd, that is the question. I chose it, understand, and thank you. --Rical (talk) 10:24, 2 November 2010 (UTC)

## Russian Translate

This is a translation of wikEd into Russian:
• Interface: User:IGW/wikEd_international_ru.js (for this version, the latest at this moment);
• Main page (partly): ru:user:IGW/wikEd (for this version);
• Help: ru:user:IGW/wikEd ?????? (for this version). --IGW (talk) 13:37, 31 October 2010 (UTC)

Cool, thanks a lot! Cacycle (talk) 22:59, 1 November 2010 (UTC)

We have our own wiki, and in IE8 I get the following JavaScript error (in Dutch; it says 'wikEd.head.baseURI' is empty or not an object). In Chrome I don't get the error. The problem is that we have a lot of wiki readers who use IE. The editors use Chrome.

Bericht: 'wikEd.head.baseURI' is leeg of geen object
Regel: 1747
Teken: 2
Code: 0
URI: http://en.wikipedia.org/w/index.php?title=User:Cacycle/wikEd dev.js&action=raw&ctype=text/javascript

What can I do? Ploegvde (talk) 14:14, 3 November 2010 (UTC)

You are using the test version "wikEd_dev.js", which is outdated and is used for experiments and tests. The correct URL is http://en.wikipedia.org/wiki/User:Cacycle/wikEd.js. The baseURI bug was fixed a few days ago. Cacycle (talk) 23:56, 3 November 2010 (UTC)

Thanks, problem solved. Ploegvde (talk) 11:08, 4 November 2010 (UTC)

Usage of the test version is now indicated through the main logo on top of the page in 0.9.96. Cacycle (talk) 23:32, 6 November 2010 (UTC)

## broken on Appropedia

Hi Cacycle, Something has changed where we use wikEd on our Appropedia page: http://www.appropedia.org/index.php?title=Wikedbox&action=edit E.g.
if I copy a link on Wikipedia to the Singapore article, "wikify" converts it to: title="Singapore" href="http://en.wikipedia.org/wiki/Singapore" _moz_dirty="". I think the best option for us is probably to directly copy the code from an older version that worked, and hack it to show the wikify button and as little else as possible (since that's all we use it for, on that page). I'll look at this later when I have time - if you have any tips, they're very welcome. Thanks --Chriswaterguy talk 05:44, 4 November 2010 (UTC)

Looks easy to fix for me, I will check later. Cacycle (talk) 09:15, 4 November 2010 (UTC)

Fixed in 0.9.96. Sorry, Cacycle (talk) 23:30, 6 November 2010 (UTC)

Thanks. It seems to handle links fine now, but has trouble with other formatting, such as headings and indents. I've had trouble finding the exact pattern and when it's triggered, but the most common thing is that if I select a section including a header or maybe an indent and click wikify, it hangs until I get a browser alert about an unresponsive script, which lets me stop it running. For cases where it works - I've noticed bold and links - the conversion is almost instant, and there is no problem. I tried both in the Appropedia page and on Wikimedia (using the .js page in userspace, not the gadget). Thanks. --Chriswaterguy talk 08:05, 11 November 2010 (UTC)

In order to fix this I need to replicate the problem. Please could you find me an article and the exact text fragment (still in the clipboard after a crash) that crashes the script? Thanks in advance, Cacycle (talk) 09:14, 13 November 2010 (UTC)

I've been looking at this again - I seem to be getting it mixed up with the "#Firefox 4 clash?" issue, above. The lack of a consistent pattern that I can make out has been confusing. I'll comment again in that section.
--Chriswaterguy talk 16:29, 3 December 2010 (UTC)

## wikEd entirely gone

Safari 5.0.2 (6533.18.5) - wikEd just doesn't show up at all anymore. It's on in my preferences, and I purged the website a couple of times, but wikEd doesn't show up anymore. Xeworlebi (talk) 09:48, 10 November 2010 (UTC)

Oops, should be fixed in 0.9.96a. Sorry, Cacycle (talk) 22:03, 10 November 2010 (UTC)

• My browser is Google Chrome 8.0.552.200 (auto-updated). I use wikEd on FFXIclopedia with the On-wiki Complete method. Yesterday, wikEd disappeared -- along with the regular edit bar. All I had left was the edit box. Today, the standard editing tools reappeared, but wikEd has not. I double-checked that my monobook.js is still intact, and it is. I also inspected the page with Google Chrome (Developer Tools). The script tag is being rendered, and the loaded source shows a version of 0.9.96a. The only error in the Console is for a Wikia JS in the restoreWatchlistLink function. I can't pinpoint anything that would stop yours. Any ideas? AbbydonKrafts (talk) 14:57, 18 November 2010 (UTC)

Happening to me too (Firefox, on es.pokemon.wikia.com). I think I know what the problem is. Debugging the code, it reaches line 1888 (wikEd.AddEventListener(window, 'load', wikEd.Setup, false);) but wikEd.Setup is never called. Calling wikEd.Setup() manually initializes wikEd correctly. This could happen because wikEd.AddEventListener relies on the browser event handling, and maybe when this line is reached the page was already loaded, so the load event isn't fired again. You probably have to detect if the load is complete and fire the functions that you want to attach to window.load immediately, maybe having a separate function to add events to window.load that would do this special check. --Ciencia Al Poder (talk) 20:42, 20 November 2010 (UTC)

Forum:Anyone still able to use WikEd?
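The workaround Ciencia Al Poder describes above — fire the handler immediately when the page has already finished loading, since the `load` event will never fire again — can be sketched like this. This is an illustration, not wikEd's actual code; `addLoadHandler` and the document/window stubs are made up so the logic can run outside a browser:

```javascript
// Register a 'load' handler that also works when registration happens
// after the page has already loaded. 'doc' and 'win' are minimal
// stand-ins for document/window so this sketch runs anywhere.
function addLoadHandler(doc, win, handler) {
  if (doc.readyState === 'complete') {
    // Page already loaded: 'load' is gone, call the handler directly.
    handler();
  } else {
    win.addEventListener('load', handler, false);
  }
}

// Demo with stubs: a page still loading, then one already loaded.
var fired = [];
var pendingWin = {
  listeners: [],
  addEventListener: function (type, fn) { this.listeners.push(fn); }
};
addLoadHandler({ readyState: 'loading' }, pendingWin, function () {
  fired.push('deferred');
});
// Simulate the browser firing 'load' later:
pendingWin.listeners.forEach(function (fn) { fn(); });

addLoadHandler({ readyState: 'complete' }, pendingWin, function () {
  fired.push('immediate');
});

console.log(fired.join(','));  // deferred,immediate
```

In a real page, `doc` and `win` would simply be `document` and `window`; the only point is the `readyState === 'complete'` check before falling back to `addEventListener`.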
<=?©TriMoon(TM) Talk @ 18:08, 21 November 2010 (UTC)

I will check into this as soon as I find some time (probably later this week - sorry). Cacycle (talk) 07:28, 22 November 2010 (UTC)

I don't know why, but now it's working for me... Before writing here I did several reloads of the cache and it didn't work. --Ciencia Al Poder (talk) 21:30, 22 November 2010 (UTC)

(I forgot to log in) I was also missing wikEd on Wikia for a few days, but it's working for me now. I think it was a Wikia issue and not a wikEd issue, as there were a couple of other JS things that stopped working at the same time as wikEd and were fixed at the same time wikEd started working again. 99.139.146.60 (talk) 01:03, 23 November 2010 (UTC)

• It is also working for me now. I have to agree that the problem with the Wikia scripts was halting the wikEd script. Perhaps there is a way to get wikEd to work even if an unrelated script bombs out? AbbydonKrafts (talk) 14:17, 23 November 2010 (UTC)

The next release would have loaded on Wikia (at least with Firefox 3.6)... Cacycle (talk) 22:58, 24 November 2010 (UTC)

## Automatically highlighting text matching predefined string

With my dabfix tool it would be incredibly useful to highlight what text was added by machine. The text strings are already stored to do automatic removal. Is there a quick way of implementing this (i.e. I don't feel like reading through your parser code)? -- Dispenser 04:55, 24 November 2010 (UTC)

Please could you explain a bit more what you are trying to accomplish? I am not sure if I understand how a wikicode parser could help highlighting text added by a tool. Do you want general syntax highlighting? Do you have links to the dabfix tool and its help page? Thanks, Cacycle (talk) 22:48, 24 November 2010 (UTC)

Dabfix is a Toolserver tool with WikEd integration for improving disambiguation pages. It adds a description to the entry which needs to be copy edited by a human.
Sometimes it is hard to tell which entries to copy edit when there is a large number of them. So I would like WikEd to highlight the added text as shown below:

*[[Cirrus (rocket)]], a German sounding rocket

You can try the tool on William Fowler to get an idea of what it does and how well WikEd integrates into it. -- Dispenser 05:09, 25 November 2010 (UTC)

I have added support for keeping html code in version 0.9.97; just use span, div, ins, or del tags with an id, name, or class name starting with "wikEdKeep". See the following code for how to add your highlighted text to the wikEd edit area. Please feel free to ask here if you have any questions. Cacycle (talk) 00:53, 28 November 2010 (UTC)

One small problem: UpdateTextarea(html) removes the newlines in Firefox and Chrome when using the regular textarea. You can try it out on the Fowler page above. -- Dispenser 04:01, 19 December 2010 (UTC)

Have you tried to replace newlines with <br>? If that doesn't help, please give me a detailed and stepwise description so that I can try to replicate the problem. Cacycle (talk) 22:37, 19 December 2010 (UTC)

Yes, that did the trick. I've updated the sample code above to include it. -- Dispenser 04:50, 23 December 2010 (UTC)

## Disable on Personal Level

Hello Cacycle, I am an administrator of a wiki where WikiEd is enabled wiki-wide using [[MediaWiki:Common.js]], but as an experienced user, WikiEd only annoys me. I have disabled it using the logo next to the log out button, but it randomly decides to enable itself, which is annoying. Is there any way to disable it on a personal level using [[User:MyName/vector.js]]? I also know of at least one other admin who would like to do this. Thanks! Multiple Protection LevelsTalk 10:51, 24 November 2010 (UTC)

You can add var wikEdConfig = { 'scrollToEdituseWikEdPreset': true }; to your vector.js page. That sets the default state after the settings cookie has been lost to disabled. You will still be able to enable wikEd with a logo click if you wish so.
Hope that helps, Cacycle (talk) 22:33, 24 November 2010 (UTC) Oops, typo - see correction above. Cacycle (talk) 21:24, 29 November 2010 (UTC) ## wikEd does not stay off when turned off Sometimes I turn off wikEd by clicking the button at the top right corner of the page. However, the next time I edit a page, it turns itself on again. I am using version 0.9.96a of wikEd in Google Chrome 7.0.517.44. I have no add-ons installed, nor any user-scripts that might be interfering with wikEd. Intelligentsium 23:08, 25 November 2010 (UTC) That might be a problem with the persistence or detection of web storage and/or cookies. I will check into it. Cacycle (talk) 08:00, 29 November 2010 (UTC) ## Callback on preview (Moved from User_talk:Cacycle/wikEd_development. Cacycle (talk) 22:33, 27 November 2010 (UTC)) Does wikEd support a callback mechanism such that I can execute custom scripts when the "preview below" is activated? Thanks, Nageh (talk) 11:42, 23 November 2010 (UTC) Not yet, but I could add it. What do you want to accomplish? Should it fire before or after the preview has finished? Cacycle (talk) 23:00, 24 November 2010 (UTC) Thanks for the reply. Ideally, it would fire after processing so custom scripts can go through the newly added text. What do you think would be a good way to implement this? I was thinking of registering on a callback chain a function that takes a single argument, which would be the newly added DOM wiki text element (wikEdBox, I think). Nageh (talk) 10:12, 28 November 2010 (UTC) Please see wikEd API, I will probably add the last two event hooks to the next release. What exactly do you want to do with them? Cacycle (talk) 22:45, 28 November 2010 (UTC) I was looking for an easy way to render maths elements in the wikEd preview box instead of going DOM level 2 event programming for my MathJax port. (See discussion here.) Thanks to the new hook it was easy enough to do this. Thanks a lot! 
Nageh (talk) 16:46, 2 December 2010 (UTC)

## Update to install code

Hi, it's me again, this time with an updated install code:

// install [[wikipedia:User:Cacycle/wikEd]] in-browser text editor
importScriptURI("http://en.wikipedia.org/w/index.php?action=raw&ctype=text/javascript&title=User:Cacycle/wikEd.js");

This will look, and hopefully work, much better than the non-DOM version "as-is-now"... PS: I already use this on Wikia and it seems to work with no problem on FF4.0b7 <=?©TriMoon(TM) Talk @ 00:08, 28 November 2010 (UTC)

Yupp, that works fine. Actually, it does *exactly* the same as the long code; importScriptURI generates the long code internally for you. The reason for the long version is that not all wiki installations have the importScriptURI function implemented. Cacycle (talk) 00:57, 28 November 2010 (UTC)

## Having to disable/enable greasemonkey

I have this really weird issue where I get a red cross over the icon in the top right, thus not letting me edit pages unless I disable the script by clicking the icon in the top right, then disable Greasemonkey, then reload the page, enable Greasemonkey and reload again, and then enable wikEd. I had no problems until it prompted me to update a few hours ago. --salle --Preceding unsigned comment added by 80.217.241.142 (talk) 00:43, 29 November 2010 (UTC)

It might be fixed now; to bypass old copies in the cache, please open this link with GM enabled: [14]. Sorry, Cacycle (talk) 07:52, 29 November 2010 (UTC)

It does not work in some cases, I have to check this later. Cacycle (talk) 08:03, 29 November 2010 (UTC)

Fixed in 0.9.97a, open [15] to update. Cacycle (talk) 22:37, 29 November 2010 (UTC)

## Updating image template

When I use the image button, it generates this code:

[[Image:filename|thumb|widthpx| ]]

I always have to add <br style="clear:both" /> at the end so text wraps correctly.
Is it possible to add this to the template so clicking on the image button generates this:

[[Image:filename|thumb|widthpx| ]]<br style="clear:both" />

Is this something I can do myself in my own wiki (I have one with siteground)? Better yet, it would replace filename with the most recent image that was uploaded. Scott216 (talk) 22:12, 29 November 2010 (UTC)

You can add your own custom buttons, please see User:Cacycle/wikEd_customization#Custom_buttons. The last-image-uploaded idea is interesting, but probably too difficult to implement. Cacycle (talk) 21:55, 8 December 2010 (UTC)

## Croatian translation

Here you can find a translation of wikEd into the Croatian language:
• Messages: User:SpeedyGonsales/wikEd_international_hr.js
• Main page: hr:User:SpeedyGonsales/wikEd (work not finished, but somewhere you need to start :))

Good job with wikEd! SpeedyGonsales (talk) 13:25, 11 December 2010 (UTC)

Great, thanks! I have added it to the next release. Cacycle (talk) 07:30, 15 December 2010 (UTC)

## WikEd inline preview container should be inserted outside edit form

The WikEd inline preview container should be inserted outside the edit form, because some MediaWiki extensions insert HTML forms onto the page, and when the inline preview is loaded, these forms are inserted into the editform, which causes different bugs; i.e. an extension could handle the submitted editform like its own form and discard text changes. I propose inserting wikEd.localPrevWrapper after the editform. The one problem here is that the preview block then goes below edittools, templatesUsed and hiddencats, which is not just after the textarea. So I think wikEd could also move templatesUsed and hiddencats after the editform. VitaliyFilippov (talk) 15:19, 14 December 2010 (UTC)

Moving the original page elements around could also cause havoc. Maybe it is easier to fix the extension.
Please could you give me the name of the incompatible extension(s) and an example wiki / page with the problems so that I can get a better idea of what exactly is happening? Thanks, Cacycle (talk) 07:36, 15 December 2010 (UTC)

## Opera

Have you thought about making WikEd compatible with Opera? I like these two very much, but it isn't useful to switch to Firefox every time I use Wikipedia. Aku506 (talk) 18:40, 26 December 2010 (UTC)

In general, wikEd is compatible with Opera; it is just that the last time I checked, Opera had some nasty bugs. I will check again next year, maybe they have fixed some bugs (nobody can know because they have no public bug tracking system %@#) or I can find some new workarounds. Cacycle (talk) 21:36, 26 December 2010 (UTC)

That would be really cool. Thanks for your work =) greetings Shadak (talk) 15:12, 27 December 2010 (UTC)

## WikEd not displaying in FireFox

At some point in the fall of 2010 WikEd stopped displaying for me when using Firefox 3.6.xx. I have not changed my preferences; in fact, it is still selected as an option in my preferences. WikEd shows up fine in Chrome but not at all in IE. How do I get it working in FF? (And did something happen external to my account that caused it to stop working?) -- btphelps (talk) (contribs) 01:20, 5 January 2011 (UTC)

Are you using the monobook or the vector skin? (BTW, there is a missing semicolon after line 10 in your monobook.js). Are you seeing a wikEd icon on top of the page next to the logout link? Are you trying to load via monobook.js or via Wikipedia preferences as a gadget? Please could you fill out a complete bug report form from the top? Thanks, Cacycle (talk) 09:47, 5 January 2011 (UTC)

Fixed the missing semicolon, thanks. I am not seeing a WikEd icon at the top of the page next to the logout link. For a screen shot of what my editing page looks like, see here. I am trying to load WikEd via monobook.js. I'd be happy to fill out a bug report. How do I do that?
-- btphelps (talk) (contribs) 04:30, 7 January 2011 (UTC)

Please check wikEd help; pushing the button right above the edit area turns wikEd on :-) Cacycle (talk) 02:33, 8 January 2011 (UTC)

Ah! Duh! <forehead slap> -- btphelps (talk) (contribs) 04:54, 11 January 2011 (UTC)

## Drop down menu for Cite

Hi, It seems the "Cite" drop down has disappeared - any reason for this? It used to say "advanced" "Special characters" "XX" "Cite" Chaosdruid (talk) 00:23, 10 January 2011 (UTC)

Fixed Chaosdruid (talk) 00:52, 10 January 2011 (UTC)

## wikEd Compatibility

Hello Cacycle, Thank you for developing and maintaining wikEd. I am currently planning on using it with a wiki for my team at work, and there are strict rules in place regarding programs and browsers that are in use. Is there any way that I can get the "wikify" button to work on the latest IE? I fully agree that IE is far from recommended, but that is out of my hands.

Specs:
• IE 8.0.6
• MediaWiki software
• Monobook probably

Andrew. -- Preceding unsigned comment added by AndrewM90 (talk • contribs) 14:57, 10 January 2011 (UTC)

Hi Andrew, I am sorry, there is no way to get wikEd working under IE 8. It might be possible with version 9, but I cannot test or develop for that because it requires Windows 7 @#$!. Somebody could also invest a few days of programming to dissect out the wikify logic, which should run under IE (the incompatible parts are related to the selection model (needed for grabbing and inserting text into the edit area) and, as I just noticed, to events). The easiest solution would probably be to ask for a special permit to run Firefox. Sorry, Cacycle (talk) 23:57, 12 January 2011 (UTC)

## Wikia's new skin

After only playing with WikiEd for a little bit I can see that it will be very useful; the highlighting is superb. My only gripe is that it has not been updated for the new Wikia skin, so I have to switch to monobook to use it.
Thank you for creating and developing this great code, and I hope to see it working under the new Wikia skin soon. (Awesome3000 on Wikia) 125.237.165.60 (talk) 08:34, 17 January 2011 (UTC)

I will try to adapt wikEd to the new skin; might take a few days, though. Cacycle (talk) 07:42, 21 January 2011 (UTC)

No rush at all. Good luck with the coding. 222.155.206.208 (talk) 07:47, 21 January 2011 (UTC)

## Problems with Firefox 4

I had to disable wikEd because of some problems that it had in Firefox 4 beta 9. First of all, the script takes by far the longest time to load, on any page. When a page is loaded, it still says "Waiting for en.wikipedia.org", so I began wondering what was taking so long. Then I noticed that the wikEd icon in the top-right corner had not appeared yet, and only shows up when the page finishes loading. Secondly, I usually have wikEd disabled (by clicking on the wikEd logo), but every once in a while it will reactivate itself and so I have to disable it again. I didn't have these problems in Firefox 3.6. Gary King (talk · scripts) 02:02, 26 January 2011 (UTC)

## Current wikEd doesn't work for me in Chrome on Mac

Cacycle, hats off to you for such a useful tool! But I'm stuck and hope you can help me: I was earlier using wikEd 0.9.91j (from July 22, 2010) in Firefox 3.6.13 on Mac and it worked fine across all wikis I noticed. I upgraded to the newest one, 0.9.97a ("Last update Nov 29, 2010"), and it seems to work in Firefox (it didn't at first but I kept refreshing), yet doesn't show up for me in Chrome (9.0.597.84 beta). Across all wikis, it gives me deformed, small text areas like this and that. So I decided to downgrade to the older 0.9.91j in Chrome to see what would happen. More strangeness: in Chrome on the lindenlab.com company internal wiki (MediaWiki 1.14.1) that's connected to the web, wikEd doesn't appear. However, wikEd still works on the public http://wiki.secondlife.com (MediaWiki 1.15.5), but ONLY if all lindenlab.com wiki tabs are closed.
Otherwise, having a tab with the lindenlab.com wiki editor open makes wikEd disappear on all open tabs that have a working wikEd (after refreshing them), and it seems I have to close the lindenlab.com wiki tab, restart Chrome, and refresh the wiki.secondlife.com tab for wikEd to come back. This earlier confused me into thinking that wikEd wasn't working at all. Furthermore, I disabled the FCKeditor in the lindenlab.com wiki preferences and that didn't bring back wikEd. I wonder what factor is "breaking" it? The lindenlab.com wiki does use the FCKeditor extension, so I wonder if that is in some way wreaking havoc? Strange that an earlier version played fine with it in Firefox, though, but not Chrome.

To sum up, in Chrome: the older 0.9.91j works partially; 0.9.97a doesn't work for me at all, and gives the deformed text areas I reported above. Thanks in advance for your help. Torley (talk) 16:18, 7 February 2011 (UTC)

I cannot log in to these sites to troubleshoot these problems - would you mind emailing me a username/password? Also, in order to replicate this, please could you fill out a structured bug report (see the top of this page)? Thanks in advance, Cacycle (talk) 08:28, 9 February 2011 (UTC)

## Redirects in galleries

I am working on cleaning up issues where an image was renamed and a redirect created, but the article was not updated. The Fix redirect button works great unless the image is enclosed within a <gallery> tag. ----- Gadget850 (Ed) talk 16:11, 15 February 2011 (UTC)

I cannot replicate this. Please could you give me a link to an article version where it does not work? Thanks, Cacycle (talk) 20:50, 16 February 2011 (UTC)

## wikED broken for mediawiki 1.17wmf1?

Hi, Wikipedia recently upgraded to MediaWiki 1.17wmf1. On the LI wiki (my home wiki) gadgets are not enabled, so I enable WikED by including it in vector.js. It doesn't seem to load anymore, however. Is there a fix for this?
- Pahles (talk) 15:04, 16 February 2011 (UTC)

It looks like gadgets have been disabled on your wiki. Please use the On-wiki installation code (complete version) on your li:User_talk:Pahles/vector.js page. This works for me. Good luck, Cacycle (talk) 20:40, 16 February 2011 (UTC)

I've found the problem. There was a JavaScript error in my personal common.js, preventing WikED from loading. The strange thing is that this JavaScript had been there since 2009, and I did not have a problem before the switch to MediaWiki 1.17wmf1. Anyway, it is working now! Sorry to have bothered you. - Pahles (talk) 09:50, 17 February 2011 (UTC)

## refToolbarPlus Compatibility

Please add a link to Wikipedia_talk:RefToolbar_1.0#wikEd_compatibility under the compatible scripts section of the project page. If User:Apoc2400 picks up the change in the gadget version, a link to the gadget can then be substituted. However, User:Apoc2400 hasn't been active recently, so I don't know when this might happen. --UncleDouggie (talk) 07:51, 18 February 2011 (UTC)

## Convert the png buttons to svg?

I love WikiEd, but one thing I notice is that the buttons are very ugly and pixelated. Any way to make them into .svg? We could ask the Graphic Lab to create the icons if needed. Headbomb {talk / contribs / physics / books} 22:22, 18 February 2011 (UTC)

Yes, svg would be nice, but there would be a lot of work involved. It would be great if somebody would help out with this, and I would gladly support that project as well as I can. Cacycle (talk) 01:05, 20 February 2011 (UTC)

Well, if you give me a list of the files, that would probably be all that is needed to kickstart things. Headbomb {talk / contribs / physics / books} 01:39, 20 February 2011 (UTC)

They're all in commons:Category:WikEd. And speaking of buttons, could we perhaps rearrange and remove some of them? -- Dispenser 04:31, 20 February 2011 (UTC)

Well, here's a list of what currently exists (minus screenshots and gifs).
Cacycle can remove what is unneeded for conversion, and then it would be off to the Graphic Lab. Headbomb {talk / contribs / physics / books} 04:49, 20 February 2011 (UTC)

Poke? Headbomb {talk / contribs / physics / books} 02:24, 22 June 2011 (UTC)

## localstorage memory issue? WikEd not loading after find/replace in a big article

Hi, I tried translating 2010 Indian Premier League into ta:User:Mahir78/2010 ???????? ????????? ???? using a js tool. Everything went fine. But at one stage, if I try to find/replace words, I get this error. Is there a memory issue in localstorage? Please add a try/catch at line 14347. Let me try and give you feedback. After this error happened, WikEd never loaded in my user area on either enwiki or tawiki. On enwiki it also shows this error and does not load, but the error originated while editing on tawiki. I use FF3.6.14 and Vista Home.

uncaught exception: [Exception... "Component returned failure code: 0x80630002 [nsIDOMStorage.getItem]" nsresult: "0x80630002 (<unknown>)" location: "JS frame :: http://en.wikipedia.org/w/index.php?title=User:Cacycle/wikEd.js&action=raw&ctype=text/javascript :: anonymous :: line 14347" data: no]

-- Mahir78 (talk) 11:06, 4 March 2011 (UTC)

What happens if you delete the content in Tools - Options - Advanced - Network - Offline Storage? Cacycle (talk) 07:36, 11 March 2011 (UTC)

## MS Word to Wiki Conversion Freezes

WikEd version: 0.9.98
Browser: Google Chrome 10.0.648.127
Errors: None, other than the browser asking if I want to kill a frozen script
Add-Ons: Flashblock (with this page whitelisted), adblock, personal blocklist, sexy undo close tab, google speed tracer, web developer
Wikipedia beta: no
User scripts: None
OS: OSX 10.6.6
Theme: Classic
Description: When I paste content from Microsoft Word or Textedit into WikEd running on Chrome or Firefox and then click the MS Word->Wiki button, it stalls and after a while tells me that the script is unresponsive.
When debugging, I can see that the loop in the code never gets past:

while ( (regExpMatch = /(\w+)\s*=\s*(('|")(.*?)\3|(\w+))/g.exec(attributes)) != null) {

With local vars of:

attrib: "class"
attribValue: "MsoNormal"
attributes: "class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto; mso-outline-level:1""
common: "dir"
regExpMatch: Array[6]
relaxed: false
sanitized: ""
table: "|border|cellspacing|cellpadding|align|bgcolor"
tablealign: "|align|valign"
tablecell: "|rowspan|colspan|nowrap|bgcolor"
tag: "p"
this: Object
valid: false

As far as I can tell, it never modifies the attributes or breaks out of the loop, so there is no way this loop could ever end. --Preceding unsigned comment added by 173.161.6.33 (talk) 17:53, 8 March 2011 (UTC)

That is strange; it works fine for me, even with the provided values. Also, I do not see a reason why the loop would ever run more than a few times: it works its way through the attributes string till the end, then exec returns null, and the loop terminates. Maybe something else is broken. Can you find a test case so that I can exactly repeat your problem? Also, it might help if you fill out the bug report form from the top of this page. Thanks in advance, Cacycle (talk) 07:58, 10 March 2011 (UTC)

## New script to view references rendered in section?

Any chance that the feature that wikEd provides which previews a section and shows the references in it rendered in the preview page could be split off into its own script, for those of us that only want to install that feature? wikEdDiff is great, for instance, and this would be, too. Thanks in advance! Gary King (talk · scripts) 05:27, 14 March 2011 (UTC)

There is User:Anomie/ajaxpreview.js that takes extra care of the references when editing a section: if it finds a <ref name="xxx" /> which is defined in another section, it will pull the whole article from the server, find that ref and still display it properly in the preview.
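A plausible cause of the never-ending loop in the MS Word→Wiki report above — an assumption on my part, not a confirmed diagnosis of wikEd's code: placing a `g`-flagged regex *literal* inside the loop condition creates a fresh RegExp object on every evaluation in ES5-conformant engines (such as the reporter's Chrome 10), so its `lastIndex` is reset to 0 each iteration and `exec` keeps returning the first match forever, while older engines reused the literal and terminated. Hoisting the regex out of the condition preserves `lastIndex` between iterations:

```javascript
// Hoisted regex: lastIndex advances across iterations, so the loop
// walks through the attribute string and terminates. Writing the
// literal inside the while condition instead would restart matching
// from index 0 on engines that create a new RegExp per evaluation.
var attributes = 'class="MsoNormal" style="mso-outline-level:1"';
var attribRegExp = /(\w+)\s*=\s*(('|")(.*?)\3|(\w+))/g;

var found = [];
var regExpMatch;
while ((regExpMatch = attribRegExp.exec(attributes)) !== null) {
  // regExpMatch[1] is the attribute name; [4] (quoted) or [5]
  // (unquoted) holds its value.
  found.push(regExpMatch[1]);
}
console.log(found.join(','));  // class,style
```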
If you don't care that much about refs, then there is also my script User:Js/ajaxPreview, which simply inserts <references /> like WikEd but has some other features: an Ajax "changes" button, preview of the edit summary and other areas, executing sortable and collapsible scripts on the preview, etc. -- AlexSm 21:51, 15 March 2011 (UTC)

Thanks, I'll try the former first. I copied it and modified it because the spinner immediately annoyed me when I first used it. I think either static "Loading" text or at least a slower animation (1000+ ms rather than 250 ms) would be better. Gary King (talk · scripts) 18:02, 16 March 2011 (UTC)

I think it would improve usability if the tool bars (except maybe the replace tool bar) were replaced by menus. The biggest advantage is that the function descriptions would be obvious without having to hover the mouse over the buttons. Some of the button icons are difficult to memorize, simply because the concepts they represent appear only in wikEd. Another advantage is that the controls would take up less space in the browser window. Yes, it is possible to collapse the tool bars, but they still take up the same amount of vertical space. Also, you then have to remember which collapsed tool bar holds which buttons, because the collapsed tool bars are not labeled. -Pgan002 (talk) 08:04, 17 March 2011 (UTC)

Yes, that'd be a great idea! :) I love wikEd, but that's a bit weird, that bars are collapsible vertically so that there's no space spared for other uses... :l Vinne2 (talk) 18:51, 13 August 2011 (UTC)

## wikEd always on, problem with Safari+quick-preview

• These days, wikEd's editing formatting is always on; I have to turn it off every time I edit a page due to the WebKit issue of magically added enters. This is quite bothersome; it didn't use to do this.
• The quick-preview doesn't work fully in Safari (5.0.3 (6533.19.4)): it shows, but templates stay formatted as text. In Chrome it takes a second and then they also get the correct formatting; in Safari they just stay as the text you typed. Signatures (~~~~) don't format as eventually shown but just get the basic formatting (for me it looks like "Xeworlebi 14:53, 18 Mar 2011 (UTC)" in quick preview, instead of the way it eventually turns out, as you can see at the end of this message), making the feature quite useless in a lot of cases.
• var wikEdDoCloneWarnings = false doesn't work anymore; edit-notices still show up twice. Xeworlebi (talk) 14:59, 18 March 2011 (UTC)

Cacycle: I think I can explain the Safari issue, which I tracked to this explanation when I had to fix my own script; all you have to do is replace

headers['Content-Type'] = 'multipart/form-data; boundary=' + boundary;

with

headers['Content-Type'] = 'multipart/form-data; charset=UTF-8; boundary=' + boundary;

Or you could switch from using index.php to api.php, which requires even less code; I can show you the code if you're interested. -- AlexSm 15:48, 18 March 2011 (UTC)

Xeworlebi: what do you mean by the "WebKit issue of magically added enters"? There are problems with cookie/web storage that probably cause these problems. I am already working on it, but it may take a while due to real-life commitments and another programming project I am working on. Alex: Thanks, I will try to add that. Xeworlebi: Please see the new syntax for configuration settings under [User:Cacycle/wikEd customization]. Essentially, you now have to use var wikEdConfig = { 'doCloneWarnings': false }; Cacycle (talk) 16:09, 18 March 2011 (UTC)

Ah, thanks. And this Webkit "bug". Xeworlebi (talk) 21:00, 19 March 2011 (UTC)

The charset=UTF-8 fix has been added to 0.9.99. Thanks, Cacycle (talk) 21:45, 8 May 2011 (UTC)

## Not working in Vector

I tried to paste the text from monobook.js into vector.js and got a load error.
Now when I am in Vector Wikipedia won't let me edit any pages at all. Could someone please help me? (In the meantime I'm using MonoBook.) Someone the Person (talk) 18:40, 27 March 2011 (UTC) Scratch that, it doesn't work in any skin. My best guess is that the regular edit box is being updated, and this is messing up wikEd somehow. The only way I can get it to work is to reload the page with wikEd off, and then turn it on, which it wasn't letting me do before... Someone the Person (talk) 18:49, 27 March 2011 (UTC) It works fine for me... Please could you fill out the bug report form from the top of this page? Does it give an error message popup on hovering over the logo on top of the page? Are there wikEd-related JavaScript console errors? Thanks in advance, Cacycle (talk) 20:05, 28 March 2011 (UTC) ## How to set default font size The font size in the wikEd edit area is too large and I have to cycle twice whenever I open a new editing form. I don't want to adjust the browser zoom level because that will shrink the font in the preview and reading areas. Is there an option for the default zoom level?--Netheril96 (talk) 01:09, 2 April 2011 (UTC) What browser are you using? Is the browser zoom set to 100 %? Have you used wikEd's zoom button ()? Cacycle (talk) 21:41, 8 May 2011 (UTC) ## Firefox 4 Preview Problem • Windows 7 x64 and Windows XP x86 SP 3 • Firefox 4.0 Final • Greasemonkey 0.9.1/0.9.2 Since the Firefox 4 final release it's not possible to use the wikEd preview function. Instead of the preview content it shows "..." in the preview window. Can you confirm this issue? Legend811510 (talk) 08:27, 2 April 2011 (UTC) It works fine for me. --UncleDouggie (talk) 00:06, 5 April 2011 (UTC) Hmm ok, I've made some tests: a fresh new Windows XP x86 machine, install Firefox 4.0, Greasemonkey 0.9.1 and the latest wikEd script - same problem, no preview, still "..." in the result!
Another fresh new Windows XP x86 machine, install the latest Firefox 3 (3.6.16) version, Greasemonkey 0.9.1 and the latest wikEd script - no problem with the preview, the output is correct. Mysterious ^^ Legend811510 (talk) 16:52, 10 April 2011 (UTC) Any JavaScript console errors? Please see the top of this page for the bug report form, it contains some important questions in order to figure this out. Cacycle (talk) 06:55, 13 April 2011 (UTC) ok sorry, I understand what u mean. I followed the steps and here is the result: -> wikEd Version: 0.9.98 GM -> Browser ID: Mozilla/5.0, Windows NT 5.1 rv 2.0 Gecko/20100101 Firefox/4.0 (from the XP machine) -> Error Console: (one warning message only) Warning: Expected end of value but found ':'. Error in parsing value for 'float'. Declaration dropped. Source File: (wiki link), Line: 83 (internal and official wiki page, both the same warning) -> Browser Addons: Greasemonkey 0.9.2, Java Console 6.0.23, Java Quick Starter 1.0 -> Wiki Skin: standard -> Special User Scripts: none -> Operating System: Windows 7 x64 SP 1 and Windows XP x86 SP 3 (both the same problem) hope this is what u need to figure out the problem Legend811510 (talk) 08:40, 13 April 2011 (UTC) I'm getting the same thing (no JS preview), Firefox 4, 0.9.98 GM, using Monobook at Wikia.com, but it works fine when editing at Wikipedia. I got this error in Firebug's console: functionsHook is undefined for (var i = 0; i < functionsHook.length; i ++) { RandomTime 15:13, 18 April 2011 (UTC) Might be fixed with 0.9.99. Please could you check and report back? Thanks, Cacycle (talk) 21:42, 8 May 2011 (UTC) ## Compatible scripts Could someone change * Lupin Navigation popups * AzaToth Twinkle to * Lupin * Navigation popups * AzaToth * Twinkle unless it is wrong to (in which case I'd appreciate knowing why). Thanks. kcylsnavS{screechharrass} 23:44, 4 April 2011 (UTC) Lupin and AzaToth are (were) the (original) authors of these tools. I have removed their names.
Cacycle (talk) 17:04, 8 May 2011 (UTC) ## Remove some maths characters I believe the support of some maths characters is so bad that they should not be offered in the menu 'Math and logic'. The ones I think should be removed are the blackboard bold symbols, because they look so dreadful with the default of Times Roman going to MS Mincho on IE, and the angle brackets, which are not supported with the default on IE. The other browsers do them fine, so it is yet again Microsoft causing problems. To try them out yourself, I'll list the characters here: standard non-serif C H N P Q R Z ? ? serif using {{math}} C H N P Q R Z ? ?. Dmcq (talk) 10:06, 5 April 2011 (UTC) That function is not provided by wikEd, that is the standard toolbar. Cacycle (talk) 07:09, 6 April 2011 (UTC) Thanks, okay, I'd better have a look for that then. Dmcq (talk) 08:10, 6 April 2011 (UTC) ## Reference expansion distracting? Currently, if a user hovers over a hidden reference or template, it immediately expands. In my experience, hovering happens too easily by accident and the expanded reference is distracting. I find myself being careful not to move the mouse over a hidden reference. What do you think about showing a tool tip on hover but showing the reference only if the hidden reference is clicked? It's a button anyway. -Pgan002 (talk) 22:18, 13 April 2011 (UTC) I have added a customization option that unhides only when the shift or ctrl key is pressed at the same time. You can test it by adding var wikEdConfig = {}; wikEdConfig.unhideShift = true; to your vector.js page (please see also User:Cacycle/wikEd_customization). What do you think? Cacycle (talk) 21:36, 8 May 2011 (UTC) Thank you. It sounds good, though it does not seem to work for me. I will investigate what I'm doing wrong. -Pgan002 (talk) 08:09, 14 May 2011 (UTC) ## Script does not respond Hi !
This message appears after a few minutes using wikEd: "You can stop it or wait for it to answer" (French translation): http://en.wikipedia.org/w/index.php?title=User:Cacycle/WikEd.js&action=raw&ctype=text/javascript&dontcountme=s:10512. Then, sometimes when I press "Stop the Script", it works and I can continue (1/4), or sometimes Firefox doesn't answer either, and I have to close it and restart from the beginning. What's wrong? (Firefox 4.0, Windows 7 (64)) • It works fine for me with Firefox 3.6.16 on Linux. Does this behavior happen when you have just one window and one tab open, and no Firefox extensions? Does it happen every time? -Pgan002 (talk) 03:41, 26 April 2011 (UTC) VitaliyFilippov (talk) 13:43, 27 April 2011 (UTC) See "Firefox 4 infinite loop" below. No, I don't think that's it (i.e. that's another problem). I experience the same problem on the same combination of OS and Firefox 4 (Firefox 5 has the same problem). It occurs after I highlight a portion of the text. Not every time, but often enough (say, once in 10 selections, and I select quite often; when my connection is slower, the failure rate seems higher) that I had to turn wikEd off. Sometimes "stop the script" helps, sometimes not even that. FF 4 seems to have one thread per window so I don't always have to kill the whole browser, but I lose the work I had done in the edit window. I am a programmer (not in the web/scripting area though) so maybe I could help debugging, if only I knew how. No such user (talk) 08:10, 12 July 2011 (UTC) ## Firefox 4 infinite loop in [Wikify] VitaliyFilippov (talk) 13:40, 27 April 2011 (UTC) WikEd goes into an infinite loop in Firefox 4, for example, when wikifying copy-pasted tables. This is because wikEd uses the following code (in 2 places): while (//g.match() != null). The ECMAScript 5 standard tells us // is not a special literal regex syntax (as it was in ECMAScript 3), but is a constructor for a new regexp object, and a new regexp object always has last match position = 0.
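This regex-literal behaviour can be sketched in a few lines (the /a/ pattern and the function names below are illustrative, not wikEd's actual code):

```javascript
// In ECMAScript 5, a regex literal inside the loop condition
// evaluates to a fresh RegExp object on every iteration, so its
// lastIndex is always 0 and a global match never advances:
// an endless loop whenever there is at least one match.
function countMatchesBuggy(text, maxIterations) {
  var count = 0;
  while (/a/g.exec(text) !== null) {   // new RegExp object each pass
    count++;
    if (count >= maxIterations) break; // guard: would loop forever
  }
  return count;
}

// The fix: create the regex once, so lastIndex persists across
// exec() calls and the search advances through the string.
function countMatchesFixed(text) {
  var re = /a/g;
  var count = 0;
  while (re.exec(text) !== null) {
    count++;
  }
  return count;
}

console.log(countMatchesBuggy("banana", 10)); // hits the guard: 10
console.log(countMatchesFixed("banana"));     // 3
```

Hoisting the regex out of the loop condition lets lastIndex advance across exec() calls, which is the general shape of the fix discussed in this thread.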
So if there is even a single match, this would be an infinite loop, which is incorrect. All other browsers behave differently. Not really: Opera now also conforms to ECMAScript 5; Chrome and IE do not. See Mozilla Bug 98409. Nevertheless, the following code: var re = //g; while(re.match() != null) is correct for Firefox 4 and other ECMAScript 5-compliant browsers. The following patch fixes this problem: Thanks, I will add it this weekend. Cacycle (talk) 06:58, 6 May 2011 (UTC) Fixed in 0.9.99. Thanks again! Cacycle (talk) 16:52, 8 May 2011 (UTC) ## 0.9.35h removal Hi. I have inherited some MediaWiki administration tasks for my company's internal wiki and am having a terrible time locating and removing an old version of wikEd which is installed on it (their version is ancient - 0.9.35h - and I want to upgrade it). Despite the fact that it appears to be a site-wide install, the code is not located in MediaWiki:Common.js, and there are no preferences options for Gadgets or wikEd. Also, no code shows up under User:<any user>/monobook.js or any of our other skins. I have also done a "grep -R -i wikEd ." through my wiki installation directory without getting any hits. I am at a loss! I can install the newest version by adding your code into MediaWiki:Common.js, but then both versions are active (or at least the little icons for both show up in the top right). Am I missing something? Where can I locate and obliterate the old version? Does the "h" at the end of the version number signify an installation type that I missed? Xaev (talk) 21:16, 16 May 2011 (UTC) You could try to find out from where the code is loaded, e.g. using the Web Developer add-on for Firefox. Or check the source code for JavaScript links and then check their source codes. You could also search the database for the wikEd code...
Cacycle (talk) 21:39, 16 May 2011 (UTC) ## Templates required in external wiki to run wikEd I'm preparing to install wikEd on a MediaWiki on an intranet which may or may not be connected to the Internet. So I'm copying the whole wikEd program code. WikEd is to be active for all users, so I'm installing the scripts in the MediaWiki namespace. For now, I'm experimenting with wikEd on a live on-line wiki that I control, because it's available. I realize I should set that up differently, but then it wouldn't help me understand how to set up the intranet version. The most obvious problem is that on the page http://www.informationtamers.com/WikIT/index.php?title=MediaWiki:WikEd.js The following are reported as used but missing: Template:FUNCTION:parameter Template:Function:param Template:Lang Template:Modifier:... Template:TABLE Template:Variable Template:Variable:R Template:\s*lang\s*\ But I can't find most of these in Wikipedia and those I can find lead to a cascade of dependencies. And ones I do include look very different once in my wiki. When I get beyond content editing and basic setup, my wiki fu is not so hot! Is there a recognized way of acquiring the full tree of templates required for wikEd? I can't get any sign that wikEd is active - editing appears in plain text as normal. I'm using Chrome 12.0.742.100 on WinXP SP3 and refreshing after any changes. The problem occurs in IE8 and FF3.6 as well. The wiki is on v.1.16.5, monobook (with very minor customization) and wikEd is the version on the wikEd Installation page now. There's a full list of what I've done for the installation here: http://www.informationtamers.com/WikIT/index.php?title=User_talk:WikITSysop#wikEd_installation and apart from the templates, I think everything has been done. Thanks Argey (talk) 06:46, 21 June 2011 (UTC) Addendum. 
I have noticed now that most of the templates that were redlisted at the foot of http://www.informationtamers.com/WikIT/index.php?title=MediaWiki:WikEd.js were actually mentioned in comments and were not really template invocations. I was caught out by the wiki code parsing text after the "//" The only use of template code that looks as if it might indicate the need for a template are the following: var regExp = /{{\s*lang\s*\|(.|\n)*?}}/gi; and return('{{doi|' + regExpMatchDOI[1] + '}}'); ['\\{{2,}', 'paramTempl', 'open'], // template or parameter start Argey (talk) 08:42, 24 June 2011 (UTC) wikEd itself does not need any templates. The problem is in your MediaWiki:Common.js, simply remove the semicolon at the end of the line starting with 'wikEdHelpPageLink', list items are separated by commas. Also, check your JavaScript error console for errors. Cacycle (talk) 20:49, 3 September 2011 (UTC) Oh! I see. Thank you very much for your wisdom infusion, I'll do that. Argey (talk) 03:35, 5 September 2011 (UTC) ## Helper programs? Hello Cacycle, is it possible to make your code from "Syntax highlighting" as a separate user script like "wikEdDiff". Best Greetings --Perhelion (talk) 10:40, 27 June 2011 (UTC) ## Slow to load in HTTPS wiki I'm 90% sure that it's so slow because it loads each icon one at a time, at about half a second each, which totals maybe 20-30 seconds on my system. Problem is icons are flushed from cache fast on my system, for whatever reason. It would be vastly more efficient from a network perspective and much faster to load to make them all views on a single large png strip/box. There are some online tutorials for this. Also, making it SVG per the earlier suggestion would slow the loading time down well beyond what it is now, so I vote against that (although some of the icons are nicer and could definitely replace the current ones). Foxyshadis(talk) 05:53, 17 July 2011 (UTC) ## Fixes for regex rules parsing {{editprotected}} Hi! 
Could someone sync the "regExp" used to detect the rules with the AWB source code and also add a mw.log command to the loop which is used to parse the regex rules? The regex change would make the name of the word optional and white spaces to be ignored (as it is on AWB) and the log command would allow users to use ?debug=1 to review the set of rules and find out which ones needs to be fixed (and would do nothing on production mode). Helder 16:10, 30 July 2011 (UTC) Umm, I don't understand what you mean by several different things you've said, including <sync the "regExp">. Could you perhaps create a sandbox page, copy/paste the current text into it, make the changes, and then post a link to it here with a note of "just copy/paste it over"? Nyttend (talk) 12:50, 1 August 2011 (UTC) Ok. Take a look at this diff which shows the proposed changes. The first change is to make sure AWB and WikEd use the same regular expression to detect the rules on Wikipedia:AutoWikiBrowser/Typos. Helder 13:01, 1 August 2011 (UTC) Done, I hope. Please check to make sure that I did the right thing. Nyttend (talk) 14:36, 1 August 2011 (UTC) Almost done: the script should go to the page User:Cacycle/wikEd.js, not to User:Cacycle/wikEd ;-) Helder 14:40, 1 August 2011 (UTC) So what do I do with User:Cacycle/wikEd? Simply undo the edit? Nyttend (talk) 14:52, 1 August 2011 (UTC) Yep! PS: I just posted my request on this page because the User talk:Cacycle/wikEd.js redirects here. Helder 14:58, 1 August 2011 (UTC) I was going to ask why you posted here, so thanks for telling me :-) Such a large page makes my browser lock up, but rollback makes the process easy. 
For future reference: when you have a complicated requested edit, ALWAYS provide the exact code that you want -- in your initial request, either give the entirety of the code that should be deleted and the entirety of the code that should replace it (make sure that the code to be deleted is unique on the page), or simply provide the entire code in a sandbox. As well, PLEASE specify the page to be edited if it's not the one on whose talk page you're posting the request. Nyttend (talk) 15:06, 1 August 2011 (UTC) Indeed, my browser also has the same problem. I kind of did what you suggested in the initial request, but I could have made it more explicit (I've copied the part of the code which should be changed, added the new code and used the highlight="2,11" parameter of the <source> tag to indicate which lines would change). But next time I will provide a usual diff to avoid confusion ;-). As for the page redirect, I only noticed it after seeing that the wrong page was edited accidentally. Helder 15:27, 1 August 2011 (UTC) Could you change the position of the mw.log? It should be executed only in case of an error in the regex (when I've copied the code above to the sandbox I changed this accidentally). Helder 16:30, 1 August 2011 (UTC) Done. --Closedmouth (talk) 05:45, 20 August 2011 (UTC) Thanks! Helder 18:40, 20 August 2011 (UTC) Added a check for the availability of mw.log to wikEd 0.9.100. Cacycle (talk) 20:35, 3 September 2011 (UTC) ## Some bugs in WikEd 0.9.99 and Firefox 5 In Firefox 5, wikifying copy-pasted text often gives things like: name="cutid6" _moz_dirty="" Also, the cursor jumps to the end after wikifying. It didn't in Firefox 4. This is inconvenient. VitaliyFilippov (talk) 15:13, 4 August 2011 (UTC) Does it still do this in Firefox 6? Please could you give me an example of pasted text where this happens?
Thanks, Cacycle (talk) 20:29, 3 September 2011 (UTC) ## Cannot install script Hi there, I've used the wikEd script before (with Firefox 3.6). On my new system I'm running Firefox 5.0 and I cannot run the script anymore. I've installed Greasemonkey, and when I try to install the wikEd script the following error msg. pops up: "Script could not be installed TypeError: match[2] is undefined" I try to install it from this link: http://en.wikipedia.org/w/index.php?action=raw&ctype=text/javascript&title=User:Cacycle/wikEd.user.js -- Preceding unsigned comment added by 82.92.248.212 (talk) 14:05, 12 August 2011 (UTC) I have the same issue with Firefox 6. My workaround was to make a bookmarklet so I can at least turn it on when I need it (see below) --207.171.191.60 (talk) 18:06, 23 August 2011 (UTC) Greasemonkey loading has been fixed in wikEd version 0.9.100. It is a Greasemonkey bug for empty metadata blocks, in this case the empty @exclude. Cacycle (talk) 20:21, 3 September 2011 (UTC) ## wikEd button doesn't really hide the old standard wiki toolbar (I also give a solution) (Well, I think that would be a solution...) The button hides wikEdToolbarWrapper. The problem is that, unlike the new standard wiki toolbar, you don't append the old one to that wikEdToolbarWrapper. You should change the code below. It says something like: "if there's a wikEd.toolbar then append wikEd.toolbarWrapper to wikEd.editorWrapper" and that's all. The appending of the toolbar to wikEd.toolbarWrapper is missing. Vinne2 (talk) 18:28, 5 September 2011 (UTC) This was fixed a while ago. Cacycle (talk) 11:45, 4 January 2012 (UTC) ## Why no signature button? I am really tempted to replace the original editing bar with wikEd, but one key button is missing: the signature button for talk pages. Could it be added to wikEd?
Or, barring that, as wikEd supports custom buttons, but frankly, User:Cacycle/wikEd_customization#Custom_buttons is too arcane for me to use, perhaps a kind soul would code this button as an option for me? Also, there are certain templates I often use; I'd love it if somebody could tell me how to add template buttons (i.e. a button that would add a template). I should be able to duplicate this for the several templates I care about, once I understand the basics. Thanks, --Piotr Konieczny aka Prokonsul Piotrus| talk 21:00, 20 August 2011 (UTC) Why waste precious button space on something as simple as four tilde strokes? Cacycle (talk) 20:15, 3 September 2011 (UTC) Because without it, the "Disable regular bar" is useless. Clicking the signature button is faster than typing it. It is a useful and common feature. And wikEd has plenty of space for buttons. --Piotr Konieczny aka Prokonsul Piotrus| talk to me 17:11, 4 September 2011 (UTC) ## Bookmarklet Version I had issues with installing the Greasemonkey script, so instead I made a bookmarklet to load wikEd on demand: just drag this wikEd link to your bookmarks toolbar and click it to enable wikEd for the page you are currently on. Source: javascript:(function(){_wikEd_script=document.createElement('SCRIPT');_wikEd_script.type='text/javascript';_wikEd_script.src='http://en.wikipedia.org/w/index.php?title=User:Cacycle/wikEd.js&action=raw&ctype=text/javascript';document.getElementsByTagName('head')[0].appendChild(_wikEd_script);})(); -- Preceding unsigned comment added by 207.171.191.60 (talk) 18:11, 23 August 2011 (UTC) Dragging the link doesn't work - apparently Wikipedia dislikes javascript links. To make the bookmarklet, you'll have to create a bookmark yourself with the above source code as the URL. -- Preceding unsigned comment added by 207.171.191.61 (talk) 18:15, 23 August 2011 (UTC) BTW, Greasemonkey loading has been fixed in wikEd version 0.9.100.
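For reference, the Greasemonkey install failure discussed in the "Cannot install script" section above was reportedly triggered by empty metadata entries. A hypothetical userscript header showing the problematic shape (illustrative only, not wikEd's actual header):

```javascript
// A metadata block of this shape triggered the "TypeError: match[2]
// is undefined" install error in affected Greasemonkey versions:
// the bare @exclude line has a key but no value.
//
// ==UserScript==
// @name        ExampleScript
// @include     http://en.wikipedia.org/*
// @exclude
// ==/UserScript==
//
// On the script side, the workaround is to remove the empty line
// or give it a value, e.g. (hypothetical pattern):
// @exclude     http://example.org/never/*
```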
Cacycle (talk) 20:13, 3 September 2011 (UTC) ## Galician translation I've just made the Galician translation for the wikEd interface. You can find it here: User:Toliño/wikEd international gl.js. Could you add it please? Thanks a lot! --Toliño (talk) 11:48, 25 August 2011 (UTC) Change made to User:Cacycle/wikEd template. -- Martin (MSGJ · talk) 20:32, 29 August 2011 (UTC) How much time do we have to wait to get the translation live? Thanks again. --Toliño (talk) 10:20, 30 August 2011 (UTC) Not sure. I can see the new link. Perhaps you need to purge the page? -- Martin (MSGJ · talk) 09:07, 31 August 2011 (UTC) Added to version 0.9.100. Cacycle (talk) 20:11, 3 September 2011 (UTC) ## Script could not be installed I just installed Greasemonkey and then, after restarting FF, I tried to install the script and I got the following error message: "Script could not be installed TypeError: match[2] is undefined" Since I have NoScript installed and the issue was with JavaScript, I figured that that was the issue, so I allowed all scripts globally, but I got the same error message. I run FF 4.0.1 on Kubuntu. Has anyone got an idea? Or can I download the script in some other way? Thanks in advance! --Dia^ (talk) 10:12, 4 September 2011 (UTC) I just tried from here http://userscripts.org/scripts/show/12529 and I still get the same error message. Is it an incompatibility with Linux?--Dia^ (talk) 10:24, 4 September 2011 (UTC) This has just been fixed with version 0.9.100, please push Shift-Reload to update and try again. Cacycle (talk) 12:36, 4 September 2011 (UTC) ## Bug in September 4, 2011, version There is a rather major bug in the most recent (4 Sept 2011) version of wikEd. When editing pages with large lists, the editor removes newlines from the lists, or sometimes does not load the entire list. An example can be seen here. It initially will not load the entire page into the editor.
Then if you click changes, it will load the entire page but it will have removed all the line breaks from the list at the bottom of the page. --Odie5533 (talk) 17:08, 4 September 2011 (UTC) See also WP:VPT#My talk page got corrupted, which may show the same problem. I can't reproduce this using the gadget version of wikEd, in Firefox 6 on a Mac. Ucucha (talk) 17:12, 4 September 2011 (UTC) Actually, it does happen now that I cleaned my cache. After the line :* http://www.eurogamer.net/articles/more-on-red-alert-3-expansion-in-09, all the newlines in the list get eaten. Ucucha (talk) 17:15, 4 September 2011 (UTC) Confirmed, I had to disable wikEd in my prefs. The bug is critical. PS. I was using Firefox 6.0.1 for Win, and the bug was not affected by the vector.js (I test-blanked it before going to preferences when tracing the bug). --Piotr Konieczny aka Prokonsul Piotrus| talk to me 17:12, 4 September 2011 (UTC) Workaround Upstream (Greasemonkey) had a recent bug for installing scripts that was only just fixed, so I installed that version. I was then able to install old versions of wikEd from userscripts.org. However, the bug is in fact in wikEd, since the newest wikEd contained the bug using GM 0.9.10, but the older versions of wikEd on GM 0.9.10 did not have the bug (and the gadget wikEd exhibits the bug as well). The current workaround is to install GM 0.9.10 from the first link, and then install an old version of wikEd from the userscripts.org link. --Odie5533 (talk) 17:29, 4 September 2011 (UTC) I just realized that GM 0.9.10 has a fun bug where middle-clicking on webpage links no longer opens them in a new tab (a feature I use incessantly). GM 0.9.9 + wikEd from May 8 seems to do the trick. --Odie5533 (talk) 17:45, 4 September 2011 (UTC) I've reverted user:Cacycle/wikEd.js for now, since this is such a major bug. Cleaning your cache should now remove the bug.
I hope Cacycle will be around soon to fix the underlying bug and restore the other fixes in version 0.9.100. Ucucha (talk) 17:47, 4 September 2011 (UTC) Thanks for reverting and sorry for the problems. The version was actually tested on several pages (including Barack Obama). I have now put version 0.9.100a online with the highlighter changes reverted. Cacycle (talk) 20:23, 4 September 2011 (UTC) I am on the verge of giving up on wikEd. I like the syntax highlighting, but I am annoyed by the clutter of buttons I mostly don't use. I could live with clutter, but often when I edit a page, wikEd takes something like 3-4 seconds to load itself over the editing Window; what seems to take most of that is loading buttons (I use Firefox 6.0.1 on Win 7). It seems like they are not cached, or need to be generated every single time. Any ideas why this delay is occurring? (Also, how to kill most of the buttons I don't want...). --Piotr Konieczny aka Prokonsul Piotrus| talk to me 20:51, 4 September 2011 (UTC) I have had this problem too. Perhaps if wikiEd employed CSS sprites, that would fix the problem? --Odie5533 (talk) 21:13, 4 September 2011 (UTC) Looks like the following bug 22680 (page caching broken) which has been fixed on Wikipedia with standard settings. You might want to check your addons, user scripts, and gadgets for anything that breaks page caching. Cacycle (talk) 21:54, 4 September 2011 (UTC) Well, neither of them "says" they break it. And it is in fact not broken in anything but wikiEd... --Piotr Konieczny aka Prokonsul Piotrus| talk to me 02:54, 5 September 2011 (UTC) I (Firefox 3.6 with many addons, Linux, Monobook) have the same behavior (since ever - not appeared recently): Sometimes the buttons do not seem to be cached or whatever and load each after each. The whole button bar takes about ten seconds and the complete browser is frozen in the meantime. After all is loaded and I return to a page WikEd displays instantaneous. 
So it seems like the button images are not cached. But the loading of the images in the "failure" case takes far too long for a normal image load operation. Cheers --Saibo (?) 00:28, 6 September 2011 (UTC) This problem is caused by other scripts, addons, or gadgets that break page caching in general. You only notice it for wikEd because of the many images. Therefore, it would be great if you could help to identify those problematic scripts so that we could then ask their authors to fix these issues. Thanks in advance, Cacycle (talk) 07:11, 7 September 2011 (UTC) All right. I temporarily blanked my vector page, test-loaded more than a dozen pages in edit mode, and you are right, the delay was not there. This at least tells us it is not a gadget, but a js script. Perhaps if we compare mine, Saibo's and Odie's js pages, we will see the common culprit? I am going to restore my js page now. --Piotr Konieczny aka Prokonsul Piotrus| talk to me 17:33, 7 September 2011 (UTC) Hmm - the problem for me is that I cannot reproduce the problem. At least I have not noticed the pattern. I will try to look at it. Cheers --Saibo (?) 19:35, 14 September 2011 (UTC) ## Error message about local diff script When I click on the "Delta" box ("Show current changes below"), I see no changes, but instead get only the error message "Error: Local diff script not installed." I am using FF 6.0.1 on Windows Vista, and have made no changes to anything, AFAIK. Chris the speller yack 03:56, 5 September 2011 (UTC) Thanks much! I was feeling like a fish out of water. Chris the speller yack 00:53, 6 September 2011 (UTC) This error made me realize how dependent I became on this diff. Good opportunity to say thank you, and report that indeed the fix works for me as well. --Muhandes (talk) 17:42, 6 September 2011 (UTC) I'm having this problem with Chrome (13.0.782.220 m). The Shift-Reload described above does not fix it for me. --Kvng (talk) 13:55, 8 September 2011 (UTC) Working now for me.
--Kvng (talk) 20:30, 11 September 2011 (UTC) wikEd is not loading at all for me on Wikia. When I try to edit a page, the wikEd icon in the upper right corner displays a red X and its tooltip says "Loading error". My JS page is at wikia:starwars:User:Master Jonathan/monobook.js; I'm running Firefox 6.0.2 on Windows Vista. Errors from my error console, all produced upon loading the edit page for wikia:starwars:Gricha: (removed, Cacycle) Any ideas as to what might be causing this? jcgoble3 (talk) 18:29, 9 September 2011 (UTC) P.S. Have you thought about archiving this page? Just a suggestion, given that it's, oh, around TWO HUNDRED THIRTY KILOBYTES in size. :P Ya know, just a suggestion. ;). jcgoble3 (talk) 18:34, 9 September 2011 (UTC) Wikia compatibility has been re-added to wikEd 0.9.100b. Please choose the monobook skin in your preferences, otherwise things get a bit funky and you will not have wikEd's local preview and diff buttons. Cacycle (talk) 21:27, 17 September 2011 (UTC) Thanks for the fix. I already use Monobook; I hate the default Wikia skin. jcgoble3 (talk) 23:57, 17 September 2011 (UTC) ## on mobile browser When I use my Android 2.2's native browser to log in to Wikipedia and edit, wikEd still shows up and for an unknown reason makes me unable to edit normally... (without logging in and without wikEd I can edit with it...) C933103 (talk) 09:33, 17 September 2011 (UTC) Please could you describe your problems in detail (feel free to use the bug form on top of the page). Do you see any error messages in the JavaScript console? Do you see the wikEd logo on top of the page? Do you see the wikEd editing toolbars? Cacycle (talk) 19:34, 18 September 2011 (UTC) ## HTTPS images With the new HTTPS rollout across the Wikimedia verse, I thought I'd check what was "breaking the lock". The image repository now supports HTTPS ;-). You may also want to change the auto-linking for RFC and PMID to use protocol-relative URLs.
-- Dispenser 13:20, 29 September 2011 (UTC) Fixed with version 0.9.101; RFC and PMID will be fixed in 0.9.101a. Cacycle (talk) 19:52, 8 November 2011 (UTC) ## Did the diff go away? The diff delta (the one that used to appear after the regular diff) suddenly does not appear. I also tried installing wikEdDiff manually, but it did not help. Thanks. --Muhandes (talk) 07:45, 5 October 2011 (UTC) Looks like everything went away. What's up? --Kvng (talk) 13:28, 5 October 2011 (UTC) Same for me. I assume it has to do with the recent MediaWiki version upgrade which just happened. Anyone have a link for information about that upgrade? --danhash (talk) 15:12, 5 October 2011 (UTC) Many upgrade issues including this one have been reported at Wikipedia:Village pump (technical)#Improved diff view not working. PrimeHunter (talk) 15:15, 5 October 2011 (UTC) wikEd has disappeared on me too. On Chrome and Firefox. TuckerResearch (talk) 16:44, 5 October 2011 (UTC) I fixed wikEdDiff. This change needs to be made. You guys can either wait for an admin or Cacycle to make the edit, or just import my version for now. I haven't looked at wikEd because I don't use it, and it's probably a more complicated problem, anyway. Although it's also very likely going to be merely a problem with finding the right classes, because a lot of new classes and DIVs have been added in the latest MediaWiki update. Gary King (talk · scripts) 19:44, 7 October 2011 (UTC) I have added the fix to wikEdDiff 0.9.15. Thanks, Cacycle (talk) 15:24, 8 October 2011 (UTC) Works for me, thanks! --Muhandes (talk) 22:14, 8 October 2011 (UTC) I still have the issue, with monobook and the "Improved diff view" gadget. Regards, Freewol (talk) 07:11, 9 October 2011 (UTC) I too still have the problem when viewing differences in history, as the button does not show up. The button does work when editing an article, but not in the history view.
--Odie5533 (talk) 03:04, 10 October 2011 (UTC)

MediaWiki:Gadget-wikEdDiff.js probably also needs to be updated. Gary King (talk · scripts) 17:22, 10 October 2011 (UTC)

By the way, with the wikEd gadget enabled on the French Wikipedia (there is no "improved diff view" gadget there), the diff doesn't work either. Regards, Freewol (talk) 08:30, 11 October 2011 (UTC)

Just clarifying what I said above. For me wikEdDiff works, but "improved diff view" in wikEd does not. In other words, you need to enable wikEdDiff manually to get this capability. I hope this helps. --Muhandes (talk) 12:04, 11 October 2011 (UTC)

Thanks. Importing User:Cacycle/wikEdDiff.js manually works for me, on this Wikipedia and on the French one. Freewol (talk) 20:41, 12 October 2011 (UTC)

I have fixed the wikEdDiff gadget code that previously used a completely outdated version from 2007 (!). Cacycle (talk) 19:15, 16 October 2011 (UTC)

## wikEd not working anymore (German WP)

Also not in an external wiki. Same problem? http://de.wikipedia.org/wiki/Wikipedia_Diskussion:Helferlein/Extra-Editbuttons#Ausfall_des_Helferleins_mit_Monobook_am_06.10.2011 Thanks for helping. de:Tom Jac as IP: 194.94.134.90 (talk) 10:02, 8 October 2011 (UTC)

wikEd under Vector as a gadget works fine for me on de:. Please could you provide more details (see the bug report form on top of the page). Thanks for reporting, Cacycle (talk) 13:32, 8 October 2011 (UTC)

Works fine after generating a new profile for Firefox. The reason for the error could not be reproduced. Thanks. Tom Jac. --194.94.134.90 (talk) 07:14, 3 January 2012 (UTC)

## wikEd loads insecure images on new secure site

i.e. at least one http image (the one in the top right) even on the https site. Do protocol-relative URLs need to be inserted somewhere they're not being used at the moment? This triggers warnings in Chrome and prevents the "you are browsing securely" display in Firefox. Thanks, - Jarry1250 [Weasel? Discuss.]
16:33, 29 October 2011 (UTC)

It works fine for me in Firefox. Please could you give me more details about your wikEd and Firefox versions and how to reproduce this error in detail (see form at the top of the page). Thanks, Cacycle (talk) 09:40, 30 October 2011 (UTC)

Okay, so steps to reproduce: 1) Load https://en.wikipedia.org/wiki/Main_Page . 2) Expected: URL for the image in the very top right of the screen (on/off button) begins https:// . Actual: it begins http://, triggering warnings. This occurs in Firefox 7.0.1 and Chrome 15.0 on my Windows Vista laptop. - Jarry1250 [Weasel? Discuss.] 09:59, 30 October 2011 (UTC)

Oh, and I am using the standard install of wikEd via the gadget; it otherwise works as intended. - Jarry1250 [Weasel? Discuss.] 12:15, 30 October 2011 (UTC)

Reported a month ago, #HTTPS images. Basically images should be loaded over HTTPS. -- Dispenser 01:22, 31 October 2011 (UTC)

Ah, yes. Then consider this a +1 on that bug :) - Jarry1250 [Weasel? Discuss.] 20:40, 31 October 2011 (UTC)

Fixed in 0.9.101, thanks for your patience. Cacycle (talk) 12:32, 7 November 2011 (UTC)

## Advanced diff not working anymore

Hello. I think that this change broke something, because the advanced diff function stopped working for me yesterday afternoon. Regards, Freewol (talk) 08:25, 5 November 2011 (UTC)

Same thing for me. The advanced diff stopped working yesterday. --Tryptofish (talk) 20:46, 5 November 2011 (UTC)

Same trouble. I'm using it from fr.wikisource.org; Chrome gives me the error "GET ... undefinedw/index.php?title=User:Cacycle/diff.js&action=raw&ctype=text/javascript 404 (Not Found)". It looks like, in wikEd.config.diffScriptSrc = wikEd.config.homeBaseUrl + ..., homeBaseUrl is now undefined. - phe 20:17, 6 November 2011 (UTC)

Seems to still work for me; I've got only User:Cacycle/wikEdDiff.js imported, without wikEd, also. Gary King (talk · scripts) 07:09, 7 November 2011 (UTC)

I checked again on en.wikipedia, it's still not working.
I'm using the https protocol with Monobook, and I have the same line as Gary King in my monobook.js (that is, importScript('User:Cacycle/wikEdDiff.js');). The gadget, on the other hand, is currently working fine. Regards, Freewol (talk) 09:50, 7 November 2011 (UTC)

Still not working for me either. I use Firefox, Vector, and I have the same import in my vector.js file as Gary, also without full wikEd. --Tryptofish (talk) 18:23, 7 November 2011 (UTC)

I can't recall if it's necessary, but I also have User:Cacycle/diff.js, if that helps. It wouldn't hurt to also import that to try. Be sure to put it BEFORE User:Cacycle/wikEdDiff.js. Gary King (talk · scripts) 18:41, 7 November 2011 (UTC)

Yes, that made it work! Thanks Gary! --Tryptofish (talk) 19:17, 7 November 2011 (UTC)

Sorry. It's now fixed in wikEdDiff version 0.9.15c. It was the problem phe pointed out above. Use SHIFT-Reload to update. Cacycle (talk) 19:50, 8 November 2011 (UTC)

Thanks, it's now working fine. Freewol (talk) 09:10, 9 November 2011 (UTC)

## Error on cross-wiki requests

When I accessed gl:Especial:Lista_de_vixilancia I got the following message in my error console (on Google Chrome 15.0.874.106): XMLHttpRequest cannot load https://en.wikipedia.org/w/index.php?title=User:Cacycle/wikEd_current_version&action=raw&maxage=0. Origin https://gl.wikipedia.org is not allowed by Access-Control-Allow-Origin. I wasn't able to reproduce this again, but it may be worth investigating. Helder 21:46, 8 November 2011 (UTC)

## User-configured edit summaries

Is it possible to create a variable in common.js with our own edit summaries, or to pull a list of summaries from a user subpage? It would be very helpful to be able to configure the defaults. --danhash (talk) 13:48, 9 November 2011 (UTC)

Nvm, found it at wikEd customization.
--danhash (talk) 14:40, 9 November 2011 (UTC)

## Custom button to insert dated {{citation needed}} tags

I am relatively comfortable with basic JavaScript and have just started a little wikEd customization. I would like to add a custom button that inserts dated {{citation needed}} tags at the current cursor location. The current date could be determined with JavaScript and added to the tag, or else you could use subst'ed magic words. (Or of course this could be integrated into the wikEd code.) Has this been done before? --danhash (talk) 15:25, 9 November 2011 (UTC)

At least on Wikipedia the date is automatically added by a bot, as far as I know. Cacycle (talk) 11:01, 11 November 2011 (UTC)

It is, but it'd be very useful to have a button to insert {{citation needed}} (or other maintenance tags) so I don't have to type them each time (or copy/paste), and inserting dated tags would be little extra work to implement. --danhash (talk) 14:33, 11 November 2011 (UTC)

See User:Cacycle/wikEd_customization#Custom_buttons for how to make your own buttons. Cacycle (talk) 22:58, 16 November 2011 (UTC)

## Double Edit Notice

I'm getting double edit notices when I turn on wikEd. My browser ID is Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.874.121 Safari/535.2. I'm using Vector, my OS is XP, I'm using Chrome and have no extensions, I'm not getting a console error message, and I have Igloo, Twinkle, and User:Js's ajax preview and watchlist on. I have Ale jrb's CSDH, userhist and Status Checker. I also have Splarka's ajax massrollback and Pathoschild's template script and regex menu framework. I also have a "custom" timer, Mike.Lifeguard's remote.js from meta and Johm254's mass rollback script, along with some code that adds a portlet link in the toolbox to link to Special:Abuselog. --Kangaroopowah 02:07, 28 November 2011 (UTC)

I've been getting this too for quite a while.
It also doubles protection notices, but not editintros loaded through an &editintro= parameter in the URL (such as the JS-powered BLP notice). I'm using the latest version of Firefox on Windows Vista, Vector skin. No errors in the error console. My JS page is here; I also have the lead section edit link and UTC clock gadgets enabled. This also affects me on Wikia (Monobook skin, JS page here). jcgoble3 (talk) 02:46, 28 November 2011 (UTC)

This is a wikEd feature; please see User:Cacycle/wikEd help#Double edit notice for help. Cacycle (talk) 22:17, 3 January 2012 (UTC)

Ah. I was not aware this was how it is intended to operate. Thanks for the reply. jcgoble3 (talk)
https://chemistry.stackexchange.com/questions/159730/major-product-formed-in-wurtz-reaction-in-the-given-molecule-1-3-dibromo-2-2-di
Major product formed in the Wurtz reaction in the given molecule [1,3-dibromo-2,2-di(1-bromomethyl)propane]

This is the reactant molecule, which undergoes the Wurtz reaction. All possible products are shown above. Which of these will be the most favorable? In my opinion, the last one should not be the answer, because it forms three-membered rings, which are quite unstable due to angle strain. Still, it is the answer to the question. Why is no other molecule the major product?

• According to doi.org/10.1021/jo01118a042, it is indeed the last one, spiropentane (approx. 50 % conversion). You can foster the search process knowing your precursor's more common name (pentaerythrityl tetrabromide) and CAS number (3229-00-3). Nov 17 '21 at 9:05
• @ashank Yes, it is a strained species, but there are very many cyclopropanes known, so clearly the strain is not a determining factor. Nov 17 '21 at 9:27
• @ashank No prob, this was just a published confirmation for the given answer. It doesn't seem the authors provide an in-depth explanation as to why spiropentane is favored, so your question remains. Nov 17 '21 at 10:21
• Spiropentane is preferred because it does not contain any bromine atoms. If enough sodium is available, all bromine atoms will be transformed into $\ce{NaBr}$, so that the final product must be a hydrocarbon with $5$ carbon atoms, or a polymer. Nov 17 '21 at 10:50
• @ashank Exactly so. That is why chemistry remains an experimental science. You cannot predict the outcome of many reactions, so you have to run the experiment. Nov 17 '21 at 11:22
https://jarrettmeyer.com/2018/11/21/creating-animated-chart-d3
Yesterday, I wrote about creating an animated GIF with R. Today, I am following up with the same thing, this time written with D3. As usual, the first thing we need to do is to define some constants and get our data values.

let margin = { bottom: 20, left: 120, right: 25, top: 20 };
let canvas = d3.select("#bar-chart");
let width = +canvas.attr("width");
let height = +canvas.attr("height");

// Get the sources and years from the data set.
// reduceUnique is a small reducer (defined elsewhere) that collects unique values.
let sources = data.map(row => row.source).reduce(reduceUnique, []);
let years = data.map(row => row.year).reduce(reduceUnique, []);

// Create a dictionary of colors by cost source.
let color = {};
sources.forEach((source, index) => {
    color[source] = d3.schemeSet2[index];
});

We also need to be able to filter the data for the current year. This is a simple data manipulation step.

let yearIndex = 0;
let yearData = data.filter(d => d.year === years[yearIndex]);
draw(yearData);

As you may have guessed, the drawing happens in the draw function. We will use our trusted enter/merge/exit pattern for navigating the data changes.

function draw(data) {
    let gData = canvas.select("g.data");
    if (gData.empty()) {
        gData = canvas
            .append("g")
            .attr("class", "data")
            .attr("transform", `translate(${margin.left},${margin.top})`);
    }

    // xScale and yScale are assumed to be defined from the data set's extent.
    let bars = gData.selectAll("rect").data(data);

    // Add new values.
    let barsEnter = bars.enter()
        .append("rect")
        .attr("fill", d => color[d.source])
        .attr("x", 0)
        .attr("y", d => yScale(d.source))
        .attr("height", yScale.bandwidth())
        .attr("width", d => xScale(d.cost));

    // Update new and existing values.
    barsEnter.merge(bars)
        .transition()
        .duration(1000)
        .attr("width", d => xScale(d.cost));

    // Remove deleted values.
    bars.exit().remove();
}

The only thing left to do is to loop through the data. The setInterval function makes this quite easy.

setInterval(() => {
    yearIndex = yearIndex === years.length - 1 ? 0 : yearIndex + 1;
    yearData = data.filter(d => d.year === years[yearIndex]);
    draw(yearData);
}, 3000);

This code is available on GitHub. Happy visualizing!
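The snippets above call a reduceUnique helper that the post never shows. A minimal version consistent with its usage might look like the following; the helper and the sample rows are my sketch, not the author's code:

```javascript
// Reducer that accumulates only values not already seen.
// Usage: array.reduce(reduceUnique, [])
const reduceUnique = (acc, value) => {
    if (!acc.includes(value)) {
        acc.push(value);
    }
    return acc;
};

// Example with rows shaped like the post's data set (values made up):
const data = [
    { source: "Labor", year: 2016, cost: 10 },
    { source: "Parts", year: 2016, cost: 4 },
    { source: "Labor", year: 2017, cost: 12 }
];
const sources = data.map(row => row.source).reduce(reduceUnique, []);
// sources is ["Labor", "Parts"]
```

Any de-duplication strategy would do here (a Set, for instance); a reducer just matches the post's `reduce(reduceUnique, [])` call sites.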
https://www.prepanywhere.com/prep/textbooks/functions-11-nelson/chapters/chapter-6-sinusoidal-functions/materials/6-8-mid-chapter-review
Mid Chapter Review

Chapter: Chapter 6
Section: Mid Chapter Review
Solutions: 5 Videos

Sketch the graph of a periodic function whose period is 10 and whose range is \{y\in \mathbb{R} \vert 4 \leq y\leq 10 \}

Q1

a. Graph the function g(x) = 5 \cos(2x) + 7 without using a graphing device when 0^o \leq x \leq 360^o and 0 \leq g(x) \leq 15. Determine the period, equation of the axis, amplitude, and range of the function.
b. Explain why the function is sinusoidal.
c. Calculate g(125).
d. Determine the values of x, 0^o \leq x \leq 360^o, for which g(x) = 12.

Q3

Determine the coordinates of the new point after a rotation of 64° about (0, 0) from the point (7, 0).

Q4

Two white marks are made on a car tire by a parking meter inspector. One mark is made on the outer edge of the tire; the other mark is made a few centimetres from the edge. The two graphs show the relationship between the heights of the white marks above the ground in terms of time as the car moves forward.

a. What is the period of each function, and what does it represent in this situation?
b. What is the equation of the axis of each function, and what does it represent in this situation?
c. What is the amplitude of each function, and what does it represent in this situation?
d. Determine the range of each function.
e. Determine the speed of each mark, in centimetres per second.
f. If a third mark were placed on the tire but closer to the centre, how would the graph of this function compare with the other two graphs?

Q5

The position, P(d), of the Sun at sunset, in degrees north or south of due west, depends on the latitude and the day of the year, d. For a specific latitude, the position in terms of the day of the year can be modelled by the function

\displaystyle P(d) = 28\sin(\frac{360}{365}d - 81)^o .

a. Graph without using a graphing device.
b. What is the period of the function, and what does it represent in this situation?
c.
What is the equation of the axis of the function, and what does it represent in this situation?
d. What is the amplitude of the function, and what does it represent in this situation?
e. Determine the range of the function.
f. What is the angle of sunset on February 15?
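The sunset model in Q5 is easy to probe numerically. The short Python sketch below (mine, not part of the textbook) evaluates P(d) and checks parts b and f, taking negative values as south of due west:

```python
import math

def sun_position(d):
    """P(d) = 28 sin((360/365 * d - 81) degrees): position of the Sun at
    sunset, in degrees north (+) or south (-) of due west, on day d."""
    return 28 * math.sin(math.radians(360 / 365 * d - 81))

# b. The period is 365 days (one year):
assert abs(sun_position(40) - sun_position(40 + 365)) < 1e-9

# c./d./e. The equation of the axis is P = 0 and the amplitude is 28 degrees,
# so the range is -28 <= P(d) <= 28.

# f. February 15 is day d = 31 + 15 = 46:
print(round(sun_position(46), 1))  # about -16.3, i.e. roughly 16 degrees south of due west
```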
https://idaes-pse.readthedocs.io/en/2.0.0a3/reference_guides/model_libraries/gas_solid_contactors/unit_models/moving_bed.html
# Moving Bed Reactor

The IDAES Moving Bed Reactor (MBR) model represents a unit operation where two material streams – a solid phase and a gas phase – pass through a linear reactor vessel while undergoing chemical reaction(s). The two streams have opposite flow directions (counter-flow). The MBR mathematical model is a 1-D rigorous first-principles model consisting of a set of differential equations obtained by applying the mass, energy (for each phase) and momentum balance equations.

Assumptions:

• The radial concentration and temperature gradients are assumed to be negligible.
• The reactor is assumed to be adiabatic.
• The solid phase is assumed to be moving at a constant velocity determined by the solids feed rate to the reactor.

Requirements:

• Property package contains temperature and pressure variables.
• Property package contains minimum fluidization velocity.

The MBR model is based on:

1. A. Ostace, A. Lee, C.O. Okoli, A.P. Burgard, D.C. Miller, D. Bhattacharyya, Mathematical modeling of a moving-bed reactor for chemical looping combustion of methane, in: M.R. Eden, M. Ierapetritou, G.P. Towler (Eds.), 13th Int. Symp. Process Syst. Eng. (PSE 2018), Computer-Aided Chemical Engineering 2018, pp. 325–330, San Diego, CA.

## Degrees of Freedom

MBRs generally have at least 2 (or more) degrees of freedom, consisting of design and operating variables. The design variables of reactor length and diameter are typically the minimum variables to be fixed.

## Model Structure

The core MBR unit model consists of two ControlVolume1DBlock Blocks (named gas_phase and solid_phase), each with one Inlet Port (named gas_inlet and solid_inlet) and one Outlet Port (named gas_outlet and solid_outlet).

## Constraints

In the following, the subscripts $$g$$ and $$s$$ refer to the gas and solid phases, respectively.
In addition to the constraints written by the control_volume Block, MBR units write the following Constraints:

### Geometry Constraints

Area of the reactor bed:

$A_{bed} = \pi \left( \frac{ D_{bed} }{ 2 } \right)^2$

Area of the gas domain:

$A_{g,t,x} = \varepsilon A_{bed}$

Area of the solid domain:

$A_{s,t,x} = (1 - \varepsilon) A_{bed}$

Length of the gas domain:

$L_{g} = L_{bed}$

Length of the solid domain:

$L_{s} = L_{bed}$

### Hydrodynamic Constraints

Superficial velocity of the gas:

$u_{g,t,x} = \frac{ F_{mol,g,t,x} }{ A_{bed} \rho_{mol,g,t,x} }$

Superficial velocity of the solids:

$u_{s,t} = \frac{ F_{mass,s,t,inlet} }{ A_{bed} \rho_{mass,s,t,inlet} }$

Pressure drop: The constraints written by the MBR model to compute the pressure drop (if has_pressure_change is 'True') in the reactor depend upon the construction arguments chosen.

If pressure_drop_type is simple_correlation:

$- \frac{ dP_{g,t,x} }{ dx } = 0.2 \left( \rho_{mass,s,t,x} - \rho_{mass,g,t,x} \right) u_{g,t,x}$

If pressure_drop_type is ergun_correlation:

$- \frac{ dP_{g,t,x} }{ dx } = \frac{ 150 \mu_{g,t,x} {\left( 1 - \varepsilon \right)}^{2} \left( u_{g,t,x} + u_{s,t} \right) }{ \varepsilon^{3} d_{p}^2 } + \frac{ 1.75 \left( 1 - \varepsilon \right) \rho_{mass,g,t,x} \left( u_{g,t,x} + u_{s,t} \right)^{2} }{ \varepsilon^{3} d_{p} }$

### Reaction Constraints

Gas phase reaction extent (if gas_phase_config.reaction_package is not 'None'):

$\xi_{g,t,x,r} = r_{g,t,x,r} A_{g,t,x}$

Solid phase reaction extent (if solid_phase_config.reaction_package is not 'None'):

$\xi_{s,t,x,r} = r_{s,t,x,r} A_{s,t,x}$

Gas phase heterogeneous rate generation/consumption:

$M_{g,t,x,p,j} = A_{s,t,x} \sum_{r}^{reactions} {\nu_{s,j,r} r_{s,t,x,r}}$

### Dimensionless numbers, mass and heat transfer coefficients

Particle Reynolds number:

$Re_{p,t,x} = \frac{ u_{g,t,x} \rho_{mass,g,t,x} d_{p} }{ \mu_{g,t,x} }$

Prandtl number:

$Pr_{t,x} = \frac{ c_{p,t,x} \mu_{g,t,x} }{ k_{g,t,x} }$

Particle Nusselt number:

$Nu_{p,t,x} = 2 +
1.1 Pr_{t,x}^{1/3} \left| Re_{p,t,x} \right|^{0.6}$

Particle to fluid heat transfer coefficient:

$h_{gs,t,x} d_{p} = Nu_{p,t,x} k_{g,t,x}$

If energy_balance_type is not EnergyBalanceType.none:

Gas phase - gas to solid heat transfer:

$H_{g,t,x} = - \frac{ 6 } { d_{p} } h_{gs,t,x} \left( T_{g,t,x} - T_{s,t,x} \right) A_{s,t,x}$

Solid phase - gas to solid heat transfer:

$H_{s,t,x} = \frac{ 6 } { d_{p} } h_{gs,t,x} \left( T_{g,t,x} - T_{s,t,x} \right) A_{s,t,x}$

## List of Variables

| Variable | Description | Reference to |
| --- | --- | --- |
| $$A_{bed}$$ | Reactor bed cross-sectional area | bed_area |
| $$A_{g,t,x}$$ | Gas phase area (interstitial cross-sectional area) | gas_phase.area |
| $$A_{s,t,x}$$ | Solid phase area | solid_phase.area |
| $$c_{p,t,x}$$ | Gas phase heat capacity (constant $$P$$) | gas_phase.properties.cp_mass |
| $$D_{bed}$$ | Reactor bed diameter | bed_diameter |
| $$F_{mass,s,t,inlet}$$ | Total mass flow rate of solids, at inlet ($$x=1$$) | solid_phase.properties.flow_mass |
| $$F_{mol,g,t,x}$$ | Total molar flow rate of gas | gas_phase.properties.flow_mol |
| $$H_{g,t,x}$$ | Gas to solid heat transfer term, gas phase | gas_phase.heat |
| $$H_{s,t,x}$$ | Gas to solid heat transfer term, solid phase | solid_phase.heat |
| $$h_{gs,t,x}$$ | Gas-solid heat transfer coefficient | gas_solid_htc |
| $$k_{g,t,x}$$ | Gas thermal conductivity | gas_phase.properties.therm_cond |
| $$L_{bed}$$ | Reactor bed height | bed_height |
| $$L_{g}$$ | Gas domain length | gas_phase.length |
| $$L_{s}$$ | Solid domain length | solid_phase.length |
| $$M_{g,t,x,p,j}$$ | Rate generation/consumption term, gas phase | gas_phase.mass_transfer_term |
| $$Nu_{p,t,x}$$ | Particle Nusselt number | Nu_particle |
| $$dP_{g,t,x}$$ | Total pressure derivative w.r.t. $$x$$ (axial position) | gas_phase.deltaP |
| $$Pr_{t,x}$$ | Prandtl number | Pr |
| $$r_{g,t,x,r}$$ | Gas phase reaction rate | gas_phase.reactions.reaction_rate |
| $$r_{s,t,x,r}$$ | Solid phase reaction rate | solid_phase.reactions.reaction_rate |
| $$Re_{p,t,x}$$ | Particle Reynolds number | Re_particle |
| $$T_{g,t,x}$$ | Gas phase temperature | gas_phase.properties.temperature |
| $$T_{s,t,x}$$ | Solid phase temperature | solid_phase.properties.temperature |
| $$u_{g,t,x}$$ | Superficial velocity of the gas | velocity_superficial_gas |
| $$u_{s,t}$$ | Superficial velocity of the solids | velocity_superficial_solid |

Greek letters:

| Variable | Description | Reference to |
| --- | --- | --- |
| $$\varepsilon$$ | Reactor bed voidage | bed_voidage |
| $$\mu_{g,t,x}$$ | Dynamic viscosity of gas mixture | gas_phase.properties.visc_d |
| $$\xi_{g,t,x,r}$$ | Gas phase reaction extent | gas_phase.rate_reaction_extent |
| $$\xi_{s,t,x,r}$$ | Solid phase reaction extent | solid_phase.rate_reaction_extent |
| $$\rho_{mass,g,t,inlet}$$ | Density of gas mixture | gas_phase.properties.dens_mass |
| $$\rho_{mass,s,t,inlet}$$ | Density of solid particles | solid_phase.properties.dens_mass_particle |
| $$\rho_{mol,g,t,x}$$ | Molar density of the gas | gas_phase.properties.dens_mole |

## List of Parameters

| Parameter | Description | Reference to |
| --- | --- | --- |
| $$d_{p}$$ | Solid particle diameter | solid_phase.properties._params.particle_dia |
| $$\nu_{s,j,r}$$ | Stoichiometric coefficients | solid_phase.reactions.rate_reaction_stoichiometry |

## Initialization

The initialization method for this model will save the current state of the model before commencing initialization and reloads it afterwards. The state of the model will be the same after initialization; only the initial guesses for unfixed variables will be changed. The model allows for the passing of a dictionary of values of the state variables of the gas and solid phases that can be used as initial guesses for the state variables throughout the time and spatial domains of the model. This is optional but recommended. A typical guess could be values of the gas and solid inlet port variables at time $$t=0$$.
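For instance, such guess dictionaries could be assembled as follows. This is only a sketch: the keys must match the state variables of the property packages actually in use, and the component names and numbers below are placeholders, not values from any IDAES example:

```python
# Hypothetical initial-guess dictionaries for the gas and solid phases.
# Keys must match the state variables of the property packages in use;
# the names and numbers below are illustrative only.
gas_phase_state_args = {
    "flow_mol": 100.0,        # mol/s, gas feed at the x = 0 inlet
    "temperature": 298.15,    # K
    "pressure": 1.60e5,       # Pa
    "mole_frac_comp": {"comp_A": 0.5, "comp_B": 0.5},
}
solid_phase_state_args = {
    "flow_mass": 500.0,       # kg/s, solids feed at the x = 1 inlet
    "temperature": 1173.15,   # K
    "mass_frac_comp": {"solid_A": 0.6, "solid_B": 0.4},
}
```

These dictionaries would then be passed to the initialization routine as the gas_phase_state_args and solid_phase_state_args keyword arguments described below.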
The model initialization proceeds through a sequential hierarchical method where the model equations are deactivated at the start of the initialization routine, and the complexity of the model is built up through activation and solution of various sub-model blocks and equations at each initialization step. At each step the model variables are updated to better guesses obtained from the model solution at that step.

The initialization routine proceeds as follows:

• Step 1: Initialize the thermo-physical and transport properties model blocks.
• Step 2: Initialize the hydrodynamic properties.
• Step 3a: Initialize mass balances without reactions and pressure drop.
• Step 3b: Initialize mass balances with reactions and without pressure drop.
• Step 3c: Initialize mass balances with reactions and pressure drop.
• Step 4: Initialize energy balances.

## MBR Class

class idaes.models_extra.gas_solid_contactors.unit_models.moving_bed.MBR(*args, **kwds)

Parameters

• rule (function) – A rule function or None. Default rule calls build().
• concrete (bool) – If True, make this a toplevel model. Default - False.
• ctype (class) – Pyomo ctype of the block. Default - pyomo.environ.Block
• default (dict) – Default ProcessBlockData config

Keys

dynamic
    Indicates whether this model will be dynamic or not, default = useDefault. Valid values: { useDefault - get flag from parent (default = False), True - set as a dynamic model, False - set as a steady-state model.}

has_holdup
    Indicates whether holdup terms should be constructed or not. Must be True if dynamic = True, default - False.
Valid values: { useDefault - get flag from parent (default = False), True - construct holdup terms, False - do not construct holdup terms}

finite_elements
    Number of finite elements to use when discretizing length domain (default=20)

length_domain_set
    (optional) list of points to use to initialize a new ContinuousSet if length_domain is not provided (default = [0.0, 1.0])

transformation_method
    Method to use to transform domain. Must be a method recognised by the Pyomo TransformationFactory, default - "dae.finite_difference". Valid values: { "dae.finite_difference" - use a finite difference transformation method, "dae.collocation" - use a collocation transformation method}

transformation_scheme
    Scheme to use when transforming domain. See Pyomo documentation for supported schemes, default - None. Valid values: { None - defaults to "BACKWARD" for the finite difference transformation method, and to "LAGRANGE-RADAU" for the collocation transformation method, "BACKWARD" - use a finite difference transformation method, "FORWARD" - use a finite difference transformation method, "LAGRANGE-RADAU" - use a collocation transformation method}

collocation_points
    Number of collocation points to use per finite element when discretizing length domain (default=3)

flow_type
    Flow configuration of Moving Bed - counter_current: gas side flows from 0 to 1, solid side flows from 1 to 0

material_balance_type
    Indicates what type of mass balance should be constructed, default - MaterialBalanceType.componentTotal. Valid values: { MaterialBalanceType.none - exclude material balances, MaterialBalanceType.componentPhase - use phase component balances, MaterialBalanceType.componentTotal - use total component balances, MaterialBalanceType.elementTotal - use total element balances, MaterialBalanceType.total - use total material balance.}

energy_balance_type
    Indicates what type of energy balance should be constructed, default - EnergyBalanceType.enthalpyTotal.
Valid values: { EnergyBalanceType.none - exclude energy balances, EnergyBalanceType.enthalpyTotal - single enthalpy balance for material, EnergyBalanceType.enthalpyPhase - enthalpy balances for each phase, EnergyBalanceType.energyTotal - single energy balance for material, EnergyBalanceType.energyPhase - energy balances for each phase.}

momentum_balance_type
    Indicates what type of momentum balance should be constructed, default - MomentumBalanceType.pressureTotal. Valid values: { MomentumBalanceType.none - exclude momentum balances, MomentumBalanceType.pressureTotal - single pressure balance for material, MomentumBalanceType.pressurePhase - pressure balances for each phase, MomentumBalanceType.momentumTotal - single momentum balance for material, MomentumBalanceType.momentumPhase - momentum balances for each phase.}

has_pressure_change
    Indicates whether terms for pressure change should be constructed, default - False. Valid values: { True - include pressure change terms, False - exclude pressure change terms.}

pressure_drop_type
    Indicates what type of pressure drop correlation should be used, default - "simple_correlation". Valid values: { "simple_correlation" - use a simplified pressure drop correlation, "ergun_correlation" - use the Ergun equation.}

gas_phase_config
    Gas phase config arguments:

    dynamic
        Indicates whether this model will be dynamic or not, default = useDefault. Valid values: { useDefault - get flag from parent (default = False), True - set as a dynamic model, False - set as a steady-state model.}

    has_holdup
        Indicates whether holdup terms should be constructed or not. Must be True if dynamic = True, default - False. Valid values: { useDefault - get flag from parent (default = False), True - construct holdup terms, False - do not construct holdup terms}

    has_equilibrium_reactions
        Indicates whether terms for equilibrium controlled reactions should be constructed, default - True.
Valid values: { True - include equilibrium reaction terms, False - exclude equilibrium reaction terms.}

    property_package
        Property parameter object used to define property calculations (default = 'use_parent_value') - 'use_parent_value' - get package from parent (default = None) - a ParameterBlock object

    property_package_args
        A dict of arguments to be passed to the PropertyBlockData and used when constructing these (default = 'use_parent_value') - 'use_parent_value' - get package from parent (default = None) - a dict (see property package for documentation)

    reaction_package
        Reaction parameter object used to define reaction calculations, default - None. Valid values: { None - no reaction package, ReactionParameterBlock - a ReactionParameterBlock object.}

    reaction_package_args
        A ConfigBlock with arguments to be passed to a reaction block(s) and used when constructing these, default - None. Valid values: { see reaction package for documentation.}

solid_phase_config
    Solid phase config arguments:

    dynamic
        Indicates whether this model will be dynamic or not, default = useDefault. Valid values: { useDefault - get flag from parent (default = False), True - set as a dynamic model, False - set as a steady-state model.}

    has_holdup
        Indicates whether holdup terms should be constructed or not. Must be True if dynamic = True, default - False. Valid values: { useDefault - get flag from parent (default = False), True - construct holdup terms, False - do not construct holdup terms}

    has_equilibrium_reactions
        Indicates whether terms for equilibrium controlled reactions should be constructed, default - True.
Valid values: {
True - include equilibrium reaction terms,
False - exclude equilibrium reaction terms.}

property_package
Property parameter object used to define property calculations (default = ‘use_parent_value’)
- ‘use_parent_value’ - get package from parent (default = None)
- a ParameterBlock object

property_package_args
A dict of arguments to be passed to the PropertyBlockData and used when constructing these (default = ‘use_parent_value’)
- ‘use_parent_value’ - get package from parent (default = None)
- a dict (see property package for documentation)

reaction_package
Reaction parameter object used to define reaction calculations, default - None.
Valid values: {
None - no reaction package,
ReactionParameterBlock - a ReactionParameterBlock object.}

reaction_package_args
A ConfigBlock with arguments to be passed to a reaction block(s) and used when constructing these, default - None.
Valid values: {see reaction package for documentation.}

• initialize (dict) – ProcessBlockData config for individual elements. Keys are BlockData indexes and values are dictionaries described under the “default” argument above.
• idx_map (function) – Function to take the index of a BlockData element and return the index in the initialize dict from which to read arguments. This can be provided to override the default behavior of matching the BlockData index exactly to the index in initialize.

Returns
(MBR) New instance

## MBRData Class¶

class idaes.models_extra.gas_solid_contactors.unit_models.moving_bed.MBRData(component)[source]

Standard Moving Bed Unit Model Class.

build()[source]
Begin building model (pre-DAE transformation).
Parameters: None
Returns: None

initialize_build(gas_phase_state_args=None, solid_phase_state_args=None, outlvl=0, solver=None, optarg=None)[source]
Initialization routine for MB unit.
Keyword Arguments:
• gas_phase_state_args – a dict of arguments to be passed to the property package(s) to provide an initial state for initialization (see documentation of the specific property package) (default = None).
• solid_phase_state_args – a dict of arguments to be passed to the property package(s) to provide an initial state for initialization (see documentation of the specific property package) (default = None).
• outlvl – sets output level of initialization routine.
• optarg – solver options dictionary object (default = None, use default solver options).
• solver – str indicating which solver to use during initialization (default = None, use default solver).

Returns: None

results_plot()[source]

Plot method for common moving bed variables. Variables plotted:

Tg : Temperature in gas phase
Ts : Temperature in solid phase
vg : Superficial gas velocity
P : Pressure in gas phase
Ftotal : Total molar flowrate of gas
Mtotal : Total mass flowrate of solid
Cg : Concentration of gas components in the gas phase
y_frac : Mole fraction of gas components in the gas phase
x_frac : Mass fraction of solid components in the solid phase
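Pulled together, the nested configuration documented above can be pictured as a plain dictionary. This is an illustration only: in the actual IDAES API these options live in a ConfigBlock, values such as EnergyBalanceType.enthalpyTotal are enum members rather than strings, and property_package/reaction_package take parameter-block objects rather than None.

```python
# Illustrative only: mirrors the documented MBR configuration hierarchy using
# plain strings in place of IDAES enum members and parameter-block objects.
mbr_config = {
    "energy_balance_type": "EnergyBalanceType.enthalpyTotal",
    "momentum_balance_type": "MomentumBalanceType.pressureTotal",  # default
    "has_pressure_change": True,
    "pressure_drop_type": "ergun_correlation",   # or "simple_correlation"
    "gas_phase_config": {
        "dynamic": "useDefault",        # get flag from parent
        "has_holdup": False,            # must be True if dynamic is True
        "has_equilibrium_reactions": True,
        "property_package": None,       # a ParameterBlock object in practice
        "reaction_package": None,       # a ReactionParameterBlock in practice
    },
    "solid_phase_config": {
        "dynamic": "useDefault",
        "has_holdup": False,
        "has_equilibrium_reactions": True,
        "property_package": None,
        "reaction_package": None,
    },
}

# The documented constraint: holdup terms are required for dynamic models.
for phase in ("gas_phase_config", "solid_phase_config"):
    cfg = mbr_config[phase]
    if cfg["dynamic"] is True:
        assert cfg["has_holdup"], f"{phase}: has_holdup must be True when dynamic"
```

Note that both phase sub-configs carry the same argument names; only the packages supplied for each phase differ.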
# Van Der Waals Molecules (Faraday Discussions of the Chemical Society, No 73)

Written in English.

Subjects: Physical chemistry; Molecular chemistry.

The Physical Object: Format: Paperback; Number of pages: 423.

ID Numbers: Open Library OL11600367M; ISBN 10: 0851866883; ISBN 13: 9780851866888.

Van der Waals forces are distance-dependent forces between atoms and molecules not associated with covalent or ionic chemical bonds. Sometimes the term is used to encompass all intermolecular forces, although some scientists include among them only the London dispersion force, the Debye force, and the Keesom force. Each atom and molecule has its own characteristic van der Waals radius, although since most molecules are not spherical, it is better to refer to a molecule’s van der Waals surface. This surface is the closest distance that two molecules can approach one another before repulsion kicks in and drives them back away from one another.

Johannes Diderik van der Waals (Biographical): Johannes Diderik van der Waals was born in Leyden, The Netherlands, the son of Jacobus van der Waals and Elisabeth van den Burg. After having finished elementary education at his birthplace he became a schoolteacher.

In chemistry, van der Waals forces are a type of intermolecular force. An intermolecular force is a relatively weak force that holds molecules together. Van der Waals forces are the weakest type of intermolecular force. They are named after the Dutch scientist Johannes Diderik van der Waals. Negatively charged electrons orbit molecules or ions.
The van der Waals equation uses two additional experimentally determined constants: a, which is a term to correct for intermolecular forces, and b, which corrects for the volume of the gas molecules (Table “Selected van der Waals Constants for Gas Molecules”).

Van der Waals forces are weak intermolecular forces that are dependent on the distance between atoms or molecules. These forces arise from the interactions between uncharged atoms/molecules. For example, van der Waals forces can arise from the fluctuation in the polarizations of two particles that are close to each other.
### Van Der Waals Molecules (Faraday Discussions of the Chemical Society, No 73)

Cluster Ions and Van Der Waals Molecules, by B.M. Smirnov. Hardback, published by CRC Press.

Get this from a library: Van der Waals molecules. [Royal Society of Chemistry (Great Britain). Faraday Division.]

The chapter explores important features of van der Waals forces and discusses the origin of the van der Waals dispersion force between neutral molecules. The van der Waals equation of state is also explained. Select 7 - Repulsive Steric Forces, Total Intermolecular Pair Potentials, and Liquid Structure. Book chapter, full text access.

Van der Waals interactions: a weak force of attraction between electrically neutral molecules that collide with or pass very close to each other. The van der Waals force is caused by temporary attractions between electron-rich regions of one molecule.

Van der Waals Forces: A Handbook for Biologists, Chemists, Engineers, and Physicists, by V. Adrian Parsegian.

Van der Waals complexes formed by a bromine molecule and one or several He atoms are analyzed from first principles.
Multidimensional potential energy surfaces and the structure and dynamics of these complexes are presented. (Author: Donald H. Levy.)

Each type of atom has a van der Waals radius at which it is in van der Waals contact with other atoms. The van der Waals radius of an H atom is nm, and the radii of O, N, C, and S atoms are between and nm. Two covalently bonded atoms are closer together than two atoms that are merely in van der Waals contact.

Van der Waals interactions (also known as London dispersion energies) are probably the most basic type of interaction imaginable. Any two molecules experience van der Waals interactions. Even macroscopic surfaces experience VDW. Van der Waals forces are weak interactions between molecules that involve dipoles. Polar molecules have permanent dipole-dipole interactions. Non-polar molecules can interact by way of London dispersion forces.

At these low temperatures van der Waals (vdW) molecules are stable and can be investigated by microwave, far-infrared spectroscopy and other modes of spectroscopy. Also in cold equilibrium gases vdW molecules are formed, albeit in small, temperature dependent, concentrations.

This diagram shows how a whole lattice of molecules could be held together in a solid using van der Waals dispersion forces. An instant later, of course, you would have to draw a quite different arrangement of the distribution of the electrons as they shifted around - but always in synchronisation.

The van der Waals equation for real gases is the corrected form of the ideal gas equation, which includes the effects of intermolecular forces of attraction and the space occupied by gas molecules. We do not go into deriving the van der Waals equation now, but we can express it as

$\left(p + a\frac{n^2}{V^2}\right)(V - nb) = nRT \tag{3}$

Van der Waals Forces. The first type of intermolecular force we will consider is the group called van der Waals forces, named after the Dutch chemist Johannes van der Waals ( - ).
Van der Waals forces are the weakest intermolecular force and consist of dipole-dipole forces and dispersion forces. Two molecules can interact by van der Waals forces when they are a certain distance apart. The molecules are stabilized by the van der Waals interaction at the van der Waals contact distance, because the potential energy of the system at this distance is at its lowest. The van der Waals radius of a given atom is half of the shortest distance observed in crystals between the nuclei of atoms of the same nature belonging to different molecules. (“Intermolecular Forces”, Huyskens, P.; Luck, W. New York: Springer-Verlag.)

About this book: Here, readers are introduced to the current experimental techniques of laser spectroscopy of van der Waals complexes produced in supersonic beams. The book is unique in its style, subject and scope, and provides information on recent research not only for researchers focusing on molecular spectroscopy but also for those.

The attractive or repulsive forces between molecular entities (or between groups within the same molecular entity) other than those due to bond formation or to the electrostatic interaction of ions or of ionic groups with one another or with neutral molecules.

The strengths of van der Waals dispersion forces. Towards the bottom of the last page, I described dipole-dipole attractions as being “fairly minor compared with dispersion forces”. A student challenged me about this, pointing out that many web sources and books say that dispersion forces are the weakest form of intermolecular attraction.

The van der Waals Model.
Let’s come back to the equation of state of an ideal gas \ref{c2v:ideal}:

\[P=\frac{nRT}{V} \label{c2v:ideal}\]

In order to improve our description of gases we need to take into account the two factors that the ideal gas model neglects: the size of the molecules, and the interactions between them.

Print book: English. View all editions and formats. Summary: This review discusses current ideas in the physics and chemistry of cluster ions and van der Waals molecules, as well as presenting numerical data on their parameters and the processes involving them.

“This book will be a useful introduction to the field of van der Waals forces for both theory and experimental graduate students. It will also prove to be a useful reference book for looking up formulae and will be kept in easy reach of this reviewer’s desk.”
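As a quick numerical illustration of how the two correction terms act, the sketch below compares the ideal-gas and van der Waals pressures for one mole of CO2 in one litre at 300 K. The constants a and b are commonly tabulated literature values for CO2, and the unit choices (L, bar, K) are assumptions of this example.

```python
# Van der Waals vs ideal-gas pressure for 1 mol of CO2 in 1 L at 300 K.
R = 0.083145          # gas constant in L·bar/(mol·K)
a = 3.640             # L^2·bar/mol^2, corrects for intermolecular attraction (CO2)
b = 0.04267           # L/mol, corrects for finite molecular volume (CO2)

def pressure_ideal(n, V, T):
    """Ideal gas law: P = nRT / V."""
    return n * R * T / V

def pressure_vdw(n, V, T):
    """Van der Waals equation solved for P:
    P = nRT / (V - nb) - a n^2 / V^2."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

n, V, T = 1.0, 1.0, 300.0
print(f"ideal gas:     {pressure_ideal(n, V, T):.2f} bar")
print(f"van der Waals: {pressure_vdw(n, V, T):.2f} bar")
```

At this density the attraction term dominates the excluded-volume term, so the van der Waals pressure comes out a couple of bar below the ideal-gas value.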
# NCERT solutions Chemistry Class 11 Part 2 chapter 11 The p-Block Elements

## Chapter 11 - The p-Block Elements

#### Pages 323 - 325

Discuss the pattern of variation in the oxidation states of B to Tl. Q 1.1 | Page 323
Discuss the pattern of variation in the oxidation states of C to Pb. Q 1.2 | Page 323
How can you explain the higher stability of BCl3 as compared to TlCl3? Q 2 | Page 323
Why does boron trifluoride behave as a Lewis acid? Q 3 | Page 323
Consider the compounds BCl3 and CCl4. How will they behave with water? Justify. Q 4 | Page 323
Is boric acid a protic acid? Explain. Q 5 | Page 323
Explain what happens when boric acid is heated. Q 6 | Page 323
Describe the shapes of BF3 and BH4–. Assign the hybridisation of boron in these species. Q 7 | Page 323
Write reactions to justify the amphoteric nature of aluminium. Q 8 | Page 323
What are electron deficient compounds? Are BCl3 and SiCl4 electron deficient species? Explain. Q 9 | Page 324
Write the resonance structures of CO_3^(2-). Q 10.1 | Page 324
Write the resonance structures of HCO_3^(-). Q 10.2 | Page 324
What is the state of hybridisation of carbon in CO_3^(2-)? Q 11.1 | Page 324
What is the state of hybridisation of carbon in diamond? Q 11.2 | Page 324
What is the state of hybridisation of carbon in graphite? Q 11.3 | Page 324
Explain the difference in properties of diamond and graphite on the basis of their structures. Q 12 | Page 324
Rationalise the given statements and give chemical reactions: Lead(II) chloride reacts with Cl2 to give PbCl4; Lead(IV) chloride is highly unstable towards heat; Lead is known not to form an iodide, PbI4. Q 13 | Page 324
Suggest reasons why the B–F bond lengths in BF3 (130 pm) and BF_4^(-) (143 pm) differ. Q 14 | Page 324
If the B–Cl bond has a dipole moment, explain why the BCl3 molecule has zero dipole moment. Q 15 | Page 324
Aluminium trifluoride is insoluble in anhydrous HF but dissolves on the addition of NaF.
Aluminium trifluoride precipitates out of the resulting solution when gaseous BF3 is bubbled through. Give reasons. Q 16 | Page 324
Suggest a reason as to why CO is poisonous. Q 17 | Page 324
How is an excessive content of CO2 responsible for global warming? Q 18 | Page 324
Explain the structures of diborane and boric acid. Q 19 | Page 324
What happens when borax is heated strongly? Q 20.1 | Page 324
What happens when boric acid is added to water? Q 20.2 | Page 324
What happens when aluminium is treated with dilute NaOH? Q 20.3 | Page 324
What happens when BF3 is reacted with ammonia? Q 20.4 | Page 324
Explain the following reaction: Silicon is heated with methyl chloride at high temperature in the presence of copper. Q 21.1 | Page 324
Explain the following reaction: Silicon dioxide is treated with hydrogen fluoride. Q 21.2 | Page 324
Explain the following reaction: CO is heated with ZnO. Q 21.3 | Page 324
Explain the following reaction: Hydrated alumina is treated with aqueous NaOH solution. Q 21.4 | Page 324
Give reasons: Conc. HNO3 can be transported in an aluminium container. Q 22.1 | Page 324
Give reasons: A mixture of dilute NaOH and aluminium pieces is used to open drains. Q 22.2 | Page 324
Give reasons: Graphite is used as a lubricant. Q 22.3 | Page 324
Give reasons: Diamond is used as an abrasive. Q 22.4 | Page 324
Give reasons: Aluminium alloys are used to make aircraft bodies. Q 22.5 | Page 324
Give reasons: Aluminium utensils should not be kept in water overnight. Q 22.6 | Page 324
Give reasons: Aluminium wire is used to make transmission cables. Q 22.7 | Page 324
Explain why there is a phenomenal decrease in ionisation enthalpy from carbon to silicon. Q 23 | Page 324
How would you explain the lower atomic radius of Ga as compared to Al? Q 24 | Page 324
What are allotropes? Sketch the structure of two allotropes of carbon, namely diamond and graphite. What is the impact of structure on the physical properties of the two allotropes?
Q 25 | Page 324
Classify the following oxides as neutral, acidic, basic or amphoteric: CO, B2O3, SiO2, CO2, Al2O3, PbO2, Tl2O3. Write suitable chemical equations to show their nature. Q 26 | Page 325
In some of the reactions thallium resembles aluminium, whereas in others it resembles group I metals. Support this statement by giving some evidence. Q 27 | Page 325
When metal X is treated with sodium hydroxide, a white precipitate (A) is obtained, which is soluble in excess of NaOH to give soluble complex (B). Compound (A) is soluble in dilute HCl to form compound (C). The compound (A) when heated strongly gives (D), which is used to extract the metal. Identify (X), (A), (B), (C) and (D). Write suitable equations to support their identities. Q 28 | Page 325
What do you understand by the inert pair effect? Q 29.1 | Page 325
What do you understand by allotropy? Q 29.2 | Page 325
What do you understand by catenation? Q 29.3 | Page 325
A certain salt X gives the following results: (i) its aqueous solution is alkaline to litmus; (ii) it swells up to a glassy material Y on strong heating; (iii) when conc. H2SO4 is added to a hot solution of X, white crystals of an acid Z separate out. Write equations for all the above reactions and identify X, Y and Z. Q 30 | Page 325
Write balanced equations for: BF3 + LiH → Q 31.1 | Page 325
Write balanced equations for: B2H6 + H2O → Q 31.2 | Page 325
Write balanced equations for: NaH + B2H6 → Q 31.3 | Page 325
Write balanced equations for: Q 31.4 | Page 325
Write balanced equations for: Al + NaOH → Q 31.5 | Page 325
Write balanced equations for: B2H6 + NH3 → Q 31.6 | Page 325
Give one method for industrial preparation and one for laboratory preparation of CO and CO2 each.
Q 32 | Page 325
Boric acid is polymeric due to (a) its acidic nature (b) the presence of hydrogen bonds (c) its monobasic nature (d) its geometry. Q 34 | Page 325
The type of hybridisation of boron in diborane is (a) sp (b) sp2 (c) sp3 (d) dsp2. Q 35 | Page 325
Thermodynamically the most stable form of carbon is (a) diamond (b) graphite (c) fullerenes (d) coal. Q 36 | Page 325
Elements of group 14 (a) exhibit an oxidation state of +4 only (b) exhibit oxidation states of +2 and +4 (c) form M2– and M4+ ions (d) form M2+ and M4+ ions. Q 37 | Page 325
If the starting material for the manufacture of silicones is RSiCl3, write the structure of the product formed. Q 38 | Page 325
# Gene synthesis allows biologists to source genes from farther away in the tree of life ## Abstract Gene synthesis enables creation and modification of genetic sequences at an unprecedented pace, offering enormous potential for new biological functionality but also increasing the need for biosurveillance. In this paper, we introduce a bioinformatics technique for determining whether a gene is natural or synthetic based solely on nucleotide sequence. This technique, grounded in codon theory and machine learning, can correctly classify genes with 97.7% accuracy on a novel data set. We then classify 19,000 unique genes from the Addgene non-profit plasmid repository to investigate whether natural and synthetic genes have differential use in heterologous expression. Phylogenetic analysis of distance between source and expression organisms reveals that researchers are using synthesis to source genes from more genetically-distant organisms, particularly for longer genes. We provide empirical evidence that gene synthesis is leading biologists to sample more broadly across the diversity of life, and we provide a foundational tool for the biosurveillance community. ## Introduction Biologists and bioengineers often transfer genes across organisms to test genetic hypotheses or to endow their favorite model organisms with novel traits or functionality1,2. In the first industrial example of recombinant DNA technology, Eli Lilly and Genentech expressed a synthetic gene encoding human insulin in the model bacterium Escherichia coli for drug manufacturing3. Soon afterwards, biologists began sourcing genes encoding thermostable polymerases4 from thermophilic bacteria and the well-known green fluorescent protein (GFP)5 from the jellyfish as research tools. More recent biological research focused on mammalian models has featured considerable introduction of bacterial genes, notably the targeted genome editing tool CRISPR-Cas96,7,8 and tools for optogenetics9,10. 
The growing field of synthetic biology also drives gene transfer because the genome sequences of non-model organisms present a treasure trove of potentially novel and orthogonal genes for testing in model organisms11,12. Using DNA synthesis to transfer synthetic gene sequences from one organism to another may succeed where transferring natural gene sequences would fail. Although natural genes have the potential for direct transfer from one organism to another because of the universality of the genetic code, many such sequences would express poorly when moved into a new organism because of differences in codon usage, GC content, or the presence of expression-limiting regulatory elements13,14. These concerns only worsen as sequence length increases because the potential for problematic codons increases, as does the time required to manually convert these codons using PCR-based or restriction enzyme-based approaches. Such constraints can limit what genetic engineers accomplish. In contrast with these restrictions on moving genes using traditional methods, gene synthesis can faithfully and rapidly recode natural sequences of large lengths15,16. Recoding algorithms harness synonymous codons that more closely reflect the expression organism and preserve the natural protein sequence17. Though the subtle implications of codon choice for the rate and quality of protein production are still being understood18,19, such codon-optimization is so valuable for expression that commercial gene synthesis service providers typically offer this option by default. We posit that codon-optimization offers a promising way to identify synthetic genes and the engineered organisms that contain them and thus provides the first way, to the best of our knowledge, to identify synthetic sequences from sequence alone. 
In the past, such engineering efforts could have been detected through the scars from gene editing, but such methods are becoming obsolete because of advances in scar-less molecular cloning20,21 and genome engineering techniques22. The ability to accurately identify synthetic genes enhances biosurveillance for organisms taking on non-native traits, which may be harmful or illicit. Although commercial DNA synthesis suppliers screen orders for similarity to select agents23,24,25,26, detection of synthetic genes within organismal genomes is particularly valuable for cases where conventional biosecurity control could be circumvented, such as when synthesis is done on a non-regulated machine. Such detection is also relevant for biosafety in the event of accidental release of engineered organisms. The importance of additional biosurveillance capability has been articulated widely, for example by a major U.S. bipartisan biodefense study27, ongoing U.S. intelligence agency research programs28 and in agricultural contexts by the USDA Animal and Plant Health Inspection Service29. Furthermore, a June 2018 report commissioned by the U.S. National Academies identified that making existing bacteria more dangerous and in situ production of harmful biochemicals are two topics that warrant the most concern30. Based on the biological and engineering implications of synthesis, we postulate a set of features that have the potential to distinguish synthetic sequences. To discern which of these are most predictive, we construct two reference sample sets, each comprised of known synthetic and known natural sequences. Using the first of these, our training set, we evaluate the predictiveness of the features using machine learning techniques. Interestingly, some commonly-known distinguishers (e.g., rare codon content) provide no additional benefit for our predictor. 
Having decided on a predictor, we examine its predictiveness out-of-sample using our larger second reference sample or test set. We can correctly classify 97.7% of those sequences, confirming that our scalable, sequence-only method for detecting synthetic genes is highly effective. After analyzing ~19,000 Addgene sequences, we find that the average genetic distance between source and expression organism is greater for synthetic genes than natural genes and that this difference increases at longer sequence lengths. Our findings of how gene synthesis is being used in public repositories reinforce the importance of our technique for biosurveillance and affirm that synthesis accelerates human-directed gene transfer across the tree of life. ## Results ### Classification scheme for natural or synthetic genes Many plausible definitions could be used for defining whether a sequence is natural or synthetic. We define a natural gene sequence as one that is found in naturally occurring genomes or metagenomes, including sequences that contain small deviations such as those resulting from natural evolution or from minor human interventions such as appending of short tags. We also consider complementary DNA (cDNA) sequences as natural given that they can be generated from naturally occurring messenger RNA using reverse transcription. In contrast, we apply the term synthetic to gene sequences that contain significant deviations from any single known contiguous naturally occurring sequence. We determined what constituted a significant deviation empirically by applying machine learning techniques to training and test sets of natural and synthetic sequences that we validated manually by sourcing them from sequence databases or from publications. Our definition is pragmatic and has limitations which reflect that we are only using the nucleotide sequence for the classification. 
For example, it is necessarily the case that if a researcher ordered synthesis of a gene sequence that was identical in every base to a natural sequence, we would classify that gene as natural. To learn which attributes best predict this classification, we considered two sets of quantitative attributes: intrinsic properties that we could determine from the sequence (such as GC content and rare codon percentage); or comparative properties that we could determine through similarity comparisons with a reference sequence database (such as query coverage—“QCov”—or percentage identity – “%Id”) (Fig. 2a, see Methods for full set of properties considered). We hypothesized that most of these properties would improve classification accuracy. To gather the comparative information, we used nucleotide Basic Local Alignment Search Tool35,36 (BLASTn) to test each sequence against the National Center for Biotechnology Information (NCBI) RefSeq database, a comprehensive database of naturally occurring genomes, metagenomes, and cDNA libraries37,38, and extracted comparison data for the best alignment entry. Because many published or publicly disclosed codon-optimization procedures use a weighted Monte Carlo approach proportional to codon abundance (or codon adaptation index, CAI)39,40,41,42,43, we surmised that there might be an effective cutoff value for %Id below which there should only be synthetic sequences. To quantify this theoretical cutoff, we pursued two strategies. First, we computed the average expected %Id of a nucleotide sequence assuming randomized codon-substitution without weighting by the codon usage of any particular organism. Each codon substitution thus produced an expected %Id based only on the number of codon possibilities for each amino acid and the nucleotide substitutions between them. 
Weighting these values by the amino acid occurrence frequency in nature44 indicates that a randomly codon-substituted sequence should, on average, have 78 %Id compared to the starting non-substituted sequence (Supplementary Tables 1–3). This provides the baseline against which we compare our second strategy, which tests the expected %Id from codon-optimization for specific organisms. We did stochastic simulations of all potential pairs of 16 different organisms, using their actual codon usage tables. On average, these simulations provided similar results. For example, expression of human sequences in other organisms (Fig. 2b) had an average of 75 %Id. All simulation averages fell below 85 %Id. Codon optimization across other organism pairs revealed important variation from the 75 %Id average: organisms with highly-dissimilar codon usage produced 65 %Id on average, whereas optimizing organisms with highly-concentrated codon usage back to themselves produced 85 %Id on average (overall distribution summarized in Fig. 2c, the ‘shoulders’ of which, at 65 and 85%, represent these extremes). Because model organisms contain more typical codon usage, transfers of genes between two organisms with extreme codon-usage are infrequent. Together our theoretical analysis suggests that Monte-Carlo based codon optimization methods leave telltale signals in the %Id when compared back to the pre-codon-optimization sequence. To empirically determine which attributes could inform our classification and what quantitative thresholds would be appropriate, we pursued a supervised machine learning approach that considered the full set of previously mentioned variables. We constructed a training set consisting of 83 gene sequences populated with natural and synthetic genes for expression in E. coli, Saccharomyces cerevisiae (baker’s yeast), and Homo sapiens (Supplementary Tables 4–11).
Synthetic genes included in training and test sets were identified from several independent databases using keyword searches for terms such as “synthetic” or “codon-optimized” and manually verified from user-provided annotation or the methods section from the corresponding publication. We applied random forest machine learning45,46 to this training set and determined that sequence %Id below 85% was the best predictor of a synthetic sequence, aligning well with our theoretical results. Using this classification criterion on a test set of 173 manually identified sequences yielded 97.7% accuracy (Fig. 2d). This also aligns well with our theoretical simulation results, which predict that 98.6% of synthesized sequences will lie below the 85 %Id threshold. To further validate this threshold, we performed a simple parameter sensitivity analysis using our test set. This demonstrated that other %ID cutoffs are not as effective (Supplementary Table 12). Interestingly, our random forest approach did not identify GC content or rare codon content as an effective predictor. ### Application of the classification scheme to the Addgene data We applied our classification scheme to the 19,334 unique genes contained in the Addgene database from 2006–2015 to determine which were synthetic and which were natural (Fig. 2e). For this analysis, we excluded genes encoding known antibiotic resistances based on BLASTn of the Addgene database against reference antibiotic resistance sequences. We also excluded genes that were likely to encode fusion proteins (see Online Methods). We found that the share of synthetic gene sequences deposited in Addgene has increased over time (Fig. 2f). By 2015, synthetic sequences made up over 20% of the genes in newly deposited plasmids, up from less than 1% in 2006. The increasing abundance of synthetic sequences is consistent with the order of magnitude decrease in the cost of gene synthesis over this period47,48. 
### Examination of differential transgene expression Using our classification and BLASTn results, we investigated patterns of source and expression organisms for natural and synthetic gene sequences. Because Addgene expression fields contained terms broader than specific organisms, we grouped expression into six categories: Mammalian, Worm, Insect, Plant, Yeast, and Bacteria (Supplementary Table 13) and use this as the expression organism. We determine the source organism by considering the organism corresponding to the best alignment entry (also known as the maximal-scoring segment pair) for a gene sequence. An alternative approach to finding the source organism would have been to use BLASTx to identify the source organism in addition to BLASTn to identify %QCov and %Id. In practice such an approach has important drawbacks, for example more sparsely populated reference databases (see Online Methods). Many synthetic sequences resulted in no BLAST alignment to any sequence in the RefSeq database, and these were designated as “No Hit.” Sequences that result in “No Hit” are likely to be de novo synthetic sequences that deviate from any known protein. We report the proportion of sequences that fit into this category and where they are expressed because this is of independent interest, but we ignore these sequences for subsequent genetic distance calculations since they lack a source organism. Additionally, for sequences that best aligned to viral sequences, we included a “Virus” category that exists outside of taxonomic relationships for living organisms. We binned all sequences with available source organisms by phylum in accordance with NCBI taxonomic practice. A precise determination of genetic distance between source and expression organisms is not possible using existing taxonomic systems because they are not quantitative and because such comparisons cannot be made at the phylum level. 
Instead we estimate genetic distance using 16 S or 18 S ribosomal RNA (rRNA) sequence from the SILVA database49. rRNA is highly evolutionarily conserved and can function as an evolutionary chronometer since 18 S rRNA is the eukaryotic nuclear homolog of 16 S rRNA in prokaryotes50,51. We constructed a phylogenetic tree using the web tool Phylogeny.fr52 and extracted genetic distance estimates for each source-expression pair based on the most-common organism in that phylum in the Addgene database (Fig. 3a and Supplementary Fig. 1). These genetic distances represent the fraction of mismatches at aligned positions, as is conventional in phylogenetic analysis. Because we are measuring the distance between the source and expression organisms (and not to the specific query sequence), our measure of genetic distance for the usage of a sequence is independent of whether or not it is classified as synthetic. We display heatmaps showing the number of natural and synthetic gene sequences in the Addgene database corresponding to source-expression category pairs across the 22 most common phyla (Fig. 3b). From these heatmaps, we can make several observations about the relative magnitude of phylum sourcing, the kinds of gene transfers occurring, and the differences in these aspects between natural and synthetic genes. Though the most common expression system for Addgene plasmids is Mammalian, the largest source of unique gene sequences by a significant margin based on BLASTn is Phylum Proteobacteria. The next largest sources of unique gene sequences are Phylum Chordata, viruses, and Phylum Cnidaria, respectively. This may reflect the relative focus on studying vertebrate and viral genes, as well as the importance of GFP in biological research. Sequences sourced from Proteobacteria are used at approximately similar levels in Mammalian and Bacterial expression systems, regardless of whether the sequence is identified as natural or synthetic. 
On the other hand, sequences sourced from Chordata are predominantly used in Mammalian expression systems, regardless of whether the sequence is identified as natural or synthetic. These heatmaps demonstrate significant transgene expression for both natural and synthetic gene sequences. The most frequent type of transfer is from source phylum Proteobacteria to Mammalian expression. Though this may be consistent with the predominance of deposits and orders of mammalian expression constructs from Addgene, it is striking that the frequency of transfer from the source phylum Chordata to bacterial expression (essentially the reverse phenomenon) is far lower. A higher-level pattern observable in the heatmaps of Fig. 3b is their relationship with genetic distance shown in Fig. 3a. If genes were being most commonly expressed in their source organism, one would observe hotspots in Fig. 3b along a diagonal axis roughly from upper-left to lower-right. These hotspots are clear for animal expression platforms for both natural and synthetic genes. For natural genes, the pattern extends into many bacterial sequences (hotspots on the lower-right). However, for synthetic genes there is a marked change in the trend for bacterially sourced sequences. Hotspots frequently appear on the lower-left, indicating a high-frequency of mammalian expression of bacterially derived, synthetic sequences. ### Genetic distances between source and expression organisms From these heatmaps it is difficult to quantify the differences in expression of natural and synthetic genes. Thus, we calculated genetic distances between the source and expression organism for each sequence. Overall, consistent with our main hypothesis, we find that the average genetic distance between source and expression organisms is greater for synthetic than for natural gene sequences, and that this distinction is highly statistically significant. Table 1 shows these results through a series of regression specifications. 
In all cases, Synthetic is a binary variable which is 1 if our classification system deems that sequence synthetic, and 0 otherwise. Specification (1) shows that expression with synthetic sequences is, on average, 0.077 units (t-test p-value < 0.01) farther from the source organism than are natural sequences. Specification (2) shows that the gap between the genetic distance between synthetic and natural sequence use grows with sequence length, with each extra kilobase adding 0.117 units (t-test p-value < 0.01) to the difference. Specifications (3) and (4) confirm the finding of specification (2), but use the alternative dependent variable Cross Kingdom, which is a binary variable equal to 1 if the expression is cross-kingdom and 0 otherwise. These trends remain even if CRISPR-Cas9 sequences are excluded from the analysis (Supplementary Table 14). Figure 4 uses a non-parametric local regression (loess) to show the relationship between gene length and genetic distance, for both natural and synthetic sequences. The shaded regions represent one standard error. Our results suggest that the longer a natural gene sequence is, the less likely it is to be transferred into another organism by researchers. This observation is consistent with the perception that longer unmodified gene sequences are generally more difficult to express and that, as sequence length grows, so does the likelihood that there will be sequence regions that are troublesome to express in another organism. In contrast, synthetic sequences experience little to no drop in genetic distance as gene length grows, and at large lengths are used predominantly for transfer across distant organisms. Thus we conclude that gene synthesis enables transgene expression at a much higher rate than traditional techniques, and that this difference is both scientifically and statistically significant. 
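Specification (2) above can be illustrated with a plain OLS fit. In this sketch we simulate data in which the paper's reported point estimates (0.077 for Synthetic, 0.117 for the length interaction) are the "true" coefficients; the intercept, length effect, and noise level are hypothetical, and the fit uses NumPy rather than the paper's Stata:

```python
import numpy as np

# Illustrative OLS fit of: GeneticDistance ~ Synthetic + GeneLength
# + GeneLength x Synthetic, on simulated data.
rng = np.random.default_rng(0)
n = 5000
synthetic = rng.integers(0, 2, n)        # binary synthetic-classification indicator
length_kb = rng.uniform(0.3, 3.0, n)     # gene length in kilobases
true = dict(alpha=0.20, beta=0.077, gamma=-0.05, psi=0.117)  # alpha, gamma hypothetical
y = (true["alpha"] + true["beta"] * synthetic + true["gamma"] * length_kb
     + true["psi"] * length_kb * synthetic + rng.normal(0, 0.05, n))

# Design matrix: intercept, Synthetic, GeneLength, interaction term.
X = np.column_stack([np.ones(n), synthetic, length_kb, length_kb * synthetic])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
alpha_hat, beta_hat, gamma_hat, psi_hat = coef
```

With this sample size the recovered coefficients land close to the simulated values, mirroring how the interaction term captures the widening synthetic–natural gap at longer lengths.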
## Discussion This paper introduces a nucleotide-only method for determining whether a genetic sequence is synthetic or natural. Grounded both in codon theory and in empirical testing using machine learning, we find that we can correctly classify sequences with 97.7% accuracy on a novel data set. The key heuristics that enable this classification are the percentage nucleotide identity and query coverage of a gene sequence compared across a reference database of natural sequences. Somewhat surprisingly, our machine learning approach did not find GC content or rare codon usage to be an effective predictor. Very usefully, BLASTn queries against the RefSeq genomic collection simultaneously provide the data needed for sequence classification, as well as the organismal origin of the gene. Using our classification method and phylogenetic distance calculations on sequences in the Addgene database, we provide empirical evidence that gene synthesis is being widely used by practitioners to source genes from genetically-distant organisms, which is a finding of important consequence for biotechnology and biosurveillance communities. The genetic distance between the organism used for gene expression and the organism from which the gene was sourced is not only notably greater for synthetic than for natural sequences, but this gap grows as sequence length increases. Our finding sheds light on the tension in using synthesis for longer gene sequences. On one hand, a longer natural gene sequence would be more likely to contain codons problematic for gene transfer, making synthesis more attractive for these sequences. On the other hand, methods and pricing for synthesis vary widely based on DNA length15, and thus many factors, including size limits on common synthetic gene offerings (e.g., gBlocks from Integrated DNA Technologies), may steer community behavior away from using synthesis for long sequences.
Our results suggest that, at the margin, scientists are more influenced by the ability of gene synthesis to access the treasure trove of natural genetic diversity and transfer it to new organisms. Determining the provenance of a genetic sequence is typically the first step of forensic attribution associated with biosurveillance. As gene synthesis technology is further democratized and genetically-engineered organisms increase in capability, such sequence classification tools are vital to identifying and monitoring engineered organisms that may be accidentally or deliberately released into new environments. Commercial gene synthesis suppliers already provide some security in this area by screening orders for potentially hazardous sequences24. But, as a major U.S. bipartisan biodefense study27 and ongoing U.S. intelligence agency research programs28 recently highlighted, there are limited tools to detect engineered organisms, which may have been constructed by circumventing gene synthesis regulations. In these circumstances, there is significant value in being able to analyze the sequences after-the-fact, for example based on an environmental sample obtained from a suspicious site. The classification method reported here can form part of a suite of tools and strategies that help identify an engineered organism (see Supplementary Discussion for a proposed workflow). Once an organism of interest is isolated, conventional tools can be used for whole genome sequencing, de novo genome assembly or reference genome alignment, and then open reading frame (ORF) detection. Our classification scheme can subsequently be applied to a subset of ORFs or every ORF to identify synthetic genes. Since transfer of natural genes is also of interest for biosecurity purposes, our more general approach of using existing BLASTn and phylogenetic tools to examine ORFs can help identify transgenes and evaluate the likelihood of horizontal or engineered transfer. 
If an engineered organism cannot be isolated and is part of an impure environmental sample, additional approaches such as 16 S rDNA sequencing and knowledge of environmental baselines may be needed. Upon identification of a synthetic gene, BLAST can provide functional annotation and guide response strategies to the engineered organism harboring the synthetic gene. For example, these approaches would accurately identify a recently engineered yeast strain designed to produce opioids as numerous synthetic genes were required to achieve this feat53. Other approaches would be needed to identify engineering modifications to non-ORF regions as these are outside the scope of our tools. A particularly apt setting for our approach is agricultural risk. For decades, through the Coordinated Framework for Regulation of Biotechnology, the USDA Animal and Plant Health Inspection Service has had oversight of genetically engineered organisms that may pose agricultural risk29. However, genetically modified organism (GMO) detection in agriculture has been limited to PCR-based methods with primers designed to target known genes associated with GMOs, which are most commonly synthetic transgenes54,55. Detection of GMO crops or food ingredients is of heightened interest in the European Union given stricter regulation. While state-of-the-art methods for GMO detection in the EU have featured more extensive databases, they remain associated with PCR methodology. The Joint Research Center (JRC) of the European Commission constructed the GMOMETHODS database which contained 118 different PCR methods allowing identification of 51 single GM events and 18 taxon-specific genes in a sample as of 201256. In 2015, the JRC followed up with a database specifically aimed at storing GMO-related sequences called JRC GMO-Amplicons57.
Though the database name refers to amplicons, the authors note that the availability of an updated GMO sequence database has increased in relevance with the advent of next-generation sequencing. When coupled with next-generation sequencing, our classification method should provide a more general and complementary approach to comparisons against known GMO-associated sequences because it can identify uncatalogued synthetic transgenes, such as those not intended for crop enhancement. In addition to agriculturally oriented agencies, public health, environmental, and biosecurity agencies would benefit from the ability to screen for untargeted genes in organisms to identify unusual risks. Engineered organisms containing synthetic genes are of particular interest because in academic settings they have been demonstrated to produce non-native illicit substances53, to express non-native toxins58, or to execute complex programs designed to alter human cell fates59. Furthermore, engineered microbes released into the environment can persist for years60,61. Because the RefSeq reference genome collection includes pathogen genomes, our classification approach can also be used to identify gene transfers into known pathogens. As an enabling technology for life science research, gene synthesis has changed the behavior of scientists. In the absence of affordable gene synthesis, researchers could look for parts in a narrow genetic neighborhood where transfer would be relatively easy, or source more broadly across organisms at the cost of potentially incurring much greater engineering effort with little guarantee of eventual success. Today those same researchers can source their parts from wherever makes the most biological sense with the knowledge that gene synthesis should help them overcome one of the main expression challenges. As such, gene synthesis could allow biologists to source genes from farther away in the tree of life.
This paper shows that it does, providing a bird’s-eye view of the community’s preference for relying on gene synthesis to transfer genes across large genetic distances. This trend promises to be of scientific, industrial, and medical use, to the great benefit of biologists and society at large. At the same time, society must be prepared in the event of accidental or deliberate release of genetically engineered organisms, and tools for synthetic sequence identification constitute a foundational part of these efforts. ## Methods ### Codon-substitution sequence percentage identity calculation As a first approximation for a cutoff value for sequence percentage identity, we calculated the expected sequence identity of any given gene sequence after codon substitution, without accounting for the relative differences in codon usage (Supplementary Table 1). First, we determined the expected percentage identity associated with codon-substitution at the amino acid level. Barring very rare exceptions, the 20 canonical amino acids are each encoded by the same codons throughout all known life. For all but three amino acids, the third nucleotide in the codon is the only variable position. At the codon level, this means that the sequence encoding an amino acid with two codon choices will either remain identical after optimization (3/3 bases unchanged) or be 67% identical (2/3 bases unchanged). Thus, for amino acids with only two synonymous codon choices, the average expected sequence percentage identity is 83%. Similarly, amino acids with four codon choices have an average expected percentage identity of 75% after codon-substitution. The remaining three amino acids feature nucleotide changes at positions other than just the third position; these each have six codon choices. For two of these amino acids—R (arginine) and L (leucine)—there are two codons where the first nucleotide also varies. This case is illustrated in Supplementary Table 2 for R.
An R or L codon is expected to have 61% sequence identity on average after optimization. In the case of S (serine) (Supplementary Table 3), two of the six codon choices have differences in the first and second nucleotide in addition to the usual third nucleotide variation. Thus, after determining the expected percentage identity associated with codon-substitution for each amino acid, we obtained a weighted average of 78% using the natural frequency of occurrence of each amino acid44. A threshold separating natural from synthetic sequences would need to be higher than this, to account for random variance in codon usage. To do this, and more precisely align with actual amino acid usage, we performed a simulation. ### Codon-optimization stochastic simulation We performed a stochastic simulation to model the transfer of genes between 16 organisms: A. thaliana, B. subtilis, C. crescentus CB15, C. elegans, D. melanogaster, D. rerio, E. coli, G. gallus, H. sapiens, M. musculus, N. tabacum, P. falciparum, R. norvegicus, S. cerevisiae, S. coelicolor A3, and T. thermophilus HB27. Codon usage tables for these organisms were obtained from the Codon Usage Database (http://www.kazusa.or.jp/codon/). We considered every pairwise transfer of genes within these (including transfer back to the organism itself) and modeled what percentage identity would be expected upon codon optimization.
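A single source-to-expression run of such a simulation can be sketched as follows. The paper's original implementation was in R (see Supplemental Code); this Python sketch uses hypothetical toy amino-acid frequencies and codon-usage weights for just two amino acids, purely for illustration:

```python
import random

# Toy inputs (HYPOTHETICAL values; the paper used real tables from the
# Codon Usage Database for 16 organisms and all 20 amino acids).
AA_FREQ = {"L": 0.60, "K": 0.40}            # toy amino-acid occurrence rates
SOURCE_USAGE = {                            # toy codon-usage tables
    "L": {"CTG": 0.5, "CTC": 0.3, "TTA": 0.2},
    "K": {"AAA": 0.7, "AAG": 0.3},
}
EXPRESSION_USAGE = {
    "L": {"CTG": 0.2, "CTC": 0.2, "TTA": 0.6},
    "K": {"AAA": 0.3, "AAG": 0.7},
}

def pick_codon(aa, usage, rng):
    """Choose a codon for `aa`, weighted by one organism's usage table."""
    codons, weights = zip(*usage[aa].items())
    return rng.choices(codons, weights=weights, k=1)[0]

def simulate_identity(n_aa=1000, seed=0):
    """Random protein -> two codon-weighted back-translations ->
    base-by-base percentage identity between the two nucleotide sequences."""
    rng = random.Random(seed)
    aas = rng.choices(list(AA_FREQ), weights=list(AA_FREQ.values()), k=n_aa)
    src = "".join(pick_codon(a, SOURCE_USAGE, rng) for a in aas)
    exp = "".join(pick_codon(a, EXPRESSION_USAGE, rng) for a in aas)
    return 100 * sum(a == b for a, b in zip(src, exp)) / len(src)

# Average over repeated runs, as in the paper's 1000-repetition design.
pct_id = sum(simulate_identity(seed=s) for s in range(100)) / 100
```

Repeating this for every ordered organism pair, with real usage tables, yields the distribution of expected %Id values summarized in Fig. 2c.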
In short, we completed the following steps: (i) Source organism amino acid sequence—Created a random sequence of 1000 amino acids, based on the natural occurrence rate of such amino acids in the source organism; (ii) Source organism nucleotide sequence—For each amino acid in the source organism amino acid sequence we randomly chose a codon that represents it, weighting the choice by the source organism’s codon usage table; (iii) Expression organism nucleotide sequence—For each amino acid in the source organism amino acid sequence we randomly chose a codon that represents it, weighting the choice by the expression organism’s codon usage table; (iv) Comparison of the Source Organism and Expression Organism nucleotide sequences—we compare sequences codon-by-codon to determine whether they are identical. The set of steps was repeated 1000 times for each of the 256 (16 × 16) pairings, yielding 256,000 simulation runs. R code used for the simulation can be found in the Supplemental Code section. ### Determination of classification criteria To identify a suitable classification criterion, we compiled a set of variables that could potentially determine whether a part is naturally occurring (“natural”) or was produced synthetically (“synthetic”). Initially, we considered the following variables: Percentage of rare codons (less than 2% occurrence in the host); Percentage of rare codons (less than 5% occurrence in the host); Percentage of rare codons (less than 10% occurrence in the host); Average codon abundance; GC content; BLAST output variables, such as maximum query coverage, maximum percent identity, maximum percent identity with query coverage greater than 50%, maximum percent identity with query coverage greater than 85%, maximum percent identity with query coverage greater than 95%, number of hits with query coverage greater than 50%, number of hits with query coverage greater than 85%, and number of hits with query coverage greater than 95%.
We also tested different combinations of the above variables to assess potential multiplicative or correlative effects. Host codon occurrences were determined from OpenWetWare for E. coli (http://openwetware.org/wiki/Escherichia_coli/Codon_usage) and the Kazusa Codon Usage Database for S. cerevisiae and H. sapiens (http://www.kazusa.or.jp/codon/). ### Construction of training and test sets for empirical testing To gain a sense of the percentage sequence identity differences that we would observe and to test the influence of other variables, we constructed a training set consisting of synthetic sequences that were known to be codon optimized for expression in specific organisms and a control set of natural sequences. A complete description of the training and test sets is included in Supplementary Methods. ### Procedure for variable reduction using random forest To determine the most useful set of variables that would distinguish between natural and synthetic sequences, we applied the R package random forest (‘randomForest’—https://cran.r-project.org/web/packages/randomForest/randomForest.pdf). Random forest is a learning method that can be used for classification by constructing a multitude of decision trees using a training data set. With a test set, each tree outputs a class, and the forest reports the mode of these classes. We observe that percent identity is sufficient to predict whether a sequence occurs naturally or was made synthetically. Additional variables did not improve the classification result. Using our training set of sequences, we identified 85% identity as the threshold. Sequences that have a higher percent identity when performing BLASTn against the RefSeq database can be classified as natural, while sequences with a lower percent identity are likely produced synthetically. ### Approach for nucleotide BLAST (BLASTn) To align sequences pairwise or to a database, we used the NCBI BLAST+ suite.
We calculated pairwise alignments using the standalone version on a local machine (ftp://ftp.ncbi.nlm.nih.gov/blast/executables/blast+/LATEST/—version 2.5.0). For alignments to a larger database such as RefSeq (see below), we used NCBI BLAST+ on Amazon Web Services (https://aws.amazon.com/marketplace/pp/B00N44P7L6/ref=mkt_wir_ncbi_blast#—version 2.5.0). In both cases, since we only have nucleotide sequences in our database and are looking only for related sequences, we use the ‘BLASTn’ algorithm and apply the following parameters: Maximum target sequences = 999,999; Expect threshold = 1; Word size = 11 (4 for pairwise alignment); Match score = 2; Mismatch score = −3; Gap cost – Existence = 5, Extension = 2. We chose to perform BLASTn against the Reference Sequence (RefSeq) database. RefSeq is maintained and provided freely by the National Center for Biotechnology Information (NCBI) and is, to our knowledge, the most comprehensive database of the genetic sequences found in natural organisms37,38. ### Application of classification scheme to Addgene data For the alignment of all Addgene sequences against the RefSeq database, we used NCBI BLAST+ on Amazon Web Services. We ran a c3.8xlarge instance (https://aws.amazon.com/ec2/instance-types/?nc1=h_ls) with 32 virtual CPUs and 60 GiB memory. The BLAST+ suite contains only the tax ID for each entry. To access the kingdom and scientific name of each hit we use the taxonomy database (ftp://ftp.ncbi.nlm.nih.gov/blast/db/taxdb.tar.gz). We ran the BLAST+ commands directly on the Amazon Web Service instance using the following command: blastn -query AddgeneSequenes.fasta -db refseq_genomic -evalue 1 -max_target_seqs 999999 -word_size 11 -gapopen 5 -gapextend 2 -penalty -3 -reward 2 -outfmt “6 qseqid sseqid sacc sskingdoms staxids sscinames scomnames length evalue pident nident mismatch qcovs qcovhsp qstart qend sstart send” -out RefSeq_AddgeneSequenes.txt.
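The tabular output produced by that command can then be reduced to one best hit per query. A minimal sketch (our helper, not part of the paper's pipeline; column positions follow the custom `-outfmt 6` string above, and we assume BLAST's usual best-hit-first ordering within each query):

```python
def best_hits(blast_lines):
    """Return {query_id: (pident, qcovs)} for each query's best alignment.

    BLAST tabular output groups hits by query, best-scoring first, so we
    keep the first line seen per query."""
    best = {}
    for line in blast_lines:
        fields = line.rstrip("\n").split("\t")
        qseqid = fields[0]
        pident = float(fields[9])    # 'pident' is the 10th requested column
        qcovs = float(fields[12])    # 'qcovs' is the 13th requested column
        best.setdefault(qseqid, (pident, qcovs))
    return best

# Hypothetical output row (tab-separated, 18 columns as requested above):
row = ("gene1\tNC_000913.3\tNC_000913\tBacteria\t511145\tEscherichia coli\t"
       "E. coli\t720\t0.0\t99.5\t716\t4\t98\t98\t1\t720\t100\t819")
print(best_hits([row])["gene1"])  # (99.5, 98.0)
```

The `(pident, qcovs)` pair per query is exactly the input the classification thresholds operate on.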
We are grateful to Addgene for sharing their data with us for this research project. The data was received in multiple CSV files. The Addgene data contains a wide range of information for each plasmid in the repository. For this research project we focused on the following information: Plasmid name/ID; The year a plasmid was deposited with Addgene; Plasmid expression system (vector type); Plasmid sequence; Features on the plasmid (e.g., ORFs, ribosome binding sites, promoters) and the start and end position of each feature. We subsampled the available data and considered only plasmids for which a submission date, a full sequence, and a list of annotated biological parts was provided. Two pieces of information from the previous list needed to be cleaned for this research project. The ‘features’ information was cleaned and summarized to reduce the computational power that was needed to align the sequences to the RefSeq database. We removed all features except for ORFs. Theoretical ORFs within plasmid sequences were detected by Addgene. Prior to June 26, 2017 (the launch date of SnapGene-powered maps), Addgene in-house software was used to detect theoretical ORFs. An arbitrary minimum ORF length of 150 amino acids was set and start codons (ATG) were searched for in all six reading frames. We then aligned all Addgene-identified ORFs pairwise using BLAST+. Each sequence received a unique ID, and if two sequences had 100% query coverage and 100% identity, then the same ID was given to the identical sequences. We also cleaned the plasmid expression system information by converting each entry into one of seven simplified expression categories. This was necessary because the information is not curated by Addgene and scientists often add more than one expression system.
This was done by making two assumptions: (1) that entries listing multiple expression systems were most likely cloned in a lower life form and primarily expressed in the highest life form listed; (2) that entries containing viral expression platforms were primarily intended for mammalian expression. Supplementary Table 13 lists the full set of original categories and the corresponding simplified expression category assigned. ### Exclusion of antibiotic resistance gene sequences Antibiotic resistances are used differently than other genes. Notably, the requirement of every plasmid to contain an antibiotic resistance or other selective marker means that a small number of genes are used highly redundantly. In addition, antibiotic resistances have been acquired by natural pathogens of high medical interest and therefore synthetic versions of these sequences are more likely to be found in the RefSeq database, potentially leading to false natural classification. Therefore, we removed them from our sample. First, we retrieved the most common antibiotic resistances from the Addgene website (http://blog.addgene.org/plasmids-101-everything-you-need-to-know-about-antibiotic-resistance-genes) and created a list of all the features in the Addgene database that are labeled as one of the antibiotic resistances. We retrieved the sequences of these features and built a database that we aligned to all other sequences from the Addgene database. If an ORF shares more than 85% query coverage and more than 85% identity with one of the previously identified antibiotic resistances, then we consider it an antibiotic resistance. With this approach we identified 534 unique antibiotic resistance sequences in our sample, and we excluded these sequences from further analyses. ### Exclusion of sequences with query coverage between 15–85% As described above, in our test and training sets we classified sequences with a more than 85% identity hit in the RefSeq database as natural.
In the Addgene data we add one additional constraint, reflecting the usage of fusion proteins (which were not in our training or test data). This is important because we would not want to classify an entire sequence as natural if 50% of the sequence has 100% sequence identity while the other 50% has 0%, yet this is exactly the result that would be obtained if we ignored the query coverage (which reports the percentage of the sequence that is being matched). We impose an additional requirement that genes must also have more than 85% query coverage to be deemed natural. Sequences that have less than 15% query coverage with any sequence in the RefSeq database, or those that result in "No Hit", are likely to be so unnatural (true de novo sequences) as to be considered synthetic. As already mentioned, in our training data we observed too few instances with low query coverage and high percentage identity to determine a precise tradeoff for how much query coverage would be optimal. Nevertheless, we chose an 85% query coverage cutoff as a form of robustness against misclassifications with BLASTn. This was guided by not wanting to pick too high a level, lest we exclude natural sequences with added tags for common purposes such as purification or localization, which are often 20–100 base pairs (and therefore less than 10% of a standard 1000 base pair gene). Similarly, we did not want to pick too low a level, since BLASTn searches preferentially for highly identical regions, and thus might cut off the end of a synthetic sequence, yielding a maximal-scoring segment pair with lower query coverage and higher percentage identity (and thus a false natural classification). For sequences with less than 15% query coverage, we assume that they are fully synthetic. This threshold is somewhat arbitrary, chosen to reflect that longer unmatched sequences are unlikely to be fusion proteins (and to be consistent with an upper threshold of 85%).
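Putting the thresholds above together, the classification rule can be sketched as follows. Note that the text does not spell out how a high-coverage but low-identity hit is treated, so that branch is flagged as an assumption in the code:

```python
def classify_sequence(query_coverage, identity):
    """Classify a BLASTn result using the cutoffs described above.
    A 'No Hit' result is encoded as query_coverage = 0."""
    if query_coverage > 85 and identity > 85:
        return "natural"
    if query_coverage < 15:
        return "synthetic"
    if query_coverage <= 85:
        return "putative fusion"
    # High coverage with <= 85% identity is not specified in the text;
    # returning a separate label here is an assumption of this sketch.
    return "unclassified"

print(classify_sequence(98, 99), classify_sequence(5, 99), classify_sequence(50, 99))
# natural synthetic putative fusion
```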
We hypothesized that sequences resulting in query coverages between 15 and 85% are very likely to be fusion proteins. We tested this for the set of putative fusion proteins by removing the portion of the query that successfully aligned in the first BLAST and re-running BLAST on the remaining shortened sequence. We found that many of the remaining sequence queries aligned with high query coverage on the second BLAST, suggesting that they were indeed fusion proteins.

### Regression analysis

For the regression analyses shown in Table 1 we used the software package StataIC 12 (details in Supplementary Code). We ran ordinary least squares (OLS) regressions for the following specifications:

1. $${\mathbf{Genetic}}\,{\mathbf{Distance}} = \alpha + \beta \,{\mathbf{Synthetic}}$$

2. $${\mathbf{Genetic}}\,{\mathbf{Distance}} = \alpha + \beta \,{\mathbf{Synthetic}} + \gamma \,{\mathbf{Gene}}\,{\mathbf{Length}} + \psi \,{\mathbf{Gene}}\,{\mathbf{Length}} \times {\mathbf{Synthetic}}$$

3. $${\mathbf{Cross}}\,{\mathbf{Kingdom}} = \alpha + \beta \,{\mathbf{Synthetic}} + \gamma \,{\mathbf{Gene}}\,{\mathbf{Length}} + \psi \,{\mathbf{Gene}}\,{\mathbf{Length}} \times {\mathbf{Synthetic}}$$

In specification (4) we repeated the third specification but estimated it using a logit regression, to reflect the binary outcome variable. In Fig. 4 we estimated two regressions, one on synthetic genes and the other on natural ones, using the loess regression function in ggplot (details in Supplementary Code). In each case we used a local regression of GeneticDistance on GeneLength with a span of 0.9.

### Evaluation of transgene expression

We assigned a source organism for each gene sequence based on the organism of the best BLASTn alignment. To determine source organisms, we faced a choice of whether to use BLASTn or BLASTx (which translates the queried nucleotide sequence into a protein sequence and searches against protein collections). We chose to use BLASTn for several reasons.
First, the use of BLASTx would require an additional BLAST run. Second, although protein-based BLAST strategies are recommended for determining the structure and function of proteins encoded by genes, BLASTx may be less accurate than BLASTn for source organism determination, because NCBI protein collections are more sparsely populated than NCBI nucleotide collections and because protein sequences are more highly conserved. Spot testing confirmed this, with BLASTx appearing to offer lower resolution than BLASTn for source organism identification. In future studies, one could envision evaluating a wide range of BLAST strategies for source organism determination, including weighting nucleotides by codon position. In any case, we expect that these differences in source organism assignment would be negligible when organisms are grouped by phylum, as we have done.

To compare the performance of BLASTn and BLASTx in determining the phyla of source organisms, we manually evaluated 50 randomly chosen sequences from the Addgene data. In 25 out of the 50 sequences, the two approaches led to differing conclusions for the maximal-scoring segment pair (most of which had lower %QCov and %Id scores from BLASTx than from BLASTn). Within these, however, only 5 displayed differences in phylum.

We determined the expression category using the cleaned Addgene data for expression system (Supplementary Table 13). We determined a representative organism for each phylum by the most common member of that phylum in the Addgene database and obtained 16S/18S ribosomal RNA sequences for each organism (see Supplementary Fig. 1).

### Code availability

All the authors’ analysis code is included in the Supplementary Information. Our analyses were run on R versions 3.3.1 (for the codon simulation) and 3.4.1, Python version 3.5, and Stata version IC12.
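For intuition, specification (1) from the Regression analysis section above is just an intercept-plus-dummy least-squares fit. The sketch below reproduces it on simulated data (the true coefficients 0.3 and 0.5 are made up, and this is plain Python, not the authors' Stata code); with a single 0/1 regressor, the OLS estimates reduce exactly to group means:

```python
import random

random.seed(0)
synthetic = [random.randint(0, 1) for _ in range(200)]       # 0/1 dummy
distance = [0.3 + 0.5 * s + random.gauss(0, 0.05) for s in synthetic]

# OLS for GeneticDistance = alpha + beta*Synthetic: with one binary
# regressor the fitted values are the two group means, so
# alpha = mean(natural group), beta = difference in means.
g0 = [d for d, s in zip(distance, synthetic) if s == 0]
g1 = [d for d, s in zip(distance, synthetic) if s == 1]
alpha = sum(g0) / len(g0)
beta = sum(g1) / len(g1) - alpha

print(round(alpha, 1), round(beta, 1))
```

The estimates recover the simulated intercept and dummy coefficient up to sampling noise.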
## Data availability

The following data were used for this paper:

- Addgene plasmid data: proprietary to Addgene, viewable but not downloadable at https://www.addgene.org/.
- RefSeq reference genome collection: available at https://www.ncbi.nlm.nih.gov/refseq/.
- Codon usage databases: available at https://www.kazusa.or.jp/codon and https://openwetware.org/wiki/Escherichia_coli/Codon_usage.
- SILVA 16S rRNA database: available at https://www.arb-silva.de/.

All other data are available upon request from the authors. Machine-readable versions of the data presented in the Supplementary Information are available at https://github.com/AKunjapur/Synthetic-gene-classification.
## Acknowledgements

We are tremendously indebted to Addgene for sharing their data, answering questions as they arose, and providing manuscript feedback. We thank Dr. Alec Nielsen (MIT), Dr. Darrell Ricke (MIT), and Dr. James Comolli (MIT) for discussions about BLAST. We are grateful to Dr. Nili Ostrov (Harvard) and George Chao (Harvard) for discussions about genetic distance calculations. A.M.K. was supported by a National Science Foundation Graduate Research Fellowship. P.P. was supported by two grants of the European Union’s Seventh Framework Programme FP7: the collaborative research project ST-FLOW (KBBE-2011-5, Grant Agreement number 289326) and the People Programme (Marie Skłodowska-Curie Actions, Grant Agreement number 612614). N.C.T. was supported by a grant from MIT.

## Author information

### Contributions

A.M.K. and N.C.T. conceived the study. P.P. tabulated Addgene descriptive statistics and developed/implemented the classification scheme under guidance from A.M.K. and N.C.T. A.M.K. and N.C.T. performed the codon-substitution and codon-optimization analyses, respectively. P.P. performed all data analysis of BLAST results, genetic distances, and regressions under guidance from N.C.T. A.M.K. performed the phylogenetic analysis to determine genetic distances and categorized source/expression organisms. A.M.K. and N.C.T. jointly led manuscript writing, and A.M.K. led figure production. All authors read and approved the final manuscript.

### Corresponding authors

Correspondence to Aditya M. Kunjapur or Neil C. Thompson.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions

Kunjapur, A.M., Pfingstag, P. & Thompson, N.C. Gene synthesis allows biologists to source genes from farther away in the tree of life. Nat Commun 9, 4425 (2018). https://doi.org/10.1038/s41467-018-06798-7
https://www.physicsforums.com/threads/gauss-lemma-number-theory.268989/
# Gauss Lemma (Number Theory)

1. Nov 3, 2008

### mathsss2

Use Gauss's Lemma (number theory) to calculate the Legendre symbol $$\left(\frac{6}{13}\right)$$. I know how to use Gauss's Lemma. However, we use the book by Ireland and Rosen. They state Gauss's Lemma as: $$\left(\frac{a}{p}\right)=(-1)^n$$. They say: let $$\pm m_t$$ be the least residue of $$ta$$, where $$m_t$$ is positive. As $$t$$ ranges between 1 and $$\frac{p-1}{2}$$, $$n$$ is the number of minus signs that occur in this way. I don't understand how to use this form of Gauss's Lemma.

Last edited: Nov 3, 2008

2. Nov 3, 2008

### gabbagabbahey

What are $a$ and $p$ in this case? What does that make $\frac{p-1}{2}$? What does that make the least residue of $ta$ in this case?

3. Nov 3, 2008

### mathsss2

Could you be more specific? I really do not know how to use this version of Gauss's Lemma. Could you show me some steps on how to start it this way?

4. Nov 3, 2008

### gabbagabbahey

You want to use the lemma for $\left( \frac{6}{13} \right)$, which means you want an "a" and "p" such that $\left( \frac{a}{p} \right) = \left( \frac{6}{13} \right)$, where "p" is a prime... surely you can think of at least one "a" and one "p" for which this will hold true?
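For what it's worth, Ireland and Rosen's formulation translates directly into a short computation. Here is a Python sketch (not from the thread) that counts the minus signs for $\left(\frac{6}{13}\right)$:

```python
def legendre_gauss(a, p):
    """Gauss's lemma: (a/p) = (-1)^n, where n counts the t in
    1..(p-1)/2 whose least residue of t*a, taken in the symmetric
    range -(p-1)/2..(p-1)/2, carries a minus sign."""
    n = 0
    for t in range(1, (p - 1) // 2 + 1):
        r = (t * a) % p
        if r > (p - 1) // 2:   # least residue is r - p, i.e., negative
            n += 1
    return (-1) ** n

print(legendre_gauss(6, 13))  # -1, so 6 is a quadratic non-residue mod 13
```

For a = 6, p = 13 the residues of ta for t = 1..6 are 6, 12, 5, 11, 4, 10; three of them (12, 11, 10) exceed 6 and so have negative least residues, giving n = 3 and (6/13) = -1.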
https://ask.sagemath.org/answers/48514/revisions/
# Revision history [back]

The preparser seems to be turned off in PyCharm. In a plain console you get:

    sage: p = 5256107634024827443646803157596584915370839195839782329597601469354483229307063
    sage: j = 3611062252397503822281535488379195436991347721427144349104935225639485573271142
    sage: K = GF(p)
    sage: K((3 * j) / (1728 - j))
    1674511800600631022371777328069227143110125063664501651628290807871952520681596

versus:

    sage: preparser(False)
    sage: p = 5256107634024827443646803157596584915370839195839782329597601469354483229307063
    sage: j = 3611062252397503822281535488379195436991347721427144349104935225639485573271142
    sage: K = GF(p)
    sage: K((3 * j) / (1728 - j))
    5256107634024827443646803157596584915370839195839782329597601469354483229307060

In the console (and when the preparser is on) integers are Sage integers, for which division produces rationals:

    sage: preparser(True)
    sage: a = 3
    sage: type(a)
    <type 'sage.rings.integer.Integer'>
    sage: 1 / 3
    1/3
    sage: type(1 / 3)
    <type 'sage.rings.rational.Rational'>

whereas in Python (or when the preparser is off) these are Python integers, for which division is the floor division, or Euclidean division:

    sage: preparser(False)
    sage: a = 3
    sage: type(a)
    <class 'int'>
    sage: 1 / 3
    0
    sage: type(1 / 3)
    <type 'int'>

If you want to fix the behavior in PyCharm, you can for example write K(3 * j) / K(1728 - j), which avoids the Python integer division.
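The intended computation can also be checked without Sage at all, using Python's three-argument `pow` for the modular inverse. This is plain Python, not part of the original answer, and it relies on `p` being prime (as the use of `GF(p)` implies) so that Fermat's little theorem gives the inverse:

```python
p = 5256107634024827443646803157596584915370839195839782329597601469354483229307063
j = 3611062252397503822281535488379195436991347721427144349104935225639485573271142

# (3*j) / (1728 - j) in GF(p): multiply by the Fermat inverse of the
# denominator, den^(p-2) mod p, instead of relying on the preparser's
# rational division.
num = (3 * j) % p
den = (1728 - j) % p
result = (num * pow(den, p - 2, p)) % p
print(result)  # matches the preparser-on Sage result above
```

On Python 3.8+ the inverse can be written more directly as `pow(den, -1, p)`.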
https://chemistry.stackexchange.com/questions/34779/calculated-13c-nmr-shifts-of-brominated-carbons
# Calculated 13C NMR shifts of brominated carbons

I have been calculating NMR spectra for a while by simply using one of the recommendations from the CHEmical SHIft REpository, mPW1PW91/6-311+G(2d,p)-SCRF//B3LYP/6-31+G(d,p), without thinking too much about how it actually works*, and there is a common misprediction that I'd like to understand. It seems that the shift/shielding of brominated carbons is always way off the measured shift. Compared to all other "normally" substituted carbons (C, H, O, N, ...), it is also predicted far less accurately. Luckily for me, as I don't want to show my own molecules :D, the repository gives an example (aplydactone) with a brominated carbon that shows exactly what I mean: the overall $^1$H mean absolute deviation (MAD) is about 0.08 ppm, and for the $^{13}$C shifts the MAD is about 1.5 ppm, excluding the brominated carbon (right side of the picture). The predicted shift has an error of 12.25 ppm (measured 65.5 ppm vs. calculated 77.75 ppm), which is about ten times the MAD of all other carbon shifts.

Is there a common explanation for this? Is it simply that there were no brominated samples within the test set, or is there a much deeper reason? For example, must bromine be treated with a different or special basis set, or something similar? One thing that comes to mind: as there might already be some influence of relativistic effects, could that be a reason?

Please share any deeper insight into the calculation of NMR shieldings that could explain this misprediction, so that I either understand how to correct these errors or at least have an explanation. I have not done further research on the internet yet, as I do not have much time right now. But as this question stays on my mind, maybe someone has already done the research and would share it with me.

* I know that it uses a regression function which was calculated by comparing lots of experimental vs. predicted shifts for a bunch of small organic substances.
What I mean is more the quantum theoretical aspect. • My suggestion would be to e-mail one of the authors your question directly. The review article suggests Tantillo as the corresponding author. I suspect they may have ideas. Aug 11 '15 at 13:32 • The article suggests "In fact, error in computed chemical shifts for carbon atoms can approach several dozen ppm when, for example, three chlorine atoms, or fewer bromine or iodine atoms, are attached to the carbon atom in question." Aug 11 '15 at 13:33 The authors note in their review article* that such effects are expected as "heavy atom effects" due to electron-correlation effects and spin-orbit coupling: (references stripped for clarity) This being said, one way to improve the computed chemical shifts of heavy-atom-bound carbon atoms is to utilize a method that better captures electron correlation effects, such as MP2 or higher-level methods along with correlation-consistent basis sets (see section 4.3). This will likely result in significant improvement for carbon atoms attached to sulfur or phosphorus as well as to halogens, although the improvement is expected to be greater for sulfur or phosphorus. Further improvement requires the use of methods that determine relativistic spin–orbit corrections. [...] Another option for improving chemical shift calculations with heavy-atom errors is to utilize a linear regression method, as discussed in section 3.6. Due to the systematic nature of heavy-atom effects, they are amenable to improvement via this approach, provided that the linear regression data consists of nuclei similar to the nucleus of interest. That is, a linear correlation correction to carbon nuclei attached to a given number of bromine atoms, for example, should be derived from empirical data for other carbon nuclei attached to the same number and type of halogen atoms. In fact, this approach has been demonstrated specifically for such cases. 
The latter section cites two relevant articles; both suggest ways to use DFT methods with empirical corrections that work fairly well.

* Michael W. Lodewyk, Matthew R. Siebert, and Dean J. Tantillo, Chem. Rev., 2012, 112 (3), 1839–1862.
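As a concrete illustration of the linear-regression route mentioned in the quoted passage, here is a sketch of empirically scaling computed shieldings against experimental shifts for a reference set of similarly substituted carbons. All numbers are invented for illustration and do not come from the article:

```python
# Hypothetical reference set: experimental 13C shifts (ppm) and the
# corresponding computed isotropic shieldings (ppm) for carbons of the
# same substitution type (e.g., C-Br carbons). Values are made up.
delta_exp = [30.1, 45.7, 52.3, 65.5, 71.2]
sigma_calc = [155.0, 139.8, 133.5, 121.0, 115.4]

# Least-squares fit of sigma = m*delta + b over the reference set.
n = len(delta_exp)
d_mean = sum(delta_exp) / n
s_mean = sum(sigma_calc) / n
m = sum((d - d_mean) * (s - s_mean) for d, s in zip(delta_exp, sigma_calc)) \
    / sum((d - d_mean) ** 2 for d in delta_exp)
b = s_mean - m * d_mean

def scaled_shift(sigma_new):
    """Empirically scaled shift for a newly computed shielding,
    inverting the fitted line: delta = (sigma - b) / m."""
    return (sigma_new - b) / m

print(round(scaled_shift(121.0), 1))
```

The point of deriving the fit from nuclei of the same heavy-atom environment, as the review stresses, is that the systematic heavy-atom error is absorbed into the slope and intercept.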
https://physics.stackexchange.com/questions/196706/flow-in-diverging-channel-is-unstable-according-to-bernoullis-equation
# Flow in diverging channel is unstable according to Bernoulli's equation

So let's say we have a circular pipe with a variable cross section with an inviscid incompressible fluid flowing through it. The areas of the two ends are $A_1$ and $A_2$. Let's say we also control the pressures at the two ends, $p_1$ and $p_2$, with $p_1 > p_2$. The unsteady Bernoulli equation written down for the two ends states

$$(p_2 - p_1) + \frac{\rho}{2} (u_2^2 - u_1^2) + \rho \frac{d}{dt} (\phi_2 - \phi_1) = 0$$

Using conservation of mass we can write $u_1 = \frac{A_2}{A_1} u_2$. We can put this into the above equation to get

$$(p_2 - p_1) + \frac{\rho}{2} u_2^2 \left(1 - \left(\frac{A_2}{A_1}\right)^2\right) + \rho \frac{d}{dt} (\phi_2 - \phi_1) = 0$$

To find the fixed points we set the time derivative equal to 0 and solve for $u_2^*$:

$$u_2^* = \pm \sqrt{ \frac{2}{\rho} \frac{p_1 - p_2}{1 - \left(\frac{A_2}{A_1}\right)^2}}$$

Now if $A_1 > A_2$ everything is fine and we get a stable fixed point. However, if $A_2 > A_1$ the expression under the square root is negative and the fixed point is destroyed. The flow diverges to infinity. This is obviously nonphysical. So what went wrong?

If $A_2 > A_1$, then due to continuity of flow the fluid will decelerate, and thus the pressures will obey $p_2 > p_1$. It is not possible to set up a stationary flow for every pair of pressure values.
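One way to see the instability concretely is to integrate the unsteady balance in time. The sketch below is a toy model that is not from the question: it assumes $\frac{d}{dt}(\phi_2 - \phi_1) \approx L \, \frac{du_2}{dt}$ for some effective pipe length $L$, which turns the balance into an ODE for $u_2$, and all numerical values are arbitrary:

```python
def integrate_u2(area_ratio, steps=1000, dt=1e-3):
    """Forward-Euler integration of
    rho*L*du2/dt = (p1 - p2) - (rho/2)*u2^2*(1 - (A2/A1)^2),
    starting from rest."""
    rho, L = 1000.0, 1.0       # illustrative fluid density and length
    p1, p2 = 2000.0, 1000.0    # p1 > p2, as in the question
    u2 = 0.0
    for _ in range(steps):
        du2dt = ((p1 - p2) - 0.5 * rho * u2**2 * (1 - area_ratio**2)) / (rho * L)
        u2 += dt * du2dt
    return u2

# Converging channel (A2/A1 = 0.5): u2 levels off near the stable fixed
# point sqrt(2*(p1-p2) / (rho*(1 - 0.25))) ~ 1.63 and stays bounded.
print(integrate_u2(0.5, steps=20000))

# Diverging channel (A2/A1 = 2): the u2^2 term now has a positive sign,
# so du2/dt stays positive and u2 grows without bound.
print(integrate_u2(2.0), integrate_u2(2.0, steps=1200))
```

With $A_2 > A_1$ the quadratic term switches sign, so the right-hand side can never vanish for the imposed $p_1 > p_2$, which is exactly the missing fixed point noted in the answer.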
https://scicomp.stackexchange.com/questions/5520/testing-1d-poisson-solver
# Testing 1D Poisson Solver

I'm trying to test a simple 1D Poisson solver to show that a finite difference method converges with $\mathcal{O}(h^2)$, and that using a deferred correction for the input function yields convergence with $\mathcal{O}(h^4)$.

So, the equation is $-u'' = f$ with boundary conditions $u(0) = u(1) = 0$. The method I'm trying to use employs the discretized operator

$$A = \left[ \begin{array}{cccccc} 2&-1&0&0&0&0 \\ -1&2&-1&0&0&0 \\ 0&-1&2&-1&0&0 \\ 0&0&-1&2&-1&0 \\ 0&0&0&-1&2&-1 \\ 0&0&0&0&-1&2\end{array}\right]$$

(the example matrix is for $h = \frac{1}{5}$). Then we solve $Au = h^2 f$. I've shown that theoretically this should converge with $\mathcal{O}(h^2)$, but when I test it in Matlab, I'm getting only $\mathcal{O}(h)$ convergence.

Then, I'm trying what my course instructor called "deferred correction", altering $f$ before solving. I concluded that the correction should be $f \mapsto f + \frac{h^2}{12} Af$. I've shown that this should converge with $\mathcal{O}(h^4)$, but in Matlab I still get $\mathcal{O}(h)$.

Here's the Matlab script:

```matlab
function [u err] = threeptsolve(ureal, du2, h)
% INPUT:  'ureal' is the function handle for the real solution.
%         'du2' is the function handle for the second derivative of 'ureal'
%         'h' is the step size
% OUTPUT: 'u' is the approximated solution
%         'err' is the error at each point
x = [0:h:1]';
n = length(x);
f = -du2(x);
realu = ureal(x);
A = 2 * eye(n);
A = A + diag(-1*ones(n-1,1), 1) + diag(-1*ones(n-1,1), -1);
A = (1/h^2) * A;
% uncomment if using deferred correction
% f = f + h^2/12 * A * f;
u = A\f;
err = (realu - u);
end
```

When I try this with some sample smooth functions (with appropriate boundary values), and then try again with $h/2$, I get a vector of (approximate) twos when I compute `err1 ./ err2(1:2:end)`. Is my math wrong, or is it my code?

• What is your `ureal`? If it isn't sufficiently smooth, you may lose high order accuracy. I doubt for a test case that this is the issue though.
– Godric Seer Mar 12 '13 at 15:50

• @GodricSeer, I use polynomials that are zero at the boundaries. E.g. `ureal = @(x)(-10*x.^4 + 5*x.^3 + 2*x.^2 + 3*x)`. – jake Mar 12 '13 at 15:58

• Try with this: you have made a minor mistake from the beginning in writing your equation to solve as $Au = h^2 f$, but in fact it is $Au = hf$; maybe that is the problem. – atiliomor Nov 6 '16 at 16:25

Your formulation of $A$ assumes that $u_0$ and $u_{n+1}$ are zero, which is correct. However, you are then also imposing your boundaries at $u_1$ and $u_n$. Your exact answer satisfies the latter BCs but not the BCs imposed in $A$. These discontinuities floating around cause you to lose your higher order accuracy. To get second order convergence you only need to change one line:

```matlab
x = [0:h:1]';
```

to

```matlab
x = [h:h:1-h]';
```

And I get a column of 4's (for 2nd order convergence) when I execute `err1 ./ err2(2:2:end-1)` (note the shift by 1 index, since the solutions now line up at the even indices rather than the odd). I have not gotten 4th order yet from your "deferred correction", however this solves part of your problem.

• Thank you! I've been looking at this all day and didn't catch that. Now, I'm finally on to fixing the deferred correction method. (It's more possible for that one that my math is off. I'll try re-calculating.) – jake Mar 12 '13 at 18:30

For defect correction you will need a 4th-order discretization. Let $A_2 u = f_2$ be your second-order discretization and let $A_4 u = f_4$ be a 4th-order discretization. Defect corrections then become

$$A_2 u^{(0)} = f_2 \\ A_2 u^{(k)} = f_4 - A_4 u^{(k-1)} + A_2 u^{(k-1)}, \quad k = 1, 2, \ldots$$

As the defect corrections converge, $u^{(k)} \approx u^{(k-1)}$, so the $A_2 u$ terms cancel and you have solved the 4th-order discretization without ever "inverting" $A_4$. There is ample theory on how many iterations are needed; see for example Hackbusch, *Multi-grid Methods and Applications*, Springer, 1985.
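For readers following along in another language, here is a minimal NumPy sketch of the corrected setup: interior grid points only, as the accepted answer suggests, plus a right-hand-side deferred correction. The test problem $u(x) = \sin(\pi x)$ is an assumption for illustration. One caveat worth noting: with this $A$ (which approximates $-d^2/dx^2$), the matrix form of the correction carries a minus sign, $f - \frac{h^2}{12}Af$, since the exact correction is $f + \frac{h^2}{12}f''$; this sign may be why the `+` version in the question did not reach fourth order.

```python
import numpy as np

def solve_poisson(n, correct=False):
    """Solve -u'' = f on (0,1), u(0) = u(1) = 0, by second-order
    central differences on the interior grid points only.
    Test problem (an assumption for illustration): u(x) = sin(pi*x).
    Returns the max-norm error against the exact solution."""
    h = 1.0 / n
    x = np.arange(1, n) * h                    # interior nodes x_1 .. x_{n-1}
    f = np.pi**2 * np.sin(np.pi * x)           # f = -u''
    A = (2.0 * np.eye(n - 1)
         - np.eye(n - 1, k=1)
         - np.eye(n - 1, k=-1)) / h**2
    if correct:
        # Deferred correction: the RHS becomes f + (h^2/12) f''.
        # Since A approximates -d^2/dx^2, this is f - (h^2/12) A f.
        f = f - (h**2 / 12.0) * (A @ f)
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x)))

for correct in (False, True):
    e1, e2 = solve_poisson(16, correct), solve_poisson(32, correct)
    print(f"correction={correct}: halving-h error ratio {e1 / e2:.1f}")
    # ratio ~ 4 without correction (O(h^2)), ~ 16 with it (O(h^4))
```

Note this applies the correction without forming a fourth-order operator, unlike the iterative defect-correction scheme in the second answer; for this simple constant-coefficient problem the one-shot RHS correction is enough.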
http://www.ams.org/mathscinet-getitem?mr=1804416
MathSciNet bibliographic data MR1804416 (2001m:35016) 35A22 (35A20 35B60 81R30). Wunsch, Jared; Zworski, Maciej. The FBI transform on compact $\mathscr{C}^\infty$ manifolds. Trans. Amer. Math. Soc. 353 (2001), no. 3, 1151–1167 (electronic).
https://www.eolymp.com:443/en/problems/1285
# Goldbach Division

Everybody knows Goldbach's Conjecture! Here is one edition of it:

1) Every odd integer greater than 17 can be written as the sum of three different odd primes;
2) Every even integer greater than 6 can be written as the sum of two different odd primes.

Loving the magical math conjecture very much, iSea tries to have a closer look at it. Now he has a new definition: Goldbach Division. If we express an even integer as the sum of two different odd primes, or an odd integer as the sum of three different odd primes, we call it a form of Goldbach Division of N, using the symbol G(N). For example, if N = 18, there are two ways to divide N:

18 = 5 + 13 = 7 + 11

If N = 19, there is only one way to divide N:

19 = 3 + 5 + 11

Here comes your task: given an integer N, find |G(N)|, the number of different G(N).

Input: There are several test cases in the input. Each test case consists of one integer N (1 ≤ N ≤ 20000). The input terminates at the end-of-file marker.

Output: For each test case, output one integer, indicating |G(N)|.

Time limit: 1 second. Memory limit: 64 MiB.

Input example #1
```
18
19
```
Output example #1
```
2
1
```
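A direct brute-force count is feasible at these limits: sieve the odd primes once, then count pairs (even N) or ordered-by-size triples (odd N) of distinct odd primes. This is a sketch of one possible approach, not the judge's reference solution; for many queries you would build a single sieve up to 20000 and reuse it.

```python
def sieve(limit):
    """Boolean primality table up to `limit` (simple sieve of Eratosthenes)."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    return is_prime

def goldbach_divisions(n):
    """|G(n)|: ways to write even n as two different odd primes,
    or odd n as three different odd primes (order ignored)."""
    is_prime = sieve(max(n, 4))
    odd_primes = [p for p in range(3, n, 2) if is_prime[p]]
    count = 0
    if n % 2 == 0:
        for p in odd_primes:
            q = n - p
            # q is automatically odd (even minus odd); q > p avoids double counting
            if q > p and is_prime[q]:
                count += 1
    else:
        for i, p in enumerate(odd_primes):
            for q in odd_primes[i + 1:]:
                r = n - p - q
                # r is automatically odd (odd minus even); r > q keeps p < q < r
                if r > q and is_prime[r]:
                    count += 1
    return count

print(goldbach_divisions(18), goldbach_divisions(19))  # 2 1
```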
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-6-rational-expressions-and-equations-review-exercises-chapter-6-page-437/15
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)

$$\frac{3\left(y-3\right)}{2\left(y+3\right)}$$

Recall, to simplify rational expressions, we look for common factors in the numerator and the denominator. Factoring both and cancelling, we find:

$$\frac{6y^2-36y+54}{4y^2-36} = \frac{6\left(y-3\right)^2}{4\left(y+3\right)\left(y-3\right)} = \frac{3\left(y-3\right)^2}{2\left(y+3\right)\left(y-3\right)} = \frac{3\left(y-3\right)}{2\left(y+3\right)}$$
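The cancellation can be spot-checked with exact rational arithmetic at a few points away from the excluded values $y = \pm 3$; a quick sketch:

```python
from fractions import Fraction

def original(y):
    # the unsimplified rational expression (6y^2 - 36y + 54) / (4y^2 - 36)
    return Fraction(6 * y * y - 36 * y + 54, 4 * y * y - 36)

def simplified(y):
    # the reduced form 3(y - 3) / (2(y + 3))
    return Fraction(3 * (y - 3), 2 * (y + 3))

# spot-check equivalence at sample points away from the poles y = +/-3
for y in (0, 1, 2, 5, 10, -7):
    assert original(y) == simplified(y)
print("equivalent at all sampled points")
```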
https://mijn.bsl.nl/a-prospective-study-of-rumination-and-irritability-in-youth/18440930?fulltextView=true
# A Prospective Study of Rumination and Irritability in Youth

01-10-2020 | Issue 12/2020 | Open Access

Journal: Research on Child and Adolescent Psychopathology, Issue 12/2020

Authors: Eleanor Leigh, Ailsa Lee, Hannah M. Brown, Simone Pisano, Argyris Stringaris

## Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Introduction

Irritability in youth is one of the most common reasons for referral to mental health services, a predictor of future depression and suicidality, and associated with role impairment (Stringaris et al. 2018). Yet, it remains to be understood how person-specific characteristics contribute to variation in irritability. We test the hypothesis that increases in adolescent irritability are predicted by the tendency to engage in angry rumination.

Irritability, defined as individual differences in proneness to anger (Vidal-Ribas et al. 2016) and a reaction to blocked goal attainment (RDoC; Insel et al. 2010), has been the focus of increasing research interest. Paediatric irritability has been linked to the development of a range of internalising disorders, and depression in particular (Stringaris et al. 2009; Stringaris and Goodman 2009; Leibenluft et al. 2006; Krieger et al. 2013). Understanding factors that contribute to the development of irritability may therefore provide opportunity for early prevention and intervention.

Person-specific characteristics are likely to contribute to irritability in adolescents. One such candidate characteristic is rumination. Rumination is a repetitive, negative thinking process that is passive and internally focused (e.g. Nolen-Hoeksema et al. 1993), amplifying current mood states and impairing instrumental behaviour and problem-solving (Nolen-Hoeksema et al. 2008).
Whilst first invoked as a risk factor for depression, rumination has since been established as a transdiagnostic risk factor across the lifespan (Aldao et al. 2010; Rood et al. 2009; Watkins 2008). This has led to the development of psychological interventions targeting rumination in order to prevent or reduce various forms of psychopathology, including adolescent depression and anxiety (for example, Jacobs et al. ( 2016), Topper et al. ( 2017)). There have been far fewer studies examining angry rumination and its association with irritability and anger. Angry rumination is understood to be prompted when an individual’s goals are frustrated (Martin and Tesser 1996), as they seek to understand the causes and consequences of the disappointment and dwell on the feeling of anger. Angry rumination has been found to be associated with anger in experimental (Bushman 2002; Denson et al. 2012; Gerin et al. 2006; Rusting and Nolen-Hoeksema 1998) and cross-sectional (Sukhodolsky et al. 2001) studies. It has also been shown to be associated with aggressive behaviours in community samples of children, adolescents and adults (Bushman 2002; Harmon et al. 2019; Smith et al. 2016; Sukhodolsky et al. 2001). Two studies have shown that this association persists when controlling for trait anger (Peled and Moretti 2007, 2010), although this was not replicated in a third study with a sample of juvenile offenders (Smith et al. 2016). Whilst these findings are broadly consistent with the proposal that angry rumination predicts angry feelings and its behavioural correlates, only one of the studies (Smith et al. 2016) utilised a prospective design, which is necessary in order to determine the temporal ordering of the association. A small number of studies have included measures of both depressive and angry rumination, which allows examination of the question of whether these two forms of rumination are better conceptualised as a unitary factor or distinct constructs. 
Whilst it has been reported that individuals who tend to engage in angry rumination are also more likely to engage in depressive rumination (Peled and Moretti 2007, 2010), a factor analytic study with a sample of clinic referred youth indicated that these two forms of rumination reflect two distinct factors (Peled and Moretti 2007). Furthermore, differential patterns of association between the two forms of rumination and depression, anger, and aggression have been observed. For example, a study by Gilbert and colleagues with adults (Gilbert et al. 2005) demonstrated that depressive, but not angry, rumination was associated with depression symptoms, however no measure of irritability or aggression was included. Studies with unselected adults (Peled and Moretti 2010), clinically referred adolescents (Peled and Moretti 2007), and unselected pre-adolescent children (Harmon et al. 2019) have found that angry, but not depressive, rumination is associated with feelings of anger and aggression, after controlling for shared variance. However, as yet no longitudinal study has been undertaken to examine the prospective association between angry rumination and irritability, over and above depressive rumination. This will be important to establish in order to determine whether angry rumination specifically, rather than rumination generally, should be targeted in strategies to prevent and treat problematic irritability. ### The Current Study In order to better understand risk factors for youth irritability, and angry rumination in particular, we undertook a prospective questionnaire-based study with a community sample of British adolescents assessed two times over a six-month period. We focused on an adolescent sample for two reasons. First, irritability at this stage of life predicts important negative outcomes in adulthood (including reduced income and educational attainment; Stringaris et al. 
( 2009)), and so establishing risk factors for irritability may provide opportunities for early intervention. Second, the shift from childhood to adolescence sees an increase in the use of rumination as a coping strategy in response to stress. For example, developmental increases in rumination were observed across a large sample of German youth aged 8 to 13 stratified by age (Hampel and Petermann 2005). We measured angry and depressive rumination at baseline and monitored change in irritability over time. It was hypothesised that, after controlling for baseline levels of irritability, angry rumination would be associated with later irritability over and above depressive rumination. ## Methods ### Design Ethical (Institutional Review Board, IRB) approval for the study was granted by the King’s College London Psychiatry, Nursing and Midwifery Research Ethics Subcommittee (Reference: HR-15/16-1919). The study was performed in accordance with the ethical standards as laid down in the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards. The study is part of a larger prospective project. The present study focused on two stages of classroom-based data collection over 6 months. At Time 1, demographics, irritability, and angry and depressive rumination were measured. At Time 2 (month 6), irritability was measured again. ### Recruitment and Sample Data was collected from a non-selective secondary school that serves a culturally diverse community in London, UK (Ofsted 2013). The school has a higher proportion of black and ethnic minority students compared to the local borough. The local borough has a higher proportion of black and ethnic minority students compared to most other London boroughs and to the rest of England (London Borough of Lambeth 2018). The proportion of students eligible for free school meals, an index of deprivation (Taylor 2018), is very high (75%) (Ofsted 2013). 
All students in years 8 and 9 (aged 12 to 14 years) were invited to participate. Written informed young person assent and opt-out parental consent was sought to ensure a representative sample and maximise participation rates across time points. Data collection was carried out in schools, as part of a larger study. Researchers attended year assemblies to explain the project and hand information sheets to the students for themselves and their parents. Information sheets and opt-out parental consent forms were sent to parents by post and were also given to students to hand to their parents/carers. Parents/carers were given two weeks to opt-out on behalf of their child and were provided with three different methods for opting out: telephone, email and post (stamped, addressed return envelopes were provided). At both measurement points, researchers attended form periods and/or allocated humanities lesson and provided reminder information about the research project. Adolescents who agreed to participate signed assent forms. Those who did not wish to participate or who did not meet eligibility criteria were asked to read quietly. Young people signed opt-in assent forms at both time points and were free to withdraw at any point during the study. There were 251 students on the school register for Years 8 and 9. Non-participation was due to absence on the day of testing, young-person non-assent, and parent opt-out. 165 students (94 (57%) male and 71 (43%) female) participated at Time 1. The average age was 13.22 years (SD = 0.63; Range = 12y 2 m-14y 5 m). 54.5% of adolescents identified their ethnicity as Black, 18.8% as White, 6.7% as Asian, 14.5% as Mixed and 4.8% as Other. The gender and ethnicity distribution of the final sample was representative of the school population. Of the students that participated at Time 1, 156 participated at Time 2. 
There was no significant association between gender ($\chi^2(1) = 0.74$, p = 0.39) or ethnicity ($\chi^2(4) = 1.32$, p = 0.86) and non-completion at Time 2.

### Measures

Irritability was measured with the Affective Reactivity Index (ARI; Stringaris et al. 2012), a self-report questionnaire with 6 items, each ranging from 0 to 2 (total score: 0–12), including items such as 'I am easily annoyed by others'. The ARI is an instrument that has been validated to measure irritability (Stringaris et al. 2012). A number of studies have demonstrated its associations with internalising (Stoddard et al. 2014) and externalising symptoms (Humphreys et al. 2019), as well as with behavioural correlates of irritability, such as aggression (Ezpeleta et al. 2020). The measure has been found to be reliable in healthy and clinical youth samples (Mulraney et al. 2014; Stringaris et al. 2012). Cronbach's alpha for the ARI in the current study was 0.87 at Time 1 and 0.88 at Time 2.

Depressive rumination was measured with the Rumination Subscale of the Children's Response Styles Questionnaire (CRSQ; Abela et al. 2002); total scores range from 0 to 39, and it includes items such as 'When I am sad, I think "I'm ruining everything"'. The Rumination subscale of the CRSQ has been shown to predict onset of major depressive episodes over two years, controlling for baseline depressive symptoms and history of episodes (Abela and Hankin 2011), and to predict increases in depressive symptoms at 6-week follow-up in children of parents with a history of depression (Abela et al. 2007). Adequate internal consistency has been demonstrated (Abela et al. 2004; 2007). Cronbach's alpha for the rumination subscale in the current study was 0.92.

Angry rumination was measured with the Children's Anger Rumination Scale (CARS; Smith et al. 2016), a 19-item measure with scores ranging from 19 to 76, including items such as 'When I am angry, I think a lot about other times when I was angry'.
The measure has been shown to be associated with peer and teacher rated aggression in a sample of male juvenile offenders and a sample of healthy adolescents (Smith et al. 2016). It has been shown to be reliable (Harmon et al. 2019; Smith et al. 2016). Cronbach’s alpha for the CARS in the current study was 0.94. ### Data Analysis Descriptive statistics and intercorrelations were inspected, before undertaking three regression models. In all regression models, baseline irritability levels were entered first to test whether the independent variables predict prospective elevations in irritability over time. This provides a conservative and strong test of our hypotheses, because it accounts for possible overlap between symptoms and predictor variables (e.g. irritability and angry rumination), and also for the continuity of symptoms over time. Age and gender were also included at the first step. Then in the first two multiple linear regressions, we tested whether angry rumination (Model 1) and depressive rumination (Model 2) predict prospective irritability, by entering one of these rumination variables in the second step in each of the two models. Finally, in the third regression model, both rumination variables were added at the second step, to test the hypothesis that angry rumination makes an independent contribution to irritability, over and above depressive rumination (Model 3). To determine the best-fitting model for the data, we compared both of the nested models (Models 1 and 2) to the more complex model (Model 3) using log-likelihood tests. Akaike’s Information Criteria (AIC) and Bayesian Information Criteria (BIC) statistics were used to evaluate model parsimony, with lower scores indicating a more parsimonious model. ## Results ### Preliminary Analysis After data was entered, a random 10% was checked with double entry and analysis was undertaken in SPSS v.25 and R (R Core Team 2019). 
Mean substitution was performed when less than 5% of items were missing in each questionnaire. Questionnaires with more than 5% of items missing were treated as missing variables (data was complete for age, gender, and baseline irritability; n = 2 cases were missing the depressive rumination variable; n = 5 cases were missing the angry rumination variable; and n = 9 cases were missing outcome irritability). Little's Missing Completely At Random (MCAR) test showed a non-significant result (p > 0.05), meaning that the data was MCAR. Missing data were accounted for using expectation maximization that included all variables entered in the regression models. Regression analysis was repeated with complete case analysis with no differences in findings.

Assumptions of multiple regression were met, namely: scatterplots indicated linear relationships between independent variables and the outcome variable; inspection of the Q-Q plot indicated multivariate normality; there was no evidence of multicollinearity based on pairwise correlations between predictor variables (all below 0.8; Berry et al. 1985) or on variance inflation factor (VIF) values (all below 5; Neter et al. 1996); finally, the scatterplot of residuals versus predicted values indicated that the assumption of homoscedasticity was met. Mahalanobis distance indicated two multivariate outliers. Analyses were run with and without these two cases, with no difference in results, so results with all 165 cases are presented. All variables, except for sex and age, were standardized.

### Descriptive Statistics

Descriptive statistics for the main variables are presented in Table 1. The mean irritability scores at baseline ($\overline{X} = 3.68$ [SD = 3.21]) and outcome ($\overline{X} = 3.01$ [SD = 2.96]) were within the range of scores observed in other studies with community samples.
For example, a mean score of 4.00 (SD = 3.37) was reported in a community sample of Brazilian adolescents, and a mean score of 1.96 (SD = 2.25) in a sample of unselected Australian adolescents (Mulraney et al. 2014). Mean depressive rumination scores ($\overline{X} = 10.82$ [SD = 8.79]) were comparable to those observed in other samples of unselected youth; for example, a mean of 10.94 (SD = 7.65) in a large sample of US adolescents (McLaughlin and Nolen-Hoeksema 2012), and a mean of 14.78 (SD = 7.40) in a school-based study of 367 adolescents (Abela et al. 2009). The mean angry rumination score ($\overline{X} = 36.13$ [SD = 12.96]) in the current study is comparable to that reported by Harmon et al. (2019) ($\overline{X} = 39.61$ [SD = 12.97]) in a sample of unselected US youth (average age: 10.61 [SD = 1.78]).

Table 1: Descriptive statistics and correlation matrix for main variables

| | Total Sample Mean [SD] | Females Mean [SD] | Males Mean [SD] | Gender Difference t | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Time 1: Baseline Irritability | 3.68 [3.21] | 3.99 [3.00] | 3.45 [3.34] | t(163) = 1.08 | 1 | | | |
| Depressive Rumination | 10.82 [8.79] | 13.32 [9.14] | 8.95 [8.07] | t(161) = 3.23** | 0.38** | 1 | | |
| Angry Rumination | 36.13 [12.96] | 37.20 [13.18] | 35.33 [12.80] | t(158) = 0.90 | 0.64** | 0.67** | 1 | |
| Time 2: Outcome Irritability | 3.01 [2.96] | 3.39 [3.08] | 2.72 [2.83] | t(193) = 1.57 | 0.62** | 0.26** | 0.50** | 1 |

Significance levels (two-tailed): ** p ≤ 0.01; * p ≤ 0.05.

Correlations for the whole sample are presented, as no gender differences were observed. Girls scored higher than boys in depressive rumination ($\overline{X}$(girls) = 13.32 [SD = 9.14] vs. $\overline{X}$(boys) = 8.95 [SD = 8.07]), but not in angry rumination ($\overline{X}$(girls) = 37.20 [SD = 13.18] vs. $\overline{X}$(boys) = 35.33 [SD = 12.80]). There were no other gender differences on the main variables.
As can be seen from the correlation matrix, putative predictors were significantly correlated with outcome irritability. As expected, there was a large correlation between angry rumination and outcome irritability and a small correlation between depressive rumination and outcome irritability (Cohen 1988). The correlation between angry and depressive rumination was large.

### Regression Analysis

Two multiple linear regressions examined the contribution of each rumination variable to later irritability. In Model 1, angry rumination was added to the baseline variables at the second step. The addition of angry rumination significantly improved the model (F(1, 160) = 5.31, p < 0.05), explaining an additional 2% of the variance (total variance explained: 41.4%). As can be seen in Table 2, baseline irritability and angry rumination were significant predictors of later irritability. In Model 2, depressive rumination was added to the baseline variables at the second step. The addition of depressive rumination did not improve the model (F(1, 160) = 0.002, p > 0.05). As can be seen in Table 2, baseline irritability was the only significant predictor of later irritability.

Independent prospective associations between the rumination variables and irritability were then examined in another multiple regression (Model 3). Standardized betas for all models are presented in Table 2.

Table 2: Regression models predicting outcome irritability (dependent variable: irritability)

| Model | Predictor | β | t |
| --- | --- | --- | --- |
| 1 | Age | 0.03 | 0.44 |
| | Sex | 0.05 | 0.81 |
| | Baseline irritability | 0.52 | 6.67** |
| | Angry rumination | 0.18 | 2.31** |
| 2 | Age | 0.03 | 0.55 |
| | Sex | 0.05 | 0.84 |
| | Baseline irritability | 0.63 | 9.60** |
| | Depressive rumination | 0.003 | 0.05 |
| 3 | Age | 0.03 | 0.45 |
| | Sex | 0.09 | 1.23 |
| | Baseline irritability | 0.50 | 6.51** |
| | Angry rumination | 0.29 | 2.90** |
| | Depressive rumination | -0.15 | -1.73 |

\* p < 0.05; ** p < 0.01. β = standardised beta coefficient.
The addition of angry and depressive rumination to the baseline variables in the second step improved the model (F(2, 159) = 4.24, p ≤ 0.05), although only a modest proportion of additional variance was explained (ΔR² = 0.03). Angry rumination significantly predicted outcome irritability in addition to baseline irritability. Depressive rumination showed a negative, although non-significant, association with outcome irritability in this model. Although all VIF values were below the suggested threshold of 5 for this model, this may be a suppression effect due to the high correlation between angry and depressive rumination (r = 0.67). The final model was significant (F(5, 159) = 24.77, p < 0.001), explaining a total of 42.0% of the variance.

### Model Comparison

A log-likelihood test comparing Model 1 (baseline variables plus angry rumination) to Model 2 (baseline variables plus depressive rumination) indicated that Model 1 was a significantly better fit to the data than Model 2 ($\chi^2(1) = 5.39$, p < 0.001). A log-likelihood test comparing Model 1 to Model 3 (baseline variables plus both rumination variables) identified no significant difference between the models ($\chi^2(1) = 3.09$, p = 0.08). A log-likelihood test comparing Model 2 with Model 3 showed that Model 3 was a significantly better fit to the data than Model 2 ($\chi^2(1) = 8.48$, p < 0.001). AIC values were 386.86 for Model 1, 392.24 for Model 2, and 385.76 for Model 3. BIC values were 405.49 for Model 1, 410.88 for Model 2, and 407.51 for Model 3. The AIC and BIC both indicated that Models 1 and 3 are more parsimonious than Model 2. Minimal differences in AIC and BIC were observed between Models 1 and 3: AIC indicated Model 3 is marginally more parsimonious than Model 1, whilst BIC indicated the reverse.
BIC penalises more heavily for more complex models and so taken together, the results indicate that Model 1, in which angry rumination is included in the model in addition to baseline variables, may provide the best and most parsimonious fit to the data. ### Examining Angry Rumination and Irritability Item Overlap It is possible that the observed associations between angry rumination and irritability are not a result of an underlying association between the two constructs but instead an artefact caused by overlap in item content. For example, items such as “ I feel angry about certain things in my life” and “ I keep thinking about events that angered me for a long time” in the CARS angry rumination measure may overlap with items in the ARI irritability measure, such as “ I stay angry for a long time”. We examined this possibility in additional analysis. Previous factor analysis of the CARS (Smith et al. 2016; Sukhodolsky et al. 2001) has indicated a four factor model: ‘angry afterthoughts’, ‘thoughts of revenge’, ‘angry memories’, and ‘understanding of causes’. The ‘understanding of causes’ factor is comprised of four items: “ I think about the reasons people treat me badly”, “ I have had times when I could not stop thinking about a particular conflict”, “ I try to figure out what makes me angry” and “ When someone makes me angry, I keep wondering why this happened to me”. These items appear to show less overlap with ARI irritability items. We therefore reran the regression analysis of the optimal model (Model 1: baseline irritability, age, gender, and angry rumination) with this subscale of the CARS as the angry rumination predictor. Results are presented in Table 3. The results remained the same, with baseline irritability and the ‘understanding causes’ subscale of the angry rumination scale the only two independent predictors of later irritability. 
Although not presented here, results of regression Model 3 were also the same when the ‘understanding causes’ subscale was used as the index of angry rumination. 1

Table 3 Results of multiple regression with the ‘understanding causes’ subscale of the angry rumination scale

| Predictor | β (Dependent variable: Irritability) |
| --- | --- |
| Age | 0.06 |
| Sex | 0.09 |
| Baseline irritability | 0.56** |
| Angry rumination – ‘understanding causes’ subscale | 0.16* |

*p < 0.05; **p < 0.01. β = standardised beta coefficient.

## Discussion In the present study, we examined rumination as a predictor of irritability in a community sample of adolescents. Angry rumination was associated with increasing irritability over time, whilst depressive rumination was not. Furthermore, angry rumination was associated with later elevations in irritability in adolescents, over and above depressive rumination. We found that adolescents who tend to engage in depressive rumination are also more likely to engage in angry rumination. This is consistent with studies with adults (Peled and Moretti 2010), conduct disordered youth (Peled and Moretti 2007), and children (Harmon et al. 2019) and indicates that angry and depressive rumination are related constructs. For example, in the present study we observed a large correlation between angry and depressive rumination (r = 0.67), which falls between that reported in children (r = 0.56; Harmon et al. 2019) and in adults (r = 0.76; Peled and Moretti 2010). Findings are consistent with the hypothesis that whilst the two forms of rumination are related, they are distinct constructs. Specifically, angry and depressive rumination showed differential prospective associations with irritability. Whilst angry rumination was a significant predictor, depressive rumination was not. Then, when both forms of rumination were added to the regression model simultaneously, angry rumination was a significant independent positive predictor of later irritability, whilst depressive rumination was not. 
Again, this is consistent with cross-sectional findings of a unique association between angry rumination and angry feelings and aggressive behaviours in adult and child samples (Harmon et al. 2019; Peled and Moretti 2010). An alternative explanation for our findings is that the observed differences in the associations of angry and depressive rumination with irritability are due to item differences. In other words, it may be that differences in items (i.e. form) rather than focus (depression vs. anger) can account for the differences. However, Peled and Moretti ( 2010) used analogous scales (the only difference being words related to sadness and anger) to measure the two forms of rumination. They reported unique relations between each type of rumination and emotional and behavioural outcomes, suggesting they are distinct constructs. Our findings are pertinent to understanding cognitive vulnerability factors for adolescent irritability. Research to date has indicated positive cross-sectional associations between angry rumination and angry feelings. The Time 1 cross-sectional correlations in this study are of a similar magnitude to those reported elsewhere; for example, 0.64 with irritability in the current study versus 0.50 with anger in the study of Peled and Moretti ( 2007). Extending beyond cross-sectional analysis, we observed that angry rumination at baseline was positively associated with outcome irritability six months later ( r = 0.50). One possible account of these findings is that the association between angry rumination and subsequent irritability symptoms is due to the strong concurrent correlations between angry rumination and irritability scores. In other words, an adolescent may score highly on an angry rumination item such as ‘ I think a lot about other times when I was angry’ because they are prone to this kind of thinking, or because they are irritable and so preoccupied with these thoughts at that time. 
However, we tested the prospective relationship whilst controlling for baseline levels of irritability, and the angry rumination – irritability association persisted. It therefore seems that a tendency to engage in angry rumination increases feelings of irritability in adolescents over time, rather than being due to a confound. We note that the increase in variance in outcome irritability explained with the addition of angry rumination is small (2–3%). However, small effects can accumulate over time (Funder and Ozer 2019); the tendency to engage in angry rumination may only modestly affect how irritated you feel, but this could accumulate fairly quickly over time, leading to more substantial effects in the long-term. A further plausible explanation for the positive association between angry rumination and irritability is item overlap, by which the scales are correlated due to similarities in items rather than similarity in the underlying constructs. On inspection of the ARI irritability index and CARS measure of angry rumination, certain items of the two scales were indeed similar. We therefore reran the multiple regression analysis using a subscale of the CARS, ‘understanding causes’, as the index of angry rumination. This subscale was chosen because the constituent items showed less overlap with the ARI items. Findings of the multiple regression remained unchanged, which speaks against the suggestion that the observed associations are due to item overlap. It is important to note a number of limitations of the study. All measures were self-report, and so replication of these results with multiple methods and multiple informants is needed. Participants in the present study were recruited from a community sample: whilst studies such as those of Stringaris and colleagues (Stringaris et al. 
2012) support a dimensional view of irritability, and therefore indicate that findings regarding the associations between irritability and relevant constructs may be comparable across the severity spectrum, replication with a clinical sample will be important. Whilst the study was adequately powered to test the hypotheses of interest, the sample size was fairly modest, which limits closer examination of the associations of interest, such as the interactive effect of angry and depressive rumination, and the possible moderating role of gender. The use of a prospective design represents a strength of the study, however further studies measuring rumination at multiple time points and examining its temporal interplay with irritability, depression and their behavioural correlates will be valuable. For example, whilst the present study has demonstrated that angry rumination leads to increased irritability, it is very plausible that the association also operates the other way, with more irritable youth showing an increased tendency to engage in angry rumination over time. The findings, should they be replicated in a clinical sample, highlight opportunities for delivering clinical benefit. Identifying psychological mechanisms that contribute to the persistence of irritability and can be modified has implications for the treatment of problematic irritability. For example, adolescents may be given psychoeducation about the unhelpful effects of angry rumination as an emotion regulation strategy, and encouraged to look out for warning signs and triggers for when it occurs. They may be trained to use more adaptive coping strategies, such as mindfulness (Wright et al. 2009), directed imagery, and active problem-solving (Watkins 2015) that they can then engage in as an alternative to rumination (Leigh et al. 2012). ## Acknowledgments We would like to thank Lilian Baylis School and their staff, and all the young people who participated. 
We would also like to thank Patrick Smith for his advice and help at the planning stages of the project, and our Research Assistants, Kylie Leones, Anna Lucock, Anna Morris, and Jay Olajide, who supported with data collection and entry. ## Compliance with Ethical Standards ### Conflict of Interest The authors declare that they have no conflicts of interest. ### Ethical Approval Ethical (Institutional Review Board, IRB) approval for the study was granted by the King’s College London Psychiatry, Nursing and Midwifery Research Ethics Subcommittee (Reference: HR-15/16-1919). The study was performed in accordance with the ethical standards as laid down in the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards. ### Informed Consent Written informed young person assent and opt-out parental consent were sought to ensure a representative sample and maximise participation rates across time points. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. ## Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. 
Footnotes 1 Results available on request from the corresponding author.
http://gradestack.com/NTSE-Complete-Course/Letter-Series/Corresponding-Letters/19235-3851-37243-study-wtw
# Corresponding Letters Two letters having the same position number in both the normal order and the reverse order are called corresponding letters. A and Z are corresponding letters. Similarly, B and Y; C and X; D and W; E and V; F and U; G and T; H and S; I and R; J and Q; K and P; L and O; M and N are pairs of the corresponding letters. Further, A and Z are consecutive letters also. Similarly, M and N are consecutive as well as corresponding letters. Except these two pairs, no other pair of letters can be both consecutive as well as corresponding letters simultaneously.
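The pairing rule above can be checked mechanically: a letter at position p from A corresponds to the letter at position p from Z, so the two character codes always sum to the same constant. A small sketch (the function name is ours, not from the source):

```python
def corresponding(letter: str) -> str:
    """Return the letter at the same position counted from the reverse end of the alphabet."""
    # ord('A') + ord('Z') is constant, so position p from A maps to position p from Z.
    return chr(ord('A') + ord('Z') - ord(letter.upper()))

print(corresponding('A'))  # Z
print(corresponding('M'))  # N
print(corresponding('C'))  # X
```

Applying the function twice returns the original letter, reflecting that the pairing is symmetric (B↔Y, C↔X, and so on).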
https://www.ias.ac.in/listing/bibliography/jcsc/Harjinder_Singh
• Harjinder Singh Articles written in Journal of Chemical Sciences • On scattering from fractal lattices Gas-surface scattering is speculated as a meaningful problem for understanding the physics of fractals. Fractal behaviour can be associated with a self-similar geometry on a solid surface. The interaction potential for a gas atom or molecule approaching the lattice depends primarily on local factors but a parametric dependence of the cross-section data on the fractal dimension can be conceived. Such a dependence on the self-similar character of a multi-centred target is more explicit when multiple scattering is included. Application of approximation schemes like the previously developed average wavefunction method to this problem is suggested. • On scattering from fractal lattices • Lattice gas automata: A tool for exploring dynamical processes The lattice gas automata (LGA) technique as an alternative to the partial differential equation (PDE) approach for studying dynamical processes, including those in reaction-diffusion systems, is reviewed. The LGA approach gained significance after the simulation of Navier-Stokes equation by Hardyet al (1976). In this approach, the dynamics of a system are simulated by constructing a microlattice on each node of which Boolean bits are associated with the presence or absence of particles indistinct velocity states. A complete description involves the composition of anelastic collision operator, areactive collision operator and apropagation operator. The Hardy, de Pazzis and Pomeau (HPP) model does not have the desired isotropy, but its subsequent modification in 1986, known as the Frisch, Hasselacher and Pomeau (FHP) model (Frischet al 1986), has been applied to a variety of nonequilibrium processes. Reaction-diffusion systems have been simulated in a manner analogous to the master equation approach in a continuum framework. 
The Boltzmann kinetic equation as well as the expected complex features at the macroscopic level are obtained in LGA simulations. An increasing trend is to use real numbers instead of Boolean bits for the velocity states. This gives the lattice Boltzmann (LB) model which is not only less noisy than LGA but also numerically superior to finite-difference approximations (FDAs) to PDEs. The most significant applications of LGA appear to be in the molecular-level understanding of reactive processes. • Fluorescence resonance energy transfer (FRET) in chemistry and biology: Non-Förster distance dependence of the FRET rate Fluorescence resonance energy transfer (FRET) is a popular tool to study equilibrium and dynamical properties of polymers and biopolymers in condensed phases and is now widely used in conjunction with single molecule spectroscopy. In the data analysis, one usually employs the Förster expression which predicts (l/R6) distance dependence of the energy transfer rate. However, critical analysis shows that this expression can be of rather limited validity in many cases. We demonstrate this by explicitly considering a donor-acceptor system, polyfluorene (PF6)-tetraphenylporphyrin (TPP), where the size of both donor and acceptor is comparable to the distance separating them. In such cases, one may expect much weaker distance (as l/R2 or even weaker) dependence. We have also considered the case of energy transfer from a dye to a nanoparticle. Here we find l/R4 distance dependence at large separations, completely different from Förster. We also discuss recent application of FRET to study polymer conformational dynamics. • Quantum control of vibrational excitations in a heteronuclear diatomic molecule Optimal control theory is applied to obtain infrared laser pulses for selective vibrational excitation in a heteronuclear diatomic molecule. The problem of finding the optimized field is phrased as a maximization of a cost functional which depends on the laser field. 
A time dependent Gaussian factor is introduced in the field prior to evaluation of the cost functional for better field shape. The conjugate gradient method is used for optimization of the constructed cost functional. At each instant of time, the optimal electric field is calculated and used for the subsequent quantum dynamics, within the dipole approximation. The results are obtained using both a Morse potential and potential energy curves obtained from ab initio calculations. • Controlling dynamics in diatomic systems Controlling molecular energetics using laser pulses is exemplified for nuclear motion in two different diatomic systems. The problem of finding the optimized field for maximizing a desired quantum dynamical target is formulated using an iterative method. The method is applied for two diatomic systems, HF and OH. The power spectra of the fields and evolution of populations of different vibrational states during transitions are obtained. • Base pairing in RNA structures: A computational analysis of structural aspects and interaction energies The base pairing patterns in RNA structures are more versatile and completely different compared to those in DNA. We present here results of ab-initio studies of structures and interaction energies of eight selected RNA base pairs reported in the literature. Interaction energies, including BSSE correction, of hydrogen-added crystal geometries of base pairs have been calculated at the HF/6-31G∗∗ level. The structures and interaction energies of the base pairs in the crystal geometry are compared with those obtained after optimization of the base pairs. We find that the base pairs become more planar on full optimization. No change in the hydrogen bonding pattern is seen. It is expected that the inclusion of appropriate considerations of many of these aspects of RNA base pairing would significantly improve the accuracy of RNA secondary structure prediction. 
• Design of optimal laser pulses to control molecular rovibrational excitation in a heteronuclear diatomic molecule Optimal control theory in combination with time-dependent quantum dynamics is employed to design laser pulses which can perform selective vibrational and rotational excitations in a heteronuclear diatomic system. We have applied the conjugate gradient method for the constrained optimization of a suitably designed functional incorporating the desired objectives and constraints. Laser pulses designed for several excitation processes of the $HF$ molecule were able to achieve predefined dynamical goals with almost 100% yield. • # Journal of Chemical Sciences Volume 135, 2023 All articles Continuous Article Publishing mode • # Editorial Note on Continuous Article Publication Posted on July 25, 2019 Click here for Editorial Note on CAP Mode © 2022-2023 Indian Academy of Sciences, Bengaluru.
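The FRET abstract above contrasts the Förster point-dipole law, whose rate scales as (R0/R)^6, with the weaker ~1/R^2 and ~1/R^4 dependences found when donor or acceptor size is comparable to their separation. A schematic sketch of the Förster scaling (the R0 and donor-lifetime values are illustrative, not from the article):

```python
def fret_rate(r_nm: float, r0_nm: float = 5.0, tau_d_ns: float = 4.0) -> float:
    """Förster energy-transfer rate k = (1/tau_D) * (R0/R)**6, in 1/ns."""
    return (1.0 / tau_d_ns) * (r0_nm / r_nm) ** 6

def fret_efficiency(r_nm: float, r0_nm: float = 5.0) -> float:
    """Transfer efficiency E = 1 / (1 + (R/R0)**6); exactly 50% at R = R0."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

print(fret_efficiency(5.0))  # 0.5 at the Förster radius
```

Halving the distance multiplies the Förster rate by 2^6 = 64, which is why deviations toward 1/R^2 or 1/R^4 scaling matter so much when interpreting FRET distance measurements.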
http://pgmpy.org/models.html
# Models¶ ## Bayesian Model¶ class pgmpy.models.BayesianModel.BayesianModel(ebunch=None)[source] Base class for Bayesian models. A model stores nodes and edges with conditional probability distributions (CPDs) and other attributes. Models hold directed edges. Self-loops are not allowed, nor are multiple (parallel) edges. Nodes can be any hashable python object. Edges are represented as links between nodes. Parameters data (input graph) – Data to initialize graph. If data=None (default) an empty graph is created. The data can be an edge list, or any NetworkX graph object. Examples Create an empty bayesian model with no nodes and no edges. >>> from pgmpy.models import BayesianModel >>> G = BayesianModel() G can be grown in several ways. Nodes: Add one node at a time: >>> G.add_node('a') Add the nodes from any container (a list, set or tuple or the nodes from another graph). >>> G.add_nodes_from(['a', 'b']) Edges: G can also be grown by adding edges. >>> G.add_edge('a', 'b') a list of edges, >>> G.add_edges_from([('a', 'b'), ('b', 'c')]) If some edges connect nodes not yet in the model, the nodes are added automatically. There are no errors when adding nodes or edges that already exist. Shortcuts: Many common graph features allow python syntax to speed reporting. >>> 'a' in G # check if node in graph True >>> len(G) # number of nodes in graph 3 add_cpds(*cpds)[source] Add CPD (Conditional Probability Distribution) to the Bayesian Model. Parameters cpds (list, set, tuple (array-like)) – List of CPDs which will be associated with the model Examples >>> from pgmpy.models import BayesianModel >>> from pgmpy.factors.discrete.CPD import TabularCPD >>> student = BayesianModel([('diff', 'grade'), ('intel', 'grade')]) >>> grade_cpd = TabularCPD('grade', 3, [[0.1,0.1,0.1,0.1,0.1,0.1], ... [0.1,0.1,0.1,0.1,0.1,0.1], ... [0.8,0.8,0.8,0.8,0.8,0.8]], ... evidence=['diff', 'intel'], evidence_card=[2, 3]) >>> student.add_cpds(grade_cpd) diff: easy hard intel: dumb avg smart dumb avg smart gradeA 0.1 0.1 0.1 0.1 0.1 0.1 gradeB 0.1 0.1 0.1 0.1 0.1 0.1 gradeC 0.8 0.8 0.8 0.8 0.8 0.8 add_edge(u, v, **kwargs)[source] Add an edge between u and v. 
The nodes u and v will be automatically added if they are not already in the graph Parameters u,v (nodes) – Nodes can be any hashable python object. Examples >>> from pgmpy.models import BayesianModel >>> G = BayesianModel() >>> G.add_edge('grade', 'intel') check_model()[source] Check the model for various errors. This method checks for the following errors. • Checks if the sum of the probabilities for each state is equal to 1 (tol=0.01). • Checks if the CPDs associated with nodes are consistent with their parents. Returns check – True if all the checks are passed Return type boolean copy()[source] Returns a copy of the model. Returns BayesianModel Return type Copy of the model on which the method was called. Examples >>> from pgmpy.models import BayesianModel >>> from pgmpy.factors.discrete import TabularCPD >>> model = BayesianModel([('A', 'B'), ('B', 'C')]) >>> cpd_a = TabularCPD('A', 2, [[0.2], [0.8]]) >>> cpd_b = TabularCPD('B', 2, [[0.3, 0.7], [0.7, 0.3]], evidence=['A'], evidence_card=[2]) >>> cpd_c = TabularCPD('C', 2, [[0.1, 0.9], [0.9, 0.1]], evidence=['B'], evidence_card=[2]) >>> model.add_cpds(cpd_a, cpd_b, cpd_c) >>> copy_model = model.copy() >>> copy_model.nodes() ['C', 'A', 'B'] >>> copy_model.edges() [('A', 'B'), ('B', 'C')] >>> copy_model.get_cpds() [<TabularCPD representing P(A:2) at 0x7f2824930a58>, <TabularCPD representing P(B:2 | A:2) at 0x7f2824930a90>, <TabularCPD representing P(C:2 | B:2) at 0x7f2824944240>] fit(data, estimator=None, state_names=[], complete_samples_only=True, **kwargs)[source] Estimates the CPD for each variable based on a given data set. Parameters • data (pandas DataFrame object) – DataFrame object with column names identical to the variable names of the network. (If some values in the data are missing the data cells should be set to numpy.NaN. Note that pandas converts each column containing numpy.NaNs to dtype float.) 
• estimator (Estimator class) – One of: - MaximumLikelihoodEstimator (default) - BayesianEstimator: In this case, pass ‘prior_type’ and either ‘pseudo_counts’ or ‘equivalent_sample_size’ as additional keyword arguments. See BayesianEstimator.get_parameters() for usage. • state_names (dict (optional)) – A dict indicating, for each variable, the discrete set of states that the variable can take. If unspecified, the observed values in the data set are taken to be the only possible states. • complete_samples_only (bool (default True)) – Specifies how to deal with missing data, if present. If set to True all rows that contain np.NaN somewhere are ignored. If False then, for each variable, every row where neither the variable nor its parents are np.NaN is used. Examples >>> import pandas as pd >>> from pgmpy.models import BayesianModel >>> from pgmpy.estimators import MaximumLikelihoodEstimator >>> data = pd.DataFrame(data={'A': [0, 0, 1], 'B': [0, 1, 0], 'C': [1, 1, 0]}) >>> model = BayesianModel([('A', 'C'), ('B', 'C')]) >>> model.fit(data) >>> model.get_cpds() [<TabularCPD representing P(A:2) at 0x7fb98a7d50f0>, <TabularCPD representing P(B:2) at 0x7fb98a7d5588>, <TabularCPD representing P(C:2 | A:2, B:2) at 0x7fb98a7b1f98>] get_cardinality(node=None)[source] Returns the cardinality of the node. Throws an error if the CPD for the queried node hasn’t been added to the network. Parameters node (Any hashable python object (optional)) – The node whose cardinality we want. If node is not specified, returns a dictionary with each variable as key and its cardinality as value. Returns If node is specified, the cardinality of that node; otherwise a dictionary mapping each variable to its cardinality. Return type int or dict 
Examples >>> from pgmpy.models import BayesianModel >>> from pgmpy.factors.discrete import TabularCPD >>> student = BayesianModel([('intel', 'grade'), ('diff', 'grade')]) >>> cpd_diff = TabularCPD('diff', 2, [[0.6, 0.4]]) >>> cpd_intel = TabularCPD('intel', 2, [[0.7, 0.3]]) >>> cpd_grade = TabularCPD('grade', 2, [[0.1, 0.9, 0.2, 0.7], ... [0.9, 0.1, 0.8, 0.3]], ... ['intel', 'diff'], [2, 2]) >>> student.add_cpds(cpd_diff, cpd_intel, cpd_grade) >>> student.get_cardinality() defaultdict(int, {'diff': 2, 'grade': 2, 'intel': 2}) >>> student.get_cardinality('intel') 2 get_cpds(node=None)[source] Returns the cpd of the node. If node is not specified returns all the CPDs that have been added till now to the graph Parameters node (any hashable python object (optional)) – The node whose CPD we want. If node not specified returns all the CPDs added to the model. Returns Return type A list of TabularCPDs. Examples >>> from pgmpy.models import BayesianModel >>> from pgmpy.factors.discrete import TabularCPD >>> student = BayesianModel([('intel', 'grade'), ('diff', 'grade')]) >>> cpd = TabularCPD('grade', 2, [[0.1, 0.9, 0.2, 0.7], ... [0.9, 0.1, 0.8, 0.3]], ... ['intel', 'diff'], [2, 2]) >>> student.add_cpds(cpd) >>> student.get_cpds() get_factorized_product(latex=False)[source] get_markov_blanket(node)[source] Returns a markov blanket for a random variable. In the case of Bayesian Networks, the markov blanket is the set of node’s parents, its children and its children’s other parents. Returns list(blanket_nodes) Return type List of nodes contained in Markov Blanket Parameters node (string, int or any hashable python object.) – The node whose markov blanket would be returned. 
Examples >>> from pgmpy.models import BayesianModel >>> from pgmpy.factors.discrete import TabularCPD >>> G = BayesianModel([('x', 'y'), ('z', 'y'), ('y', 'w'), ('y', 'v'), ('u', 'w'), ('s', 'v'), ('w', 't'), ('w', 'm'), ('v', 'n'), ('v', 'q')]) >>> G.get_markov_blanket('y') ['s', 'w', 'x', 'u', 'z', 'v'] is_imap(JPD)[source] Checks whether the bayesian model is Imap of given JointProbabilityDistribution Parameters JPD (An instance of JointProbabilityDistribution Class, for which you want to) – check the Imap Returns boolean – False otherwise Return type True if bayesian model is Imap for given Joint Probability Distribution Examples >>> from pgmpy.models import BayesianModel >>> from pgmpy.factors.discrete import TabularCPD >>> from pgmpy.factors.discrete import JointProbabilityDistribution >>> diff_cpd = TabularCPD('diff', 2, [[0.2], [0.8]]) >>> intel_cpd = TabularCPD('intel', 3, [[0.5], [0.3], [0.2]]) >>> grade_cpd = TabularCPD('grade', 3, ... [[0.1,0.1,0.1,0.1,0.1,0.1], ... [0.1,0.1,0.1,0.1,0.1,0.1], ... [0.8,0.8,0.8,0.8,0.8,0.8]], ... evidence=['diff', 'intel'], ... evidence_card=[2, 3]) >>> G = BayesianModel([('diff', 'grade'), ('intel', 'grade')]) >>> G.add_cpds(diff_cpd, intel_cpd, grade_cpd) >>> val = [0.01, 0.01, 0.08, 0.006, 0.006, 0.048, 0.004, 0.004, 0.032, 0.04, 0.04, 0.32, 0.024, 0.024, 0.192, 0.016, 0.016, 0.128] >>> JPD = JointProbabilityDistribution(['diff', 'intel', 'grade'], [2, 3, 3], val) >>> G.is_imap(JPD) True predict(data, n_jobs=-1)[source] Predicts states of all the missing variables. Parameters data (pandas DataFrame object) – A DataFrame object with column names same as the variables in the model. Examples >>> import numpy as np >>> import pandas as pd >>> from pgmpy.models import BayesianModel >>> values = pd.DataFrame(np.random.randint(low=0, high=2, size=(1000, 5)), ... 
columns=['A', 'B', 'C', 'D', 'E']) >>> train_data = values[:800] >>> predict_data = values[800:] >>> model = BayesianModel([('A', 'B'), ('C', 'B'), ('C', 'D'), ('B', 'E')]) >>> model.fit(values) >>> predict_data = predict_data.copy() >>> predict_data.drop('E', axis=1, inplace=True) >>> y_pred = model.predict(predict_data) >>> y_pred E 800 0 801 1 802 1 803 1 804 0 ... ... 993 0 994 0 995 1 996 1 997 0 998 0 999 0 predict_probability(data)[source] Predicts probabilities of all states of the missing variables. Parameters data (pandas DataFrame object) – A DataFrame object with column names same as the variables in the model. Examples >>> import numpy as np >>> import pandas as pd >>> from pgmpy.models import BayesianModel >>> values = pd.DataFrame(np.random.randint(low=0, high=2, size=(100, 5)), ... columns=['A', 'B', 'C', 'D', 'E']) >>> train_data = values[:80] >>> predict_data = values[80:] >>> model = BayesianModel([('A', 'B'), ('C', 'B'), ('C', 'D'), ('B', 'E')]) >>> model.fit(values) >>> predict_data = predict_data.copy() >>> predict_data.drop('B', axis=1, inplace=True) >>> y_prob = model.predict_probability(predict_data) >>> y_prob B_0 B_1 80 0.439178 0.560822 81 0.581970 0.418030 82 0.488275 0.511725 83 0.581970 0.418030 84 0.510794 0.489206 85 0.439178 0.560822 86 0.439178 0.560822 87 0.417124 0.582876 88 0.407978 0.592022 89 0.429905 0.570095 90 0.581970 0.418030 91 0.407978 0.592022 92 0.429905 0.570095 93 0.429905 0.570095 94 0.439178 0.560822 95 0.407978 0.592022 96 0.559904 0.440096 97 0.417124 0.582876 98 0.488275 0.511725 99 0.407978 0.592022 remove_cpds(*cpds)[source] Removes the cpds that are provided in the argument. Parameters *cpds (TabularCPD object) – A CPD object on any subset of the variables of the model which is to be associated with the model. Examples >>> from pgmpy.models import BayesianModel >>> from pgmpy.factors.discrete import TabularCPD >>> cpd = TabularCPD('grade', 2, [[0.1, 0.9, 0.2, 0.7], ... [0.9, 0.1, 0.8, 0.3]], ... 
['intel', 'diff'], [2, 2]) >>> student.remove_cpds(cpd) remove_node(node)[source] Remove node from the model. Removing a node also removes all the associated edges, removes the CPD of the node and marginalizes the CPDs of its children. Parameters node (node) – Node which is to be removed from the model. Returns Return type None Examples >>> import pandas as pd >>> import numpy as np >>> from pgmpy.models import BayesianModel >>> model = BayesianModel([('A', 'B'), ('B', 'C'), ... ('A', 'D'), ('D', 'C')]) >>> values = pd.DataFrame(np.random.randint(low=0, high=2, size=(1000, 4)), ... columns=['A', 'B', 'C', 'D']) >>> model.fit(values) >>> model.get_cpds() [<TabularCPD representing P(A:2) at 0x7f28248e2438>, <TabularCPD representing P(B:2 | A:2) at 0x7f28248e23c8>, <TabularCPD representing P(C:2 | B:2, D:2) at 0x7f28248e2748>, <TabularCPD representing P(D:2 | A:2) at 0x7f28248e26a0>] >>> model.remove_node('A') >>> model.get_cpds() [<TabularCPD representing P(B:2) at 0x7f28248e23c8>, <TabularCPD representing P(C:2 | B:2, D:2) at 0x7f28248e2748>, <TabularCPD representing P(D:2) at 0x7f28248e26a0>] remove_nodes_from(nodes)[source] Remove multiple nodes from the model. Removing a node also removes all the associated edges, removes the CPD of the node and marginalizes the CPDs of its children. Parameters nodes (list, set (iterable)) – Nodes which are to be removed from the model. Returns Return type None Examples >>> import pandas as pd >>> import numpy as np >>> from pgmpy.models import BayesianModel >>> model = BayesianModel([('A', 'B'), ('B', 'C'), ... ('A', 'D'), ('D', 'C')]) >>> values = pd.DataFrame(np.random.randint(low=0, high=2, size=(1000, 4)), ... 
columns=['A', 'B', 'C', 'D']) >>> model.fit(values) >>> model.get_cpds() [<TabularCPD representing P(A:2) at 0x7f28248e2438>, <TabularCPD representing P(B:2 | A:2) at 0x7f28248e23c8>, <TabularCPD representing P(C:2 | B:2, D:2) at 0x7f28248e2748>, <TabularCPD representing P(D:2 | A:2) at 0x7f28248e26a0>] >>> model.remove_nodes_from(['A', 'B']) >>> model.get_cpds() [<TabularCPD representing P(C:2 | D:2) at 0x7f28248e2a58>, <TabularCPD representing P(D:2) at 0x7f28248e26d8>] to_junction_tree()[source] Creates a junction tree (or clique tree) for a given Bayesian model. For converting a Bayesian model into a clique tree, it is first converted into a Markov one. For a given Markov model (H) a junction tree (G) is a graph 1. where each node in G corresponds to a maximal clique in H 2. each sepset in G separates the variables strictly on one side of the edge from the other. Examples >>> from pgmpy.models import BayesianModel >>> from pgmpy.factors.discrete import TabularCPD >>> diff_cpd = TabularCPD('diff', 2, [[0.2], [0.8]]) >>> intel_cpd = TabularCPD('intel', 3, [[0.5], [0.3], [0.2]]) >>> grade_cpd = TabularCPD('grade', 3, ... [[0.1,0.1,0.1,0.1,0.1,0.1], ... [0.1,0.1,0.1,0.1,0.1,0.1], ... [0.8,0.8,0.8,0.8,0.8,0.8]], ... evidence=['diff', 'intel'], ... evidence_card=[2, 3]) >>> sat_cpd = TabularCPD('SAT', 2, ... [[0.1, 0.2, 0.7], ... [0.9, 0.8, 0.3]], ... evidence=['intel'], evidence_card=[3]) >>> letter_cpd = TabularCPD('letter', 2, ... [[0.1, 0.4, 0.8], ... [0.9, 0.6, 0.2]], ... evidence=['grade'], evidence_card=[3]) >>> G = BayesianModel([('diff', 'grade'), ('intel', 'grade'), ... ('intel', 'SAT'), ('grade', 'letter')]) >>> G.add_cpds(diff_cpd, intel_cpd, grade_cpd, sat_cpd, letter_cpd) >>> jt = G.to_junction_tree() to_markov_model()[source] Converts a Bayesian model to a Markov model. The Markov model created would be the moral graph of the Bayesian model. Examples >>> from pgmpy.models import BayesianModel >>> G = BayesianModel([('diff', 'grade'), ('intel', 'grade'), ... ('intel', 'SAT'), ('grade', 'letter')]) >>> mm = G.to_markov_model() >>> mm.nodes() >>> mm.edges() ## Markov Model¶ class pgmpy.models.MarkovModel.MarkovModel(ebunch=None)[source] Base class for Markov model. A MarkovModel stores nodes and edges with potentials. MarkovModel holds undirected edges. Parameters data (input graph) – Data to initialize graph.
If data=None (default) an empty graph is created. The data can be an edge list, or any NetworkX graph object. Examples Create an empty Markov Model with no nodes and no edges. >>> from pgmpy.models import MarkovModel >>> G = MarkovModel() G can be grown in several ways. Nodes: Add one node at a time: >>> G.add_node('a') Add the nodes from any container (a list, set or tuple or the nodes from another graph). >>> G.add_nodes_from(['a', 'b']) Edges: G can also be grown by adding edges. >>> G.add_edge('a', 'b') a list of edges, >>> G.add_edges_from([('a', 'b'), ('b', 'c')]) If some edges connect nodes not yet in the model, the nodes are added automatically. There are no errors when adding nodes or edges that already exist. Shortcuts: Many common graph features allow python syntax for speed reporting. >>> 'a' in G # check if node in graph True >>> len(G) # number of nodes in graph 3 add_edge(u, v, **kwargs)[source] Add an edge between u and v. The nodes u and v will be automatically added if they are not already in the graph Parameters u,v (nodes) – Nodes can be any hashable Python object. Examples >>> from pgmpy.models import MarkovModel >>> G = MarkovModel() add_factors(*factors)[source] Associate a factor to the graph. See factors class for the order of potential values Parameters *factor (pgmpy.factors.factors object) – A factor object on any subset of the variables of the model which is to be associated with the model. Returns Return type None Examples >>> from pgmpy.models import MarkovModel >>> from pgmpy.factors.discrete import DiscreteFactor >>> student = MarkovModel([('Alice', 'Bob'), ('Bob', 'Charles'), ... ('Charles', 'Debbie'), ('Debbie', 'Alice')]) >>> factor = DiscreteFactor(['Alice', 'Bob'], cardinality=[3, 2], ... values=np.random.rand(6)) check_model()[source] Check the model for various errors. This method checks for the following errors - • Checks if the cardinalities of all the variables are consistent across all the factors. 
• Factors are defined for all the random variables. Returns check – True if all the checks are passed Return type boolean copy()[source] Returns a copy of this Markov Model. Returns MarkovModel Return type Copy of this Markov model. Examples >>> from pgmpy.factors.discrete import DiscreteFactor >>> from pgmpy.models import MarkovModel >>> G = MarkovModel() >>> G_copy = G.copy() >>> G_copy.edges() [(('a', 'b'), ('b', 'c'))] >>> G_copy.nodes() [('a', 'b'), ('b', 'c')] >>> factor = DiscreteFactor([('a', 'b')], cardinality=[3], ... values=np.random.rand(3)) >>> G.get_factors() [<DiscreteFactor representing phi(('a', 'b'):3) at 0x...>] >>> G_copy.get_factors() [] get_cardinality(node=None)[source] Returns the cardinality of the node. If node is not specified returns a dictionary with the given variable as keys and their respective cardinality as values. Parameters node (any hashable python object (optional)) – The node whose cardinality we want. If node is not specified returns a dictionary with the given variable as keys and their respective cardinality as values. Examples >>> from pgmpy.models import MarkovModel >>> from pgmpy.factors.discrete import DiscreteFactor >>> student = MarkovModel([('Alice', 'Bob'), ('Bob', 'Charles')]) >>> factor = DiscreteFactor(['Alice', 'Bob'], cardinality=[2, 2], ... values=np.random.rand(4)) >>> student.get_cardinality(node='Alice') 2 >>> student.get_cardinality() defaultdict(<class 'int'>, {'Bob': 2, 'Alice': 2}) get_factors(node=None)[source] Returns all the factors containing the node. If node is not specified returns all the factors that have been added till now to the graph. Parameters node (any hashable python object (optional)) – The node whose factor we want. If node is not specified Examples >>> from pgmpy.models import MarkovModel >>> from pgmpy.factors.discrete import DiscreteFactor >>> student = MarkovModel([('Alice', 'Bob'), ('Bob', 'Charles')]) >>> factor1 = DiscreteFactor(['Alice', 'Bob'], cardinality=[2, 2], ... 
values=np.random.rand(4)) >>> factor2 = DiscreteFactor(['Bob', 'Charles'], cardinality=[2, 3], ... values=np.ones(6)) >>> student.add_factors(factor1, factor2) >>> student.get_factors() [<DiscreteFactor representing phi(Alice:2, Bob:2) at 0x7f8a0e9bf630>, <DiscreteFactor representing phi(Bob:2, Charles:3) at 0x7f8a0e9bf5f8>] >>> student.get_factors('Alice') [<DiscreteFactor representing phi(Alice:2, Bob:2) at 0x7f8a0e9bf630>] get_local_independencies(latex=False)[source] Returns all the local independencies present in the Markov model. Local independencies are independence assertions of the form X ⊥ W − {X} − MB(X) | MB(X), where W is the set of all the random variables and MB(X) is the Markov blanket of X. Parameters latex (boolean) – If latex=True then a latex string of the independence assertion would be created Examples >>> from pgmpy.models import MarkovModel >>> mm = MarkovModel() >>> mm.add_nodes_from(['x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'x7']) >>> mm.add_edges_from([('x1', 'x3'), ('x1', 'x4'), ('x2', 'x4'), ... ('x2', 'x5'), ('x3', 'x6'), ('x4', 'x6'), ... ('x4', 'x7'), ('x5', 'x7')]) >>> mm.get_local_independencies() get_partition_function()[source] Returns the partition function for a given undirected graph. A partition function is defined as Z = \sum_X \prod_{i=1}^{m} \phi_i(X_i), where m is the number of factors present in the graph and X are all the random variables present. Examples >>> from pgmpy.models import MarkovModel >>> from pgmpy.factors.discrete import DiscreteFactor >>> G = MarkovModel() >>> G.add_nodes_from(['x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'x7']) >>> G.add_edges_from([('x1', 'x3'), ('x1', 'x4'), ('x2', 'x4'), ... ('x2', 'x5'), ('x3', 'x6'), ('x4', 'x6'), ... ('x4', 'x7'), ('x5', 'x7')]) >>> phi = [DiscreteFactor(edge, [2, 2], np.random.rand(4)) for edge in G.edges()] >>> G.add_factors(*phi) >>> G.get_partition_function() markov_blanket(node)[source] Returns a Markov blanket for a random variable. The Markov blanket is the set of neighboring nodes of the given node.
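Since the Markov blanket in an undirected model is simply the neighbor set of a node, the idea can be illustrated without pgmpy. The sketch below uses the same x1..x7 edge list as the docstring examples; the `markov_blanket` helper is illustrative and not part of the pgmpy API.

```python
# Illustrative sketch: in a Markov network, MB(X) is just the set of
# neighbors of X. Edge list mirrors the x1..x7 example above.
edges = [('x1', 'x3'), ('x1', 'x4'), ('x2', 'x4'),
         ('x2', 'x5'), ('x3', 'x6'), ('x4', 'x6'),
         ('x4', 'x7'), ('x5', 'x7')]

def markov_blanket(node, edges):
    """Return the neighbor set of `node` in the undirected graph `edges`."""
    return ({v for u, v in edges if u == node} |
            {u for u, v in edges if v == node})

print(sorted(markov_blanket('x1', edges)))  # ['x3', 'x4']
```

Given the blanket, X is independent of every remaining variable, which is exactly the local independence assertion stated above.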
Examples >>> from pgmpy.models import MarkovModel >>> mm = MarkovModel() >>> mm.add_nodes_from(['x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'x7']) >>> mm.add_edges_from([('x1', 'x3'), ('x1', 'x4'), ('x2', 'x4'), ... ('x2', 'x5'), ('x3', 'x6'), ('x4', 'x6'), ... ('x4', 'x7'), ('x5', 'x7')]) >>> mm.markov_blanket('x1') remove_factors(*factors)[source] Removes the given factors from the added factors. Examples >>> from pgmpy.models import MarkovModel >>> from pgmpy.factors.discrete import DiscreteFactor >>> student = MarkovModel([('Alice', 'Bob'), ('Bob', 'Charles')]) >>> factor = DiscreteFactor(['Alice', 'Bob'], cardinality=[2, 2], ... values=np.random.rand(4)) >>> student.remove_factors(factor) to_bayesian_model()[source] Creates a Bayesian Model which is a minimum I-Map for this markov model. The ordering of parents may not remain constant. It would depend on the ordering of variable in the junction tree (which is not constant) all the time. Examples >>> from pgmpy.models import MarkovModel >>> from pgmpy.factors.discrete import DiscreteFactor >>> mm = MarkovModel() >>> mm.add_nodes_from(['x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'x7']) >>> mm.add_edges_from([('x1', 'x3'), ('x1', 'x4'), ('x2', 'x4'), ... ('x2', 'x5'), ('x3', 'x6'), ('x4', 'x6'), ... ('x4', 'x7'), ('x5', 'x7')]) >>> phi = [DiscreteFactor(edge, [2, 2], np.random.rand(4)) for edge in mm.edges()] >>> bm = mm.to_bayesian_model() to_factor_graph()[source] Converts the markov model into factor graph. A factor graph contains two types of nodes. One type corresponds to random variables whereas the second type corresponds to factors over these variables. The graph only contains edges between variables and factor nodes. Each factor node is associated with one factor whose scope is the set of variables that are its neighbors. 
Examples >>> from pgmpy.models import MarkovModel >>> from pgmpy.factors.discrete import DiscreteFactor >>> student = MarkovModel([('Alice', 'Bob'), ('Bob', 'Charles')]) >>> factor1 = DiscreteFactor(['Alice', 'Bob'], [3, 2], np.random.rand(6)) >>> factor2 = DiscreteFactor(['Bob', 'Charles'], [2, 2], np.random.rand(4)) >>> student.add_factors(factor1, factor2) >>> factor_graph = student.to_factor_graph() to_junction_tree()[source] Creates a junction tree (or clique tree) for a given Markov model. For a given Markov model (H) a junction tree (G) is a graph 1. where each node in G corresponds to a maximal clique in H 2. each sepset in G separates the variables strictly on one side of the edge from the other. Examples >>> from pgmpy.models import MarkovModel >>> from pgmpy.factors.discrete import DiscreteFactor >>> mm = MarkovModel() >>> mm.add_nodes_from(['x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'x7']) >>> mm.add_edges_from([('x1', 'x3'), ('x1', 'x4'), ('x2', 'x4'), ... ('x2', 'x5'), ('x3', 'x6'), ('x4', 'x6'), ... ('x4', 'x7'), ('x5', 'x7')]) >>> phi = [DiscreteFactor(edge, [2, 2], np.random.rand(4)) for edge in mm.edges()] >>> mm.add_factors(*phi) >>> junction_tree = mm.to_junction_tree() triangulate(heuristic='H6', order=None, inplace=False)[source] Triangulate the graph. If an order of deletion is given, the heuristic algorithm will not be used. Parameters • heuristic (H1 | H2 | H3 | H4 | H5 | H6) – The heuristic algorithm to use to decide the deletion order of the variables to compute the triangulated graph. Let X be the set of variables and X(i) denote the i-th variable. • S(i) - The size of the clique created by deleting the variable. • E(i) - Cardinality of variable X(i). • M(i) - Maximum size of cliques given by X(i) and its adjacent nodes. • C(i) - Sum of sizes of cliques given by X(i) and its adjacent nodes. The heuristic algorithm decides the deletion order in this way: • H1 - Delete the variable with minimal S(i). • H2 - Delete the variable with minimal S(i)/E(i). • H3 - Delete the variable with minimal S(i) - M(i).
• H4 - Delete the variable with minimal S(i) - C(i). • H5 - Delete the variable with minimal S(i)/M(i). • H6 - Delete the variable with minimal S(i)/C(i). • order (list, tuple (array-like)) – The order of deletion of the variables to compute the triangulated graph. If an order is given, the heuristic algorithm will not be used. • inplace (True | False) – if inplace is true then adds the edges to the object from which it is called else returns a new object. References http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.56.3607 Examples >>> from pgmpy.models import MarkovModel >>> from pgmpy.factors.discrete import DiscreteFactor >>> G = MarkovModel() >>> G.add_nodes_from(['x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'x7']) >>> G.add_edges_from([('x1', 'x3'), ('x1', 'x4'), ('x2', 'x4'), ... ('x2', 'x5'), ('x3', 'x6'), ('x4', 'x6'), ... ('x4', 'x7'), ('x5', 'x7')]) >>> phi = [DiscreteFactor(edge, [2, 2], np.random.rand(4)) for edge in G.edges()] >>> G.add_factors(*phi) >>> G_chordal = G.triangulate() ## Factor Graph¶ class pgmpy.models.FactorGraph.FactorGraph(ebunch=None)[source] Class for representing a factor graph. A factor graph is a bipartite graph representing the factorization of a function. It allows efficient computation of marginal distributions through the sum-product algorithm. A factor graph contains two types of nodes. One type corresponds to random variables whereas the second type corresponds to factors over these variables. The graph only contains edges between variables and factor nodes. Each factor node is associated with one factor whose scope is the set of variables that are its neighbors. Parameters data (input graph) – Data to initialize graph. If data=None (default) an empty graph is created. The data is an edge list. Examples Create an empty FactorGraph with no nodes and no edges >>> from pgmpy.models import FactorGraph >>> G = FactorGraph() G can be grown by adding variable nodes as well as factor nodes Nodes: Add a node at a time or a list of nodes.
>>> G.add_node('a') >>> phi1 = DiscreteFactor(['a', 'b'], [2, 2], np.random.rand(4)) Edges: G can also be grown by adding edges. >>> G.add_edge('a', phi1) or a list of edges >>> G.add_edges_from([('a', phi1), ('b', phi1)]) add_edge(u, v, **kwargs)[source] Add an edge between variable_node and factor_node. Parameters v (u,) – Nodes can be any hashable Python object. Examples >>> from pgmpy.models import FactorGraph >>> G = FactorGraph() >>> phi1 = DiscreteFactor(['a', 'b'], [2, 2], np.random.rand(4)) >>> G.add_nodes_from(['a', 'b']) >>> G.add_node(phi1) >>> G.add_edge('a', phi1) add_factors(*factors)[source] Associate a factor to the graph. See the factors class for the order of potential values. Parameters *factor (pgmpy.factors.DiscreteFactor object) – A factor object on any subset of the variables of the model which is to be associated with the model. Examples >>> from pgmpy.models import FactorGraph >>> from pgmpy.factors.discrete import DiscreteFactor >>> G = FactorGraph() >>> phi1 = DiscreteFactor(['a', 'b'], [2, 2], np.random.rand(4)) >>> phi2 = DiscreteFactor(['b', 'c'], [2, 2], np.random.rand(4)) >>> G.add_nodes_from(['a', 'b', 'c']) >>> G.add_factors(phi1, phi2) >>> G.add_edges_from([('a', phi1), ('b', phi1), ... ('b', phi2), ('c', phi2)]) check_model()[source] Check the model for various errors. This method checks for the following errors. At the same time it also updates the cardinalities of all the random variables. • Check whether the bipartite property of the factor graph is still maintained or not. • Check whether factors are associated with all the random variables or not. • Check if factors are defined for each factor node or not. • Check if cardinality information for all the variables is available or not. • Check if the cardinality of each random variable remains the same across all the factors. copy()[source] Returns a copy of the model.
Returns FactorGraph Return type Copy of FactorGraph Examples >>> import numpy as np >>> from pgmpy.models import FactorGraph >>> from pgmpy.factors.discrete import DiscreteFactor >>> G = FactorGraph() >>> phi1 = DiscreteFactor(['a', 'b'], [2, 2], np.random.rand(4)) >>> phi2 = DiscreteFactor(['b', 'c'], [2, 2], np.random.rand(4)) >>> G.add_nodes_from(['a', 'b', 'c']) >>> G.add_factors(phi1, phi2) >>> G.add_edges_from([('a', phi1), ('b', phi1), ... ('b', phi2), ('c', phi2)]) >>> G_copy = G.copy() >>> G_copy.nodes() [<Factor representing phi(b:2, c:2) at 0xb4badd4c>, 'b', 'c', 'a', <Factor representing phi(a:2, b:2) at 0xb4badf2c>] get_cardinality(node=None)[source] Returns the cardinality of the node Parameters node (any hashable python object (optional)) – The node whose cardinality we want. If node is not specified returns a dictionary with the given variable as keys and their respective cardinality as values. Returns int or dict – If node is specified, returns the cardinality of that node; otherwise returns a dictionary with the variables as keys and their respective cardinalities as values. Examples >>> from pgmpy.models import FactorGraph >>> from pgmpy.factors import DiscreteFactor >>> G = FactorGraph() >>> phi1 = DiscreteFactor(['a', 'b'], [2, 2], np.random.rand(4)) >>> phi2 = DiscreteFactor(['b', 'c'], [2, 2], np.random.rand(4)) >>> G.add_nodes_from(['a', 'b', 'c']) >>> G.add_factors(phi1, phi2) >>> G.add_edges_from([('a', phi1), ('b', phi1), ... ('b', phi2), ('c', phi2)]) >>> G.get_cardinality() defaultdict(<class 'int'>, {'c': 2, 'b': 2, 'a': 2}) >>> G.get_cardinality('a') 2 get_factor_nodes()[source] Returns the factor nodes present in the graph. Before calling this method make sure that all the factors are added properly. Examples >>> from pgmpy.models import FactorGraph >>> from pgmpy.factors.discrete import DiscreteFactor >>> G = FactorGraph() >>> phi1 = DiscreteFactor(['a', 'b'], [2, 2], np.random.rand(4)) >>> phi2 = DiscreteFactor(['b', 'c'], [2, 2], np.random.rand(4)) >>> G.add_nodes_from(['a', 'b', 'c']) >>> G.add_factors(phi1, phi2) >>> G.add_edges_from([('a', phi1), ('b', phi1), ... ('b', phi2), ('c', phi2)]) >>> G.get_factor_nodes() [<DiscreteFactor representing phi(b:2, c:2) at 0x4b8c7f0>, <DiscreteFactor representing phi(a:2, b:2) at 0x4b8c5b0>] get_factors(node=None)[source] Returns the factors that have been added till now to the graph. If node is not None, it would return the factor corresponding to the given node. Examples >>> from pgmpy.models import FactorGraph >>> from pgmpy.factors.discrete import DiscreteFactor >>> G = FactorGraph() >>> phi1 = DiscreteFactor(['a', 'b'], [2, 2], np.random.rand(4)) >>> phi2 = DiscreteFactor(['b', 'c'], [2, 2], np.random.rand(4)) >>> G.add_nodes_from(['a', 'b', 'c']) >>> G.add_factors(phi1, phi2) >>> G.add_edges_from([('a', phi1), ('b', phi1), ... ('b', phi2), ('c', phi2)]) >>> G.get_factors() >>> G.get_factors(node=phi1) get_partition_function()[source] Returns the partition function for a given undirected graph. A partition function is defined as Z = \sum_X \prod_{i=1}^{m} \phi_i(X_i), where m is the number of factors present in the graph and X are all the random variables present. Examples >>> from pgmpy.models import FactorGraph >>> from pgmpy.factors.discrete import DiscreteFactor >>> G = FactorGraph() >>> phi1 = DiscreteFactor(['a', 'b'], [2, 2], np.random.rand(4)) >>> phi2 = DiscreteFactor(['b', 'c'], [2, 2], np.random.rand(4)) >>> G.add_nodes_from(['a', 'b', 'c']) >>> G.add_factors(phi1, phi2) >>> G.add_edges_from([('a', phi1), ('b', phi1), ... ('b', phi2), ('c', phi2)]) >>> G.get_factors() >>> G.get_partition_function() get_variable_nodes()[source] Returns the variable nodes present in the graph. Before calling this method make sure that all the factors are added properly. Examples >>> from pgmpy.models import FactorGraph >>> from pgmpy.factors.discrete import DiscreteFactor >>> G = FactorGraph() >>> phi1 = DiscreteFactor(['a', 'b'], [2, 2], np.random.rand(4)) >>> phi2 = DiscreteFactor(['b', 'c'], [2, 2], np.random.rand(4)) >>> G.add_nodes_from(['a', 'b', 'c']) >>> G.add_factors(phi1, phi2) >>> G.add_edges_from([('a', phi1), ('b', phi1), ... ('b', phi2), ('c', phi2)]) >>> G.get_variable_nodes() ['a', 'b', 'c'] remove_factors(*factors)[source] Removes the given factors from the added factors.
Examples >>> from pgmpy.models import FactorGraph >>> from pgmpy.factors.discrete import DiscreteFactor >>> G = FactorGraph() >>> phi1 = DiscreteFactor(['a', 'b'], [2, 2], np.random.rand(4)) >>> G.add_factors(phi1) >>> G.remove_factors(phi1) to_junction_tree()[source] Creates a junction tree (or clique tree) for a given factor graph. For a given factor graph (H) a junction tree (G) is a graph 1. where each node in G corresponds to a maximal clique in H 2. each sepset in G separates the variables strictly on one side of the edge from the other Examples >>> from pgmpy.models import FactorGraph >>> from pgmpy.factors.discrete import DiscreteFactor >>> G = FactorGraph() >>> phi1 = DiscreteFactor(['a', 'b'], [2, 2], np.random.rand(4)) >>> phi2 = DiscreteFactor(['b', 'c'], [2, 2], np.random.rand(4)) >>> G.add_nodes_from(['a', 'b', 'c']) >>> G.add_factors(phi1, phi2) >>> G.add_edges_from([('a', phi1), ('b', phi1), ... ('b', phi2), ('c', phi2)]) >>> junction_tree = G.to_junction_tree() to_markov_model()[source] Converts the factor graph into a Markov model. A Markov model contains nodes as random variables, and an edge between two nodes implies interaction between them. Examples >>> from pgmpy.models import FactorGraph >>> from pgmpy.factors.discrete import DiscreteFactor >>> G = FactorGraph() >>> phi1 = DiscreteFactor(['a', 'b'], [2, 2], np.random.rand(4)) >>> phi2 = DiscreteFactor(['b', 'c'], [2, 2], np.random.rand(4)) >>> G.add_nodes_from(['a', 'b', 'c']) >>> G.add_factors(phi1, phi2) >>> G.add_edges_from([('a', phi1), ('b', phi1), ... ('b', phi2), ('c', phi2)]) >>> mm = G.to_markov_model() ## Cluster Graph¶ class pgmpy.models.ClusterGraph.ClusterGraph(ebunch=None)[source] Base class for representing Cluster Graph. A cluster graph is an undirected graph in which each node is associated with a subset of variables. The graph contains undirected edges that connect clusters whose scopes have a non-empty intersection. Formally, a cluster graph for a set of factors Φ over variables X is an undirected graph, each of whose nodes i is associated with a subset C_i ⊆ X. A cluster graph must be family-preserving - each factor φ ∈ Φ must be associated with a cluster C_i, denoted α(φ), such that Scope(φ) ⊆ C_i. Each edge between a pair of clusters C_i and C_j is associated with a sepset S_{i,j} ⊆ C_i ∩ C_j.
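The family-preservation condition can be checked directly: every factor's scope must be contained in at least one cluster. Below is a minimal stdlib sketch of that check; the `is_family_preserving` helper is illustrative and not a pgmpy method.

```python
def is_family_preserving(clusters, factor_scopes):
    """True if every factor scope is a subset of some cluster."""
    return all(any(set(scope) <= set(c) for c in clusters)
               for scope in factor_scopes)

# Clusters mirroring the ClusterGraph examples in this section.
clusters = [('a', 'b', 'c'), ('a', 'b'), ('a', 'c')]

print(is_family_preserving(clusters, [('a', 'b'), ('a', 'c'), ('a', 'b', 'c')]))  # True
print(is_family_preserving(clusters, [('c', 'd')]))  # False: no cluster contains {c, d}
```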
Parameters data (input graph) – Data to initialize graph. If data=None (default) an empty graph is created. The data is an edge list Examples Create an empty ClusterGraph with no nodes and no edges >>> from pgmpy.models import ClusterGraph >>> G = ClusterGraph() G can be grown by adding clique nodes. Nodes: Add a tuple (or list or set) of nodes as single clique node. >>> G.add_node(('a', 'b', 'c')) >>> G.add_nodes_from([('a', 'b'), ('a', 'b', 'c')]) Edges: G can also be grown by adding edges. >>> G.add_edge(('a', 'b', 'c'), ('a', 'b')) or a list of edges >>> G.add_edges_from([(('a', 'b', 'c'), ('a', 'b')), ... (('a', 'b', 'c'), ('a', 'c'))]) add_edge(u, v, **kwargs)[source] Add an edge between two clique nodes. Parameters v (u,) – Nodes can be any list or set or tuple of nodes forming a clique. Examples >>> from pgmpy.models import ClusterGraph >>> G = ClusterGraph() >>> G.add_nodes_from([('a', 'b', 'c'), ('a', 'b'), ('a', 'c')]) >>> G.add_edges_from([(('a', 'b', 'c'), ('a', 'b')), ... (('a', 'b', 'c'), ('a', 'c'))]) add_factors(*factors)[source] Associate a factor to the graph. See factors class for the order of potential values Parameters *factor (pgmpy.factors.factors object) – A factor object on any subset of the variables of the model which is to be associated with the model. Returns Return type None Examples >>> from pgmpy.models import ClusterGraph >>> from pgmpy.factors.discrete import DiscreteFactor >>> student = ClusterGraph() >>> factor = DiscreteFactor(['Alice', 'Bob'], cardinality=[3, 2], ... values=np.random.rand(6)) add_node(node, **kwargs)[source] Add a single node to the cluster graph. Parameters node (node) – A node should be a collection of nodes forming a clique. It can be a list, set or tuple of nodes Examples >>> from pgmpy.models import ClusterGraph >>> G = ClusterGraph() add_nodes_from(nodes, **kwargs)[source] Add multiple nodes to the cluster graph. Parameters nodes (iterable container) – A container of nodes (list, dict, set, etc.). 
Examples >>> from pgmpy.models import ClusterGraph >>> G = ClusterGraph() >>> G.add_nodes_from([('a', 'b'), ('a', 'b', 'c')]) check_model()[source] Check the model for various errors. This method checks for the following errors. • Checks if factors are defined for all the cliques or not. • Check for the running intersection property is not done explicitly over here as it is done in the add_edges method. • Checks if cardinality information for all the variables is available or not. If not it raises an error. • Check if the cardinality of each random variable remains the same across all the factors. Returns check – True if all the checks are passed Return type boolean copy()[source] Returns a copy of ClusterGraph. Returns ClusterGraph Return type copy of ClusterGraph Examples >>> from pgmpy.factors.discrete import DiscreteFactor >>> G = ClusterGraph() >>> G.add_nodes_from([('a', 'b'), ('b', 'c')]) >>> G.add_edge(('a', 'b'), ('b', 'c')) >>> phi1 = DiscreteFactor(['a', 'b'], [2, 2], np.random.rand(4)) >>> phi2 = DiscreteFactor(['b', 'c'], [2, 2], np.random.rand(4)) >>> G.add_factors(phi1, phi2) >>> graph_copy = G.copy() >>> graph_copy.factors [<DiscreteFactor representing phi(a:2, b:2) at 0xb71b19cc>, <DiscreteFactor representing phi(b:2, c:2) at 0xb4eaf3ac>] >>> graph_copy.edges() [(('a', 'b'), ('b', 'c'))] >>> graph_copy.nodes() [('a', 'b'), ('b', 'c')] get_cardinality(node=None)[source] Returns the cardinality of the node Parameters node (any hashable python object (optional)) – The node whose cardinality we want. If node is not specified returns a dictionary with the given variable as keys and their respective cardinality as values. Returns int or dict – If node is specified, returns the cardinality of that node; otherwise returns a dictionary with the variables as keys and their respective cardinalities as values. Examples >>> from pgmpy.models import ClusterGraph >>> from pgmpy.factors.discrete import DiscreteFactor >>> student = ClusterGraph() >>> factor = DiscreteFactor(['Alice', 'Bob'], cardinality=[2, 2], ...
values=np.random.rand(4)) >>> student.add_node(('Alice', 'Bob')) >>> student.add_factors(factor) >>> student.get_cardinality() defaultdict(<class 'int'>, {'Bob': 2, 'Alice': 2}) >>> student.get_cardinality(node='Alice') 2 get_factors(node=None)[source] Return the factors that have been added till now to the graph. If node is not None, it would return the factor corresponding to the given node. Examples >>> from pgmpy.models import ClusterGraph >>> from pgmpy.factors.discrete import DiscreteFactor >>> G = ClusterGraph() >>> G.add_nodes_from([('a', 'b', 'c'), ('a', 'b'), ('a', 'c')]) >>> G.add_edges_from([(('a', 'b', 'c'), ('a', 'b')), ... (('a', 'b', 'c'), ('a', 'c'))]) >>> phi1 = DiscreteFactor(['a', 'b', 'c'], [2, 2, 2], np.random.rand(8)) >>> phi2 = DiscreteFactor(['a', 'b'], [2, 2], np.random.rand(4)) >>> phi3 = DiscreteFactor(['a', 'c'], [2, 2], np.random.rand(4)) >>> G.add_factors(phi1, phi2, phi3) >>> G.get_factors() >>> G.get_factors(node=('a', 'b', 'c')) get_partition_function()[source] Returns the partition function for a given undirected graph. A partition function is defined as Z = \sum_X \prod_{i=1}^{m} \phi_i(X_i), where m is the number of factors present in the graph and X are all the random variables present. Examples >>> from pgmpy.models import ClusterGraph >>> from pgmpy.factors.discrete import DiscreteFactor >>> G = ClusterGraph() >>> G.add_nodes_from([('a', 'b', 'c'), ('a', 'b'), ('a', 'c')]) >>> G.add_edges_from([(('a', 'b', 'c'), ('a', 'b')), ... (('a', 'b', 'c'), ('a', 'c'))]) >>> phi1 = DiscreteFactor(['a', 'b', 'c'], [2, 2, 2], np.random.rand(8)) >>> phi2 = DiscreteFactor(['a', 'b'], [2, 2], np.random.rand(4)) >>> phi3 = DiscreteFactor(['a', 'c'], [2, 2], np.random.rand(4)) >>> G.add_factors(phi1, phi2, phi3) >>> G.get_partition_function() remove_factors(*factors)[source] Removes the given factors from the added factors. Examples >>> from pgmpy.models import ClusterGraph >>> from pgmpy.factors.discrete import DiscreteFactor >>> student = ClusterGraph() >>> factor = DiscreteFactor(['Alice', 'Bob'], cardinality=[2, 2], ...
values=np.random.rand(4)) >>> student.add_factors(factor) >>> student.remove_factors(factor) ## Junction Tree¶ class pgmpy.models.JunctionTree.JunctionTree(ebunch=None)[source] Class for representing Junction Tree. A junction tree is an undirected graph in which each node represents a clique (list, tuple or set of nodes) and edges represent sepsets between two cliques. Each sepset in G separates the variables strictly on one side of the edge from the other. Parameters data (input graph) – Data to initialize graph. If data=None (default) an empty graph is created. The data is an edge list. Examples Create an empty JunctionTree with no nodes and no edges >>> from pgmpy.models import JunctionTree >>> G = JunctionTree() G can be grown by adding clique nodes. Nodes: Add a tuple (or list or set) of nodes as single clique node. >>> G.add_node(('a', 'b', 'c')) >>> G.add_nodes_from([('a', 'b'), ('a', 'b', 'c')]) Edges: G can also be grown by adding edges. >>> G.add_edge(('a', 'b', 'c'), ('a', 'b')) or a list of edges >>> G.add_edges_from([(('a', 'b', 'c'), ('a', 'b')), ... (('a', 'b', 'c'), ('a', 'c'))]) add_edge(u, v, **kwargs)[source] Add an edge between two clique nodes. Parameters v (u,) – Nodes can be any list or set or tuple of nodes forming a clique. Examples >>> from pgmpy.models import JunctionTree >>> G = JunctionTree() >>> G.add_nodes_from([('a', 'b', 'c'), ('a', 'b'), ('a', 'c')]) >>> G.add_edges_from([(('a', 'b', 'c'), ('a', 'b')), ... (('a', 'b', 'c'), ('a', 'c'))]) check_model()[source] Check the model for various errors. This method checks for the following errors. At the same time it also updates the cardinalities of all the random variables. • Checks if clique potentials are defined for all the cliques or not. • Check for the running intersection property is not done explicitly over here as it is done in the add_edges method. Returns check – True if all the checks are passed Return type boolean copy()[source] Returns a copy of JunctionTree.
Returns JunctionTree Return type copy of JunctionTree Examples >>> import numpy as np >>> from pgmpy.factors.discrete import DiscreteFactor >>> from pgmpy.models import JunctionTree >>> G = JunctionTree() >>> G.add_edges_from([(('a', 'b', 'c'), ('a', 'b')), (('a', 'b', 'c'), ('a', 'c'))]) >>> phi1 = DiscreteFactor(['a', 'b'], [1, 2], np.random.rand(2)) >>> phi2 = DiscreteFactor(['a', 'c'], [1, 2], np.random.rand(2)) >>> modelCopy = G.copy() >>> modelCopy.edges() [(('a', 'b'), ('a', 'b', 'c')), (('a', 'c'), ('a', 'b', 'c'))] >>> G.factors [<DiscreteFactor representing phi(a:1, b:2) at 0xb720ee4c>, <DiscreteFactor representing phi(a:1, c:2) at 0xb4e1e06c>] >>> modelCopy.factors [<DiscreteFactor representing phi(a:1, b:2) at 0xb4bd11ec>, <DiscreteFactor representing phi(a:1, c:2) at 0xb4bd138c>] ## Markov Chain¶ class pgmpy.models.MarkovChain.MarkovChain(variables=None, card=None, start_state=None)[source] Class to represent a Markov Chain with multiple kernels for factored state space, along with methods to simulate a run. Examples Create an empty Markov Chain: >>> from pgmpy.models import MarkovChain as MC >>> model = MC() And then add variables to it >>> model.add_variables_from(['intel', 'diff'], [2, 3]) Or directly create a Markov Chain from a list of variables and their cardinalities >>> model = MC(['intel', 'diff'], [2, 3]) >>> intel_tm = {0: {0: 0.25, 1: 0.75}, 1: {0: 0.5, 1: 0.5}} >>> diff_tm = {0: {0: 0.1, 1: 0.5, 2: 0.4}, 1: {0: 0.2, 1: 0.2, 2: 0.6 }, 2: {0: 0.7, 1: 0.15, 2: 0.15}} Set a start state >>> from pgmpy.factors.discrete import State >>> model.set_start_state([State('intel', 0), State('diff', 2)]) Sample from it >>> model.sample(size=5) intel diff 0 0 2 1 1 0 2 0 1 3 1 0 4 0 2 add_transition_model(variable, transition_model)[source] Adds a transition model for a particular variable. Parameters • variable (any hashable python object) – must be an existing variable of the model. 
• transition_model (dict or 2d array) – dict representing valid transition probabilities defined for every possible state of the variable. array represents a square matrix where every row sums to 1, and array[i, j] indicates the transition probability from State i to State j Examples >>> from pgmpy.models import MarkovChain as MC >>> model = MC() >>> model.add_variable('grade', 3) >>> grade_tm = {0: {0: 0.1, 1: 0.5, 2: 0.4}, 1: {0: 0.2, 1: 0.2, 2: 0.6}, 2: {0: 0.7, 1: 0.15, 2: 0.15}} >>> grade_tm_matrix = np.array([[0.1, 0.5, 0.4], [0.2, 0.2, 0.6], [0.7, 0.15, 0.15]]) >>> model.add_transition_model('grade', grade_tm) >>> model.add_transition_model('grade', grade_tm_matrix) add_variable(variable, card=0)[source] Add a variable to the model. Parameters • variable (any hashable python object) – • card (int) – Representing the cardinality of the variable to be added. Examples >>> from pgmpy.models import MarkovChain as MC >>> model = MC() >>> model.add_variable('x', 4) add_variables_from(variables, cards)[source] Add several variables to the model at once. Parameters • variables (array-like iterable object) – List of variables to be added. • cards (array-like iterable object) – List of cardinalities of the variables to be added. Examples >>> from pgmpy.models import MarkovChain as MC >>> model = MC() >>> model.add_variables_from(['x', 'y'], [3, 4]) copy()[source] Returns a copy of Markov Chain Model. Returns MarkovChain Return type Copy of MarkovChain.
Examples >>> from pgmpy.models import MarkovChain >>> from pgmpy.factors.discrete import State >>> model = MarkovChain() >>> model.add_variables_from(['intel', 'diff'], [3, 2]) >>> intel_tm = {0: {0: 0.2, 1: 0.4, 2: 0.4}, 1: {0: 0, 1: 0.5, 2: 0.5}, 2: {0: 0.3, 1: 0.3, 2: 0.4}} >>> diff_tm = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.25, 1: 0.75}} >>> model.add_transition_model('intel', intel_tm) >>> model.add_transition_model('diff', diff_tm) >>> model.set_start_state([State('intel', 0), State('diff', 1)]) >>> model_copy = model.copy() >>> model_copy.transition_models {'intel': {0: {0: 0.2, 1: 0.4, 2: 0.4}, 1: {0: 0, 1: 0.5, 2: 0.5}, 2: {0: 0.3, 1: 0.3, 2: 0.4}}, 'diff': {0: {0: 0.5, 1: 0.5}, 1: {0: 0.25, 1: 0.75}}} generate_sample(start_state=None, size=1)[source] Generator version of self.sample Returns Return type List of State namedtuples, representing the assignment to all variables of the model. Examples >>> from pgmpy.models.MarkovChain import MarkovChain >>> from pgmpy.factors.discrete import State >>> model = MarkovChain() >>> model.add_variables_from(['intel', 'diff'], [3, 2]) >>> intel_tm = {0: {0: 0.2, 1: 0.4, 2: 0.4}, 1: {0: 0, 1: 0.5, 2: 0.5}, 2: {0: 0.3, 1: 0.3, 2: 0.4}} >>> diff_tm = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.25, 1: 0.75}} >>> model.add_transition_model('intel', intel_tm) >>> model.add_transition_model('diff', diff_tm) >>> gen = model.generate_sample([State('intel', 0), State('diff', 0)], 2) >>> [sample for sample in gen] [[State(var='intel', state=2), State(var='diff', state=1)], [State(var='intel', state=2), State(var='diff', state=0)]] is_stationarity(tolerance=0.2, sample=None)[source] Checks if the given Markov chain is stationary and whether the computed steady-state probability values for the states are consistent.
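The idea behind such a stationarity check is to compare empirical state frequencies from a long simulated run against the chain's steady-state distribution. The sketch below illustrates this for a single two-state chain using only the standard library; it is independent of the pgmpy API, and the steady state of this particular matrix is computed analytically.

```python
import random

def simulate(P, steps, seed=0, state=0):
    """Simulate a chain with row-stochastic matrix P; return visit frequencies."""
    rng = random.Random(seed)
    counts = [0] * len(P)
    for _ in range(steps):
        state = rng.choices(range(len(P)), weights=P[state])[0]
        counts[state] += 1
    return [c / steps for c in counts]

P = [[0.9, 0.1], [0.5, 0.5]]   # two-state chain
freq = simulate(P, 100_000)
pi = [5 / 6, 1 / 6]            # analytic steady state: pi = pi @ P
stationary = all(abs(f - p) < 0.02 for f, p in zip(freq, pi))
print(stationary)
```

With 100,000 steps the empirical frequencies land well inside the 0.02 tolerance, mirroring what a tolerance-based check like `is_stationarity` accepts.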
Parameters • tolerance (float) – represents the allowed difference between the actual steady state value and the computed value • sample ([State(i,j)]) – represents the list of states which the Markov chain has sampled Returns True, if the Markov chain converges to the steady state distribution within the tolerance False, if the Markov chain does not converge to the steady state distribution within the tolerance Return type Boolean Examples >>> from pgmpy.models.MarkovChain import MarkovChain >>> from pgmpy.factors.discrete import State >>> model = MarkovChain() >>> intel_tm = {0: {0: 0.2, 1: 0.4, 2: 0.4}, 1: {0: 0, 1: 0.5, 2: 0.5}, 2: {0: 0.3, 1: 0.3, 2: 0.4}} >>> diff_tm = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.25, 1: 0.75}} >>> model.is_stationarity() True prob_from_sample(state, sample=None, window_size=None)[source] Given an instantiation (partial or complete) of the variables of the model, compute the probability of observing it over multiple windows in a given sample. If ‘sample’ is not passed as an argument, generate the statistic by sampling from the Markov Chain, starting with a random initial state. Examples >>> from pgmpy.models.MarkovChain import MarkovChain as MC >>> from pgmpy.factors.discrete import State >>> model = MC(['intel', 'diff'], [3, 2]) >>> intel_tm = {0: {0: 0.2, 1: 0.4, 2: 0.4}, 1: {0: 0, 1: 0.5, 2: 0.5}, 2: {2: 0.5, 1: 0.5}} >>> diff_tm = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.25, 1: 0.75}} >>> model.prob_from_sample([State('diff', 0)]) array([ 0.27, 0.4 , 0.18, 0.23, ..., 0.29]) random_state()[source] Generates a random state of the Markov Chain. Returns Return type List of namedtuples, representing a random assignment to all variables of the model. Examples >>> from pgmpy.models import MarkovChain as MC >>> model = MC(['intel', 'diff'], [2, 3]) >>> model.random_state() [State('diff', 2), State('intel', 1)] sample(start_state=None, size=1)[source] Sample from the Markov Chain. Parameters • start_state (dict or array-like iterable) – Representing the starting states of the variables.
If None is passed, a random start_state is chosen. • size (int) – Number of samples to be generated. Returns Return type pandas.DataFrame Examples >>> from pgmpy.models import MarkovChain as MC >>> from pgmpy.factors.discrete import State >>> model = MC(['intel', 'diff'], [2, 3]) >>> model.set_start_state([State('intel', 0), State('diff', 2)]) >>> intel_tm = {0: {0: 0.25, 1: 0.75}, 1: {0: 0.5, 1: 0.5}} >>> diff_tm = {0: {0: 0.1, 1: 0.5, 2: 0.4}, 1: {0: 0.2, 1: 0.2, 2: 0.6}, 2: {0: 0.7, 1: 0.15, 2: 0.15}} >>> model.sample(size=5)
   intel  diff
0      0     2
1      1     0
2      0     1
3      1     0
4      0     2
set_start_state(start_state)[source] Set the start state of the Markov Chain. If the start_state is given as an array-like iterable, its contents are reordered in the internal representation. Parameters start_state (dict or array-like iterable object) – Dict (or list) of tuples representing the starting states of the variables. Examples >>> from pgmpy.models import MarkovChain as MC >>> from pgmpy.factors.discrete import State >>> model = MC(['a', 'b'], [2, 2]) >>> model.set_start_state([State('a', 0), State('b', 1)]) ## NoisyOr Model class pgmpy.models.NoisyOrModel.NoisyOrModel(variables, cardinality, inhibitor_probability)[source] Base class for Noisy-Or models. This is an implementation of generalized Noisy-Or models; it is not limited to Boolean variables, and any arbitrary function can be used in place of the Boolean OR function. add_variables(variables, cardinality, inhibitor_probability)[source] Parameters • variables (list, tuple, dict (array like)) – array containing names of the variables that are to be added. • cardinality (list, tuple, dict (array like)) – array containing integers representing the cardinality of the variables. • inhibitor_probability (list, tuple, dict (array_like)) – array containing the inhibitor probabilities corresponding to each variable. Examples >>> from pgmpy.models import NoisyOrModel >>> model = NoisyOrModel(['x1', 'x2', 'x3'], [2, 3, 2], [[0.6, 0.4], ...
[0.2, 0.4, 0.7], ... [0.1, 0.4]]) >>> model.add_variables(['x4'], [3], [0.1, 0.4, 0.2]) del_variables(variables)[source] Deletes variables from the NoisyOrModel. Parameters variables (list, tuple, dict (array like)) – list of variables to be deleted. Examples >>> from pgmpy.models import NoisyOrModel >>> model = NoisyOrModel(['x1', 'x2', 'x3'], [2, 3, 2], [[0.6, 0.4], ... [0.2, 0.4, 0.7], ... [0.1, 0.4]]) >>> model.del_variables(['x1']) ## Naive Bayes class pgmpy.models.NaiveBayes.NaiveBayes(ebunch=None)[source] Class to represent Naive Bayes. Subclass of Bayesian Model. Model holds directed edges from one parent node to multiple children nodes only. Parameters data (input graph) – Data to initialize graph. If data=None (default) an empty graph is created. The data can be an edge list, or any NetworkX graph object. Examples Create an empty Naive Bayes Model with no nodes and no edges. >>> from pgmpy.models import NaiveBayes >>> G = NaiveBayes() G can be grown in several ways. Nodes: Add one node at a time: >>> G.add_node('a') Add the nodes from any container (a list, set or tuple or the nodes from another graph). >>> G.add_nodes_from(['a', 'b', 'c']) Edges: G can also be grown by adding edges. >>> G.add_edge('a', 'b') a list of edges, >>> G.add_edges_from([('a', 'b'), ('a', 'c')]) If some edges connect nodes not yet in the model, the nodes are added automatically. There are no errors when adding nodes or edges that already exist. Shortcuts: Many common graph features allow python syntax to speed reporting. >>> 'a' in G # check if node in graph True >>> len(G) # number of nodes in graph 3 active_trail_nodes(start, observed=None)[source] Returns all the nodes reachable from start via an active trail. Parameters • start (Graph node) – • observed (List of nodes (optional)) – If given, the active trail would be computed assuming these nodes to be observed.
Examples >>> from pgmpy.models import NaiveBayes >>> model = NaiveBayes() >>> model.add_edges_from([('a', 'b'), ('a', 'c'), ('a', 'd')]) >>> model.active_trail_nodes('a') {'a', 'b', 'c', 'd'} >>> model.active_trail_nodes('a', ['b', 'c']) {'a', 'd'} >>> model.active_trail_nodes('b', ['a']) {'b'} add_edge(u, v, *kwargs)[source] Add an edge between u and v. The nodes u and v will be automatically added if they are not already in the graph Parameters u,v (nodes) – Nodes can be any hashable python object. Examples >>> from pgmpy.models import NaiveBayes >>> G = NaiveBayes() >>> G.edges() [('a', 'c'), ('a', 'b')] fit(data, parent_node=None, estimator=None)[source] Computes the CPD for each node from a given data in the form of a pandas dataframe. If a variable from the data is not present in the model, it adds that node into the model. Parameters • data (pandas DataFrame object) – A DataFrame object with column names same as the variable names of network • parent_node (any hashable python object (optional)) – Parent node of the model, if not specified it looks for a previously specified parent node. • estimator (Estimator class) – Any pgmpy estimator. If nothing is specified, the default MaximumLikelihoodEstimator would be used. Examples >>> import numpy as np >>> import pandas as pd >>> from pgmpy.models import NaiveBayes >>> model = NaiveBayes() >>> values = pd.DataFrame(np.random.randint(low=0, high=2, size=(1000, 5)), ... columns=['A', 'B', 'C', 'D', 'E']) >>> model.fit(values, 'A') >>> model.get_cpds() [<TabularCPD representing P(D:2 | A:2) at 0x4b72870>, <TabularCPD representing P(E:2 | A:2) at 0x4bb2150>, <TabularCPD representing P(A:2) at 0x4bb23d0>, <TabularCPD representing P(B:2 | A:2) at 0x4bb24b0>, <TabularCPD representing P(C:2 | A:2) at 0x4bb2750>] >>> model.edges() [('A', 'D'), ('A', 'E'), ('A', 'B'), ('A', 'C')] local_independencies(variables)[source] Returns an instance of Independencies containing the local independencies of each of the variables. 
Parameters variables (str or array like) – variables whose local independencies are to be found. Examples >>> from pgmpy.models import NaiveBayes >>> model = NaiveBayes() >>> model.add_edges_from([('a', 'b'), ('a', 'c'), ('a', 'd')]) >>> ind = model.local_independencies('b') >>> ind (b _|_ d, c | a) ## Dynamic Bayesian Network class pgmpy.models.DynamicBayesianNetwork.DynamicBayesianNetwork(ebunch=None)[source] Bases: pgmpy.base.DAG.DAG active_trail_nodes(variables, observed=None) Returns a dictionary with the given variables as keys and all the nodes reachable from that respective variable as values. Parameters • variables (str or array like) – variables whose active trails are to be found. • observed (List of nodes (optional)) – If given, the active trails would be computed assuming these nodes to be observed. Examples >>> from pgmpy.base import DAG >>> student = DAG() >>> student.active_trail_nodes('diff') {'diff': {'diff', 'intel'}, 'intel': {'diff', 'intel'}} References Details of the algorithm can be found in ‘Probabilistic Graphical Models: Principles and Techniques’ - Koller and Friedman Page 75 Algorithm 3.1 add_cpds(*cpds)[source] This method adds the cpds to the dynamic Bayesian network. Note that while adding variables and the evidence in cpd, they have to be of the following form (node_name, time_slice). Here, node_name is the node that is inserted while the time_slice is an integer value, which denotes the index of the time_slice that the node belongs to. Parameters cpds (list, set, tuple (array-like)) – List of CPDs which are to be associated with the model. Each CPD should be an instance of TabularCPD. Examples >>> from pgmpy.models import DynamicBayesianNetwork as DBN >>> from pgmpy.factors.discrete import TabularCPD >>> dbn = DBN() >>> dbn.add_edges_from([(('D', 0),('G', 0)),(('I', 0),('G', 0)),(('D', 0),('D', 1)),(('I', 0),('I', 1))]) >>> grade_cpd = TabularCPD(('G', 0), 3, [[0.3, 0.05, 0.9, 0.5], ...
[0.4, 0.25, 0.8, 0.03], ... [0.3, 0.7, 0.02, 0.2]], ... evidence=[('I', 0),('D', 0)], ... evidence_card=[2, 2]) >>> d_i_cpd = TabularCPD(('D',1), 2, [[0.6, 0.3], ... [0.4, 0.7]], ... evidence=[('D',0)], ... evidence_card=2) >>> diff_cpd = TabularCPD(('D', 0), 2, [[0.6, 0.4]]) >>> intel_cpd = TabularCPD(('I', 0), 2, [[0.7, 0.3]]) >>> i_i_cpd = TabularCPD(('I', 1), 2, [[0.5, 0.4], ... [0.5, 0.6]], ... evidence=[('I', 0)], ... evidence_card=2) >>> dbn.get_cpds() [<TabularCPD representing P(('G', 0):3 | ('I', 0):2, ('D', 0):2) at 0x7ff7f27b0cf8>, <TabularCPD representing P(('D', 1):2 | ('D', 0):2) at 0x7ff810b9c2e8>, <TabularCPD representing P(('D', 0):2) at 0x7ff7f27e6f98>, <TabularCPD representing P(('I', 0):2) at 0x7ff7f27e6ba8>, <TabularCPD representing P(('I', 1):2 | ('I', 0):2) at 0x7ff7f27e6668>] add_edge(start, end, **kwargs)[source] Add an edge between two nodes. The nodes will be automatically added if they are not present in the network. Parameters • start (tuple) – Both the start and end nodes should specify the time slice as (node_name, time_slice). Here, node_name can be any hashable python object while the time_slice is an integer value, which denotes the time slice that the node belongs to. • end (tuple) – Both the start and end nodes should specify the time slice as (node_name, time_slice). Here, node_name can be any hashable python object while the time_slice is an integer value, which denotes the time slice that the node belongs to. Examples >>> from pgmpy.models import DynamicBayesianNetwork as DBN >>> model = DBN() >>> sorted(model.edges()) [(('D', 0), ('I', 0)), (('D', 1), ('I', 1))] add_edges_from(ebunch, **kwargs)[source] Add all the edges in ebunch. If nodes referred in the ebunch are not already present, they will be automatically added. Node names can be any hashable python object. Parameters ebunch (list, array-like) – List of edges to add. Each edge must be of the form of ((start, time_slice), (end, time_slice)). 
Examples >>> from pgmpy.models import DynamicBayesianNetwork as DBN >>> dbn = DBN() >>> dbn.add_edges_from([(('D', 0), ('G', 0)), (('I', 0), ('G', 0))]) >>> dbn.nodes() ['G', 'I', 'D'] >>> dbn.edges() [(('D', 1), ('G', 1)), (('I', 1), ('G', 1)), (('D', 0), ('G', 0)), (('I', 0), ('G', 0))] add_node(node, **attr)[source] Adds a single node to the Network Parameters node (node) – A node can be any hashable Python object. Examples >>> from pgmpy.models import DynamicBayesianNetwork as DBN >>> dbn = DBN() ['A'] add_nodes_from(nodes, **attr)[source] Add multiple nodes to the Network. Parameters nodes (iterable container) – A container of nodes (list, dict, set, etc.). Examples >>> from pgmpy.models import DynamicBayesianNetwork as DBN >>> dbn = DBN() add_weighted_edges_from(ebunch_to_add, weight='weight', **attr) Parameters • ebunch_to_add (container of edges) – Each edge given in the list or container will be added to the graph. The edges must be given as 3-tuples (u, v, w) where w is a number. • weight (string, optional (default= 'weight')) – The attribute name for the edge weights to be added. • attr (keyword arguments, optional (default= no attributes)) – Edge attributes to add/update for all edges. add_edge() add_edges_from() Notes Adding the same edge twice for Graph/DiGraph simply updates the edge data. For MultiGraph/MultiDiGraph, duplicate edges are stored. Examples >>> G = nx.Graph() # or DiGraph, MultiGraph, MultiDiGraph, etc >>> G.add_weighted_edges_from([(0, 1, 3.0), (1, 2, 7.5)]) property adj Graph adjacency object holding the neighbors of each node. This object is a read-only dict-like structure with node keys and neighbor-dict values. The neighbor-dict is keyed by neighbor to the edge-data-dict. So G.adj[3][2][‘color’] = ‘blue’ sets the color of the edge (3, 2) to “blue”. The neighbor information is also provided by subscripting the graph. So for nbr, foovalue in G[node].data(‘foo’, default=1): works. 
For directed graphs, G.adj holds outgoing (successor) info. adjacency() Returns an iterator over (node, adjacency dict) tuples for all nodes. For directed graphs, only outgoing neighbors/adjacencies are included. Returns adj_iter – An iterator over (node, adjacency dictionary) for all nodes in the graph. Return type iterator Examples >>> G = nx.path_graph(4) # or DiGraph, MultiGraph, MultiDiGraph, etc >>> [(n, nbrdict) for n, nbrdict in G.adjacency()] [(0, {1: {}}), (1, {0: {}, 2: {}}), (2, {1: {}, 3: {}}), (3, {2: {}})] adjlist_inner_dict_factory alias of builtins.dict adjlist_outer_dict_factory alias of builtins.dict check_model()[source] Check the model for various errors. This method checks for the following errors. • Checks if the sum of the probabilities in each associated CPD for each state is equal to 1 (tol=0.01). • Checks if the CPDs associated with nodes are consistent with their parents. Returns True if everything seems to be in order, otherwise raises an error according to the problem. Return type boolean clear() Remove all nodes and edges from the graph. This also removes the name, and all graph, node, and edge attributes. Examples >>> G = nx.path_graph(4) # or DiGraph, MultiGraph, MultiDiGraph, etc >>> G.clear() >>> list(G.nodes) [] >>> list(G.edges) [] copy()[source] Returns a copy of the dynamic Bayesian network.
Returns DynamicBayesianNetwork Return type copy of the dynamic Bayesian network Examples >>> from pgmpy.models import DynamicBayesianNetwork as DBN >>> from pgmpy.factors.discrete import TabularCPD >>> dbn = DBN() >>> grade_cpd = TabularCPD(('G',0), 3, [[0.3,0.05,0.9,0.5], [0.4,0.25,0.8,0.03], [0.3,0.7,0.02,0.2]], [('I', 0),('D', 0)],[2,2]) >>> dbn_copy = dbn.copy() >>> dbn_copy.nodes() ['Z', 'G', 'I', 'D'] >>> dbn_copy.edges() [(('I', 1), ('G', 1)), (('I', 0), ('I', 1)), (('I', 0), ('G', 0)), (('D', 1), ('G', 1)), (('D', 0), ('G', 0)), (('D', 0), ('D', 1))] >>> dbn_copy.get_cpds() [<TabularCPD representing P(('G', 0):3 | ('I', 0):2, ('D', 0):2) at 0x7f13961a3320>] property degree A DegreeView for the Graph as G.degree or G.degree(). The node degree is the number of edges adjacent to the node. The weighted node degree is the sum of the edge weights for edges incident to that node. This object provides an iterator for (node, degree) as well as lookup for the degree for a single node. Parameters • nbunch (single node, container, or all nodes (default= all nodes)) – The view will only report edges incident to these nodes. • weight (string or None, optional (default=None)) – The name of an edge attribute that holds the numerical value used as a weight. If None, then each edge has weight 1. The degree is the sum of the edge weights adjacent to the node. Returns • If a single node is requested • deg (int) – Degree of the node • OR if multiple nodes are requested • nd_iter (iterator) – The iterator returns two-tuples of (node, degree). Examples >>> G = nx.DiGraph() # or MultiDiGraph >>> nx.add_path(G, [0, 1, 2, 3]) >>> G.degree(0) # node 0 with degree 1 1 >>> list(G.degree([0, 1, 2])) [(0, 1), (1, 2), (2, 2)] do(node) Applies the do operator to the graph and returns a new DAG with the transformed graph. The do-operator, do(X = x) has the effect of removing all edges from the parents of X and setting X to the given value x.
Parameters node (string) – The name of the node to apply the do-operator to. Returns DAG Return type A new instance of DAG modified by the do-operator Examples Initialize a DAG >>> graph = DAG() >>> graph.add_edges_from([('X', 'A'), ('A', 'Y'), ('A', 'B')]) Applying the do-operator will return a new DAG with the desired structure. >>> graph_do_A = graph.do('A') Which we can verify is missing the edges we would expect. >>> graph_do_A.edges [('A', 'B'), ('A', 'Y')] References Causality: Models, Reasoning, and Inference, Judea Pearl (2000). p.70. edge_attr_dict_factory alias of builtins.dict edge_subgraph(edges) Returns the subgraph induced by the specified edges. The induced subgraph contains each edge in edges and each node incident to any one of those edges. Parameters edges (iterable) – An iterable of edges in this graph. Returns G – An edge-induced subgraph of this graph with the same edge attributes. Return type Graph Notes The graph, edge, and node attributes in the returned subgraph view are references to the corresponding attributes in the original graph. The view is read-only. To create a full graph version of the subgraph with its own copy of the edge or node attributes, use: >>> G.edge_subgraph(edges).copy() Examples >>> G = nx.path_graph(5) >>> H = G.edge_subgraph([(0, 1), (3, 4)]) >>> list(H.nodes) [0, 1, 3, 4] >>> list(H.edges) [(0, 1), (3, 4)] property edges An OutEdgeView of the DiGraph as G.edges or G.edges(). edges(self, nbunch=None, data=False, default=None) The OutEdgeView provides set-like operations on the edge-tuples as well as edge attribute lookup. When called, it also provides an EdgeDataView object which allows control of access to edge attributes (but does not provide set-like operations).
Hence, G.edges[u, v][‘color’] provides the value of the color attribute for edge (u, v) while for (u, v, c) in G.edges.data(‘color’, default=’red’): iterates through all the edges yielding the color attribute with default ‘red’ if no color attribute exists. Parameters • nbunch (single node, container, or all nodes (default= all nodes)) – The view will only report edges incident to these nodes. • data (string or bool, optional (default=False)) – The edge attribute returned in 3-tuple (u, v, ddict[data]). If True, return edge attribute dict in 3-tuple (u, v, ddict). If False, return 2-tuple (u, v). • default (value, optional (default=None)) – Value used for edges that don’t have the requested attribute. Only relevant if data is not True or False. Returns edges – A view of edge attributes, usually it iterates over (u, v) or (u, v, d) tuples of edges, but can also be used for attribute lookup as edges[u, v][‘foo’]. Return type OutEdgeView Notes Nodes in nbunch that are not in the graph will be (quietly) ignored. For directed graphs this returns the out-edges. Examples >>> G = nx.DiGraph() # or MultiDiGraph, etc >>> [e for e in G.edges] [(0, 1), (1, 2), (2, 3)] >>> G.edges.data() # default data is {} (empty dict) OutEdgeDataView([(0, 1, {}), (1, 2, {}), (2, 3, {'weight': 5})]) >>> G.edges.data('weight', default=1) OutEdgeDataView([(0, 1, 1), (1, 2, 1), (2, 3, 5)]) >>> G.edges([0, 2]) # only edges incident to these nodes OutEdgeDataView([(0, 1), (2, 3)]) >>> G.edges(0) # only edges incident to a single node (use G.adj[0]?) OutEdgeDataView([(0, 1)]) get_children(node) Returns a list of children of node. Throws an error if the node is not present in the graph. Parameters node (string, int or any hashable python object.) – The node whose children would be returned. 
Examples >>> from pgmpy.base import DAG >>> g = DAG(ebunch=[('A', 'B'), ('C', 'B'), ('B', 'D'), ('B', 'E'), ('B', 'F'), ('E', 'G')]) >>> g.get_children(node='B') ['D', 'E', 'F'] get_cpds(node=None, time_slice=0)[source] Returns the CPDs that have been associated with the network. Parameters • node (tuple (node_name, time_slice)) – The node should be in the following form (node_name, time_slice). Here, node_name is the node that is inserted while the time_slice is an integer value, which denotes the index of the time_slice that the node belongs to. • time_slice (int) – The time_slice should be a non-negative integer. Examples >>> from pgmpy.models import DynamicBayesianNetwork as DBN >>> from pgmpy.factors.discrete import TabularCPD >>> dbn = DBN() >>> grade_cpd = TabularCPD(('G',0), 3, [[0.3,0.05,0.9,0.5], ... [0.4,0.25,0.8,0.03], ... [0.3,0.7,0.02,0.2]], [('I', 0),('D', 0)],[2,2]) >>> dbn.get_cpds() get_edge_data(u, v, default=None) Returns the attribute dictionary associated with edge (u, v). This is identical to G[u][v] except the default is returned instead of an exception if the edge doesn’t exist. Parameters • u, v (nodes) – • default (any Python object (default=None)) – Value to return if the edge (u, v) is not found. Returns edge_dict – The edge attribute dictionary. Return type dictionary Examples >>> G = nx.path_graph(4) # or DiGraph, MultiGraph, MultiDiGraph, etc >>> G[0][1] {} Warning: Assigning to G[u][v] is not permitted.
But it is safe to assign attributes G[u][v]['foo'] >>> G[0][1]['weight'] = 7 >>> G[0][1]['weight'] 7 >>> G[1][0]['weight'] 7 >>> G = nx.path_graph(4) # or DiGraph, MultiGraph, MultiDiGraph, etc >>> G.get_edge_data(0, 1) # default edge data is {} {} >>> e = (0, 1) >>> G.get_edge_data(*e) # tuple form {} >>> G.get_edge_data('a', 'b', default=0) # edge not in graph, return 0 0 get_immoralities() Finds all the immoralities in the model. A v-structure X -> Z <- Y is an immorality if there is no direct edge between X and Y. Returns set Return type A set of all the immoralities in the model Examples >>> from pgmpy.base import DAG >>> student = DAG() >>> student.get_immoralities() {('diff','intel')} get_independencies(latex=False) Computes independencies in the DAG by checking d-separation. Parameters latex (boolean) – If latex=True, then a latex string of the independence assertions would be created. Examples >>> from pgmpy.base import DAG >>> chain = DAG([('X', 'Y'), ('Y', 'Z')]) >>> chain.get_independencies() (X _|_ Z | Y) (Z _|_ X | Y) get_inter_edges()[source] Returns the inter-slice edges present in the 2-TBN. Examples >>> from pgmpy.models import DynamicBayesianNetwork as DBN >>> dbn = DBN() >>> dbn.add_edges_from([(('D', 0), ('G', 0)), (('I', 0), ('G', 0)), ... (('G', 0), ('L', 0)), (('D', 0), ('D', 1)), ... (('I', 0), ('I', 1)), (('G', 0), ('G', 1)), ... (('G', 0), ('L', 1)), (('L', 0), ('L', 1))]) >>> dbn.get_inter_edges() [(('D', 0), ('D', 1)), (('G', 0), ('G', 1)), (('G', 0), ('L', 1)), (('I', 0), ('I', 1)), (('L', 0), ('L', 1))] get_interface_nodes(time_slice=0)[source] Returns the nodes in the first timeslice whose children are present in the first timeslice.
Parameters time_slice (int) – The timeslice should be a non-negative integer. Examples >>> from pgmpy.models import DynamicBayesianNetwork as DBN >>> dbn = DBN() >>> dbn.add_nodes_from(['D', 'G', 'I', 'S', 'L']) >>> dbn.get_interface_nodes() [('D', 0)] get_intra_edges(time_slice=0)[source] Returns the intra slice edges present in the 2-TBN. Parameters time_slice (int (whole number)) – The time slice for which to get intra edges. The timeslice should be a non-negative integer. Examples >>> from pgmpy.models import DynamicBayesianNetwork as DBN >>> dbn = DBN() >>> dbn.add_nodes_from(['D', 'G', 'I', 'S', 'L']) >>> dbn.add_edges_from([(('D', 0), ('G', 0)), (('I', 0), ('G', 0)), ... (('G', 0), ('L', 0)), (('D', 0), ('D', 1)), ... (('I', 0), ('I', 1)), (('G', 0), ('G', 1)), ... (('G', 0), ('L', 1)), (('L', 0), ('L', 1))]) >>> dbn.get_intra_edges() [(('D', 0), ('G', 0)), (('G', 0), ('L', 0)), (('I', 0), ('G', 0))] get_leaves() Returns a list of leaves of the graph. Examples >>> from pgmpy.base import DAG >>> graph = DAG([('A', 'B'), ('B', 'C'), ('B', 'D')]) >>> graph.get_leaves() ['C', 'D'] get_markov_blanket(node) Returns the Markov blanket for a random variable. In the case of Bayesian Networks, the Markov blanket is the set of node’s parents, its children and its children’s other parents. Returns list(blanket_nodes) Return type List of nodes contained in Markov Blanket Parameters node (string, int or any hashable python object.) – The node whose Markov blanket would be returned. Examples >>> from pgmpy.base import DAG >>> from pgmpy.factors.discrete import TabularCPD >>> G = DAG([('x', 'y'), ('z', 'y'), ('y', 'w'), ('y', 'v'), ('u', 'w'), ('s', 'v'), ('w', 't'), ('w', 'm'), ('v', 'n'), ('v', 'q')]) >>> G.get_markov_blanket('y') ['s', 'w', 'x', 'u', 'z', 'v'] get_parents(node) Returns a list of parents of node. Throws an error if the node is not present in the graph. Parameters node (string, int or any hashable python object.)
– The node whose parents would be returned. Examples >>> from pgmpy.base import DAG ['diff', 'intel'] get_roots() Returns a list of roots of the graph. Examples >>> from pgmpy.base import DAG >>> graph = DAG([('A', 'B'), ('B', 'C'), ('B', 'D'), ('E', 'B')]) >>> graph.get_roots() ['A', 'E'] get_slice_nodes(time_slice=0)[source] Returns the nodes present in a particular timeslice Parameters time_slice (int) – The timeslice should be a positive value greater than or equal to zero Examples >>> from pgmpy.models import DynamicBayesianNetwork as DBN >>> dbn = DBN() >>> dbn.add_nodes_from(['D', 'G', 'I', 'S', 'L']) >>> dbn.add_edges_from([(('D', 0),('G', 0)),(('I', 0),('G', 0)),(('G', 0),('L', 0)),(('D', 0),('D', 1))]) >>> dbn.get_slice_nodes() graph_attr_dict_factory alias of builtins.dict has_edge(u, v) Returns True if the edge (u, v) is in the graph. This is the same as v in G[u] without KeyError exceptions. Parameters v (u,) – Nodes can be, for example, strings or numbers. Nodes must be hashable (and not None) Python objects. Returns edge_ind – True if edge is in the graph, False otherwise. Return type bool Examples >>> G = nx.path_graph(4) # or DiGraph, MultiGraph, MultiDiGraph, etc >>> G.has_edge(0, 1) # using two nodes True >>> e = (0, 1) >>> G.has_edge(*e) # e is a 2-tuple (u, v) True >>> e = (0, 1, {'weight':7}) >>> G.has_edge(*e[:2]) # e is a 3-tuple (u, v, data_dictionary) True The following syntax are equivalent: >>> G.has_edge(0, 1) True >>> 1 in G[0] # though this gives KeyError if 0 not in G True has_node(n) Returns True if the graph contains the node n. Identical to n in G Parameters n (node) – Examples >>> G = nx.path_graph(3) # or DiGraph, MultiGraph, MultiDiGraph, etc >>> G.has_node(0) True It is more readable and simpler to use >>> 0 in G True has_predecessor(u, v) Returns True if node u has predecessor v. This is true if graph has the edge u<-v. has_successor(u, v) Returns True if node u has successor v. This is true if graph has the edge u->v. 
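The has_predecessor/has_successor semantics described above can be illustrated with a minimal sketch in plain Python (no pgmpy or networkx required; TinyDiGraph is a hypothetical stand-in for illustration, not part of either library):

```python
# Hypothetical minimal directed graph illustrating the semantics above:
# u has successor v iff the edge u -> v exists, and
# u has predecessor v iff the edge v -> u exists.

class TinyDiGraph:
    def __init__(self, edges):
        self.succ = {}                     # node -> set of successor nodes
        for u, v in edges:
            self.succ.setdefault(u, set()).add(v)
            self.succ.setdefault(v, set())

    def has_successor(self, u, v):
        # True if the graph has the edge u -> v
        return v in self.succ.get(u, set())

    def has_predecessor(self, u, v):
        # True if the graph has the edge u <- v, i.e. v -> u
        return u in self.succ.get(v, set())

g = TinyDiGraph([("D", "G"), ("I", "G")])
print(g.has_successor("D", "G"))    # True:  edge D -> G exists
print(g.has_predecessor("G", "I"))  # True:  edge I -> G exists
print(g.has_successor("G", "D"))    # False: no edge G -> D
```

Note the asymmetry: in a directed graph, has_successor(u, v) and has_predecessor(u, v) check opposite edge directions, unlike has_edge in an undirected graph.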
property in_degree An InDegreeView for (node, in_degree) or in_degree for single node. The node in_degree is the number of edges pointing to the node. The weighted node degree is the sum of the edge weights for edges incident to that node. This object provides an iteration over (node, in_degree) as well as lookup for the degree for a single node. Parameters • nbunch (single node, container, or all nodes (default= all nodes)) – The view will only report edges incident to these nodes. • weight (string or None, optional (default=None)) – The name of an edge attribute that holds the numerical value used as a weight. If None, then each edge has weight 1. The degree is the sum of the edge weights adjacent to the node. Returns • If a single node is requested • deg (int) – In-degree of the node • OR if multiple nodes are requested • nd_iter (iterator) – The iterator returns two-tuples of (node, in-degree). Examples >>> G = nx.DiGraph() >>> nx.add_path(G, [0, 1, 2, 3]) >>> G.in_degree(0) # node 0 with degree 0 0 >>> list(G.in_degree([0, 1, 2])) [(0, 0), (1, 1), (2, 1)] in_degree_iter(nbunch=None, weight=None) property in_edges An InEdgeView of the Graph as G.in_edges or G.in_edges(). in_edges(self, nbunch=None, data=False, default=None): Parameters • nbunch (single node, container, or all nodes (default= all nodes)) – The view will only report edges incident to these nodes. • data (string or bool, optional (default=False)) – The edge attribute returned in 3-tuple (u, v, ddict[data]). If True, return edge attribute dict in 3-tuple (u, v, ddict). If False, return 2-tuple (u, v). • default (value, optional (default=None)) – Value used for edges that don’t have the requested attribute. Only relevant if data is not True or False. Returns in_edges – A view of edge attributes, usually it iterates over (u, v) or (u, v, d) tuples of edges, but can also be used for attribute lookup as edges[u, v][‘foo’]. 
Return type InEdgeView initialize_initial_state()[source] This method will automatically re-adjust the cpds and the edges added to the Bayesian network. If an edge is added as an intra-time-slice edge in the 0th timeslice, this method will automatically add it in the 1st timeslice as well. It will also add the cpds. However, to call this method, one needs to add the cpds as well as the edges in the Bayesian network of the whole skeleton, including the 0th and the 1st timeslice. Examples >>> from pgmpy.models import DynamicBayesianNetwork as DBN >>> from pgmpy.factors.discrete import TabularCPD >>> student = DBN() >>> student.add_nodes_from(['D', 'G', 'I', 'S', 'L']) >>> student.add_edges_from([(('D', 0),('G', 0)),(('I', 0),('G', 0)),(('D', 0),('D', 1)),(('I', 0),('I', 1))]) >>> grade_cpd = TabularCPD(('G', 0), 3, [[0.3, 0.05, 0.9, 0.5], ... [0.4, 0.25, 0.08, 0.3], ... [0.3, 0.7, 0.02, 0.2]], ... evidence=[('I', 0),('D', 0)], ... evidence_card=[2, 2]) >>> d_i_cpd = TabularCPD(('D', 1), 2, [[0.6, 0.3], ... [0.4, 0.7]], ... evidence=[('D', 0)], ... evidence_card=2) >>> diff_cpd = TabularCPD(('D', 0), 2, [[0.6, 0.4]]) >>> intel_cpd = TabularCPD(('I',0), 2, [[0.7, 0.3]]) >>> i_i_cpd = TabularCPD(('I', 1), 2, [[0.5, 0.4], ... [0.5, 0.6]], ... evidence=[('I', 0)], ... evidence_card=2) >>> student.initialize_initial_state() is_active_trail(start, end, observed=None) Returns True if there is any active trail between start and end node Parameters • start (Graph Node) – • end (Graph Node) – • observed (List of nodes (optional)) – If given, the active trail would be computed assuming these nodes to be observed. • additional_observed (List of nodes (optional)) – If given, the active trail would be computed assuming these nodes to be observed along with the nodes marked as observed in the model. Examples >>> from pgmpy.base import DAG >>> student = DAG() ...
('intel', 'sat')]) >>> student.is_active_trail('diff', 'intel') False True is_directed() Returns True if graph is directed, False otherwise. is_iequivalent(model) Checks whether the given model is I-equivalent. Two graphs G1 and G2 are said to be I-equivalent if they have the same skeleton and the same set of immoralities. Note: for the skeleton check, different node names can work, but for the immoralities the node names must be the same. Parameters model (A DAG object, for which you want to check I-equivalence) – Returns boolean Return type True if both are I-equivalent, False otherwise Examples >>> from pgmpy.base import DAG >>> G = DAG() ... ('X', 'Y'), ('Z', 'Y')]) >>> G1 = DAG() ... ('X', 'Y'), ('Z', 'Y')]) >>> G.is_iequivalent(G1) True is_multigraph() Returns True if graph is a multigraph, False otherwise. local_independencies(variables) Returns an instance of Independencies containing the local independencies of each of the variables. Parameters variables (str or array like) – variables whose local independencies are to be found. Examples >>> from pgmpy.base import DAG >>> student = DAG() >>> ind (grade _|_ SAT | diff, intel) moralize()[source] Removes all the immoralities in the Network and creates a moral graph (UndirectedGraph). A v-structure X->Z<-Y is an immorality if there is no directed edge between X and Y. Examples >>> from pgmpy.models import DynamicBayesianNetwork as DBN >>> dbn = DBN([(('D',0), ('G',0)), (('I',0), ('G',0))]) >>> moral_graph = dbn.moralize() >>> moral_graph.edges() [(('G', 0), ('I', 0)), (('G', 0), ('D', 0)), (('D', 1), ('I', 1)), (('D', 1), ('G', 1)), (('I', 0), ('D', 0)), (('G', 1), ('I', 1))] property name String identifier of the graph. This graph attribute appears in the attribute dict G.graph keyed by the string “name”, as well as an attribute (technically a property) G.name. This is entirely user controlled. nbunch_iter(nbunch=None) Returns an iterator over nodes contained in nbunch that are also in the graph.
The nodes in nbunch are checked for membership in the graph and if not are silently ignored. Parameters nbunch (single node, container, or all nodes (default= all nodes)) – The view will only report edges incident to these nodes. Returns niter – An iterator over nodes in nbunch that are also in the graph. If nbunch is None, iterate over all nodes in the graph. Return type iterator Raises NetworkXError – If nbunch is not a node or sequence of nodes. If a node in nbunch is not hashable. Graph.__iter__() Notes When nbunch is an iterator, the returned iterator yields values directly from nbunch, becoming exhausted when nbunch is exhausted. To test whether nbunch is a single node, one can use “if nbunch in self:”, even after processing with this routine. If nbunch is not a node or a (possibly empty) sequence/iterator or None, a NetworkXError is raised. Also, if any object in nbunch is not hashable, a NetworkXError is raised. neighbors(n) Returns an iterator over successor nodes of n. A successor of n is a node m such that there exists a directed edge from n to m. Parameters n (node) – A node in the graph Raises NetworkXError – If n is not in the graph. Notes neighbors() and successors() are the same. node_attr_dict_factory alias of builtins.dict node_dict_factory alias of builtins.dict property nodes A NodeView of the Graph as G.nodes or G.nodes(). Can be used as G.nodes for data lookup and for set-like operations. Can also be used as G.nodes(data=’color’, default=None) to return a NodeDataView which reports specific node data but no set operations. It presents a dict-like interface as well with G.nodes.items() iterating over (node, nodedata) 2-tuples and G.nodes[3][‘foo’] providing the value of the foo attribute for node 3. In addition, a view G.nodes.data(‘foo’) provides a dict-like interface to the foo attribute of each node. G.nodes.data(‘foo’, default=1) provides a default for nodes that do not have attribute foo. 
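The membership filtering that nbunch_iter() performs above — silently ignoring nodes that are not in the graph, and treating a lone node specially — can be sketched without networkx. The function name mirrors the method, but this is an illustrative stand-in, not the library's code:

```python
def nbunch_iter(graph_nodes, nbunch=None):
    """Yield the nodes of nbunch that are members of the graph; with
    nbunch=None yield every node.  Non-member nodes are silently
    skipped, mirroring the behaviour documented above."""
    nodes = set(graph_nodes)
    if nbunch is None:
        yield from nodes
        return
    # A single node that is itself a member counts as an nbunch of one.
    try:
        if nbunch in nodes:
            yield nbunch
            return
    except TypeError:
        pass  # unhashable => assume nbunch is a container of nodes
    for n in nbunch:
        if n in nodes:
            yield n

print(sorted(nbunch_iter([0, 1, 2], [1, 2, 99])))  # [1, 2] — 99 is ignored
```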
Parameters • data (string or bool, optional (default=False)) – The node attribute returned in 2-tuple (n, ddict[data]). If True, return entire node attribute dict as (n, ddict). If False, return just the nodes n. • default (value, optional (default=None)) – Value used for nodes that don’t have the requested attribute. Only relevant if data is not True or False. Returns Allows set-like operations over the nodes as well as node attribute dict lookup and calling to get a NodeDataView. A NodeDataView iterates over (n, data) and has no set operations. A NodeView iterates over n and includes set operations. When called, if data is False, an iterator over nodes. Otherwise an iterator of 2-tuples (node, attribute value) where the attribute is specified in data. If data is True then the attribute becomes the entire data dictionary. Return type NodeView Notes If your node data is not needed, it is simpler and equivalent to use the expression for n in G, or list(G). Examples There are two simple ways of getting a list of all nodes in the graph: >>> G = nx.path_graph(3) >>> list(G.nodes) [0, 1, 2] >>> list(G) [0, 1, 2] To get the node data along with the nodes: >>> G.add_node(1, time='5pm') >>> G.nodes[0]['foo'] = 'bar' >>> list(G.nodes(data=True)) [(0, {'foo': 'bar'}), (1, {'time': '5pm'}), (2, {})] >>> list(G.nodes.data()) [(0, {'foo': 'bar'}), (1, {'time': '5pm'}), (2, {})] >>> list(G.nodes(data='foo')) [(0, 'bar'), (1, None), (2, None)] >>> list(G.nodes.data('foo')) [(0, 'bar'), (1, None), (2, None)] >>> list(G.nodes(data='time')) [(0, None), (1, '5pm'), (2, None)] >>> list(G.nodes.data('time')) [(0, None), (1, '5pm'), (2, None)] >>> list(G.nodes(data='time', default='Not Available')) [(0, 'Not Available'), (1, '5pm'), (2, 'Not Available')] >>> list(G.nodes.data('time', default='Not Available')) [(0, 'Not Available'), (1, '5pm'), (2, 'Not Available')] If some of your nodes have an attribute and the rest are assumed to have a default attribute value you can create a 
dictionary from node/attribute pairs using the default keyword argument to guarantee the value is never None: >>> G = nx.Graph() >>> G.add_node(0) >>> G.add_node(1, weight=2) >>> G.add_node(2, weight=3) >>> dict(G.nodes(data='weight', default=1)) {0: 1, 1: 2, 2: 3} number_of_edges(u=None, v=None) Returns the number of edges between two nodes. Parameters v (u,) – If u and v are specified, return the number of edges between u and v. Otherwise return the total number of all edges. Returns nedges – The number of edges in the graph. If nodes u and v are specified return the number of edges between those nodes. If the graph is directed, this only returns the number of edges from u to v. Return type int Examples For undirected graphs, this method counts the total number of edges in the graph: >>> G = nx.path_graph(4) >>> G.number_of_edges() 3 If you specify two nodes, this counts the total number of edges joining the two nodes: >>> G.number_of_edges(0, 1) 1 For directed graphs, this method can count the total number of directed edges from u to v: >>> G = nx.DiGraph() >>> G.add_edge(0, 1) >>> G.add_edge(1, 0) >>> G.number_of_edges(0, 1) 1 number_of_nodes() Returns the number of nodes in the graph. Returns nnodes – The number of nodes in the graph. Return type int order(), __len__() Examples >>> G = nx.path_graph(3) # or DiGraph, MultiGraph, MultiDiGraph, etc >>> G.number_of_nodes() 3 order() Returns the number of nodes in the graph. Returns nnodes – The number of nodes in the graph. Return type int number_of_nodes(), __len__() Examples >>> G = nx.path_graph(3) # or DiGraph, MultiGraph, MultiDiGraph, etc >>> G.order() 3 property out_degree An OutDegreeView for (node, out_degree) The node out_degree is the number of edges pointing out of the node. The weighted node degree is the sum of the edge weights for edges incident to that node. This object provides an iterator over (node, out_degree) as well as lookup for the degree for a single node. Parameters • nbunch (single node, container, or all nodes (default= all nodes)) – The view will only report edges incident to these nodes. 
• weight (string or None, optional (default=None)) – The name of an edge attribute that holds the numerical value used as a weight. If None, then each edge has weight 1. The degree is the sum of the edge weights adjacent to the node. Returns • If a single node is requested • deg (int) – Out-degree of the node • OR if multiple nodes are requested • nd_iter (iterator) – The iterator returns two-tuples of (node, out-degree). Examples >>> G = nx.DiGraph() >>> nx.add_path(G, [0, 1, 2, 3]) >>> G.out_degree(0) # node 0 with degree 1 1 >>> list(G.out_degree([0, 1, 2])) [(0, 1), (1, 1), (2, 1)] out_degree_iter(nbunch=None, weight=None) property out_edges An OutEdgeView of the DiGraph as G.edges or G.edges(). edges(self, nbunch=None, data=False, default=None) The OutEdgeView provides set-like operations on the edge-tuples as well as edge attribute lookup. When called, it also provides an EdgeDataView object which allows control of access to edge attributes (but does not provide set-like operations). Hence, G.edges[u, v][‘color’] provides the value of the color attribute for edge (u, v) while for (u, v, c) in G.edges.data(‘color’, default=’red’): iterates through all the edges yielding the color attribute with default ‘red’ if no color attribute exists. Parameters • nbunch (single node, container, or all nodes (default= all nodes)) – The view will only report edges incident to these nodes. • data (string or bool, optional (default=False)) – The edge attribute returned in 3-tuple (u, v, ddict[data]). If True, return edge attribute dict in 3-tuple (u, v, ddict). If False, return 2-tuple (u, v). • default (value, optional (default=None)) – Value used for edges that don’t have the requested attribute. Only relevant if data is not True or False. Returns edges – A view of edge attributes, usually it iterates over (u, v) or (u, v, d) tuples of edges, but can also be used for attribute lookup as edges[u, v][‘foo’]. 
Return type OutEdgeView Notes Nodes in nbunch that are not in the graph will be (quietly) ignored. For directed graphs this returns the out-edges. Examples >>> G = nx.DiGraph() # or MultiDiGraph, etc >>> [e for e in G.edges] [(0, 1), (1, 2), (2, 3)] >>> G.edges.data() # default data is {} (empty dict) OutEdgeDataView([(0, 1, {}), (1, 2, {}), (2, 3, {'weight': 5})]) >>> G.edges.data('weight', default=1) OutEdgeDataView([(0, 1, 1), (1, 2, 1), (2, 3, 5)]) >>> G.edges([0, 2]) # only edges incident to these nodes OutEdgeDataView([(0, 1), (2, 3)]) >>> G.edges(0) # only edges incident to a single node (use G.adj[0]?) OutEdgeDataView([(0, 1)]) property pred Graph adjacency object holding the predecessors of each node. This object is a read-only dict-like structure with node keys and neighbor-dict values. The neighbor-dict is keyed by neighbor to the edge-data-dict. So G.pred[2][3][‘color’] = ‘blue’ sets the color of the edge (3, 2) to “blue”. Iterating over G.pred behaves like a dict. Useful idioms include for nbr, datadict in G.pred[n].items():. A data-view not provided by dicts also exists: for nbr, foovalue in G.pred[node].data(‘foo’): A default can be set via a default argument to the data method. predecessors(n) Returns an iterator over predecessor nodes of n. A predecessor of n is a node m such that there exists a directed edge from m to n. Parameters n (node) – A node in the graph Raises NetworkXError – If n is not in the graph. remove_cpds(*cpds)[source] Removes the cpds that are provided in the argument. Parameters *cpds (list, set, tuple (array-like)) – List of CPDs which are to be associated with the model. Each CPD should be an instance of TabularCPD. Examples >>> from pgmpy.models import DynamicBayesianNetwork as DBN >>> from pgmpy.factors.discrete import TabularCPD >>> dbn = DBN() >>> grade_cpd = TabularCPD(('G',0), 3, [[0.3,0.05,0.9,0.5], ... [0.4,0.25,0.8,0.03], ... 
[0.3,0.7,0.02,0.2]], [('I', 0),('D', 0)],[2,2]) >>> dbn.add_cpds(grade_cpd) >>> dbn.get_cpds() [<TabularCPD representing P(('G', 0):3 | ('I', 0):2, ('D', 0):2) at 0x3348ab0>] >>> dbn.remove_cpds(grade_cpd) >>> dbn.get_cpds() [] remove_edge(u, v) Remove the edge between u and v. Parameters v (u,) – Remove the edge between nodes u and v. Raises NetworkXError – If there is not an edge between u and v. remove_edges_from() remove a collection of edges Examples >>> G = nx.Graph() # or DiGraph, etc >>> nx.add_path(G, [0, 1, 2, 3]) >>> G.remove_edge(0, 1) >>> e = (1, 2) >>> G.remove_edge(*e) # unpacks e from an edge tuple >>> e = (2, 3, {'weight':7}) # an edge with attribute data >>> G.remove_edge(*e[:2]) # select first part of edge tuple remove_edges_from(ebunch) Remove all edges specified in ebunch. Parameters ebunch (list or container of edge tuples) – Each edge given in the list or container will be removed from the graph. The edges can be: • 2-tuples (u, v) edge between u and v. • 3-tuples (u, v, k) where k is ignored. remove_edge() remove a single edge Notes Will fail silently if an edge in ebunch is not in the graph. Examples >>> G = nx.path_graph(4) # or DiGraph, MultiGraph, MultiDiGraph, etc >>> ebunch = [(1, 2), (2, 3)] >>> G.remove_edges_from(ebunch) remove_node(n) Remove node n. Removes the node n and all adjacent edges. Attempting to remove a non-existent node will raise an exception. Parameters n (node) – A node in the graph Raises NetworkXError – If n is not in the graph. Examples >>> G = nx.path_graph(3) # or DiGraph, MultiGraph, MultiDiGraph, etc >>> list(G.edges) [(0, 1), (1, 2)] >>> G.remove_node(1) >>> list(G.edges) [] remove_nodes_from(nodes) Remove multiple nodes. Parameters nodes (iterable container) – A container of nodes (list, dict, set, etc.). If a node in the container is not in the graph it is silently ignored. 
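The remove_edges_from() semantics documented above — 2-tuples or 3-tuples with the third element ignored, and silent failure for edges not in the graph — can be sketched over a plain dict-of-sets adjacency. This is a stand-alone illustrative sketch, not networkx's code:

```python
def remove_edges_from(adj, ebunch):
    """Remove edges from an undirected adjacency dict (dict of sets).
    Accepts 2-tuples (u, v) and 3-tuples (u, v, k) where k is ignored;
    edges not present are skipped silently, as documented above."""
    for edge in ebunch:
        u, v = edge[:2]            # a 3rd element, if any, is ignored
        if u in adj and v in adj[u]:
            adj[u].discard(v)
            adj[v].discard(u)

adj = {0: {1}, 1: {0, 2}, 2: {1}}
remove_edges_from(adj, [(1, 2), (5, 6, 'key')])   # (5, 6) fails silently
print(adj)  # {0: {1}, 1: {0}, 2: set()}
```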
Examples >>> G = nx.path_graph(3) # or DiGraph, MultiGraph, MultiDiGraph, etc >>> e = list(G.nodes) >>> e [0, 1, 2] >>> G.remove_nodes_from(e) >>> list(G.nodes) [] reverse(copy=True) Returns the reverse of the graph. The reverse is a graph with the same nodes and edges but with the directions of the edges reversed. Parameters copy (bool optional (default=True)) – If True, return a new DiGraph holding the reversed edges. If False, the reverse graph is created using a view of the original graph. size(weight=None) Returns the number of edges or total of all edge weights. Parameters weight (string or None, optional (default=None)) – The edge attribute that holds the numerical value used as a weight. If None, then each edge has weight 1. Returns size – The number of edges or (if weight keyword is provided) the total weight sum. If weight is None, returns an int. Otherwise a float (or more general numeric if the weights are more general). Return type numeric Examples >>> G = nx.path_graph(4) # or DiGraph, MultiGraph, MultiDiGraph, etc >>> G.size() 3 >>> G = nx.Graph() # or DiGraph, MultiGraph, MultiDiGraph, etc >>> G.add_edge('a', 'b', weight=2) >>> G.add_edge('b', 'c', weight=4) >>> G.size() 2 >>> G.size(weight='weight') 6.0 subgraph(nodes) Returns a SubGraph view of the subgraph induced on nodes. The induced subgraph of the graph contains the nodes in nodes and the edges between those nodes. Parameters nodes (list, iterable) – A container of nodes which will be iterated through once. Returns G – A subgraph view of the graph. The graph structure cannot be changed but node/edge attributes can and are shared with the original graph. Return type SubGraph View Notes The graph, edge and node attributes are shared with the original graph. Changes to the graph structure are ruled out by the view, but changes to attributes are reflected in the original graph. 
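The size(weight=...) behaviour above — counting edges, or summing a weight attribute with missing weights defaulting to 1 — can be sketched over a plain edge list. An illustrative sketch only, not networkx internals:

```python
def size(edges, weight=None):
    """Number of edges, or the sum of the given weight attribute
    (missing attributes count as 1), mirroring size() above."""
    if weight is None:
        return len(edges)
    return float(sum(data.get(weight, 1) for _, _, data in edges))

edges = [('a', 'b', {'weight': 2}), ('b', 'c', {'weight': 4}), ('c', 'd', {})]
print(size(edges))                   # 3
print(size(edges, weight='weight'))  # 7.0  (2 + 4 + 1)
```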
To create a subgraph with its own copy of the edge/node attributes use: G.subgraph(nodes).copy() For an inplace reduction of a graph to a subgraph you can remove nodes: G.remove_nodes_from([n for n in G if n not in set(nodes)]) Subgraph views are sometimes NOT what you want. In most cases where you want to do more than simply look at the induced edges, it makes more sense to just create the subgraph as its own graph with code like:
# Create a subgraph SG based on a (possibly multigraph) G
SG = G.__class__()
SG.add_nodes_from((n, G.nodes[n]) for n in largest_wcc)
if SG.is_multigraph():
    SG.add_edges_from((n, nbr, key, d)
        for n, nbrs in G.adj.items() if n in largest_wcc
        for nbr, keydict in nbrs.items() if nbr in largest_wcc
        for key, d in keydict.items())
else:
    SG.add_edges_from((n, nbr, d)
        for n, nbrs in G.adj.items() if n in largest_wcc
        for nbr, d in nbrs.items() if nbr in largest_wcc)
SG.graph.update(G.graph)
Examples >>> G = nx.path_graph(4) # or DiGraph, MultiGraph, MultiDiGraph, etc >>> H = G.subgraph([0, 1, 2]) >>> list(H.edges) [(0, 1), (1, 2)] property succ Graph adjacency object holding the successors of each node. This object is a read-only dict-like structure with node keys and neighbor-dict values. The neighbor-dict is keyed by neighbor to the edge-data-dict. So G.succ[3][2][‘color’] = ‘blue’ sets the color of the edge (3, 2) to “blue”. Iterating over G.succ behaves like a dict. Useful idioms include for nbr, datadict in G.succ[n].items():. A data-view not provided by dicts also exists: for nbr, foovalue in G.succ[node].data(‘foo’): and a default can be set via a default argument to the data method. The neighbor information is also provided by subscripting the graph. So for nbr, foovalue in G[node].data(‘foo’, default=1): works. For directed graphs, G.adj is identical to G.succ. successors(n) Returns an iterator over successor nodes of n. A successor of n is a node m such that there exists a directed edge from n to m. Parameters n (node) – A node in the graph Raises NetworkXError – If n is not in the graph. 
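The induced-subgraph idea behind the recipe above can be restated for a plain directed adjacency dict; unlike the view returned by subgraph(), this builds an independent copy. An illustrative stdlib sketch, not networkx internals:

```python
def induced_subgraph(adj, nodes):
    """Independent copy of the subgraph induced on `nodes` from a
    directed adjacency dict (dict of sets): keep only the selected
    nodes and the edges whose both endpoints are selected."""
    keep = set(nodes) & set(adj)
    return {n: {m for m in adj[n] if m in keep} for n in keep}

adj = {0: {1}, 1: {2}, 2: {3}, 3: set()}
sub = induced_subgraph(adj, [0, 1, 2])
print(sub)  # {0: {1}, 1: {2}, 2: set()}
```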
Notes neighbors() and successors() are the same. to_directed(as_view=False) Returns a directed representation of the graph. Returns G – A directed graph with the same name, same nodes, and with each edge (u, v, data) replaced by two directed edges (u, v, data) and (v, u, data). Return type DiGraph Notes This returns a “deepcopy” of the edge, node, and graph attributes which attempts to completely copy all of the data and references. This is in contrast to the similar D=DiGraph(G) which returns a shallow copy of the data. See the Python copy module for more information on shallow and deep copies, https://docs.python.org/2/library/copy.html. Warning: If you have subclassed Graph to use dict-like objects in the data structure, those changes do not transfer to the DiGraph created by this method. Examples >>> G = nx.Graph() # or MultiGraph, etc >>> G.add_edge(0, 1) >>> H = G.to_directed() >>> list(H.edges) [(0, 1), (1, 0)] If already directed, return a (deep) copy >>> G = nx.DiGraph() # or MultiDiGraph, etc >>> G.add_edge(0, 1) >>> H = G.to_directed() >>> list(H.edges) [(0, 1)] to_directed_class() Returns the class to use for empty directed copies. If you subclass the base classes, use this to designate what directed class to use for to_directed() copies. to_undirected(reciprocal=False, as_view=False) Returns an undirected representation of the digraph. Parameters • reciprocal (bool (optional)) – If True only keep edges that appear in both directions in the original digraph. • as_view (bool (optional, default=False)) – If True return an undirected view of the original directed graph. Returns G – An undirected graph with the same name and nodes and with edge (u, v, data) if either (u, v, data) or (v, u, data) is in the digraph. If both edges exist in digraph and their edge data is different, only one edge is created with an arbitrary choice of which edge data to use. You must check and correct for this manually if desired. 
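The to_directed() conversion above — each undirected edge becoming two directed edges carrying deep-copied data — can be sketched over a plain edge list. An illustrative sketch, not the networkx implementation:

```python
import copy

def to_directed(edges):
    """Each undirected edge (u, v, data) becomes the two directed edges
    (u, v, data) and (v, u, data), with the data dict deep-copied so the
    two directions do not share state, matching the semantics above."""
    out = []
    for u, v, data in edges:
        out.append((u, v, copy.deepcopy(data)))
        out.append((v, u, copy.deepcopy(data)))
    return out

print(to_directed([(0, 1, {'w': 1})]))
# [(0, 1, {'w': 1}), (1, 0, {'w': 1})]
```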
Return type Graph Notes If edges in both directions (u, v) and (v, u) exist in the graph, attributes for the new undirected edge will be a combination of the attributes of the directed edges. The edge data is updated in the (arbitrary) order that the edges are encountered. For more customized control of the edge attributes use add_edge(). This returns a “deepcopy” of the edge, node, and graph attributes which attempts to completely copy all of the data and references. This is in contrast to the similar G=DiGraph(D) which returns a shallow copy of the data. See the Python copy module for more information on shallow and deep copies, https://docs.python.org/2/library/copy.html. Warning: If you have subclassed DiGraph to use dict-like objects in the data structure, those changes do not transfer to the Graph created by this method. Examples >>> G = nx.path_graph(2) # or MultiGraph, etc >>> H = G.to_directed() >>> list(H.edges) [(0, 1), (1, 0)] >>> G2 = H.to_undirected() >>> list(G2.edges) [(0, 1)] to_undirected_class() Returns the class to use for empty undirected copies. If you subclass the base classes, use this to designate what undirected class to use for to_undirected() copies. update(edges=None, nodes=None) Update the graph using nodes/edges/graphs as input. Like dict.update, this method takes a graph as input, adding the graph’s nodes and edges to this graph. It can also take two inputs: edges and nodes. Finally it can take either edges or nodes. To specify only nodes the keyword nodes must be used. The collections of edges and nodes are treated similarly to the add_edges_from/add_nodes_from methods. When iterated, they should yield 2-tuples (u, v) or 3-tuples (u, v, datadict). Parameters • edges (Graph object, collection of edges, or None) – The first parameter can be a graph or some edges. If it has attributes nodes and edges, then it is taken to be a Graph-like object and those attributes are used as collections of nodes and edges to be added to the graph. 
If the first parameter does not have those attributes, it is treated as a collection of edges and added to the graph. If the first argument is None, no edges are added. • nodes (collection of nodes, or None) – The second parameter is treated as a collection of nodes to be added to the graph unless it is None. If edges is None and nodes is None an exception is raised. If the first parameter is a Graph, then nodes is ignored. Examples >>> G = nx.path_graph(5) >>> G.update(nx.complete_graph(range(4,10))) >>> from itertools import combinations >>> edges = ((u, v, {'power': u * v}) ... for u, v in combinations(range(10, 20), 2) ... if u * v < 225) >>> nodes = [1000] # for singleton, use a container >>> G.update(edges, nodes) Notes If you want to update the graph using an adjacency structure it is straightforward to obtain the edges/nodes from adjacency. The following examples provide common cases, your adjacency may be slightly different and require tweaks of these examples. >>> # dict-of-set/list/tuple >>> adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}} >>> e = [(u, v) for u, nbrs in adj.items() for v in nbrs] >>> DG = nx.DiGraph() >>> # dict-of-dict-of-attribute >>> adj = {1: {2: 1.3, 3: 0.7}, 2: {1: 1.4}, 3: {1: 0.7}} >>> e = [(u, v, {'weight': d}) for u, nbrs in adj.items() ... for v, d in nbrs.items()] >>> # dict-of-dict-of-dict >>> adj = {1: {2: {'weight': 1.3}, 3: {'color': 0.7, 'weight':1.2}}} >>> e = [(u, v, {'weight': d}) for u, nbrs in adj.items() ... for v, d in nbrs.items()] >>> # predecessor adjacency (dict-of-set) >>> pred = {1: {2, 3}, 2: {3}, 3: {3}} >>> e = [(v, u) for u, nbrs in pred.items() for v in nbrs] >>> # MultiGraph dict-of-dict-of-dict-of-attribute >>> MDG = nx.MultiDiGraph() >>> adj = {1: {2: {0: {'weight': 1.3}, 1: {'weight': 1.2}}}, ... 3: {2: {0: {'weight': 0.7}}}} >>> e = [(u, v, ekey, d) for u, nbrs in adj.items() ... for v, keydict in nbrs.items() ... 
for ekey, d in keydict.items()] >>> MDG.update(edges=e) add_edges_from() add multiple edges to a graph add_nodes_from() add multiple nodes to a graph ## Structural Equation Models class pgmpy.models.SEM.SEM(syntax, **kwargs)[source] Class for representing Structural Equation Models. This class is a wrapper over SEMGraph and SEMAlg to provide a consistent API over the different representations. model A graphical representation of the model. Type SEMGraph instance fit()[source] classmethod from_RAM(variables, B, zeta, observed=None, wedge_y=None, fixed_values=None)[source] Initializes a SEM instance using Reticular Action Model (RAM) notation. The model is defined as: .. math:: \eta = B \eta + \epsilon \\ y = \wedge_y \eta \\ \zeta = COV(\epsilon) where \eta is the set of variables (both latent and observed), \epsilon are the error terms, y is the set of observed variables, and \wedge_y is a boolean array of shape (no. of observed variables, no. of total variables). Parameters • variables (list, array-like) – List of variables (both latent and observed) in the model. • B (2-D boolean array (shape: len(variables) x len(variables))) – The non-zero parameters in the B matrix. Refer model definition in docstring for details. • zeta (2-D boolean array (shape: len(variables) x len(variables))) – The non-zero parameters in the \zeta (error covariance) matrix. Refer model definition in docstring for details. • observed (list, array-like (optional: Either observed or wedge_y needs to be specified)) – List of observed variables in the model. • wedge_y (2-D array (shape: no. observed x total vars) (optional: Either observed or wedge_y)) – The \wedge_y matrix. Refer model definition in docstring for details. • fixed_values (dict (optional)) – If specified, fixes the parameter values, which are then not changed during estimation. A dict with the keys B, zeta. Returns pgmpy.models.SEM instance Return type An instance of the object with initialized values. 
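The RAM equations above can be evaluated directly when B is acyclic (strictly lower-triangular after a topological ordering): solve eta = B·eta + eps by forward substitution, then project with wedge_y. A minimal stdlib sketch with made-up numbers, not pgmpy's implementation:

```python
# Tiny RAM evaluation: eta = B @ eta + eps, y = wedge_y @ eta,
# solved by forward substitution since B is strictly lower-triangular.
def solve_eta(B, eps):
    n = len(eps)
    eta = [0.0] * n
    for i in range(n):
        eta[i] = eps[i] + sum(B[i][j] * eta[j] for j in range(i))
    return eta

B = [[0.0, 0.0],
     [0.5, 0.0]]          # eta2 = 0.5 * eta1 + eps2
eps = [2.0, 1.0]
eta = solve_eta(B, eps)
print(eta)                # [2.0, 2.0]

wedge_y = [[0, 1]]        # observe only eta2
y = [sum(w * e for w, e in zip(row, eta)) for row in wedge_y]
print(y)                  # [2.0]
```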
Examples >>> from pgmpy.models import SEM >>> SEM.from_RAM(TODO: Finish this) classmethod from_graph(ebunch, latents=[], err_corr=[], err_var={})[source] Initializes a SEM instance using a graphical structure. Parameters • ebunch (list/array-like) – List of edges in form of tuples. Each tuple can be of two possible shapes: 1. (u, v): This would add an edge from u to v without setting any parameter for the edge. 2. (u, v, parameter): This would add an edge from u to v and set the edge’s parameter to parameter. • latents (list/array-like) – List of nodes which are latent. All other variables are considered observed. • err_corr (list/array-like) – List of tuples representing edges between error terms. It can be of the following forms: 1. (u, v): Add correlation between error terms of u and v. Doesn’t set any variance or covariance values. 2. (u, v, covar): Adds correlation between the error terms of u and v and sets the parameter to covar. • err_var (dict) – Dict of the form (var: variance). Examples Defining a model (Union sentiment model [1]) without setting any parameters. >>> from pgmpy.models import SEM >>> sem = SEM.from_graph(ebunch=[('deferenc', 'unionsen'), ('laboract', 'unionsen'), ... ('yrsmill', 'unionsen'), ('age', 'deferenc'), ... ('age', 'laboract'), ('deferenc', 'laboract')], ... latents=[], ... err_corr=[('yrsmill', 'age')], ... err_var={}) Defining a model (Education [2]) with all the parameters set. For not setting any parameter np.NaN can be explicitly passed. >>> sem_edu = SEM.from_graph(ebunch=[('intelligence', 'academic', 0.8), ('intelligence', 'scale_1', 0.7), ... ('intelligence', 'scale_2', 0.64), ('intelligence', 'scale_3', 0.73), ... ('intelligence', 'scale_4', 0.82), ('academic', 'SAT_score', 0.98), ... ('academic', 'High_school_gpa', 0.75), ('academic', 'ACT_score', 0.87)], ... latents=['intelligence', 'academic'], ... err_corr=[], ... err_var={}) References [1] McDonald, A, J., & Clelland, D. A. (1984). Textile Workers and Union Sentiment. 
Social Forces, 63(2), 502–521 [2] https://en.wikipedia.org/wiki/Structural_equation_modeling#/ classmethod from_lavaan(string=None, filename=None)[source] Initializes a SEM instance using lavaan syntax. Parameters • string (str (default: None)) – A lavaan style multiline set of regression equations representing the model. Refer http://lavaan.ugent.be/tutorial/syntax1.html for details. • filename (str (default: None)) – The filename of the file containing the model in lavaan syntax. Examples classmethod from_lisrel(var_names, params, fixed_masks=None)[source] Initializes a SEM instance using LISREL notation. The LISREL notation is defined as: .. math:: \eta = B \eta + \Gamma \xi + \zeta \\ y = \wedge_y \eta + \epsilon \\ x = \wedge_x \xi + \delta where \eta is the set of endogenous variables, \xi is the set of exogenous variables, y and x are the sets of measurement variables for \eta and \xi respectively, and \zeta, \epsilon, and \delta are the error terms for \eta, y, and x respectively. Parameters • str_model (str (default: None)) – A lavaan style multiline set of regression equations representing the model. Refer http://lavaan.ugent.be/tutorial/syntax1.html for details. If None, requires var_names and params to be specified. • var_names (dict (default: None)) – A dict with the keys: eta, xi, y, and x. Each key should have a list of variable names as its value. • params (dict (default: None)) – A dict of LISREL representation non-zero parameters. Must contain the following keys: B, gamma, wedge_y, wedge_x, phi, theta_e, theta_del, and psi. If None, str_model must be specified. • fixed_params (dict (default: None)) – A dict of fixed values for parameters. The shape of the parameters should be the same as params. If None, all the parameters are learnable. Returns pgmpy.models.SEM instance Return type An instance of the object with initialized values. 
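Assuming the lavaan operators described in the linked tutorial (~ for regressions, =~ for latent variable definitions), a toy parser conveys the kind of input from_lavaan() consumes. This is a deliberately simplified sketch, not pgmpy's parser, and it ignores covariance (~~) lines:

```python
def parse_lavaan(model):
    """Tiny sketch of lavaan-style parsing: 'y ~ x1 + x2' adds edges
    x1 -> y and x2 -> y; 'f =~ i1 + i2' marks f latent and adds
    f -> i1, f -> i2.  Deliberately simplified; not pgmpy's parser."""
    edges, latents = [], []
    for line in model.strip().splitlines():
        if '=~' in line:                      # latent variable definition
            lhs, rhs = line.split('=~')
            latent = lhs.strip()
            latents.append(latent)
            edges += [(latent, ind.strip()) for ind in rhs.split('+')]
        elif '~' in line:                     # regression
            lhs, rhs = line.split('~')
            edges += [(pred.strip(), lhs.strip()) for pred in rhs.split('+')]
    return edges, latents

model = """
f =~ i1 + i2
y ~ f + x
"""
print(parse_lavaan(model))
# ([('f', 'i1'), ('f', 'i2'), ('f', 'y'), ('x', 'y')], ['f'])
```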
Examples >>> from pgmpy.models import SEMAlg # TODO: Finish this example class pgmpy.models.SEM.SEMAlg(eta=None, B=None, zeta=None, wedge_y=None, fixed_values=None)[source] Base class for the algebraic representation of Structural Equation Models (SEMs). The model is represented using the Reticular Action Model (RAM). generate_samples(n_samples=100)[source] Generates random samples from the model. Parameters n_samples (int) – The number of samples to generate. Returns pd.DataFrame Return type The generated samples. set_params(B, zeta)[source] Sets the fixed parameters of the model. Parameters • B (2D array) – The B matrix. • zeta (2D array) – The covariance matrix. to_SEMGraph()[source] Creates a graph structure from the LISREL representation. Returns pgmpy.models.SEMGraph instance Return type A path model of the model. Examples >>> from pgmpy.models import SEMAlg >>> model = SEMAlg() # TODO: Finish this example class pgmpy.models.SEM.SEMGraph(ebunch=[], latents=[], err_corr=[], err_var={})[source] Base class for the graphical representation of Structural Equation Models (SEMs). All variables are by default assumed to have an associated error latent variable, which therefore does not need to be specified. latents List of all the latent variables in the model except the error terms. Type list observed List of all the observed variables in the model. Type list graph The graphical structure of the latent and observed variables except the error terms. The parameters are stored in the weight attribute of each edge. Type nx.DirectedGraph err_graph An undirected graph representing the relations between the error terms of the model. Each node of the graph has the same name as the corresponding variable but represents its error term. The variance is stored in the weight attribute of the node and the covariance is stored in the weight attribute of the edge. Type nx.Graph full_graph_struct Represents the full graph structure. The names of error terms start with . 
and a new node, whose name starts with .., is added for each error correlation. Type nx.DiGraph active_trail_nodes(variables, observed=[], avoid_nodes=[], struct='full')[source] Finds all the observed variables which are d-connected to variables in the graph_struct when the observed variables are observed. Parameters • variables (str or array like) – Observed variables whose d-connected variables are to be found. • observed (list/array-like) – If given, the active trails are computed assuming these nodes to be observed. • avoid_nodes (list/array-like) – If specified, the algorithm doesn’t account for paths that have influence flowing through the avoid nodes. • struct (str or nx.DiGraph instance) – If “full”, considers correlation between error terms for computing d-connection. If “non_error”, doesn’t consider error correlations for computing d-connection. If an instance of nx.DiGraph, finds d-connected variables on the given graph. Examples >>> from pgmpy.models import SEMGraph >>> model = SEMGraph(ebunch=[('yrsmill', 'unionsen'), ('age', 'laboract'), ... ('age', 'deferenc'), ('deferenc', 'laboract'), ... ('deferenc', 'unionsen'), ('laboract', 'unionsen')], ... latents=[], ... err_corr=[('yrsmill', 'age')]) >>> model.active_trail_nodes('age') Returns dict – Returns a dict with variables as the key and a list of d-connected variables as the value. Return type {str: list} References Details of the algorithm can be found in ‘Probabilistic Graphical Model Principles and Techniques’ - Koller and Friedman Page 75 Algorithm 3.1 get_conditional_ivs(X, Y, scaling_indicators={})[source] Returns the conditional IVs for the relation X -> Y Parameters • X (node) – The observed variable’s name • Y (node) – The observed variable’s name • scaling_indicators (dict (optional)) – A dict representing which observed variable to use as scaling indicator for the latent variables. 
If not provided, automatically finds scaling indicators by randomly selecting one of the measurement variables of each latent variable. Returns set Return type Set of 2-tuples where tuple[0] is an IV for X -> Y given tuple[1] References 1 Van Der Zander, B., Textor, J., & Liskiewicz, M. (2015, June). Efficiently finding conditional instruments for causal inference. In Twenty-Fourth International Joint Conference on Artificial Intelligence. Examples >>> from pgmpy.models import SEMGraph >>> model = SEMGraph(ebunch=[('I', 'X'), ('X', 'Y'), ('W', 'I')], ... latents=[], ... err_corr=[('W', 'Y')]) >>> model.get_conditional_ivs('X', 'Y') [('I', {'W'})] get_ivs(X, Y, scaling_indicators={})[source] Returns the Instrumental Variables (IVs) for the relation X -> Y Parameters • X (node) – The variable name (observed or latent) • Y (node) – The variable name (observed or latent) • scaling_indicators (dict (optional)) – A dict representing which observed variable to use as scaling indicator for the latent variables. If not given, the method automatically selects one of the measurement variables at random as the scaling indicator. Returns set – The set of Instrumental Variables for X -> Y. Return type {str} Examples >>> from pgmpy.models import SEMGraph >>> model = SEMGraph(ebunch=[('I', 'X'), ('X', 'Y')], ... latents=[], ... err_corr=[('X', 'Y')]) >>> model.get_ivs('X', 'Y') {'I'} get_scaling_indicators()[source] Returns a scaling indicator for each of the latent variables in the model. The scaling indicator is chosen randomly among the observed measurement variables of the latent variable. Examples >>> from pgmpy.models import SEMGraph >>> model = SEMGraph(ebunch=[('xi1', 'eta1'), ('xi1', 'x1'), ('xi1', 'x2'), ... ('eta1', 'y1'), ('eta1', 'y2')], ... latents=['xi1', 'eta1']) >>> model.get_scaling_indicators() {'xi1': 'x1', 'eta1': 'y1'} Returns dict – A dict with the latent variables as keys and their scaling indicators as values. 
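The d-connection search that active_trail_nodes() documents above (Koller & Friedman, Algorithm 3.1) can be sketched for a plain edge list. This stdlib sketch ignores error-term correlations and avoid_nodes; all names are illustrative:

```python
from collections import defaultdict

def active_trail_nodes(edges, start, observed=()):
    """Sketch of Koller & Friedman Algorithm 3.1: all nodes reachable
    from `start` via an active trail given the `observed` nodes."""
    parents, children = defaultdict(set), defaultdict(set)
    for u, v in edges:
        children[u].add(v)
        parents[v].add(u)
    obs = set(observed)
    # Ancestors of observed nodes: colliders in this set are active.
    anc, stack = set(obs), list(obs)
    while stack:
        n = stack.pop()
        for p in parents[n]:
            if p not in anc:
                anc.add(p)
                stack.append(p)
    reached, visited = set(), set()
    stack = [(start, 'up')]
    while stack:
        node, direction = stack.pop()
        if (node, direction) in visited:
            continue
        visited.add((node, direction))
        if node not in obs:
            reached.add(node)
        if direction == 'up' and node not in obs:
            stack += [(p, 'up') for p in parents[node]]
            stack += [(c, 'down') for c in children[node]]
        elif direction == 'down':
            if node not in obs:
                stack += [(c, 'down') for c in children[node]]
            if node in anc:  # collider with an observed descendant: active
                stack += [(p, 'up') for p in parents[node]]
    return reached

edges = [('A', 'C'), ('B', 'C')]              # collider A -> C <- B
print(active_trail_nodes(edges, 'A'))         # {'A', 'C'} — B is blocked
print(active_trail_nodes(edges, 'A', ['C']))  # {'A', 'B'} — collider opened
```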
Return type Returns a dict with latent variables as the key and their value being the moralize(graph='full')[source] TODO: This needs to go to a parent class. Removes all the immoralities in the DirectedGraph and creates a moral graph (UndirectedGraph). A v-structure X->Z<-Y is an immorality if there is no directed edge between X and Y. Parameters graph Examples to_lisrel()[source] Converts the model from a graphical representation to an equivalent algebraic representation. This converts the model into a Reticular Action Model (RAM) model representation which is implemented by pgmpy.models.SEMAlg class. Returns SEMAlg instance Return type Instance of SEMAlg representing the model. Examples >>> from pgmpy.models import SEM >>> sem = SEM.from_graph(ebunch=[('deferenc', 'unionsen'), ('laboract', 'unionsen'), ... ('yrsmill', 'unionsen'), ('age', 'deferenc'), ... ('age', 'laboract'), ('deferenc', 'laboract')], ... latents=[], ... err_corr=[('yrsmill', 'age')], ... err_var={}) >>> sem.to_lisrel() # TODO: Complete this. to_standard_lisrel() Converts to the standard lisrel format and returns the parameters. to_standard_lisrel()[source] Transforms the model to the standard LISREL representation of latent and measurement equations. The standard LISREL representation is given as: ..math:: mathbf{eta} = mathbf{B eta} + mathbf{Gamma xi} + mathbf{zeta} \ mathbf{y} = mathbf{wedge_y eta} + mathbf{epsilon} \ mathbf{x} = mathbf{wedge_x xi} + mathbf{delta} \ mathbf{Theta_e} = COV(mathbf{epsilon}) \ mathbf{Theta_delta} = COV(mathbf{delta}) \ mathbf{Psi} = COV(mathbf{eta}) \ mathbf{Phi} = COV(mathbf{xi}) \ Since the standard LISREL representation has restrictions on the types of model, this method adds extra latent variables with fixed loadings of 1 to make the model consistent with the restrictions. Returns • var_names (dict (keys: eta, xi, y, x)) – Returns the variable names in , , , . 
• params (dict (keys: B, gamma, wedge_y, wedge_x, theta_e, theta_del, phi, psi)) – Returns a boolean matrix for each of the parameters. A 1 in the matrix represents that there is an edge in the model; a 0 represents that there is no edge.
• fixed_values (dict (keys: B, gamma, wedge_y, wedge_x, theta_e, theta_del, phi, psi)) – Returns a matrix for each of the parameters. A non-zero value in the matrix represents the fixed value set for that parameter in the model; otherwise the entry is 0.
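The moralization step described in the `moralize` entry above can be sketched independently of pgmpy, using a plain edge-list representation (the function and data structures here are illustrative, not pgmpy's actual implementation):

```python
from itertools import combinations

def moralize(edges):
    """Return the edge set of the moral graph of a DAG given as (parent, child) pairs.

    For every node, 'marry' all pairs of its parents, then drop edge directions.
    """
    parents = {}
    for u, v in edges:
        parents.setdefault(v, set()).add(u)
    # Drop directions: represent each undirected edge as a frozenset.
    undirected = {frozenset(e) for e in edges}
    for node, ps in parents.items():
        for a, b in combinations(sorted(ps), 2):
            undirected.add(frozenset((a, b)))  # marry co-parents
    return undirected

# The v-structure X -> Z <- Y gains the moral edge X - Y.
moral = moralize([('X', 'Z'), ('Y', 'Z')])
```

This mirrors the definition in the docstring: the immorality X->Z<-Y disappears because X and Y become adjacent in the undirected graph.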
https://assets.aicrowd.com/challenges/learn-to-race-autonomous-racing-virtual-challenge
📹 Get started with the challenge: Walkthrough, submission process, and leaderboard 👥 Find teammates here 📝 Participate in the Community Contribution Prize here 🚀 Fork the Starter Kit here! 🏗️ Claim your training credits here

## 🔥 Introduction

Welcome to the Learn-to-Race Autonomous Racing Virtual Challenge! As autonomous technology approaches maturity, it is of paramount importance for autonomous vehicles to adhere to safety specifications, whether in urban driving or high-speed racing. Racing demands that each vehicle drive at its physical limits with little margin for safety, when any infraction could lead to catastrophic failure. Given this inherent tension, autonomous racing serves as a particularly challenging proving ground for safe learning algorithms. The objective of the Learn-to-Race competition is to push the boundary of autonomous technology, with a focus on achieving the safety benefits of autonomous driving. In this competition, you will develop a reinforcement learning (RL) agent to drive as fast as possible, while adhering to safety constraints.

In Stage 1, participants will develop and evaluate their agents on Thruxton Circuit (top), which is included with the Learn-to-Race environment. In Stage 2, participants will be evaluated on an unseen track, the North Road Track at the Las Vegas Motor Speedway (bottom), with the opportunity to 'practice' with unfrozen model weights for one hour prior to evaluation.

## 🎮 The Learn-to-Race Framework

Learn-to-Race is an open-source, Gym-compliant framework that leverages a high-fidelity racing simulator developed by Arrival. Arrival's simulator not only captures complex vehicle dynamics and renders photorealistic views, but also plays a key role in bringing autonomous racing technology to real life in the Roborace series, the world's first extreme competition of teams developing self-driving AI. Refer to learn-to-race.org to learn more.
Learn-to-Race provides access to customizable, multimodal sensory inputs. One can access RGB images from any specified location, semantic segmentation, and vehicle states (e.g. pose, velocity). During local development, the participants may use any of these inputs. During evaluation, the agents will ONLY have access to speed and RGB images from cameras placed on the front, right, and left of the vehicle.

## 🏋️ Competition Structure

### Format

The Learn-to-Race challenge tests an agent's ability to execute the requisite behaviors for competition-style track racing, through multimodal perceptual input. The competition consists of 2 stages.

• In Stage 1, participants will submit model checkpoints to AIcrowd for evaluation on Thruxton Circuit. The submissions will first be ranked on success rate, and then submissions with the same success rate will be ranked on average speed. Aside from Thruxton Circuit, additional race tracks are available in the Learn-to-Race environment for development.
• The top 10 teams on the leaderboard will enter Stage 2, where their agents will be evaluated on an unseen track. The top-performing teams will submit their models (with initialization) to AIcrowd for training on the unseen track for a fixed period of one hour. During the one-hour 'practice' period, participants are free to perform any model updates or exploration strategies of their choice; the number of safety infractions will be accumulated, under the consideration that an autonomous agent should remain safe throughout its interaction with the environment. After the 'practice' period, the agent will be evaluated on the unseen track. The participating teams will first be ranked on success rate, and then submissions with the same success rate will be ranked on a weighted sum of the number of safety infractions and the average speed.
• Specifically, we will weigh the number of safety infractions and the average speed based on: $$x = {-b \pm \sqrt{b^2-4ac} \over 2a}$$.
• The max / median will be computed over the metrics from all Stage 2 participants.
• To prevent participants from achieving a high success rate by driving very slowly, we will set a maximum episode length based on an average speed of 30 km/h during evaluation.

### 📜 Rules

• Limited to 5 submissions every 24 hours
• Only have access to speed and RGB images from cameras placed on the front, right, and left of the vehicle during evaluation
• Restricted from accessing model weights or custom logs during evaluation
• Required to submit source code, for top performers

## 📝 Evaluation Metrics

### Success Rate

• Each race track will be partitioned into a fixed number of segments, and the success rate is calculated as the number of successfully completed segments over the total number of segments.
• If the agent fails at a certain segment, it will respawn stationary at the beginning of the next segment.
• If the agent successfully completes a segment, it will continue on to the next segment, carrying over its current speed.
• A higher success rate is better.

### Average Speed

• Average speed is defined as the total distance travelled over time, which is used as a proxy for performance.
• As this is Formula-style racing, higher speed is better.

### Number of Safety Infractions

• The number of safety infractions is accumulated during the 1-hour 'practice' period in Stage 2 of the competition.
• The agent is considered to have incurred a safety infraction if 2 wheels of the vehicle leave the drivable area, the vehicle collides with an object, or the vehicle does not make sufficient progress (e.g. gets stuck).
• In Learn-to-Race, the episode terminates upon a safety infraction.
• A smaller number of safety infractions is better, i.e. the agent is safer.
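The two Stage 1 ranking metrics described above reduce to simple formulas; a minimal sketch (function names are mine, not part of the competition API, and distance/time units are assumed to be metres and seconds):

```python
def success_rate(segment_results):
    """Fraction of track segments completed, given True/False per segment."""
    return sum(segment_results) / len(segment_results)

def average_speed_kmh(total_distance_m, total_time_s):
    """Total distance over time, converted from m/s to km/h (the tiebreaker metric)."""
    return (total_distance_m / total_time_s) * 3.6

rate = success_rate([True, True, False, True])  # 3 of 4 segments completed
speed = average_speed_kmh(5000, 600)            # 5 km in 10 minutes -> 30 km/h
```

Note that the 30 km/h floor mentioned above means an agent averaging exactly `speed` here would be at the slowest pace the maximum episode length allows.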
## 🚀 Getting Started

Please complete the following steps, in order to get started:

• To obtain access to the autonomous racing simulator, go to the 'Resources' tab on the challenge page, and sign the license agreement that will allow you to download the simulator. (We suggest that you do this as soon as possible.)
• PI is the principal investigator. If you are a part of a team with a lead researcher, please fill in their information. Otherwise, it'll be your name.
• Clone the official L2R starter kit, to obtain the Learn-to-Race training framework, baselines, and starter code templates.
• Review the documentation, as well as additional notes/suggestions, for more information on installation, running agents, and evaluation.

Here is a summary of good material to get started with the Learn-to-Race Competition:

Papers:

• Learn-to-Race: A Multimodal Control Environment for Autonomous Racing, James Herman*, Jonathan Francis*, Siddha Ganju, Bingqing Chen, Anirudh Koul, Abhinav Gupta, Alexey Skabelkin, Ivan Zhukov, Max Kumskoy, Eric Nyberg, ICCV 2021 [PDF] [Code]
• Safe Autonomous Racing via Approximate Reachability on Ego-vision, Bingqing Chen, Jonathan Francis, James Herman, Jean Oh, Eric Nyberg, Sylvia L. Herbert [PDF]

Video Instructions:

## 📢 Update: 17-Jan-21

### Changes to the StarterKit and Code Documentation

This post concerns recent changes and patches made to the starter kit. These patches deal with recent issues that contestants were facing, regarding: stability, metrics calculations, and agent initialisation. Additionally, camera configuration interfaces were optimised for simplicity, and codebase documentation was updated and extended. Some changes included in this patch necessitate re-evaluation of previous submissions, which may affect leaderboard results.
Here is the changelog:

1. Simplified the camera interface code, for environment/simulator interaction.
2. Added additional camera configurations for other sensors that are permitted for use during training.
3. Resolved agent initialisation issues, related to yaw ambiguity; this corrects situations where agents respawned with incorrect orientation after failing track segments. Previously, this produced spurious results, where agents were assigned incorrect segment-completion metrics during evaluation. This fix may affect leaderboard results.
4. Provided additional agent tracking information, displayed on the console during training and evaluation.
5. Revised code documentation, to incorporate recent inquiries.

We hope participants find these changes helpful! Participants are strongly encouraged to incorporate these changes as soon as possible. In order to do this, please initiate a merge request from the upstream repository to your respective forked repositories. https://gitlab.aicrowd.com/learn-to-race/l2r-starter-kit

Claim your $50 training credits here.

## 🏆 Prizes

We are proud to have AWS sponsor generous prizes for this challenge! The Learn-to-Race Challenge gives a special invitation to the Top 10 teams to collaborate, improve L2R, and jointly advance the field of research. Read below for more details on the prizes 👇

### Top 3 teams on the leaderboard will get

• $1,000 worth of AWS credits each
• 1 DeepRacer car each

### All Top 10 teams on the leaderboard will get

• Mentorship from the organizing committee and the Safe Learning for Autonomous Driving Workshop to author papers for submission to that workshop at an AI conference.
### Top 10 Community Contributors will get

• \$100 worth of AWS credits each. Read here for details on the Community Contribution Prize.

## ⏱️ Timeline

### Stage 1

• 6th December '21 - 28th February '22
• Code Review from 25th Feb to 4th March (The L2R competition organizers will review the code to confirm sensor inputs and correctness of model development)

### Stage 2

• 4th March '22 - 14th March '22
• Code review - 15th March '22 to 22nd March '22
• Embargo Announcements (only to winners) - 23rd March '22. The L2R team will then work with the top 3 winning teams to curate their solutions for a presentation at the AIcrowd Townhall, until 28th March '22
• Public Townhall for Q&A, next steps (conference workshops), and winner announcements - 1st April '22

## 🤖 Team

• Jonathan Francis (CMU)
• Siddha Ganju (CMU alum)
• Shravya Bhat (CMU)
• Sidharth Kathpal (CMU)
• Bingqing Chen (CMU)
• James Herman (CMU alum)
• Ivan Zhukov (Arrival)
• Max Kumskoy (Arrival)
• Jyotish P (AIcrowd)
• Sahika Genc (AWS)
• Cameron Peron (AWS)

The Challenge is organized and hosted by AIcrowd, with the provision of challenge code, simulators, and challenge materials from faculty and students from Carnegie Mellon University, engineers from ARRIVAL Ltd., and engineers from AIcrowd. Third parties, such as Amazon Web Services, are providing sponsorship to cover running costs, prizes, and compute grants.

## 📱 Contact

If you have any questions, please contact Jyotish P (jyotish@aicrowd.com), or consider posting on the Community Discussion board, or join the party on our Discord!
https://testbook.com/question-answer/a-simply-supported-beam-of-span-l-width-b-and-dep--5e611406f60d5d332ddc6084
# A simply supported beam of span L, width B and depth D is subjected to a rolling concentrated load of magnitude W. The maximum flexural stress developed at the section L/4 from the end support is

This question was previously asked in GPSC AE CE 2020 Official Paper (Part B - Civil)

1. (3WL) / (4BD²)
2. (4WL) / (3BD²)
3. (9WL) / (8BD²)
4. (8WL) / (9BD²)

Option 3 : (9WL) / (8BD²)

## Detailed Solution

Concept:

For a simply supported beam of length L, the maximum ordinate of the influence line diagram (ILD) for bending moment at a section a distance 'a' from the left support and 'b' from the right support is ab/L.

Calculation:

$${\rm{h}} = \frac{{\frac{L}{4} \times \frac{{3L}}{4}}}{L} = \frac{{3L}}{{16}}$$

Now, the maximum bending moment at L/4 is W × h:

$$\therefore {\rm{M}} = {\rm{W}} \times \frac{{3L}}{{16}}$$

Given: Breadth = B, Depth = D, Section modulus Z = BD²/6, and flexural stress f = M/Z

$$\therefore {\rm{f}} = \frac{{{\rm{W}} \times \frac{{3L}}{{16}}}}{{\frac{{{\rm{B}}{{\rm{D}}^2}}}{6}}} = \frac{{9{\rm{WL}}}}{{8{\rm{B}}{{\rm{D}}^2}}}$$
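The derivation above can be sanity-checked numerically; a quick sketch (variable names are illustrative):

```python
def max_flexural_stress(W, L, B, D):
    """Flexural stress at L/4 of a simply supported beam under a rolling point load W.

    The ILD ordinate at the section is a*b/L with a = L/4, b = 3L/4,
    so M = W * 3L/16; for a rectangle Z = B*D^2/6; and f = M/Z.
    """
    a, b = L / 4, 3 * L / 4
    M = W * (a * b / L)     # maximum bending moment at the section
    Z = B * D ** 2 / 6      # section modulus of the rectangular cross-section
    return M / Z

# For W=16, L=16, B=1, D=1 the closed form 9WL/(8BD^2) gives 288.
f = max_flexural_stress(16.0, 16.0, 1.0, 1.0)
```

Agreement with 9WL/(8BD²) for arbitrary inputs confirms the algebra in the solution.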
https://dataspace.princeton.edu/jspui/handle/88435/dsp016969z365m
Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp016969z365m Title: Essays on Screening in Information Markets Authors: Sartori, Elia Advisors: Morris, Stephen E Contributors: Economics Department Subjects: Economic theory Issue Date: 2019 Publisher: Princeton, NJ : Princeton University Abstract: In three essays, this dissertation studies the production and distribution of information goods. In the first chapter, we model information as a digital good. Digital goods are produced along a quality ranking and can be both duplicated and damaged at zero marginal cost. Consumers' valuation of quality consists of a common decreasing-returns component and a heterogeneous component that gives sellers a motive for screening. The monopolist's problem is naturally divided into an acquisition and a distribution stage; two interdependent sources of inefficiency, underprovision and quality damaging, emerge. Competition is modeled as a two-stage game of perfect information. Welfare comparisons between monopoly and duopoly are ambiguous: additional underacquisition and double spending favor the former, while undoing damaging inefficiencies by distributing a positive quality for free favors the latter. The second chapter studies the production of socially relevant information: we model policymaking as a bandit problem where the arms are treatment incentive schemes whose payoff value and correlation are disciplined by an economic theory. We preliminarily associate each multiarmed bandit problem to an uncertainty function so that the implied information function is traded off one-for-one with expected utility at each belief state to determine the optimal policy. The uncertainty measure quantifies the estimation content of selection mechanisms. We propose a sampling procedure that validly implements all BDM mechanisms while minimizing the variance of the empirical propensity score and preserving information continuity.
Fully voluntary mechanisms are control optimal under linear preferences, but their valid implementation induces the largest variance of the sample size used for estimation. In the third chapter (with Franz Ostrizek) we study a monopolist screening problem with network externalities in consumption and two dimensions of heterogeneity: consumers differ in their susceptibility to, and influence on, the network effect. We show that the allocation is inefficient if and only if susceptibility is unobservable, while consumers receive rents for their influence only if susceptibility is unobserved and influence is verifiable. The optimal allocation under private information satisfies lexicographic monotonicity; bunching arises around the switching types in the lexicographic order, i.e. highest-influence types adjacent to the next level of susceptibility. URI: http://arks.princeton.edu/ark:/88435/dsp016969z365m Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: catalog.princeton.edu Type of Material: Academic dissertations (Ph.D.) Language: en Appears in Collections: Economics Files in This Item: Sartori_princeton_0181D_13040.pdf (2.88 MB, Adobe PDF) Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.
https://brilliant.org/problems/a-cool-name-2/
# A cool name

Let $$N$$ denote the smallest positive integer which has exactly $$200$$ divisors (inclusive of $$1$$ and itself). What is the sum of digits of $$N$$?
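The standard tool for problems like this (not a full solution) is the divisor-count function τ(n) = ∏(eᵢ + 1) over the prime factorization n = ∏ pᵢ^eᵢ; a small sketch:

```python
def num_divisors(n):
    """Count divisors of n by trial-division factorization: tau(n) = prod(e_i + 1)."""
    count, p = 1, 2
    while p * p <= n:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        count *= e + 1
        p += 1
    if n > 1:
        count *= 2  # one remaining prime factor with exponent 1
    return count

# 360 = 2^3 * 3^2 * 5 has (3+1)(2+1)(1+1) = 24 divisors.
d = num_divisors(360)
```

Finding the smallest N with τ(N) = 200 then amounts to searching over factorizations of 200 into exponent-plus-one terms, assigning larger exponents to smaller primes.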
https://www.shaalaa.com/question-bank-solutions/give-first-step-you-will-use-separate-variable-then-solve-equation-bby2-6-balancing-an-equation_17227
# Give First the Step You Will Use to Separate the Variable and Then Solve the Equation: b/2 = 6 - Mathematics

Sum

Give first the step you will use to separate the variable and then solve the equation: b/2 = 6

#### Solution

b/2 = 6

Multiplying both sides of the given equation by 2, we obtain

(b × 2)/2 = 6 × 2

b = 12

#### APPEARS IN

NCERT Class 7 Maths Chapter 4 Simple Equations Exercise 4.2 | Q 2.2 | Page 86
https://notes.reasoning.page/html/subspaces
# A calculus of the absurd

##### 22.3.4 Subspaces

• Definition 22.3.3 Let $$\textsf {V}$$ be a vector space. The set $$\textsf {W}$$ is a subspace of $$\textsf {V}$$ if $$\textsf {W}$$ is a vector space, and $$\textsf {W}$$ is a subset of $$\textsf {V}$$.

• Technique 22.3.1 Showing that something is a subspace. Suppose we have a vector space $$\textsf {V}$$, and we want to prove that $$\textsf {W}$$ is a subspace of $$\textsf {V}$$. The steps to do so are these:

  • 1. Show that the zero vector is in the subspace in question.
  • 2. Show that $$W \subseteq V$$ using the standard technique for showing that something is a subset of something else (as in Section TODO: write).
  • 3. Show that $$W$$ is closed under vector addition and scalar multiplication. The rest of the vector space axioms follow from the fact that $$W \subseteq V$$ and $$V$$ is a vector space.

This theorem is given both as an example of how to prove facts about vector spaces, and because it is important in its own right.

• Theorem 22.3.1 Let $$\textsf {V}$$ be a vector space, and $$\textsf {U}$$ and $$\textsf {W}$$ be subspaces of $$\textsf {V}$$. Then $$U \cup W$$ is a subspace of $$V$$ if and only if $$U \subseteq W$$ or $$W \subseteq U$$.

To prove this, first we will show the “if” direction, and then the “only if” direction.

• If. Without loss of generality, assume that $$U \subseteq W$$, in which case $$U \cup W = W$$, and this is a subspace of $$V$$ as $$W$$ is a subspace of $$V$$. The proof for the other case follows by swapping $$U$$ and $$W$$ in the proof.

• Only if. This direction requires a bit more of an intuition about which directions to explore. First we assume that $$U \cup W$$ is a subspace, and then we assume that the consequent is untrue (i.e. that neither $$U \subseteq W$$ nor $$W \subseteq U$$ holds), in which case there exist $$u, w \in \textsf {V}$$ such that $$u \in U \setminus W \text { and } w \in W \setminus U.$$ We can then ask (this is the core idea in the proof which is not immediately obvious – to me at least) about the status of $$u + w$$. As $$u, w \in U \cup W$$ and by assumption $$U \cup W$$ is a subspace (and therefore by definition closed under vector addition), it must be that $$u + w \in U \cup W$$. Then either $$u + w \in U$$ or $$u + w \in W$$ (by definition of the set union).

  • 1. If $$u + w \in U$$, then also $$u + w + (-u) = w \in U$$, which is a contradiction, as by the choice of $$w$$ above, $$w \notin U$$.
  • 2. If $$u + w \in W$$, a very similar argument applies: $$u + w + (-w) = u \in W$$, which is a contradiction, as $$u \notin W$$.

Therefore, by contradiction, this direction of the theorem must be true.
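A concrete instance of the “only if” argument, for intuition:

```latex
% Take V = \mathbb{R}^2 with subspaces U = the x-axis and W = the y-axis.
% Neither U \subseteq W nor W \subseteq U, and closure indeed fails:
u = (1,0) \in U \setminus W, \qquad w = (0,1) \in W \setminus U,
\qquad u + w = (1,1) \notin U \cup W.
```

So the union of the two axes is not a subspace, exactly as the theorem predicts.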
https://www.physicsforums.com/threads/qm-adding-momentum.117301/
1. Apr 11, 2006

I have a general question on finding angular momentum states. Let's say I have two spin-1/2 particles, so $$J=S_1+S_2$$ and $$|J|$$ ranges from $$|S_1 + S_2|$$ to $$|S_1 - S_2|$$; in this case J=1 is the triplet and J=0 is the singlet. Now how do you find the J=0 state? I know that $$|j=0,m_j=0>=\frac{1}{\sqrt{2}}(\uparrow_1 \downarrow_2 - \downarrow_1 \uparrow_2)$$ but how do you get this in the first place? Is it pretty much trial and error, and then using operators to generate the rest of the states for that particular J? Or is there an algorithm for finding the state $$|j,m_j=j>$$?

2. Apr 12, 2006

### dextercioby

Well, it's all about reading & understanding the Clebsch-Gordan theorem & coefficients correctly. Technically $$|j,m\rangle =\sum_{m_{1},m_{2}} \langle j_{1},m_{1},j_{2},m_{2}|j,m\rangle |j_{1},m_{1},j_{2},m_{2} \rangle$$ The coefficients are tabulated; also, you know that j=0 and m=0.

Daniel.

3. Apr 12, 2006

### Meir Achuz

If you don't have a CG table, you can generate the spin states. First form $$|1,1\rangle = \uparrow\uparrow$$. Then use the lowering operator to form the three J=1 states. $$|0,0\rangle$$ will be orthogonal to $$|1,0\rangle$$.

4. Apr 12, 2006
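Meir Achuz's lowering-operator recipe (post #3) can be checked numerically with small vectors; the basis ordering (uu, ud, du, dd) and function names below are my own choices:

```python
from math import sqrt

# Product basis for two spin-1/2 particles: indices 0..3 = uu, ud, du, dd.
# For spin-1/2, s-|up> = |down> with coefficient 1 (hbar = 1), so the total
# lowering operator S- = S1- + S2- acts component-wise as below.
def lower(state):
    """Apply S- = S1- + S2- to a 4-component vector in the (uu, ud, du, dd) basis."""
    uu, ud, du, dd = state
    return [0.0,        # nothing lowers into uu
            uu,         # S2-: uu -> ud
            uu,         # S1-: uu -> du
            ud + du]    # S1- on ud and S2- on du both land on dd

def normalize(state):
    n = sqrt(sum(c * c for c in state))
    return [c / n for c in state]

# Start from |1,1> = uu and lower once: |1,0> = (ud + du)/sqrt(2).
triplet_10 = normalize(lower([1.0, 0.0, 0.0, 0.0]))

# The singlet is the m=0 state orthogonal to |1,0>: (ud - du)/sqrt(2).
singlet = [0.0, 1 / sqrt(2), -1 / sqrt(2), 0.0]
overlap = sum(a * b for a, b in zip(triplet_10, singlet))
```

The vanishing overlap confirms the orthogonality argument, and the components of `triplet_10` match the symmetric triplet combination.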
https://robs.io/2020/wk28/
# Notes for Week 28 of 2020

## Sunday, July 19, 2020, 6:17:42PM

Getting the questions about why I'm not writing a book. I get that one a lot. The simple answer is that books take too much time, and knowledge bases are more effective if I can ever get them updated enough. The challenge is getting your knowledge into others' view. But the plan for Knowledge Net sharing and subs will cover that. I just have to fucking finish it!

## Sunday, July 19, 2020, 9:34:11AM

PEGn is really grabbing my attention. It's becoming that perfect thing between ABNF, the original PEG (which has several substantial flaws in the "example" syntax), and the myriad PEG parsing engines out there — all of which suck at creating readable grammars. PEGn will boast the following when complete:

• Self-specifying PEGn grammar
• Most readable grammar specs on the planet
• Nearly identical semantics to original PEG "example"
• Semantic capitalization identifier naming conventions
• Full set of reserved classes and tokens
• Zero ambiguity semantics
• Full Unicode support

And eventually I plan on building the following tools for it as well:

• pegn - linter, validator, and code generator
• vim-pegn - vim plugin with PLS language server support

My code generator won't clutter up the grammar itself with inline code (as cool as that is). Instead it will allow granular creation of the different language renderings. This is substantially better than anything else out there right now, because the existing generators are all language-specific, which destroys the usefulness and ubiquity of the grammar itself. No, instead pegn will support modular code generation, allowing different implementations of a rendered parser even in the same language. For example, say you want your grammar generated in interface-centric Go versus struct-centric. Or say you want to generate code that builds an AST, or other code that is focused entirely on handling parse events.
There are so many different ways to implement a parser for different needs. The one flaw that every PEG-to-code generator has right now is the inability to adapt to these needs, and the fucking gawd-awful grammar specification files that result.

## Sunday, July 19, 2020, 7:39:00AM

Been really conflicted about when to use Go interfaces and when to use structs. I tend to be a one-thing-or-the-other kind of guy. Using interfaces gets you immense flexibility, while structs work better with marshaling and require far less code. I've decided to follow Goldmark's lead and create both my parsers leaning on interfaces more, even if that means a few accessors and mutators. I am probably too abused by Java to look at them rationally. They probably do have good uses sometimes.

## Saturday, July 18, 2020, 1:43:31PM

Got tinout moved over to https://gitlab.com/rwx.gg/tinout and push-mirroring to GitHub. I've decided nothing goes into rwx.gg that isn't at least version 1.0 or higher. I want to have someplace where people can go and be reasonably sure that stuff will be usable.

## Saturday, July 18, 2020, 11:56:35AM

Having writer's remorse over writing that slam on tags and structs. As usual the truth is in between them. In fact, I love github.com/ghodss/yaml (and so does the Docker project) for parsing YAML into structs with the least amount of hassle — when structs make sense. I've been really second-guessing my decision to move to interfaces for all the knowledge package stuff. After all, these things are just static data. I'll move to the struct approach for the AST from Ezmark before I make a final decision on the kn stuff.

## Saturday, July 18, 2020, 11:15:32AM

After facing the quirks of JSON and YAML tagging yet again I went ahead and wrote Golang YAML/JSON Tags Actually Suck.

## Friday, July 17, 2020, 6:51:31PM

Cloudflare just went down, reportedly because of a "bad router rule" in a server in Atlanta taking out 1.1.1.1.
The number of people depending on that central DNS provider is proof of how stupid people are. The entire point of DNS was to allow distributed DNS providers rather than have everyone depend on a single service. It really revealed how stupid some companies are. GitLab was one of them.

After seriously fighting with GitLab's brain-dead flavored Markdown — despite their claim to be moving to CommonMark — I've been gathering up reasons to give GitHub another look. But, honestly, I'm kinda tired of depending on any centralized service at all at this point.

## Friday, July 17, 2020, 8:13:13AM

Another amazing and unexpected advantage of writing in PEG is that you can specify ordered priority such that things that are more likely to occur in a language are examined first. This has never been something any specification language has allowed an author to communicate.

It also brings forward some of the difficulties when a syntax would be easier to parse without the preferred position when examining the input. For example, Text is far more frequent than Tex. But checking for Tex lexically is easier because you look for $ and know you have it right away, instead of maintaining the priority and checking that $ is not present so that you can continue with the Text parsing. This does cause a bit of redundancy in the parsing engine, because to check for Text I have to rule out $ and then later have to check for $ to make sure I have a Tex inline. The cost is easily worth it, though, given all of the code that would have to be evaluated otherwise, leaving Text as the plain option at the end of the list.

## Friday, July 17, 2020, 7:52:01AM

I cannot overstate how amazing PEG positive and negative lookahead and lookbehind are for specifying language grammars.
It allows specifications to directly communicate the code that needs to be written, including some idea of how much memory will be needed for any lookahead specified by the grammar, as well as how many previous states will need to be saved (memoized) to assert any lookbehind.

This has been particularly useful when dealing with sets that can include other sets except for one specific thing. This is impossible to capture in EBNF or ABNF and requires resorting to "rhetorical" specification syntax. Here's an example: Markdown inlines. Often one inline can contain most of the others. In PEG you simply have to do a negative of the inlines another inline cannot contain (rather than explicitly rewriting every one of them).

```
Inline <- Text / Quote / Emph / Link / Pre
Quote  <- (!Quote Inline)+
```

## Thursday, July 16, 2020, 7:53:08PM

Yet another reason not to use Zsh(it). It doesn't even have variable name references. Zsh is such a script-kiddy toy, and there is just so much evidence of that now. I'm beyond trying to listen to people convince me otherwise. "Run along. I have work to do."

## Thursday, July 16, 2020, 4:44:12PM

Finally got at least all the main pages on rwx.gg working again and put the old kn shell script back in play. It is so great for auditing.

## Thursday, July 16, 2020, 8:19:17AM

Playing around with a new morning routine. Been up since 6am today, yesterday 5:30. I have been naturally waking up earlier as I just write off the end of the day and go to bed around 10:30. They say you need less sleep as you age, but I don't buy into that idea — especially if you are still regularly exercising.

I've been running for an hour or more every day now for over a month. It's been absolute bliss. I love yoga, but running on a good trail has always centered me mentally as much or more. I'm still planning on daily strenuous yoga asana again after I get my base health back.
Here's my daily schedule lately:

| Hour  | Activity |
|-------|----------|
| 6:30a | Up / Resting Heart Rate / Coffee & Walnuts / Code / Crap |
| 7     | Eat (Oatmeal, Protein, Coconut Oil, Coffee) |
| 8     | Code / Write / Think |
| 9     | Run |
| 10    | Eat (Protein and Avocado Toast or Pickle, Tomato Sandwich) |
| 10:30 | Stream / Teach |
| 11    | Stream / Teach |
| 12p   | Stream / Teach |
| 1     | Eat / Relax / Coffee |
| 2     | Code / Write (Live) |
| 3     | Code / Write (Live) |
| 4     | Eat / Mentor |
| 5     | Mentor |
| 6     | Mentor |
| 7     | Eat / Mentor |
| 8     | Mentor |
| 9     | Walk Dog |
| 10    | Eat / Relax |
| 11:00 | Sleep |

My best brain power of the day is — without a doubt — in the morning. It is also when it is the most peaceful around here.

I'm going to do a better job writing this personal stuff down in case it might help other people heading into old age watching their bodies freak out in ways they could not have anticipated. Mine happens to be chronic inflammation for reasons I cannot explain. Here's how I've started to beat it:

1. Slow-paced running about an hour a day away from people
2. Completely eliminating any refined sugar or processed food
3. Dropping meat and carb-dense food from my diet
4. Eating rather small portions of things more often
5. Setting an alarm to eat every three hours or so
6. Focusing on positive things instead of stressful stuff
7. Wearing a mask even in the house during pollen season
8. Taking my Xyzal to keep from reacting to our dog

I don't have Diabetes, but my family has a long history of type I and II, so I figure treating myself as if I could have it eventually is just safe. So far my blood sugar insulin response seems like I could be subject to it a bit. I just read on a site run by the Diabetes association that one way to treat it is to essentially track your food (and your blood glucose, which I'm not going to do yet at this point) and eat about 1800 calories tops (for an average-size person) with no high-carbs. It's kind of like the Keto diet without the huge negative side-effects of Ketosis, replacing fruit with veggies, fluids, and fat — glorious, good fats.
In fact, fat is really the secret to a lot of good health (for me). It is so fucking ironic that one misunderstood study sent the entire world into an obesity and Diabetes epidemic, mostly because everyone eliminated all fat from their diet. Fat provides consistent energy and blood sugar without spiking. It satisfies you so you eat less. Some fats are essential to building brain cells.

One thing is for sure. Sugar is the fucking devil. It feeds Cancer. It spikes insulin and destroys the Pancreas. It rots your teeth. Statistically speaking, Sugar is more deadly than Cocaine, and yet they are basically the same thing: addictive isolates taken from natural sources.

## Thursday, July 16, 2020, 7:46:42AM

I've been cleaning up the sites with the old Bash kn script now that I'm taking all this luxurious time to actually finish Ezmark. It is always good to use the prototype again to get a sense of what I was trying to do in the first place. Keeps me grounded.

One thing that looks ludicrous to me now is adding so much data to the YAML metadata header. Back then I was convinced it was easier to use the YAML since it is more structured. But the truth is the YAML should always be about the meta. Content specifications can call for certain header names and structure to the Markdown (whatever flavor). Anything else probably deserves its own file that can be rendered inline, which is what the RenderMark approach is all about.

## Wednesday, July 15, 2020, 4:49:13PM

Great ideas from the stream today about rendering the TexExpr block and Tex inline as SVGs that are inlined into the HTML rather than depending on a JavaScript library at all!

## Wednesday, July 15, 2020, 4:27:32PM

Big great discussion about MathJax or KaTeX and what to call the AST element. Pandoc mistakenly called it Math. Here is a $\epsilon$ thing.

$$\forall x \in X, \quad \exists y \leq \epsilon$$

So that turns into this:

Here is a ϵ thing.
∀x ∈ X,  ∃y ≤ ϵ

## Wednesday, July 15, 2020, 3:56:02PM

Was reminded that a {{.TOC}} in the template is a good idea, and that TOC content is metadata, not data; it really has no place in the README.md file itself, where it is just bothersome to the content maintainer and redundant to those downloading the content who already have the TOC heading data in the BASE/json file. Also decided that Heading attributes really need to be mandatory to allow Heading text to be changed without impact.

## Wednesday, July 15, 2020, 8:32:37AM

I love that Go's creators were so fucking experienced that they could leave goto in Go without shame. It seems like the entire world of less-than programmers don't get why they made this decision. But if you truly want to understand a specific case where it makes a ton of difference in efficiency, take a look at Go's own syntax parser. Yep, there's goto in all its glory, doing what it was meant to do in spectacular fashion.

These are yet more reasons to truly understand why Go is a far more thoughtfully designed language than Rust. Very few people on planet Earth would even understand an explanation of why that is objectively true. But it is nice to know they do exist. I am such a Rob Pike fanboy. There, I said it.

## Tuesday, July 14, 2020, 3:02:22PM

While doing the PEG for KN Ezmark I realized that BlockQuote and Div are effectively the same thing in the Pandoc AST. Both are containers, but the Div is far superior to maintain and parse. In fact, BlockQuotes have always been a pain in my ass. I've decided to try and get away with dropping them entirely from Ezmark. I am sure some people will scream, and they can use other full parsers like Pandoc if they need them.

This does mean that Div is actually a SemDiv because it is not based on style. It denotes a semantic collection of content within the current content, such as a callout, note, or even an actual block quote. People can use them for addresses and such as well.
In fact, it is the exact same thing as a Fenced syntax-aware code block, but for other stuff.

## Monday, July 13, 2020, 2:44:42PM

The Pandoc AST isn't bad, it's just not what I would do. It's too much, doesn't match CommonMark, and is just so damn annoying sometimes. I mean, Inline.SmallCaps? Really? The biggest problem of all is how much you cannot change it. Goldmark grew their own AST as well. It's not horrible, just not very well informed by experience looking at document structure for several years as I have been, obsessing over this stuff. None of the document and knowledge solutions have ever started with the AST model and worked up. The closest thing we have to that was the DOM, and we know how that ended up.

Back when I was doing the ABNF for BaseML (later EzMark) one thing really stood out. None of the existing Markdown formats — from the very beginning — were able to be rendered inline. They all required a first pass in order to parse all the block types and reattach the reference links and such. That has always annoyed me. Each block should be consumable and rendered immediately after it is read.

I really am having a hard time just shutting up and using Pandoc. There is so much bloat and overkill to use that method. Pandoc Markdown doesn't even look CommonMark compliant. What I really want to do is specify a superset of CommonMark that is 100% Pandoc compliant, that can be rendered immediately and mastered very quickly. I want RenderMark.

## Monday, July 13, 2020, 5:21:09AM

Up early after a long sleep. Must have been that three-hour run/hike on Saturday. Feels good though. I can tell my old body is returning.
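The `Quote <- (!Quote Inline)+` idea from the Friday entry can be sketched as a plain recursive-descent parser. This is my own illustration in Python, not PEGn output; the `>` quote marker and the node shapes are invented for the demo. The point is that the `!Quote` predicate is just a parse attempt whose result is thrown away.

```python
# Minimal PEG-style recursive descent illustrating negative lookahead (!).
# Toy grammar:  Inline <- Quote / Text ;  Quote <- '>' (!Quote Inline)+

def text(s, i):
    # Text <- any single character that does not start a quote
    return (i + 1, s[i]) if i < len(s) and s[i] != '>' else None

def quote(s, i):
    # Quote <- '>' (!Quote Inline)+  (a quote may not directly contain a quote)
    if i >= len(s) or s[i] != '>':
        return None
    i += 1
    items = []
    while True:
        # Negative lookahead !Quote: try to parse a quote but consume nothing.
        if quote(s, i) is not None:
            break  # the next input starts a quote, so this quote ends here
        r = text(s, i)  # "Inline minus Quote" reduces to Text in this toy grammar
        if r is None:
            break
        i, node = r
        items.append(node)
    return (i, ('quote', items)) if items else None

def inline(s, i):
    # Inline <- Quote / Text  (PEG ordered choice: first success wins)
    return quote(s, i) or text(s, i)

print(inline(">ab", 0))   # a quote containing two Text inlines
print(inline(">a>b", 0))  # the inner '>' ends the outer quote via !Quote
```

A real packrat parser would memoize the `quote(s, i)` lookahead attempt instead of re-parsing it, which is exactly the memoization cost the entry mentions.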
https://cs.stackexchange.com/posts/49118/revisions
# How are variables accessed in the correct order from the stack?

I'm learning about the stack, but one thing I am unable to understand is how variables can be accessed in the correct order. So if I had a basic program calculating the sum of some user-entered values (pseudocode):

```
Input value1
Input value2
Input value3
Input value4
Input value5
result = value1 + value2 + value3 + value4 + value5
```

Am I correct in thinking the variables value1 through 5 would be stored on the stack? If so, this means that value5 would be the first out for the next section, but since my program needs value1 first, how does it get this value? (Assuming a simplistic computer with minimal registers.)
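A note on the question's premise, illustrated with a sketch of my own in Python: compilers do not pop locals off the stack to read them. Each variable gets a fixed offset inside the current stack frame, and the generated code loads from "frame pointer + offset", which is random access in any order.

```python
# Simulate a stack frame: locals are read by fixed offsets from the frame
# base, not by LIFO pops, so value1 can be read first in any expression.
import struct

frame = bytearray(20)  # a pretend 20-byte frame holding five 4-byte ints
offset = {"value1": 0, "value2": 4, "value3": 8, "value4": 12, "value5": 16}

# "Input valueN" compiles to a store at that variable's own offset:
for n, name in enumerate(offset, start=1):
    struct.pack_into("<i", frame, offset[name], n * 10)

# The sum loads each variable from its offset; value1 is read first even
# though it was stored first -- no popping happens:
result = sum(struct.unpack_from("<i", frame, offset[name])[0]
             for name in ("value1", "value2", "value3", "value4", "value5"))
print(result)  # 150
```

The push/pop discipline applies to whole frames (one per function call), not to the individual variables inside a frame.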
https://dsp.stackexchange.com/questions/37850/which-filter-to-use-for-data-with-high-amount-of-errors
# Which filter to use for data with a high amount of errors?

I have the following data collected from a heart rate sensor. The data is in bpm vs. milliseconds. All the data points below 80 are incorrect (errors/noise). When it plummets below 80, it's because the heart rate sensor fell off the subject. Similarly, although it is not shown on this graph, there are often spikes that go as high as 500. Those high peaks are caused by something physically tapping the heart rate sensor. The rate should ideally be between 100-160. However, I'm not sure what filter to apply to correctly display this data.

I tried applying a 100-point median filter. This got rid of the majority of the low and high peaks. However, I was told that this removes too much of the detail and that I should use an averaging filter so that it more accurately describes the data. My argument against this is that $\frac{100 + 0}{2} = \frac{50+50}{2} = 50$: many different values can give me the same average.

I do not come from a DSP background. Hence, how should I proceed? Are there other types of filters that will more accurately let me display this data? Ideally, I'd like to keep the general trend, but not include the sudden dips (any data that dips below 80) and spikes (any data that shoots above 200 bpm).

- It may be just me, but if you have problems with your sensors, filtering the data will only result in filtered sensor errors, so taking care of the sensors, in a different manner, may be a better approach. – a concerned citizen Feb 23 '17
- Hmm, good point! – Christian Feb 23 '17
- I would simply exclude samples that you know are erroneous so that they do not contribute any information to the result, and replace those values with NaN (not a number). – Dan Boschen Feb 25 '17
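One concrete way to combine the suggestions from the comments, sketched in Python (mine, not from the thread): first mark known-bad samples as invalid using the question's plausibility band, then run a short median filter over only the valid neighbors, so dropouts and taps never contribute to the output.

```python
import statistics

def clean_bpm(bpm, lo=80, hi=200, window=5):
    """Mark out-of-band samples invalid, then median-filter the survivors."""
    marked = [x if lo <= x <= hi else None for x in bpm]  # None plays the NaN role
    out = []
    half = window // 2
    for i in range(len(marked)):
        win = [x for x in marked[max(0, i - half):i + half + 1] if x is not None]
        # median of valid neighbors; None if the sensor was off the whole window
        out.append(statistics.median(win) if win else None)
    return out

readings = [120, 130, 0, 125, 500, 128, 122]  # 0 = sensor fell off, 500 = tap
print(clean_bpm(readings))
```

Unlike a plain moving average, the invalid samples are excluded entirely rather than dragging the mean toward them, which addresses the $\frac{100+0}{2}=50$ objection in the question.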
https://scriptinghelpers.org/questions/70211/how-do-i-make-a-part-look-in-the-direction-of-the-player
# How do I make a part look in the direction of the player?

Zzverot, 5 months ago (edited)

So I'm trying to do two things here. First, I want to make a part look at the player's head. I've put this in a LocalScript:

```lua
local part = script.Parent
local player = game.Players.LocalPlayer
part.ClickDetector.MouseClick:Connect(function()
    while wait(0.15) do
    end
end)
```

and:

```lua
local part = script.Parent
local player = game.Players.LocalPlayer
while wait(0.15) do
end
```

I've tried both scripts but the part just stays the same when I run them.

Also, another thing: how can I make a part look in the direction the player is facing? For example, if the player is looking towards positive Z, the part should also look towards positive Z. I've tried using `part.CFrame.LookVector = character.Head.CFrame.LookVector` (character and part are defined, of course) but it didn't work. Thanks in advance!

Comments:

- You can't assign LookVector to something — User#23365
- A LocalScript will not execute in Workspace, so you either need to make this a LocalScript in a valid location for LocalScripts and change the part variable, OR change LocalPlayer to an actual player and make it a server Script instead — Vulkarin
- The choice depends on whether you want the part to replicate its movements or not — Vulkarin
- Don't use wait() as your loop's condition; it abuses a hack. But I'd recommend doing it client-side: if you do this server-side, part movement can get really laggy. The movement should be processed on the client's end. — incapaxx

Answer (xEmmalyx, 5 months ago):

I didn't know of this either, but with a bit of testing it appears LocalScripts can no longer move parts even client-side. I placed the same code into a regular Script and it works perfectly fine:

```lua
local part = script.Parent
```
http://spot.pcc.edu/math/orcca/knowl/example-238.html
###### Example 7.3.4

Factor $$z^2+5z-6\text{.}$$

The negative coefficient is again a small complication compared to Example 7.3.2, but the process is actually still the same.

Solution in-context
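The worked solution sits behind the "Solution in-context" link and is not part of this extract. For reference (my addition, not quoted from the source), the standard approach is to find two numbers whose product is $-6$ and whose sum is $5$; they are $6$ and $-1$, so:

```latex
z^2 + 5z - 6 = (z+6)(z-1), \qquad \text{since } 6 \cdot (-1) = -6 \text{ and } 6 + (-1) = 5.
```

Expanding $(z+6)(z-1) = z^2 - z + 6z - 6 = z^2 + 5z - 6$ confirms the factorization.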
https://byjus.com/question-answer/stablish-relation-between-torque-and-angular-force/
Question: Establish the relation between torque and angular force.

Solution: Torque and angular force are the same quantity; the force of angular (rotational) motion is known as torque. Hope now you get it.
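The answer never writes the relation down. The standard formula (my addition, not from the source) connecting torque $\vec\tau$ to a force $\vec F$ applied at lever arm $\vec r$ is:

```latex
\vec{\tau} = \vec{r} \times \vec{F},
\qquad
\lvert \vec{\tau} \rvert = r\,F\sin\theta,
```

where $\theta$ is the angle between $\vec r$ and $\vec F$. Torque is thus the rotational analogue of force, as the answer says.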
https://proofwiki.org/wiki/Rational_Numbers_form_Subfield_of_Complex_Numbers
# Rational Numbers form Subfield of Complex Numbers

## Theorem

Let $\struct {\Q, +, \times}$ denote the field of rational numbers.

Let $\struct {\C, +, \times}$ denote the field of complex numbers.

$\struct {\Q, +, \times}$ is a subfield of $\struct {\C, +, \times}$.

## Proof

From Rational Numbers form Subfield of Real Numbers, $\struct {\Q, +, \times}$ is a subfield of $\struct {\R, +, \times}$.

From Real Numbers form Subfield of Complex Numbers, $\struct {\R, +, \times}$ is a subfield of $\struct {\C, +, \times}$.

Thus from Subfield of Subfield is Subfield, $\struct {\Q, +, \times}$ is a subfield of $\struct {\C, +, \times}$.

$\blacksquare$
http://mathoverflow.net/revisions/56774/list
Here is an algorithm (horribly inefficient) to generate all non-hyperelliptic, non-rational, separable subfields of a non-hyperelliptic function field $F$ over a finite field $K$. Let $\Omega$ be the space of global holomorphic differentials of $F/K$. For any $K$-subspace $V$ of $\Omega$, choose a basis $v_1,\ldots,v_m$ of $V$, compute the elements $v_j/v_1, j>1$ of $F$ (and compute the algebraic relations among these $v_j$), and let $E_V$ be the subfield they generate. If $E_V \ne F$ and is not rational, then you have found a subfield as above. All such subfields will appear this way (proof left to the reader). There are only finitely many such $V$ since $K$ is assumed finite.

Don't even dream of implementing this algorithm as is. Using the numerator of the zeta function, its factors, and the Cartier operator, you can perhaps cut down the number of $V$'s that need to be tested. Maybe hyperelliptic subfields can be dealt with by using quadratic differentials. If Florian Hess can't do it, you are probably out of luck, as far as implementation goes.

Added later: For a hyperelliptic subfield of genus $>1$, one still has a subspace $V$, but the corresponding $E_V$ is the canonical rational subfield of the hyperelliptic field. In this case, the field will be intermediate between $F$ and $E_V$, and perhaps the suggestion of Dror Speiser of using number field arguments might lead to it. It's the elliptic fields that are going to be hard to get.
https://codegolf.stackexchange.com/questions/252094/selective-caesar-cipher
# Input

A string of text.

# Output

The string of text; however, every consonant is the next consonant in the alphabet, and every vowel is the next vowel in the alphabet (z becomes b, u becomes a). The case of the new letter should be the same as the letter it is replacing (a becomes e, A becomes E). If there are two consonants in a row, they will be separated by the characters 'ei'; the case of 'ei' can be whatever you please. Characters outside of the 26-character Latin alphabet should remain unchanged. This includes whitespace and punctuation. (For clarity, the vowels are aeiou. Y is not considered a vowel.)

Examples:

The phrase "Hello World" becomes "Jimeimu Xuseimeif"

The phrase "codegolf" becomes "dufihumeig"

- I'd recommend you add more test cases -- for example, does bcd become beiceid? (Sep 20 at 17:05)
- @97.100.97.109 I believe bcd would become ceideif. (Sep 20 at 17:22)
- Is "Y" a vowel or a consonant, for this challenge? (Sep 20 at 19:10)
- "If there are two consonants in a row, they will be separated by the characters 'ei'." <-- Does this apply to alternating case? For example, would "Bc" be "Cd" or "Ceid"? (Sep 20 at 19:48)

# Vyxal, 27 bytes

```
k⁰kv":ɾJJƒ*‛ß+k⁰⇧1Ṁ$‡‛eijøṙ
```

Try it Online!

This is a bit of a mess.

```
k⁰kv"  # Consonants and vowels, paired
:ɾ     # Duplicate and make an uppercase copy of each
J      # Concatenate to the original
J      # Concatenate to the input
ƒ*     # Reduce by ring translation, ring translating by each string
‛ß+    # "[bcdfghjklmnpqrstvwxyz]+"
1Ṁ     # Insert at position 1...
k⁰⇧    # "BCDFGHJKLMNPQRSTVWXYZ"
$      # Put that under the value
øṙ     # Regex replace that with...
‡      # A function that...
‛eij   # Joins by "ei"
```

# JavaScript (Node.js), 131 bytes

```javascript
s=>s.replace(/[a-z]/gi,c=>(s+g(i=B(c)[0]-1)?"":"ei")+(h=j=>g(i)^g(j=-~j%26)?h(j):B([j+1|i&96]))(i&31),B=Buffer,g=i=>s=1065233>>i&1)
```

Try it online!

### Commented

```javascript
s => s.replace(       // replace in the input string s ...
  /[a-z]/gi, c =>     // ... each letter c (case insensitive)
  (                   //
    s +               // if the previous letter was a vowel (or this
                      // is the 1st iteration and s is still a string)
    g(                // or the current letter
      i =             // whose ASCII code - 1 is loaded in i
      B(c)[0] - 1     //
    ) ?               // is a vowel:
      ""              // append nothing
    :                 // else:
      "ei"            // append "ei"
  ) + (               //
    h = j =>          // h is a recursive function looking for the
                      // replacement letter
      g(i) ^          // if the type of the current letter
      g(              // does not match the type of
        j = -~j % 26  // the next letter in the alphabet
                      // obtained by incrementing j modulo 26
      ) ?             // then:
        h(j)          // keep advancing by doing a recursive call
      :               // else:
        B([           // output the letter
          j + 1 |     // whose ASCII code is j + 1
          i & 96      // with bits #5 and #6 taken from i
        ])            //
  )(i & 31),          // initial call to h with j = i mod 32
  B = Buffer,         // define B for Buffer
  g = i =>            // g is a helper function taking an integer
    s = 1065233       // representing an ASCII code minus 1,
      >> i & 1        // returning 0 for consonant / 1 for vowel,
                      // and also saving the result in s
)                     // end of replace()
```

# sed -E, 131 bytes

```
s/[A-Z]/@\l&/g
y/bcdfghjklmnpqrstvwxyzaeiou/cdfghjklmnpqrstvwxyzbeioua/
:a
s/@(.)/\u\1/
s/[b-df-hj-np-tv-z]{2}/!&/i
s/!(.)/\1ei/
ta
```

Attempt This Online!

This code expects that the characters @ and ! won't appear in the input. Any two characters could be used instead of those; I think that even unprintable ones should work (at least using escape codes), but I chose to keep it as is to make it readable. Also, this program could be written without this restriction, but it would be longer and less interesting.
Explanation:

    s/[A-Z]/@\l&/g
    # lowercase all characters and make a note of which characters were lowercased
    # this has to be done because transliteration (the following command) is case sensitive
    # otherwise all the uppercase characters would also have to be listed
    y/bcdfghjklmnpqrstvwxyzaeiou/cdfghjklmnpqrstvwxyzbeioua/
    # change each character to the next one
    :a
    # beginning of loop
    s/@(.)/\u\1/
    # uppercase back all characters where we made a note
    # it is inside the loop, to save 1B by not using the g flag
    s/[b-df-hj-np-tv-z]{2}/!&/i
    # make a note of where to place the string ei
    # this is done so that the long bracket expression doesn't need to be repeated twice
    # this command is also the reason for the loop, as sed doesn't support overlapping replacements
    s/!(.)/\1ei/
    # put ei after the noted character
    ta
    # repeat till there is nothing to replace

# Goruby -p, 99 bytes

    c="aeiou"+b="bcdfghj-np-tv-z"
    d="eioua"+b[1..]+?b
    $_.t!c+c.up,d+d.up
    gsub(/[#{b}]{2}/i){_1.t.j"ei"}

Attempt This Online!

# Ruby, 110 bytes

    c="aeiou"+b="bcdfghj-np-tv-z"
    d="eioua"+b[1..]+?b
    $_.tr!c+c.upcase,d+d.upcase
    gsub /#{"([#{b}])"*2}/i,'\1ei\2'

Attempt This Online!

# Retina 0.8.2, 85 bytes

    T`_oA\EIOUA-DF-HJ-NP-TV-ZB`aei\oua-df-hj-np-tv-zb
    i`(?![_aeiou])\w\B(?![_aeiou])
    $&ei

Try it online! Link includes test cases. Explanation:

    T`_oA\EIOUA-DF-HJ-NP-TV-ZB`aei\oua-df-hj-np-tv-zb

Translate each letter in the string to the next letter in the string. E and o are special so have to be quoted in the replacement string (o in the source string refers to a copy of the replacement string) while - indicates a character range (also avoids having to quote H and h). _ in the source string is just a filler to align the strings.

    i`(?![_aeiou])\w\B(?![_aeiou])
    $&ei

Case-insensitively find pairs of consecutive consonants and insert ei after the first.

# C (clang), 152 149 bytes

-3 bytes thanks to ceilingcat!!

    *i,c,u,p;z(s){i=wcschr(s,c-u);c=i?i[1]:c;}f(*s){for(p=0;c=*s++;p=i)u=c/91*32,z(L"AEIOUA"),z(L"BCDFGHJKLMNPQRSTVWXYZB"),printf("ei%c"+(!p|!i)*2,c+u);}

Try it online!
• Haha, when I remove the argument -w it generates 17 warnings along with the output Sep 21 at 12:44
• @py3programmer, only losers care about warnings ;-) – jdt Sep 21 at 13:23
• true, at least the output comes Sep 21 at 14:36

# Python, 166 bytes

    f=lambda s,v='UOIEA',c='BZYXWVTSRQPNMLKJHGFDC':s and[y:=(q:=[x:=(t:=s.upper())[0],[v,c][x in c]][x in v+c])[~-q.find(x)],y.lower()][y<s]+'ei'*({*t[:2]}<{*c})+f(s[1:])

Attempt This Online!

# Charcoal, 57 bytes

    ≔⪪aeiou¹θ≔⁻⪪β¹θη⊞υ⁰⭆S⁺×ei×⊟υΣ⊞Oυ№η↧ι⊟EΦ⟦ιθη↥θ↥η⟧№λι§λ⊕⌕λι

Try it online! Link is to verbose version of code. Explanation:

    ≔⪪aeiou¹θ

Get a list of the vowels in lowercase.

    ≔⁻⪪β¹θη

Subtract them from the lowercase alphabet to leave the consonants.

    ⊞υ⁰

Start off with no previous consonant.

    ⭆S⁺

Map over each input character, joining together...

    ×ei×⊟υΣ⊞Oυ№η↧ι

... "ei" repeated the number of times the previous character was a consonant times the number of times the current character is a consonant, saving the result for the next iteration, ...

    ⊟EΦ⟦ιθη↥θ↥η⟧№λι§λ⊕⌕λι

... taking any of the uppercase consonants, uppercase vowels, lowercase consonants or lowercase vowels that contain the character, or as a last resort the original character, cyclically rotating the character in that list.

# 05AB1E, 28 26 bytes

    žNÃü2D€S„aeý:žNžM‚Du«vyDÀ‡

Try it online. ü2D€S could alternatively be SãDJs: try it online.
Explanation:

    žN     # Push the consonants constant
    Ã      # Only keep those characters from the (implicit) input-string
    ü2     # Pop and push its overlapping pairs as strings
    D      # Duplicate this list of string-pairs
    €S     # Convert each 2-letter string to a character-pair
    „aeý   # Join each pair with "ae" delimiter
    :      # Replace all pairs with the 4-letter strings in the (implicit) input
    žN     # Push the consonants constant
    žM     # Push the vowels constant
    ‚      # Pair them together
    Du«    # Merge an uppercase copy of this pair
    vy     # Loop over all four strings:
    D      #   Duplicate the current string
    À      #   Rotate its copy once towards the left
    ‡      #   Transliterate the characters in the input-string
           # (after which the result is output implicitly)

And for the alternative SãDJs:

    S      # Convert it to a list of characters
    ã      # Create all possible character-pairs with the cartesian power of 2
    D      # Duplicate this list of character-pairs
    J      # Join each inner character-pair to a 2-letter string
    s      # Swap so the character-pair list is at the top of the stack again
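For reference, here is a straightforward ungolfed Python implementation of the challenge rules (the helper names are my own, not from any answer above):

```python
VOWELS = "aeiou"
CONSONANTS = "bcdfghjklmnpqrstvwxyz"

def shift(ch):
    """Rotate a letter to the next one within its class (vowel or consonant), preserving case."""
    low = ch.lower()
    for cycle in (VOWELS, CONSONANTS):
        if low in cycle:
            nxt = cycle[(cycle.index(low) + 1) % len(cycle)]  # z wraps to b, u wraps to a
            return nxt.upper() if ch.isupper() else nxt
    return ch  # non-letters pass through unchanged

def selective_caesar(text):
    out, prev = [], ""
    for ch in text:
        # two consonants in a row (case-insensitive) get "ei" between them
        if prev and prev.lower() in CONSONANTS and ch.lower() in CONSONANTS:
            out.append("ei")
        out.append(shift(ch))
        prev = ch
    return "".join(out)

print(selective_caesar("Hello World"))  # Jimeimu Xuseimeif
print(selective_caesar("codegolf"))     # dufihumeig
```

Note that the "ei" separators can equivalently be inserted before or after the rotation, since rotation never changes a letter's vowel/consonant class.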
https://www.sawaal.com/aptitude-reasoning-questions-and-answers.htm?page=5&sort=
# Aptitude and Reasoning Questions

#### Verbal Reasoning - Mental Ability

Q: If the price of a book is first decreased by 25% and then increased by 20%, then the net change in the price will be:
A) 10 B) 20 C) 30 D) 40
Explanation: Let the original price be Rs. 100. New final price = 120% of (75% of Rs. 100) = Rs. (120/100 × 75/100 × 100) = Rs. 90. Decrease = 10%.

Q: Tickets numbered 1 to 20 are mixed up and then a ticket is drawn at random. What is the probability that the ticket drawn has a number which is a multiple of 3 or 5?
A) 1/2 B) 3/5 C) 9/20 D) 8/15
Explanation: Here, S = {1, 2, 3, 4, ...., 19, 20}. Let E = event of getting a multiple of 3 or 5 = {3, 6, 9, 12, 15, 18, 5, 10, 20}. P(E) = n(E)/n(S) = 9/20.

Q: An accurate clock shows 8 o'clock in the morning. Through how many degrees will the hour hand rotate when the clock shows 2 o'clock in the afternoon?
A) 360 B) 180 C) 90 D) 60
Explanation: Angle traced by the hour hand in 6 hours = (360/12) × 6 = 180°.

Q: It was Sunday on Jan 1, 2006. What was the day of the week on Jan 1, 2010?
A) Monday B) Friday C) Sunday D) Tuesday
Explanation: On 31st December, 2005 it was Saturday. Number of odd days from 2006 to 2009 = (1 + 1 + 2 + 1) = 5 days. On 31st December 2009, it was Thursday. Thus, on 1st Jan, 2010 it was Friday.

Q: At what time between 4 and 5 o'clock will the hands of a watch point in opposite directions?
A) 54 past 4 B) (53 + 7/11) past 4 C) (54 + 8/11) past 4 D) (54 + 6/11) past 4
Explanation: At 4 o'clock, the hands of the watch are 20 min. spaces apart. To be in opposite directions, they must be 30 min. spaces apart, so the minute hand will have to gain 50 min. spaces. 55 min. spaces are gained in 60 min., so 50 min. spaces are gained in (60/55) × 50 min. = 54 6/11 min. Required time = (54 + 6/11) min. past 4.

Q: A grocer has a sale of Rs. 6435, Rs. 6927, Rs. 6855, Rs. 7230 and Rs. 6562 for 5 consecutive months.
How much sale must he have in the sixth month so that he gets an average sale of Rs. 6500?
A) 4991 B) 5467 C) 5987 D) 6453
Explanation: Total sale for 5 months = Rs. (6435 + 6927 + 6855 + 7230 + 6562) = Rs. 34009. Required sale = Rs. [(6500 × 6) - 34009] = Rs. (39000 - 34009) = Rs. 4991.

Q: Find the number of triangles in the given figure.
A) 28 B) 32 C) 36 D) 40
Explanation: The simplest triangles are AML, LRK, KWD, DWJ, JXI, IYC, CYH, HTG, GOB, BOF, FNE and EMA, i.e. 12 in number. The triangles composed of two components each are AEL, KDJ, HIC and FBG, i.e. 4 in number. The triangles composed of three components each are APF, EQB, BQH, GVC, CVJ, IUD, DUL and KPA, i.e. 8 in number. The triangles composed of six components each are ASB, BSG, CSD, DSA, AKF, EBH, GGJ and IDL, i.e. 8 in number. The triangles composed of twelve components each are ADB, ABC, BCD and CDA, i.e. 4 in number. Total number of triangles in the figure = 12 + 4 + 8 + 8 + 4 = 36.

Q: If South-East becomes North, North-East becomes West and so on, what will West become?
A) North-East B) North-West C) South-East D) South-West
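Several of the answers above are easy to verify programmatically; here is a quick sanity check (my own script, using exact fractions to avoid rounding):

```python
from fractions import Fraction

# Probability of drawing a multiple of 3 or 5 from tickets 1..20
favourable = [n for n in range(1, 21) if n % 3 == 0 or n % 5 == 0]
print(Fraction(len(favourable), 20))  # 9/20

# Hands opposite between 4 and 5 o'clock: hour hand at 120 + t/2 degrees,
# minute hand at 6t degrees; solve 6t - (120 + t/2) = 180 for t (in minutes).
t = Fraction(300) / Fraction(11, 2)
print(t)  # 600/11, i.e. 54 6/11 minutes past 4

# Sixth-month sale needed for an average of Rs. 6500 over six months
sales = [6435, 6927, 6855, 7230, 6562]
print(6500 * 6 - sum(sales))  # 4991
```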
https://www.bartleby.com/solution-answer/chapter-17-problem-10e-precalculus-mathematics-for-calculus-6th-edition-6th-edition/9780840068071/851ea570-59c6-4045-a00b-de7f30f126f0
# To Check: Whether the given values of S satisfy the inequality.

### Precalculus: Mathematics for Calculus, 6th Edition
Stewart + 5 others
Publisher: Cengage Learning
ISBN: 9780840068071

#### Solutions

Chapter 1.7, Problem 10E

To determine: Whether the given values of S satisfy the inequality.

Expert Solution: S = {−1, 0, 1/2, 1}

### Explanation of Solution

Given: S = {−2, −1, 0, 1/2, 1, √2, 2, 4} and the inequality x² + 2 < 4.

Calculation: let's check for each value of S whether it satisfies the inequality or not.

For x = −2: x² + 2 = (−2)² + 2 = 6. 6 is not smaller than 4, so x = −2 does not satisfy the inequality.
For x = −1: (−1)² + 2 = 3. 3 is smaller than 4, so x = −1 satisfies the inequality.
For x = 0: 0² + 2 = 2. 2 is smaller than 4, so x = 0 satisfies the inequality.
For x = 1/2: (1/2)² + 2 = 9/4. 9/4 is smaller than 4, so x = 1/2 satisfies the inequality.
For x = 1: 1² + 2 = 3. 3 is smaller than 4, so x = 1 satisfies the inequality.
For x = √2: (√2)² + 2 = 4. 4 is not smaller than 4, so x = √2 does not satisfy the inequality.
For x = 2: 2² + 2 = 6. 6 is not smaller than 4, so x = 2 does not satisfy the inequality.
For x = 4: 4² + 2 = 18. 18 is not smaller than 4, so x = 4 does not satisfy the inequality.

Conclusion: Hence the values of S which satisfy the inequality are −1, 0, 1/2 and 1.
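The check above can be reproduced in a few lines of Python (note that the float value of √2 squares to slightly more than 2, so here it still lands on the correct, failing side of the strict inequality):

```python
from fractions import Fraction
from math import sqrt

S = [-2, -1, 0, Fraction(1, 2), 1, sqrt(2), 2, 4]

# keep exactly the values x with x^2 + 2 < 4
satisfying = [x for x in S if x**2 + 2 < 4]
print(satisfying)  # [-1, 0, Fraction(1, 2), 1]
```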
https://www.physicsforums.com/threads/can-speed-be-a-imaginary-number-validity-of-work-energy-theoram-in-1d.462369/
# Can speed be an imaginary number? Validity of the work-energy theorem in 1D

Consider masses $$m_{1}$$ and $$m_{2}$$ with position vectors (from an inertial frame) $$\overrightarrow{x_{1}}$$ and $$\overrightarrow{x_{2}}$$ respectively, and let the distance between them be $$x_{0}$$.

$$m_{1}\frac{d^{2}}{dt^{2}}\overrightarrow{x_{1}}=\overrightarrow{F}$$

$$\Rightarrow m_{1}\frac{d^{2}}{dt^{2}}(\overrightarrow{x_{2}}-\overrightarrow{x_{0}})=\overrightarrow{F}$$ because it is assumed that $$\overrightarrow{x_{1}}-\overrightarrow{x_{2}}=\overrightarrow{x_{0}}$$

$$\Rightarrow-\overrightarrow{F}\frac{m_{1}}{m_{2}}+m_{1}\frac{d^{2}}{dt^{2}}\overrightarrow{x_{0}}=\overrightarrow{F}$$ because $$m_{2}\frac{d^{2}}{dt^{2}}\overrightarrow{x_{2}}=-\overrightarrow{F}$$

$$\Rightarrow m_{1}\frac{d^{2}}{dt^{2}}\overrightarrow{x_{0}}=\overrightarrow{F}(1+\frac{m_{1}}{m_{2}})$$

$$\Rightarrow\left(m_{1}\frac{d^{2}}{dt^{2}}x_{0}\right)\hat{x_{0}}=\overrightarrow{F}(1+\frac{m_{1}}{m_{2}})$$ because it is assumed that $$\frac{d\hat{x_{0}}}{dt}=0$$

$$\Rightarrow m_{1}\frac{d^{2}}{dt^{2}}x_{0}=F(1+\frac{m_{1}}{m_{2}})$$ because $$\overrightarrow{x_{1}}-\overrightarrow{x_{2}}=\overrightarrow{x_{0}}\Leftrightarrow\hat{x_{0}}=\hat{F}$$, $$F$$ = gravitational force

$$\Rightarrow m_{1}\int_{x_{i}}^{x_{f}}\frac{d^{2}}{dt^{2}}x_{0}\,dx_{0}=(1+\frac{m_{1}}{m_{2}})\int_{x_{i}}^{x_{f}}F\,dx_{0}$$ because normally we have $$F$$ as $$F(x_{0})$$.

$$\Rightarrow\left.\frac{1}{2}m_{1}v^{2}\right|_{v=v_{i}}^{v=v_{f}}=(1+\frac{m_{1}}{m_{2}})\int_{x_{i}}^{x_{f}}F\,dx_{0}$$ because $$\overrightarrow{v_{0}}=\overrightarrow{v}$$, $$\left|\overrightarrow{v_{0}}\right|^{2}=(\frac{dx_{0}}{dt})^{2}$$

In the case of the Earth and an object, let $$m_{1}=m$$, $$m_{2}=m_{e}$$, $$x_{i}=x_{e}$$ and $$F=G\frac{mm_{e}}{x_{0}^{2}}$$.

$$\Rightarrow\left.\frac{1}{2}mv^{2}\right|_{v=v_{i}}^{v=v_{f}}=\left.-G\frac{m(m+m_{e})}{x_{0}}\right|_{x_{0}=x_{i}}^{x_{0}=x_{f}}$$

Escape velocity = $$v_{i}$$ when $$v_{f}=0$$ and $$x_{f}\rightarrow\infty$$.
$$\Rightarrow-\frac{1}{2}mv_{i}^{2}=G\frac{m(m+m_{e})}{x_{e}}$$

$$\Rightarrow\frac{1}{2}mv_{i}^{2}=-G\frac{m(m+m_{e})}{x_{e}}$$

$$\Rightarrow v_{initial}=i\left(2G\frac{m+m_{e}}{x_{e}}\right)^{\frac{1}{2}}$$

I don't understand what speed this is. It's not just the escape velocity: if we don't include the imaginary number domain in speed, then it implies that if $$v(x_{f})>v(x_{i})$$ then $$x_{f}>x_{i}$$, which is also incorrect. So the question is: can speed be an imaginary number? How do I give an imaginary escape speed to an object?

A speed can't be imaginary; you must have done something wrong in your calculations (a minus sign missing somewhere or something like this). However, I would suggest that you add some comments so we can understand what is happening in your calculations. As it is now, I don't really understand what you're trying to calculate.

I am deriving the work-energy theorem in 1D and proving that speed can be imaginary. $$F,x_{1},x_{2},x_{0},v>0$$ as these are the magnitudes of vector quantities.

Consider masses $$m_{1}$$ and $$m_{2}$$ with position vectors (from an inertial frame) $$\overrightarrow{x_{1}}$$ and $$\overrightarrow{x_{2}}$$ respectively, and let the distance between them be $$x_{0}$$ (variable definitions).

$$m_{1}\frac{d^{2}}{dt^{2}}\overrightarrow{x_{1}}=\overrightarrow{F}$$ (this is Newton's second law).

$$\Rightarrow m_{1}\frac{d^{2}}{dt^{2}}(\overrightarrow{x_{2}}-\overrightarrow{x_{0}})=\overrightarrow{F}$$ because it is assumed that $$\overrightarrow{x_{1}}-\overrightarrow{x_{2}}=\overrightarrow{x_{0}}$$

Either $$\overrightarrow{x_{1}}-\overrightarrow{x_{2}}=\overrightarrow{x_{0}}$$ or $$\overrightarrow{x_{2}}-\overrightarrow{x_{1}}=\overrightarrow{x_{0}}$$ is true. I chose the first; however, it does not make any difference in the results.
$$\Rightarrow\left(m_{1}\frac{d^{2}}{dt^{2}}x_{0}\right)\hat{x_{0}}=\overrightarrow{F}(1+\frac{m_{1}}{m_{2}})$$ because it is assumed that $$\frac{d\hat{x_{0}}}{dt}=0$$

It has to be assumed that the system is not rotating, as that is a necessary condition for the work-energy theorem.

It's not just the escape velocity: if we don't include the imaginary number domain in speed, then it implies that if $$v(x_{f})>v(x_{i})$$ then $$x_{f}>x_{i}$$, which is also incorrect. Let $$x_{f}>x_{i}$$. Since $$\left.\frac{1}{2}mv^{2}\right|_{v=v_{i}}^{v=v_{f}}=\left.-G\frac{m(m+m_{e})}{x_{0}}\right|_{x_{0}=x_{i}}^{x_{0}=x_{f}}$$

$$\Rightarrow v_{f}^{2}>v_{i}^{2}$$

$$\Rightarrow v_{f}>v_{i}$$

Cthugha

$$m_{1}\frac{d^{2}}{dt^{2}}\overrightarrow{x_{1}}=\overrightarrow{F}$$ $$\Rightarrow m_{1}\frac{d^{2}}{dt^{2}}(\overrightarrow{x_{2}}-\overrightarrow{x_{0}})=\overrightarrow{F}$$ because it is assumed that $$\overrightarrow{x_{1}}-\overrightarrow{x_{2}}=\overrightarrow{x_{0}}$$

I am too lazy to check whether that matters for your calculations and whether you used the different definition all the time, but if you assume $$\vec{x}_1 -\vec{x}_2=\vec{x}_0$$ then you have to replace $$\vec{x}_1$$ by $$\vec{x}_2 +\vec{x}_0$$ instead of $$\vec{x}_2 -\vec{x}_0$$. Also, I am puzzled what force $$F=G\frac{mm_{e}}{x_{0}^{2}}$$ is supposed to be. This is a force which has the magnitude of gravity, but is repulsive instead of attractive. The gravitational force would be given by $$F=-G\frac{mm_{e}}{x_{0}^{2}}$$. Alternatively, it would be more sensible to assume $$\hat{x_{0}}=-\hat{F}$$. $$\vec{x}_0$$ is always the vector starting at $$\vec{x}_2$$ and pointing towards $$\vec{x}_1$$. The gravitational force on $$\vec{x}_1$$ also pointing away from $$\vec{x}_2$$ does not make much sense.

Last edited:

$$\overrightarrow{x_{1}}-\overrightarrow{x_{2}}=\overrightarrow{x_{0}}$$ is a wrong assumption.
The correct assumption is $$\overrightarrow{x_{2}}-\overrightarrow{x_{1}}=x_{0}\hat{F}$$

$$\Rightarrow-m_{1}\frac{d^{2}}{dt^{2}}(x_{0}\hat{F})=\overrightarrow{F}(1+\frac{m_{1}}{m_{2}})$$

$$\Rightarrow m_{1}\frac{d^{2}}{dt^{2}}x_{0}=-F(1+\frac{m_{1}}{m_{2}})$$ ... [1]

$$\Rightarrow\left.\frac{1}{2}mv^{2}\right|_{v=v_{i}}^{v=v_{f}}=\left.G\frac{m(m+m_{e})}{x_{0}}\right|_{x_{0}=x_{i}}^{x_{0}=x_{f}}$$

$$\Rightarrow v_{escape}=\left(2G\frac{m+m_{e}}{x_{e}}\right)^{\frac{1}{2}}$$

OK, everything is alright. Thanks, everyone.

Also, I am puzzled what force $$F=G\frac{mm_{e}}{x_{0}^{2}}$$ is supposed to be. This is a force which has the magnitude of gravity, but is repulsive instead of attractive. The gravitational force would be given by $$F=-G\frac{mm_{e}}{x_{0}^{2}}$$.

F is the magnitude of the force vector, and it's always positive whether the force is attractive or repulsive.
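As a numerical sanity check on the corrected result (my own script, using standard values for G and the Earth's mass and radius): for an object with m much smaller than m_e, the two-body escape speed reduces to the familiar sqrt(2*G*M/R) of about 11.2 km/s.

```python
from math import sqrt, isclose

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
m_e = 5.972e24   # mass of the Earth, kg
x_e = 6.371e6    # mean radius of the Earth, m

def v_escape(m, m_e, x_e):
    # two-body escape speed following the corrected derivation above
    return sqrt(2 * G * (m + m_e) / x_e)

# for a 1 kg object the reduced-mass correction is utterly negligible:
v = v_escape(1.0, m_e, x_e)
print(v)  # roughly 11.2 km/s
assert isclose(v, sqrt(2 * G * m_e / x_e), rel_tol=1e-9)
```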
https://www.nature.com/articles/s41467-019-10774-0?error=cookies_not_supported&code=c19b6e53-48ce-4645-a9e2-ae4a089ce40a
# Low voltage control of exchange coupling in a ferromagnet-semiconductor quantum well hybrid structure

## Abstract

Voltage control of ferromagnetism on the nanometer scale is highly appealing for the development of novel electronic devices with low power consumption, high operation speed, reliable reversibility and compatibility with semiconductor technology. Hybrid structures based on the assembly of ferromagnetic and semiconducting building blocks are expected to show magnetic order as a ferromagnet and to be electrically tunable as a semiconductor. Here, we demonstrate the electrical control of the exchange coupling in a hybrid consisting of a ferromagnetic Co layer and a semiconductor CdTe quantum well, separated by a thin non-magnetic (Cd,Mg)Te barrier. The electric field controls the phononic ac Stark effect—the indirect exchange mechanism that is mediated by elliptically polarized phonons emitted from the ferromagnet. The effective magnetic field of the exchange interaction reaches up to 2.5 Tesla and can be turned on and off by application of 1 V bias across the heterostructure.

## Introduction

Nowadays the demand for control of ferromagnetism on the nanometer scale is met by the methods of spin-transfer torque or spin–orbit torque, both based on locally controlled magnetization reversal by a high-density current of ~10^6 A cm^−2 (ref. 1). However, more promising in terms of energy costs is the use of an electric field, instead of electrical current or magnetic field, which would allow fast voltage control of magnetism2. This type of control was realized, for instance, for the low-temperature magnetic semiconductors (In,Mn)As and (Ga,Mn)As (ref. 3), and more recently significant progress in that direction was achieved in various materials. Examples are the coercive force in multiferroics4, the magnetic anisotropy in ultrathin Fe/MgO (ref. 2) and the magnetic order in ferromagnetic–ferroelectric structures5.
The most intriguing idea for tuning magnetic properties is based on the control of the exchange interaction causing the magnetism (the strongest spin–spin interaction) through varying the carrier wavefunction overlap in a thin magnetic layer. However, this requires application of rather strong electric fields of ~10^7 V cm^−1 (ref. 4). Therefore, alternative concepts for magnetism control that allow one to use low electric fields at elevated temperatures are actively pursued. Moreover, an additional requirement for applications is the integration of the magnetic system into an electronic device compatible with current semiconductor technology. Hybrid systems that combine thin ferromagnetic (FM) films with semiconducting (SC) layers are promising for unifying magnetism and electronics, which may allow all-in-one-chip solutions for computing. To that end, the hybrids need to show magnetic order as a ferromagnet, while remaining electrically reconfigurable as a semiconductor6,7. By now, ferromagnetic proximity effects were revealed optically8,9 and electrically10. Further, electrical measurements using the anomalous Hall effect demonstrated that the p–d exchange interaction of the magnetic atoms in a ferromagnetic film (the d-system) with a two-dimensional hole gas (2DHG, the p-system) in a semiconductor quantum well (QW) induces an equilibrium spin polarization of the QW holes10. Optical studies8,9 showed polarized photoluminescence (PL) from the QW located a few nanometers apart from the FM. However, care has to be exercised in the interpretation of the FM proximity effect, when electrons and holes are present in non-equilibrium: a previous study11 had demonstrated that under optical excitation an alternative mechanism exists involving spin-dependent capture of charge carriers from the SC into the FM, representing a dynamical spin polarization effect in contrast to the exchange-induced equilibrium polarization.
The wavefunction engineering strategy based on electric field control of the overlap of charge carrier wavefunctions in a quantum well with localized d-electrons had been proposed in ref. 12. Control of the ferromagnetism in the low-temperature FM (In,Fe)As was experimentally demonstrated in ref. 13. All these mechanisms are based on wavefunction overlap and, therefore, lead to short-range proximity effects. A conceptually different type of long-range FM proximity effect was reported recently for a hybrid Co/CdTe structure14. It is manifested by the spin polarization of holes bound to shallow acceptors in a nonmagnetic CdTe quantum well due to an effective long-range p–d exchange interaction that is not related to the penetration of the electron wavefunction into the FM layer. This interaction was conjectured to be mediated by elliptically polarized phonons with energy close to the magnon–phonon resonance in the FM. The long-range exchange constant was directly measured by spin-flip Raman scattering (SFRS) in ref. 15. However, no electric control of this exchange coupling has been demonstrated so far. Here, we show that application of an electric field across the structure changes the strength of the long-range p–d exchange coupling between the FM and the SC, namely between the holes bound to acceptors in the quantum well. The coupling is controlled by the band bending in the quantum well region, becoming most efficient in the case of flat bands. The effective magnetic field of the exchange interaction reaches 2.5 T and can be turned on and off by application of ~1 V bias across the heterostructure. The control is not related to a spatial redistribution of wavefunctions and, therefore, cannot be explained using the standard model of exchange interaction. In contrast, it can be well described in the framework of the exchange mechanism mediated by elliptically polarized phonons. 
The applied voltage varies the heavy–light-hole transition to which the phonons couple, bringing it in and out of resonance with the magnon–phonon resonance of the FM. Doing so, the effective exchange coupling strength in the hybrid system is tuned electrically without any power consumption, using field strengths of about 10^4 V cm^−1 only, which is a few orders of magnitude reduced in comparison to non-semiconductor systems. Therefore, our results pave the way for integration of electrically tunable magnetism into semiconductor electronics. Moreover, the presented electric control of the exchange coupling by elliptically polarized phonons can be implemented not only in semiconducting, but also in metallic and dielectric systems. For example, one layer of a ferromagnetic metal (FM) could emit elliptically polarized phonons, which are transmitted through a paramagnetic metal (PM) and then penetrate another FM whose magnetic state is switched thereby in such a FM/PM/FM trilayer hybrid. Further, the concept of elliptically polarized phonons is relevant beyond magnetic spintronics, because these phonons could be created without magnets, using materials with large birefringence of sound to produce the phononic analog of an optical quarter wave plate. Our work establishes an unexplored direction of helical phononics14,16,17.

## Results

### Ferromagnetic proximity effect in steady state

The investigated Co/(Cd,Mg)Te/CdTe/(Cd,Mg)Te/CdTe:I/GaAs hybrid structure was grown by molecular-beam epitaxy on a GaAs substrate followed by a conducting CdTe:I layer (10-μm thickness, iodine-doped with donor concentration of ~10^18 cm^−3) as sketched in the layer-by-layer design in Fig. 1a. The QW is formed by a 10 nm CdTe layer sandwiched between layers of 0.5 μm (Cd,Mg)Te and 8 nm (Cd,Mg)Te (the spacer). On top of this structure, the 4 nm thick cobalt film was deposited.
A mesa of 5 mm diameter was lithographically patterned by deep etching, so that an applied voltage drops between the Co and CdTe:I layers. Figure 1b schematically shows the band diagram of the structure. The current–voltage characteristics I(U) in Fig. 1c reflects the typical behavior of a Schottky diode shifted downwards along the I-axis due to the photovoltaic effect. We study the ferromagnetic proximity effect using polarized PL spectroscopy in the continuous wave (cw) mode. The sample is excited by linearly polarized (π) light and the degree of circular polarization $$\rho _{\mathrm{c}}^{\mathrm{\pi }}$$ of the photoluminescence from the quantum well is detected. The value of $$\rho _{\mathrm{c}}^{\mathrm{\pi }}$$ does not depend on the orientation of the linear laser polarization. To magnetize the interfacial ferromagnet, which is responsible for the FM proximity effect14, we apply a magnetic field BF in the Faraday geometry parallel to the growth axis of the heterostructure (z-axis, Fig. 1a). The red curves in Fig. 2a, c correspond to zero bias, the blue ones to U = −1 V, all taken at BF = −220 mT. In the PL spectrum (Fig. 2a), two features are observed. The PL band at higher photon energies (X) corresponds to the recombination of the exciton in the QW, while the low-energy tail (e-A0) is attributed to the recombination of an electron with a hole bound to a shallow acceptor in the QW14. At zero bias U = 0 (red curves) the photoluminescence reveals a polarization of about 2% at the acceptor band (Fig. 2c), undergoing a sign change toward the exciton emission. Application of a reverse bias (blue curves at U = −1 V) shifts the entire spectrum to lower energies by 7 meV due to the strong bending of the energy bands by the (static) Stark effect. Simultaneously the PL intensity decreases due to the separation of electron and hole in the QW by the electric field, leading to a reduced transition matrix element. 
Here the polarization degree around the acceptor emission is reduced to about 1% (Fig. 2c). Note that usage of a photo-elastic modulator operating at 50 kHz instead of a mechanically rotating quarter wave plate reduces the error bar down to 0.1%. Next, the magnetic field dependencies of $$\rho _{\mathrm{c}}^{\mathrm{\pi }}(B_{\mathrm{F}})$$ were measured in the Faraday geometry for different fixed biases U. The FM proximity effect is assessed by the degree of circular polarization of the e-A0 PL versus magnetic field BF. The degree $$\rho _{\mathrm{c}}^{\mathrm{\pi }}(B_{\mathrm{F}})$$ saturates in the field range of 150–200 mT (red squares for U = 0 and blue circles for U = −1 V in Fig. 2b). Since the spectral positions of the emission bands are sensitive to the applied voltage U, the polarization was detected at the photon energy ħωPL corresponding to the maximum polarization of the e-A0 transition (Fig. 2c). The magnitude of the FM proximity effect is given by the saturation polarization at the acceptor band $$A_{\mathrm{\pi }} \equiv \rho _{\mathrm{c}}^{\mathrm{\pi }}$$(BF = −220 mT) which is larger for 0 V than for the reverse bias of −1 V. In contrast to the e-A0 transition, the polarization $$\rho _{\mathrm{c}}^{\mathrm{\pi }}(B_{\mathrm{F}})$$ near the exciton PL maximum depends linearly on magnetic field across the whole scanned field range without any saturation (red stars in Fig. 2b, U = 0, ħωPL = 1.610 eV), independent of the applied bias U. The linear dependence of $$\rho _{\mathrm{c}}^{\mathrm{\pi }}(B_{\mathrm{F}})$$ originates from the X splitting in two lines with opposite circular polarization due to the Zeeman effect14. The linear BF-dependence of $$\rho _{\mathrm{c}}^{\mathrm{\pi }}(B_{\mathrm{F}})$$ indicates that the proximity effect is absent for the valence band holes that contribute to the exciton within its lifetime 14. 
Figure 2d shows the dependence of the saturation polarization Aπ(U) (red circles) and the energy of the PL peak ħωmax(U) (blue circles) as a function of the applied bias. The energy ħωmax(U) increases with bias due to the reduction of the band inclination and consequently of the static Stark effect. The external positive bias decreases the built-in electric field. It is canceled at U > +0.5 V (flat band conditions), as evidenced also by a steep increase of the current through the device (Fig. 1c). In this case the voltage drop is redistributed all over the structure plane (Fig. 1a), so that a voltage increase does not lead to a further band inclination. For U ≤ +0.5 V, lowering the bias increases the built-in electric field, and we observe a striking correlation between the voltage dependences of the magnitude of the FM proximity effect Aπ(U) and the peak position ħωmax(U) (see Fig. 2d). In turn, for U > +0.5 V, the FM proximity effect decreases by a factor of about 1.5, reaching the level of 1.5%. This drop is attributed to the appearance of additional holes in the valence band of the QW which contribute to the PL but have negligible p–d exchange coupling (for details see Supplementary notes 1 and 2). Here, we concentrate on the origin of the voltage dependence Aπ(U) for U < +0.5 V. The FM proximity effect was shown14 to originate from the effective p–d exchange interaction $$\frac{1}{3}{\mathrm{\Delta }}_{{\mathrm{pd}}}J_z$$ between the interfacial FM formed at the Co/(Cd,Mg)Te interface, with an out-of-plane magnetization due to the perpendicular magnetic anisotropy, and the acceptor-bound QW holes with momentum projections Jz = ±3/2 onto the z-axis. The magnetization of the Co layer is located in the plane of the structure and does not contribute to the circular polarization of the PL in weak magnetic fields14,18.
The saturation amplitude Aπ of the PL polarization is caused by the spin polarization PA of A0 when the FM is completely magnetized:

$$A_{\mathrm{\pi }} = P_{\mathrm{A}} = - \frac{{\tau _{\mathrm{A}}}}{{\tau _{\mathrm{A}} + \tau _{{\mathrm{sA}}}}}\frac{{{\mathrm{\Delta }}_{{\mathrm{pd}}}}}{{2k_{\mathrm{B}}T}}.$$ (1)

Here τA is the lifetime and τsA is the spin relaxation time of the heavy holes on acceptors, kB is the Boltzmann constant, T is the lattice temperature, and Δpd is the spin splitting of the ±3/2 levels in the effective magnetic field of the p–d exchange interaction. A positive sign of Δpd implies that the −3/2 state is energetically favorable14. The polarization PA can depend on the bias U through four different dependencies: (1) the ratio of the times τA/τsA = f(U), (2) the strength Δpd(U) of the p–d exchange, (3) the lattice heating T(U) by electrical current, and (4) the injection of spin-polarized carriers from the FM. Heating can be excluded because the electrical power in our experiment was two orders of magnitude (<40 μW) smaller than the optical power. Heating due to the injection of hot holes and the spin injection option can be ruled out because the amplitude Aπ(U) does not follow the electric current I(U) (Fig. 1c). Time-resolved PL and spin-flip Raman scattering experiments demonstrate that the Δpd(U) dependence is the main origin of Aπ(U).

### Electrical control of the kinetics of proximity effect

Time-resolved PL enables one to measure the kinetics of the PL intensity and thereby the emergence of the spin polarization induced by the magnetized FM layer (ferromagnetic proximity effect) after optical excitation of non-polarized charge carriers with linearly polarized laser pulses (Fig. 3a). In ref. 14, we demonstrated that the exciton PL does not reveal the FM proximity effect. The exciton PL intensity decays much faster (a few 100 ps) than the rise of the e-A0 PL (~2 ns). Here, the same scenario is realized.
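Equation (1) can be checked numerically. The sketch below is illustrative: only Δpd ≈ 150 μeV and T = 2 K are quoted in the text, while the lifetime and spin-relaxation values are assumed for demonstration.

```python
# Numerical sketch of Eq. (1): saturation polarization of acceptor-bound holes.
# tau_A and tau_sA below are assumed values, chosen only for illustration.

K_B = 8.617e-5  # Boltzmann constant, eV/K


def saturation_polarization(tau_A, tau_sA, delta_pd, T):
    """A_pi = -(tau_A / (tau_A + tau_sA)) * Delta_pd / (2 k_B T), Eq. (1)."""
    return -(tau_A / (tau_A + tau_sA)) * delta_pd / (2.0 * K_B * T)


# The thermal factor alone, Delta_pd / (2 k_B T), is ~0.44 for 150 ueV at 2 K,
# so a sizeable polarization is expected even for tau_A comparable to tau_sA.
A_pi = saturation_polarization(tau_A=2e-9, tau_sA=1.3e-9, delta_pd=150e-6, T=2.0)
print(f"A_pi = {A_pi:.3f}")
```

The observed few-percent cw polarization then simply reflects a further reduction of this equilibrium value by the lifetime-to-relaxation-time ratio.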
Due to the dominant contribution of the exciton to the total PL signal, especially during the first few hundred picoseconds (black curve in Fig. 3a), it is necessary to wait for 500–700 ps, until the excitons have mostly recombined, to reliably evaluate the FM proximity effect. Figure 3a (orange circles) shows the evolution of the circular polarization starting from time delays of about 700 ps. The blue dashed curve in Fig. 3a is a fit to the data according to $$\rho _{\mathrm{c}}^{\mathrm{\pi }}\left( t \right) = \rho _{{\mathrm{sA}}}\left[ {1 - {\mathrm{exp}}\left( { - t/\tau_{{\mathrm{sA}}}} \right)} \right]$$ with the amplitude ρsA = 13% and the rise time τsA = 1.3 ns. This measurement was carried out at U = 0. From kinetic measurements of the FM proximity effect at other bias voltages one can determine the dependences ρsA(U) and τsA(U). Because of the delay waiting for exciton recombination, the dependence ρsA(U) can be measured more accurately than τsA(U). ρsA(U) has a straightforward interpretation from which Δpd can be inferred. Equilibrium occurs at delay times much longer than the spin relaxation time τsA. Then the recombining electrons are characterized by the equilibrium spin state given by the Boltzmann distribution. Therefore, the polarization amplitude

$$\rho _{{\mathrm{sA}}}\left( U \right) = - \frac{{{\mathrm{\Delta }}_{{\mathrm{pd}}}(U)}}{{2k_{\mathrm{B}}T}}$$ (2)

is determined solely by the ratio of the exchange constant to the thermal energy and does not depend on the ratio of lifetime and relaxation time. Thus, the dependence ρsA(U) is determined exclusively by Δpd(U). Figure 3b shows that ρsA(U) has a peak near U = +1 V, similar to the cw data (compare with Fig. 2d). Therefore, the time-resolved PL demonstrates that the exchange constant between the magnetic ions and the holes bound to acceptors (rather than the τA/τsA ratio) is controlled by the electric field.
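The exponential rise used in this fit can be sketched directly; ρsA = 13% and τsA = 1.3 ns are the fit values quoted above for U = 0.

```python
import math


def rho_c(t_ns, rho_sA=0.13, tau_sA_ns=1.3):
    """rho_c(t) = rho_sA * (1 - exp(-t / tau_sA)), the fit in Fig. 3a (U = 0)."""
    return rho_sA * (1.0 - math.exp(-t_ns / tau_sA_ns))


# The polarization builds up on the spin-relaxation timescale and
# saturates at rho_sA for delays t >> tau_sA:
for t in (0.7, 1.3, 6.5):
    print(f"t = {t:.1f} ns: rho_c = {rho_c(t):.3f}")
```

At the earliest evaluated delay of ~0.7 ps the polarization has reached only about 40% of its saturation value, which is why the long-delay plateau, not the early rise, is used to extract ρsA(U).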
### Determination of the p–d exchange constant

SFRS under resonant excitation of the acceptor-bound exciton (A0X) complex can be used as a reliable tool to determine directly the magnitude of the FM-induced exchange splitting19. In a tilted magnetic field, three spin-flip processes occur, as discussed in Supplementary note 3. Using excitation with σ− polarization and detection in the crossed σ+ polarization, each of these processes results in a Stokes shift of the Raman signal by a characteristic energy. The first process is associated with a double spin flip of electron and hole with participation of an acoustic phonon. In the presence of the p–d exchange interaction the Stokes shift is given by

$${\mathrm{\Delta }}_{\mathrm{S}}^{{\mathrm{DSF}}} = \hbar \omega _1 - \hbar \omega _2 = \mu _{\mathrm{B}}\left( {\left| {g_{\mathrm{e}}} \right| - \left| {g_{\mathrm{A}}} \right|} \right)B - {\mathrm{\Delta }}_{{\mathrm{pd}}},$$ (3)

where ħω1 and ħω2 are the energies of the incoming and scattered photons, μB is the Bohr magneton, and ge and gA are the g-factors of the electron and the acceptor-bound hole, respectively. Another SFRS contribution is the single spin flip of the acceptor-bound hole. In the presence of the p–d exchange interaction, the corresponding Stokes shift is

$${\mathrm{\Delta }}_{\mathrm{S}}^{{\mathrm{SSF}}} = \hbar \omega _1 - \hbar \omega _2 = \mu _{\mathrm{B}}\left| {g_{\mathrm{A}}} \right|B - {\mathrm{\Delta }}_{{\mathrm{pd}}}.$$ (4)

The third process is associated with the spin flip of the electron in the excited A0X complex, resulting in a Stokes shift determined solely by the Zeeman splitting, µB|ge|B. The Raman spectrum (Fig. 4a) under resonant excitation of the A0X complex (1.600 eV) shows the broad SFRS line h, which is associated with the single hole spin-flip process. The e+h line corresponds to the double spin-flip process. Finally, there is also the line e, which corresponds to the electron spin flip.
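A small numerical sketch of Eqs. (3) and (4). The electron g-factor used below is an assumed illustrative value (the text quotes only |gA| ≈ 1 and Δpd ≈ 150 μeV); the same numbers also reproduce the effective exchange field quoted later in the text.

```python
MU_B = 5.7883818e-5  # Bohr magneton, eV/T


def stokes_ssf(B, g_A=1.0, delta_pd=150e-6):
    """Single spin flip of the acceptor-bound hole, Eq. (4)."""
    return MU_B * abs(g_A) * B - delta_pd


def stokes_dsf(B, g_e, g_A=1.0, delta_pd=150e-6):
    """Double electron + hole spin flip, Eq. (3). g_e is an assumed input."""
    return MU_B * (abs(g_e) - abs(g_A)) * B - delta_pd


# Extrapolated to B = 0, both shifts tend to -Delta_pd: the negative
# zero-field offset of the h and e+h lines that measures the exchange splitting.
print(stokes_ssf(0.0))

# Equivalent exchange field for |g_A| = 1: Delta_pd / (|g_A| * MU_B),
# about 2.6 T, consistent with the ~2.5 T quoted in the text.
print(150e-6 / MU_B)
```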
The sum of the energy shifts of the e and h peaks gives the energy of the e+h peak. The energies of all three SFRS lines change linearly with the applied magnetic field (see Fig. 4b). However, when the magnetic field is extrapolated to zero, the Stokes shift of the line e tends to zero, while the lines h and e+h show a negative offset. This means that both SFRS lines h and e+h are influenced by the exchange interaction with the FM layer and, thus, can be used to assess the effect of gate voltage on the exchange coupling strength. The zero-field offset represents a direct measurement of the exchange constant Δpd. The dependence of Δpd on the applied voltage U is shown in Fig. 4c by the green circles. It correlates well with the voltage dependence of the PL peak position for U < +0.5 V. The maximum value of Δpd ≈ 150 μeV occurs in the flat band regime, followed by a fast drop with increasing U to a value of 50 μeV, where it remains constant for U > +0.5 V, similar to Figs. 2d and 3b. SFRS data at U > +0.5 V should be considered with care. As mentioned above, in this regime additional holes appear in the valence band of the QW, which results in optical excitation of positively charged trions X+. The binding energies of the A0X complex and the X+ trion are close to each other. Therefore, the PL excitation spectra of the A0X and X+ optical transitions overlap, and so do the spin flips of A0 and free holes. We are able to distinguish the A0X and X+ contributions as their relative intensities vary with the hole concentration, which in turn can be tuned by the gate voltage. Indeed, the probabilities of the A0X and X+ transitions are proportional to the concentrations of the A0 acceptors and valence band holes in the QW, respectively. The gate voltage controls their concentration ratio. For U < +0.5 V the valence band is empty of holes, so that only the A0X transition is excited.
In this case we unambiguously observe the exchange coupling of the FM with A0 holes and detect the g-factor of A0 together with a large offset. However, for U > +0.5 V holes appear in the QW valence band and the X+ transitions contribute to the spin-flip Raman process, thus changing the slope and decreasing the offset value. A detailed discussion of the U > +0.5 V regime is presented in Supplementary note 2. To conclude the SFRS results, we demonstrate that the splitting value Δpd ≈ 150 μeV corresponds to an effective magnetic field of the exchange interaction of 2.5 T (the Landé g-factor of the hole bound to the acceptor A0 is |gA| ≈ 1). Thus, the exchange interaction can be turned on and off by the application of ~1 V bias across the heterostructure of ≤1 μm thickness, i.e. by an electric field of about 10⁴ V cm⁻¹, which is a few orders of magnitude lower than previously reported in other systems4.

### Electrical control of the phononic ac Stark effect

The main finding of this work is the electric field control of the long-range exchange interaction in a hybrid ferromagnet-semiconductor structure. It can be well explained in the frame of the indirect p–d exchange mechanism mediated by elliptically polarized phonons, which represents the phononic ac Stark effect14. Elliptically polarized phonons exist in ferromagnets near the energy Emp of the magnon–phonon resonance20,21. The FM proximity effect is based on the spin–orbit interaction of the spins of acceptor-bound holes in the QW with the nonzero angular momentum of these acoustic phonons, as shown in the energy diagram of Fig. 5 in electron–electron representation. The neutral acceptor states are split into two doublets with angular momentum projections along the z-axis equal to ±3/2 (heavy-hole states) and ±1/2 (light-hole states). In the ground state of this quadruplet, the ±1/2 acceptor levels are populated with electrons, while the ±3/2 states are mostly empty.
Interaction with circularly polarized phonons leads to an energy shift of the acceptor levels and a lifting of the doublets’ degeneracy associated with the angular momentum projection. The effect is maximal near the magnon–phonon resonance, where the energy Emp ≈ 1 meV20 is close to the energy splitting Δlh ≈ 1 meV19 between the heavy- and light-hole acceptor states. The experimental results of ref. 14 demonstrate that a magnetization of the FM layer along the z-axis leads to negative circular polarization of the e-A0 optical transition, i.e. the state with +3/2 projection is shifted to higher energies with respect to the −3/2 level (see Fig. 5). Therefore, the unpolarized electrons from the conduction band mainly recombine into the empty +3/2 electronic states and the resulting emitted photons are σ− polarized. Such a level sequence is obtained when the elliptically polarized phonons have preferential σ+ polarization and Δlh > Emp. Indeed, in this case the coupling is established between the electronic ground state +1/2 of the acceptor in the presence of N phonons and the excited state +3/2 with N−1 phonons. In the case of Δlh > Emp the energy levels repel each other. This picture is consistent with the experimental results of this work and with the conclusions of ref. 22: application of a static electric field E increases Δlh, leading to an increase of the detuning Δlh(E) − Emp and resulting in a decrease of the interaction between the QW and the FM layer. Similarly to the optical ac Stark effect23, Δpd can be calculated in second-order perturbation theory:

$${\mathrm{\Delta }}_{{\mathrm{pd}}}\left( E \right) = E_{ + 3/2} - E_{ - 3/2} = \frac{{\left| {H_{{\mathrm{ph}} - {\mathrm{h}}}} \right|^2}}{{{\mathrm{\Delta }}_{{\mathrm{lh}}}\left( E \right) - E_{{\mathrm{mp}}}}}P_{{\mathrm{phon}}}^{\mathrm{c}}.$$ (5)

Here Hph–h is the matrix element of the interaction of the acceptor spin with the transverse acoustic phonons.
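Equation (5) can be illustrated numerically. Neither |Hph–h|² nor the detuning is quoted in the text, so the values below are assumptions chosen only to show the resonance enhancement and to land on Δpd ~ 150 μeV; they are not measured parameters.

```python
# Illustrative evaluation of Eq. (5): Delta_pd grows as the detuning
# Delta_lh - E_mp shrinks. All numbers here are assumed, not from the paper.

H2 = 1.5e-8   # |H_ph-h|^2 in eV^2 (assumed)
P_phon = 1.0  # degree of phonon circular polarization (assumed fully polarized)


def delta_pd(detuning_eV):
    """Eq. (5): Delta_pd = |H_ph-h|^2 / (Delta_lh - E_mp) * P_phon^c."""
    return H2 / detuning_eV * P_phon


# Halving the detuning doubles the splitting; a 0.1 meV detuning
# gives 150 ueV with the assumed matrix element above:
for d in (2e-4, 1e-4):
    print(f"detuning {d * 1e3:.2f} meV -> Delta_pd = {delta_pd(d) * 1e6:.0f} ueV")
```

The inverse dependence on the detuning is the essential point: a modest Stark shift of Δlh is enough to change Δpd substantially.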
Similarly to the circular polarization degree ρc of light, the degree of phonon circular polarization is $$P_{{\mathrm{phon}}}^{\mathrm{c}} = \frac{{N_ + - N_ - }}{{N_ + + N_ - }}$$, with the numbers N+ (N−) of right (left) circularly polarized phonons (with energy close to the magnon–phonon resonance energy). Analogously to the optical ac Stark effect, the constant Δpd is determined by the detuning Δlh(E) − Emp of the phonon energy Emp (analog of the photon energy ħω) from the splitting Δlh(E) (analog of the energy of the optical transition in an atom). The sign of $$P_{{\mathrm{phon}}}^{\mathrm{c}}$$ in the vicinity of the magnon–phonon resonance depends on the sign of the projection of the magnetization component of the interfacial ferromagnet onto the z-axis. The electric field E across the QW increases Δlh(E) due to the static quadratic Stark effect. For small values of the electric field directed along the $$\left. z \right\|[001]$$ axis,

$${\mathrm{\Delta }}_{{\mathrm{lh}}}\left( E \right) = {\mathrm{\Delta }}_{{\mathrm{lh}}}\left( 0 \right) + a_8E^2.$$ (6)

The first term Δlh(0) on the right-hand side of Eq. (6) corresponds to the splitting in zero electric field due to quantum confinement, while the second term is the correction due to the Stark effect. The parameter a8 > 0 determines the static susceptibility of a neutral acceptor and is known only for shallow acceptors in Si22: a8 ~ 10⁻¹⁰ eV cm² V⁻². Then an electric field strength as low as 10³ V cm⁻¹ induces an energy shift of ~0.1 meV, which is comparable to the initial detuning Δlh(0) − Emp. Hence, a relatively small electric field can control the exchange coupling constant. Substituting Eq. (6) into Eq.
(5), we obtain

$${\mathrm{\Delta }}_{{\mathrm{pd}}}\left( E \right) = \frac{{\left| {H_{{\mathrm{ph}} - {\mathrm{h}}}} \right|^2}}{{{\mathrm{\Delta }}_{{\mathrm{lh}}}\left( 0 \right) - E_{{\mathrm{mp}}} + a_8E^2}}P_{{\mathrm{phon}}}^{\mathrm{c}}.$$ (7)

Since the static susceptibility a8 > 0 (ref. 22), our results demonstrate that the Δpd value is maximal for the case of flat bands (E = 0), where we have Δlh(0) − Emp > 0. The experiment14 shows that Δpd > 0 for BF > 0, and therefore, as mentioned before, $$P_{{\mathrm{phon}}}^{\mathrm{c}} > \,0$$, i.e. the phonons are mainly σ+ polarized, in agreement with Fig. 5. We fit the data assuming that in a small range of reverse bias the electric field is proportional to the applied voltage, $$E \propto \left( {U - U_0} \right)$$, where U = U0 corresponds to the flat band conditions, and drops entirely within the undoped region of ≤1 μm thickness (Fig. 1a). Thus from Eq. (7) we get a Lorentz curve

$$f\left( U \right) = \frac{A}{{1 + \left( {U - U_0} \right)^2/U_1^2}}$$ (8)

with half-width U1. The amplitude A gives the magnitude of the effect under flat band conditions and has different dimensions for the different experimental techniques. For SFRS the amplitude A in Eq. (8) gives the value of Δpd(U) in μeV. The solid curve in Fig. 4c fits the results of the SFRS measurements well with A = 175 µeV, U0 = 0.7 V, and U1 = 1.8 V. For the polarization measurements the amplitude A in Eq. (8) is dimensionless. The solid line in Fig. 2d fits the polarization amplitude data in cw mode for A = 2.2%, U0 = 0.8 V, and U1 = 1.8 V. The time-resolved PL data of the polarization kinetics (also dimensionless) in Fig. 3b are described by A = 17%, U0 = 1.0 V, and U1 = 2.0 V. These fit parameters demonstrate good agreement. Therefore, the results of all three experimental techniques are explained within the model of the phononic ac Stark effect. Our results demonstrate that U1 ≈ 1.5 V is enough to switch the p–d interaction off, i.e.
we indeed obtain low-voltage control of magnetism.

## Discussion

At first glance the low-voltage control of the long-range exchange coupling looks surprising. Indeed, common sense suggests that application of an electric field in a direction that attracts holes inside the QW toward the FM should enhance the p–d exchange interaction. In contrast to that, the p–d exchange coupling decreases when a reverse bias voltage (negative potential at the top electrode) is applied. This finding supports the earlier conclusions on the origin of the long-range p–d exchange interaction, which is not related to the penetration of the electronic wavefunctions into the FM layer14. Our data rather demonstrate and confirm the suggested mechanism of the phononic ac Stark effect for electric field control of the exchange interaction, which is essentially different from traditional concepts. The coupling strength correlates with the band bending in the quantum well region and can be explained in the frame of the exchange coupling mediated by elliptically polarized phonons, i.e. the phononic ac Stark effect. The electric field changes the detuning of the heavy–light-hole energy splitting of the QW acceptor with respect to the magnon–phonon resonance energy in the FM. While the present studies were performed at cryogenic temperatures, this condition likely does not represent a restriction for the coupling mechanism itself, but is merely caused by the detection through the QW optical properties (reduced values of the spin-flip Raman signal and circular polarization degree of the PL). Coherent propagation of acoustic phonons over micrometer distances at room temperature has been demonstrated24. Therefore, our results corroborate the feasibility of electrical control of the exchange interaction in hybrid ferromagnet-semiconductor nanostructures and can potentially be used for applications such as electric-field-effect magnetic memories.
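The numbers behind this discussion can be reproduced with a short sketch: the quadratic Stark shift of Eq. (6) using the Si acceptor susceptibility quoted earlier, and the Lorentzian of Eq. (8) with the SFRS fit parameters (A = 175 µeV, U0 = 0.7 V, U1 = 1.8 V). Both parameter sets are taken directly from the text.

```python
# Order-of-magnitude check of Eq. (6) and the Lorentzian fit of Eq. (8),
# using only parameter values quoted in the text.

a8 = 1e-10  # static acceptor susceptibility, eV cm^2 V^-2 (Si value, ref. 22)


def stark_shift(E_V_per_cm):
    """Quadratic Stark correction a8 * E^2 to Delta_lh, Eq. (6)."""
    return a8 * E_V_per_cm ** 2


def delta_pd_fit(U, A=175.0, U0=0.7, U1=1.8):
    """Eq. (8): Lorentzian bias dependence with SFRS parameters; result in ueV."""
    return A / (1.0 + (U - U0) ** 2 / U1 ** 2)


print(stark_shift(1e3) * 1e3, "meV")  # 0.1 meV at 10^3 V/cm, as in the text
print(delta_pd_fit(0.7))              # maximum of 175 ueV at the flat-band voltage
print(delta_pd_fit(0.7 - 1.8))        # half maximum one half-width below U0
```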
From a fundamental point of view, our achievement opens a qualitatively different way to control magnetic interactions via the gate-tunable phononic ac Stark effect, which can be extended to various magnetic systems.

## Methods

### Sample fabrication

The sample of Co/(Cd,Mg)Te/CdTe/(Cd,Mg)Te/CdTe:I/GaAs (Fig. 1a) was grown on a (100)-oriented GaAs substrate by molecular-beam epitaxy. The buffer between the substrate and the quantum well is a 10 μm layer of conductive CdTe doped with iodine (donor concentration of the order of 10¹⁸ cm⁻³). The quantum well structure consists of a 0.5 μm wide (Cd,Mg)Te barrier layer, a 10 nm CdTe layer and an 8 nm (Cd,Mg)Te spacer. On top a 4-nm-thick cobalt layer was deposited. In order to make electrical contact to the CdTe:I buffer layer, a mesa of 5 mm diameter was etched into the structure to a depth of more than 0.8 μm. One contact is wired to the CdTe:I buffer layer, and the second contact is located at the cobalt surface.

### Continuous wave PL

For continuous wave polarization-resolved PL spectroscopy the sample was excited by the linearly polarized (π) light of a titanium-sapphire laser. In order to avoid sample heating the laser power was kept below 4 mW cm⁻². The degree of circular polarization $$\rho _{\mathrm{c}}^{\mathrm{\pi }} = \left( {I_ + ^\pi - I_ - ^\pi } \right)/\left( {I_ + ^\pi + I_ - ^\pi } \right)$$ of the PL from the QW was detected, where $$I_ + ^\pi$$ and $$I_ - ^\pi$$ are the intensities of the σ+ and σ− components with right and left circular polarization, respectively. The polarization degree $$\rho _{\mathrm{c}}^{\mathrm{\pi }}$$ does not depend on the orientation of the linear laser polarization. To magnetize the interfacial ferromagnet, a small magnetic field BF (of the order of 100 mT) was applied in the Faraday geometry normal to the structure plane using a resistive magnet. The measurements were carried out at a temperature of 2 K.
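The polarization degree defined above is a simple intensity contrast. A one-line sketch with made-up intensity values:

```python
def circ_polarization(I_plus, I_minus):
    """rho_c = (I+ - I-) / (I+ + I-), as defined in the Methods."""
    return (I_plus - I_minus) / (I_plus + I_minus)


# Example with arbitrary detector counts: a 2% polarization, the magnitude
# observed at the acceptor band at U = 0 (Fig. 2c).
print(circ_polarization(51.0, 49.0))
```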
### Time-resolved PL

Time-resolved PL allows one to obtain information about the transient processes of decay of the photoexcited carriers and their spin relaxation. Here, the sample is excited by short optical pulses with a central photon energy of 1.69 eV using a self-mode-locked titanium-sapphire laser with a repetition frequency of 75.75 MHz. The pulse duration was 150 fs, the spectral width of the laser was 10 nm, and the average pump density was ~4 W cm⁻². The PL was dispersed with a 0.5 m focal length single monochromator to which a streak camera was attached for detection. The overall time resolution was about 20 ps. The experiments were carried out at a temperature of 2 K.

### Time-resolved pump–probe Kerr rotation

The coherent spin dynamics was measured by conventional time-resolved pump–probe Kerr rotation using a titanium-sapphire laser generating 1.5 ps pulses at a repetition frequency of 75.6 MHz (repetition period TR = 13.2 ns). Electron spin coherence was generated along the growth z-axis of the sample by circularly polarized pump pulses. The polarization of the beam was modulated between σ+ and σ− by a photo-elastic modulator operated at a frequency of 50 kHz. In order to avoid electron heating and delocalization effects the average pump density was kept at low levels of ≤5 W cm⁻². The probe beam was linearly polarized. The angle of its polarization rotation (θK) or the ellipticity after reflection of the beam from the sample was measured by a polarization-sensitive beamsplitter in conjunction with a balanced photodetector. Pump and probe beams had the same photon energy and were tuned to the energy of the exciton resonance. The sample was placed in the temperature insert of a vector magnet cryostat containing three superconducting split coils oriented orthogonally to each other.
This magnet allows one to ramp the magnetic field up to 3 T and to carry out measurements with different orientations of the magnetic field relative to the sample at temperatures from T = 1.7 K up to 300 K.

### Spin-flip Raman scattering (SFRS)

The experiments were performed using resonant excitation with a cw laser at photon energies corresponding to the PL band of excitons bound to neutral acceptors (A0X), which is located about 1 meV below the exciton optical transition (see Fig. 2a). We used an oblique backscattering Faraday geometry where the excitation/detection beams and the magnetic field were parallel to each other, while the sample growth z-axis was tilted by 20° with respect to the magnetic field direction. The Raman shift was measured at a temperature of 2 K in magnetic fields up to 10 T in crossed circular polarizations for excitation and detection. The SFRS spectra were dispersed by a Jobin-Yvon U-1000 monochromator equipped with a cooled GaAs photomultiplier.

## Data availability

The data that support the plots within this paper and other findings of this study are available from the corresponding authors upon reasonable request.

## References

1. Brataas, A., Kent, A. D. & Ohno, H. Current-induced torques in magnetic materials. Nat. Mater. 11, 372–381 (2012).
2. Nozaki, T. et al. Large voltage-induced changes in the perpendicular magnetic anisotropy of an MgO-based tunnel junction with an ultrathin Fe layer. Phys. Rev. Appl. 5, 044006 (2016).
3. Matsukura, F., Tokura, Y. & Ohno, H. Control of magnetism by electric fields. Nat. Nanotechnol. 10, 209–220 (2015).
4. Vaz, C. A. F. Electric field control of magnetism in multiferroic heterostructures. J. Phys.: Condens. Matter 24, 333201 (2012).
5. Cherifi, R. O. et al. Electric-field control of magnetic order above room temperature. Nat. Mater. 13, 345–351 (2014).
6. Korenev, V. L. Electric control of magnetic moment in a ferromagnet/semiconductor hybrid system. JETP Lett. 78, 564–568 (2003).
7. Zakharchenya, B. P. & Korenev, V. L. Integrating magnetism into semiconductor electronics. Phys. Uspekhi 48, 603–608 (2005).
8. Myers, R. C., Gossard, A. C. & Awschalom, D. D. Tunable spin polarization in III–V quantum wells with a ferromagnetic barrier. Phys. Rev. B 69, 161305(R) (2004).
9. Zaitsev, S. V. et al. Ferromagnetic effect of a Mn delta layer in the GaAs barrier on the spin polarization of carriers in an InGaAs/GaAs quantum well. JETP Lett. 90, 658–662 (2010).
10. Pankov, M. A. et al. Ferromagnetic transition in GaAs/Mn/GaAs/InxGa1−xAs/GaAs structures with a two-dimensional hole gas. JETP 109, 293–301 (2009).
11. Korenev, V. L. et al. Dynamic spin polarization by orientation-dependent separation in a ferromagnet-semiconductor hybrid. Nat. Commun. 3, 959 (2012).
12. Lee, B., Jungwirth, T. & MacDonald, A. H. Theory of ferromagnetism in diluted magnetic semiconductor quantum wells. Phys. Rev. B 61, 15606–15609 (2000).
13. Anh, L. D., Hai, P. N., Kasahara, Y., Iwasa, Y. & Tanaka, M. Modulation of ferromagnetism in (In,Fe)As quantum wells via electrically controlled deformation of the electron wave functions. Phys. Rev. B 92, 161201 (2015).
14. Korenev, V. L. et al. Long-range p-d exchange interaction in a ferromagnet-semiconductor hybrid structure. Nat. Phys. 12, 85–92 (2016).
15. Akimov, I. A. et al. Direct measurement of the long-range p-d exchange coupling in a ferromagnet-semiconductor Co/CdMgTe/CdTe quantum well hybrid structure. Phys. Rev. B 96, 184412 (2017).
16. Nova, T. F. et al. An effective magnetic field from optically driven phonons. Nat. Phys. 13, 132–137 (2017).
17. Zhu, H. et al. Observation of chiral phonons. Science 359, 579–582 (2018).
18. Kalitukha, I. V. et al. Interfacial ferromagnetism in a Co/CdTe ferromagnet/semiconductor quantum well hybrid structure. Phys. Solid State 60, 1578–1581 (2018).
19. Sapega, V. F. et al. Resonant Raman scattering due to bound-carrier spin flip in GaAs/AlGaAs quantum wells. Phys. Rev. B 50, 2510–2519 (1994).
20. Kittel, C. Interaction of spin waves and ultrasonic waves in ferromagnetic crystals. Phys. Rev. 110, 836–841 (1958).
21. Tucker, J. W. & Rampton, V. W. Microwave Ultrasonics in Solid State Physics (North Holland, Amsterdam, 1972).
22. Calvet, L. E., Wheeler, R. G. & Reed, M. A. Effect of local strain on single acceptors in Si. Phys. Rev. B 76, 035319 (2007).
23. Cohen-Tannoudji, C. & Dupont-Roc, J. Experimental study of Zeeman light shifts in weak magnetic fields. Phys. Rev. A 5, 968–984 (1972).
24. Maznev, A. et al. Lifetime of sub-THz coherent acoustic phonons in a GaAs-AlAs superlattice. Appl. Phys. Lett. 102, 041901 (2013).

## Acknowledgements

We acknowledge support by the Deutsche Forschungsgemeinschaft and the Russian Foundation for Basic Research in the frame of the ICRC TRR 160 (Projects C2, B2, and A1). Partial financial support from the Russian Foundation for Basic Research Grant No. 19-52-12034 NNIOa is acknowledged. The research in Poland was partially supported by the National Science Centre (Poland) through grants UMO 2018/30/M/ST3/00276 and UMO 2017/25/B/ST3/02966 and by the Foundation of Polish Science through the IRA Programme co-financed by the EU within SGOP. We acknowledge financial support by Technische Universität Dortmund/TU Dortmund University within the funding programme Open Access Publishing.

## Author information

V.L.K., I.V.K., I.A.A., V.F.S., E.A.Z., E.K., O.S.K., and D.K. performed the experiments and analyzed the data. V.L.K. developed the theoretical model. G.K., M.W., and T.W. fabricated the samples. N.D.I. and N.M.L. patterned the mesa structure, and T.K. prepared the contacts. V.L.K., I.A.A., D.R.Y., Yu.G.K, and M.B. co-wrote the paper. All authors discussed the results and commented on the paper. Correspondence to V. L. Korenev or I. A. Akimov.
## Ethics declarations

### Competing interests

The authors declare no competing interests.

Peer review information: Nature Communications thanks Pham Hai and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
How to Count Atoms

How do we count the number of atoms in the chemical formulas of elements and compounds? The symbol of an element represents one atom of that element, and the small number (subscript) at its bottom right tells you how many of that atom are present in the formula. Only values of 2 and above are written out; no subscript means 1. For example, one molecule of H2O has two atoms of hydrogen (H) and one atom of oxygen (O), and HCl has one atom of hydrogen (H) and one atom of chlorine (Cl). This post is about counting atoms in a chemical formula without involving your calculator; counting the actual number of atoms in a sample comes further down.

#1 » CH4. Notice there is no small number at the bottom right of C? That means 1 carbon atom. The 4 at the bottom right of H means 4 hydrogen atoms. In total, CH4 has 5 atoms: 1 C and 4 H.

#2 » 2H2O. The small 2 at the bottom right of H means 2 H, but the big 2 in front of H2O means there are 2 of the whole molecule, so we actually have 2 × 2 = 4 H. The same goes for O: 1 × 2 = 2 O. The rule: to find the number of atoms, multiply all the subscripts in the molecule by the coefficient.

#3 » (NH4)2O. The bracket is used to group a formula together; the 2 at the bottom right of the bracket means there are two groups of NH4. Inside the bracket there is 1 N, so 1 × 2 = 2 N. There are 4 H, so 4 × 2 = 8 H. Outside the bracket there is 1 O. The same idea applies to Al2(SO4)3: three (SO4) groups give 3 S and 12 O, and you still have the 2 Al atoms. (Why write NH4OH rather than NH5O? Because NH4OH makes it easy to identify the ions that make up this ionic compound: NH4+ and OH−.)

#4 » 3(NH4)2O. A number placed in front of the formula means there are 3 of the entire formula; it's the same as writing (NH4)2O three times. So N: 1 × 2 × 3 = 6 N. H: 4 × 2 × 3 = 24 H. O: 1 × 3 = 3 O.

#5 » FeC2O4⋅2H2O [iron(II) oxalate dihydrate]. There's a dot right smacked in the middle. The "⋅2H2O" is water of crystallization: 2 water molecules are trapped in the iron(II) oxalate crystal. (If you are curious about water of crystallization, you can read more about it on Wikipedia.) We can count the atoms before the dot (FeC2O4) in our sleep by now: 1 Fe, 2 C and 4 O. After the dot, 2H2O gives 2 × 2 = 4 H and 1 × 2 = 2 O. Since O appears both before and after the dot, add the two parts together: 4 + 2 = 6 O in total.

#6 » 5 K3[Fe(C2O4)3]⋅3H2O [potassium iron(III) oxalate trihydrate]. Basically, there are 5 of K3[Fe(C2O4)3]⋅3H2O. Starting with K: 3 × 5 = 15 K. For Fe there's only 1, so 1 × 5 = 5 Fe. For C: there are 2 C inside the round bracket, the 3 outside the bracket makes it 2 × 3 = 6, and the 5 in front of the entire term makes it 2 × 3 × 5 = 30 C. For O: before the dot, 4 × 3 = 12; after the dot, 1 × 3 = 3; that's 12 + 3 = 15 per formula unit, and 15 × 5 = 75 O. For H: 2 × 3 × 5 = 30 H. If you got those, congrats on your understanding!

A note on terminology: the molecular formula of a compound describes the number of atoms of each type in one molecule of that compound. The empirical formula only gives the ratio between the atoms present in the compound, not the exact number of each atom; it can be found from the mass percentages of each element.

Counting the actual number of atoms in a sample is done with the mole. The mole is basically just a large number that you can use instead of, say, an SI prefix: just as "kilo" means 1000, a mole means 6.022 × 10^23 of something (Avogadro's constant; to be precise, one mole is the number of atoms in 12 grams of carbon-12). Some consequences:

- Number of moles = mass / molar mass. For example, 1000 g of a compound with molar mass 151.001 g/mol is 1000 / 151.001 ≈ 6.62 mol.
- Number of particles = moles × 6.022 × 10^23. So 1.3 mol of H2SO4 contains 1.3 × 6.022 × 10^23 molecules, each of which contains one sulfur atom, two hydrogen atoms and four oxygen atoms. Likewise, 5 mol of H2 contains 5 × 6.022 × 10^23 ≈ 3.01 × 10^24 molecules, and twice that many H atoms.
- Combining the two: 2 g of NaCl (molar mass 58.44 g/mol) is 2 / 58.44 mol, which works out to about 2.1 × 10^22 formula units of NaCl. (If you are measuring an ionic compound you count formula units; for a molecular compound, molecules; for an element, atoms.)
- Molarity is something else entirely: it refers to the number of moles per litre of solution.

Some quick applications of atom counting:

- Total atoms in a molecule: CH3COOH has 1 + 3 + 1 + 2 + 1 = 8 atoms. (NH4)3PO4 has 3 N, 12 H, 1 P and 4 O, for a total of 20 atoms per formula unit.
- Putting moles and counting together: how many carbon atoms are in 34.5 g of caffeine, C8H10N4O2? The molar mass is about 194.2 g/mol, so 34.5 / 194.2 ≈ 0.178 mol; each molecule has 8 C, so 0.178 × 8 × 6.022 × 10^23 ≈ 8.6 × 10^23 carbon atoms.
- Electrons in a neutral compound: multiply each element's atomic number by its number of atoms and add them up. For KNO2 that is 19 + 7 + (2 × 8) = 42 electrons. (Group numbers tell you valence electrons: iodine and fluorine are both in Group 7A, so each has 7 valence electrons.)

One last aside: for small alkanes, the number of structural isomers is sometimes estimated with the shortcut 2^(n−4) + 1. For butane (C4H10), n = 4 gives 2^0 + 1 = 2 isomers. This works for the first few alkanes but breaks down for larger ones, so treat it as a mnemonic rather than a law.

Hopefully this post has helped you in one way or another in counting atoms. There are practice questions below if you want to build (or confirm) your confidence; each click will provide 5 randomized questions. If you're stuck somewhere, scroll up and slowly work your way back here.
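The bookkeeping in the examples above (subscripts, bracket multipliers, coefficients, hydrate dots) is mechanical enough to automate. Here is a minimal sketch of a formula parser, our own code rather than anything from the post, that applies exactly those rules:

```python
import re
from collections import Counter

def count_atoms(formula):
    """Count atoms in formulas like 'CH4', '3(NH4)2O' or '5 K3[Fe(C2O4)3]·3H2O'."""
    formula = formula.replace(" ", "")
    m = re.match(r"\d+", formula)           # coefficient for the whole formula
    overall = int(m.group(0)) if m else 1
    rest = formula[m.end():] if m else formula
    total = Counter()
    for part in re.split(r"[·.]", rest):    # split off hydrate parts at the dot
        m = re.match(r"\d+", part)          # local coefficient, e.g. the 2 in 2H2O
        coeff = int(m.group(0)) if m else 1
        body = part[m.end():] if m else part
        counts, _ = _parse(body, 0)
        for elem, n in counts.items():
            total[elem] += n * coeff * overall
    return dict(total)

def _parse(s, i):
    """Recursive-descent walk over element symbols and bracketed groups."""
    counts = Counter()
    while i < len(s) and s[i] not in ")]":
        if s[i] in "([":
            inner, i = _parse(s, i + 1)
            i += 1                          # step over the closing bracket
            i, mult = _number(s, i)
            for elem, n in inner.items():
                counts[elem] += n * mult
        else:
            sym = re.match(r"[A-Z][a-z]?", s[i:]).group(0)
            i += len(sym)
            i, mult = _number(s, i)
            counts[sym] += mult
    return counts, i

def _number(s, i):
    m = re.match(r"\d+", s[i:])
    return (i + m.end(), int(m.group(0))) if m else (i, 1)
```

For instance, `count_atoms("5K3[Fe(C2O4)3]·3H2O")` reproduces the tallies worked out by hand above (15 K, 5 Fe, 30 C, 75 O, 30 H).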
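The grams → moles → particles arithmetic can likewise be scripted in a few lines. This is our own sketch: the 58.44 g/mol figure for NaCl comes from the text, while the 194.19 g/mol molar mass used for the caffeine question is our own assumed value.

```python
AVOGADRO = 6.022e23  # particles per mole

def moles_from_grams(grams, molar_mass):
    # number of moles = mass / molar mass
    return grams / molar_mass

def particles_from_grams(grams, molar_mass):
    # grams -> moles -> molecules (or formula units)
    return moles_from_grams(grams, molar_mass) * AVOGADRO

# 2 g of NaCl (molar mass 58.44 g/mol): about 2.1e22 formula units
nacl_units = particles_from_grams(2.0, 58.44)

# carbon atoms in 34.5 g of caffeine, C8H10N4O2 (8 C per molecule;
# the 194.19 g/mol molar mass is our own figure, not from the post)
caffeine_carbons = 8 * particles_from_grams(34.5, 194.19)

# 5 mol of H2: molecules, and twice as many hydrogen atoms
h2_molecules = 5 * AVOGADRO
h_atoms = 2 * h2_molecules
```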
https://www.vasp.at/wiki/index.php?title=ML_FF_SION1_MB&oldid=11809
Requests for technical support from the VASP group should be posted in the VASP-forum.

# ML FF SION1 MB

The unit of ML_FF_SION1_MB is Å (Ångström). Test calculations showed that a broadening of the radial descriptor 1.5 times smaller than that of the angular descriptor (see ML_FF_SION2_MB) gives optimal results. The default value of ML_FF_SION1_MB is chosen such that ML_FF_SION2_MB becomes 0.5. This tag is not used if ML_FF_IBROAD1_MB=1 (which is not the default).
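A tiny sketch of the arithmetic implied above (our own illustration, not VASP source code): the radial broadening is the angular one divided by 1.5, and the default is fixed so that ML_FF_SION2_MB comes out as 0.5.

```python
def sion1_from_sion2(sion2_mb, ratio=1.5):
    # ML_FF_SION1_MB is 1.5 times smaller than ML_FF_SION2_MB
    return sion2_mb / ratio

# default: chosen so that ML_FF_SION2_MB comes out as 0.5
default_sion1 = sion1_from_sion2(0.5)  # about 0.333 Angstrom
```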
https://www.physicsforums.com/threads/factoring-problem-confused.520473/
# Homework Help: Factoring problem, confused

1. Aug 9, 2011

### Miike012

I added an attachment and highlighted my question... basically, I am confused about how they got there.

2. Aug 9, 2011

### Zryn

It's just a bit of rearranging.

Given y = $\sqrt{ax^{2} + bx + c}$, squaring both sides gives $y^{2}$ = a$x^{2}$ + bx + c.

So replace a$x^{2}$ + bx + c in the initial equation with $y^{2}$, replace the $\sqrt{ax^{2} + bx + c}$ part of the initial equation with y, and move q from one side of the equation to the other.

Presto chango, $y^{2}$ + py - q = 0.
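The substitution can be sanity-checked numerically. A sketch (our own, with made-up coefficients, not from the thread) that solves the quadratic in y first and then recovers x:

```python
import math

def solve_radical(a, b, c, p, q):
    """Solve a*x^2 + b*x + c + p*sqrt(a*x^2 + b*x + c) - q = 0.

    Substituting y = sqrt(a*x^2 + b*x + c) turns this into
    y^2 + p*y - q = 0; solve for y, then recover x.
    """
    roots_x = []
    disc_y = p * p + 4 * q
    if disc_y < 0:
        return roots_x
    for y in ((-p + math.sqrt(disc_y)) / 2, (-p - math.sqrt(disc_y)) / 2):
        if y < 0:                     # y is a square root, so it must be >= 0
            continue
        # now solve a*x^2 + b*x + (c - y^2) = 0 for x
        disc_x = b * b - 4 * a * (c - y * y)
        if disc_x < 0:
            continue
        roots_x += [(-b + math.sqrt(disc_x)) / (2 * a),
                    (-b - math.sqrt(disc_x)) / (2 * a)]
    return roots_x
```

With a = 1, b = 2, c = 1, p = 3, q = 10 the y-quadratic gives y = 2, so (x + 1)^2 = 4 and x = 1 or x = -3.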
https://practice.geeksforgeeks.org/problems/kth-ancestor-in-a-tree/1
Kth Ancestor in a Tree

Easy | Accuracy: 34.95% | Submissions: 7977 | Points: 2

Given a binary tree of size N, a node, and a positive integer K, your task is to complete the function kthAncestor(); the function should return the Kth ancestor of the given node in the binary tree. If no such ancestor exists, return -1.

Example:
Input: K = 2, Node = 4
Output: 1
Since K is 2 and the node is 4, we first locate the node and then walk up K ancestors. Here node 4 has 1, the root of the tree, as its 2nd ancestor.

Input: The first line of input contains the number of test cases T. Each test case includes a single line representing the tree as a string:
1. The values in the string are in level-order-traversal order, where numbers denote node values and the character "N" denotes a NULL child.
2. For example, the tree above is encoded as the string: 1 2 3 N N 4 6 N 5 N N 7 N

Output: For each test case, the function should return the Kth ancestor of the given node in the binary tree.

Constraints:
1 <= T <= 30
1 <= N <= 10^4
1 <= K <= 100

Example:
Input:
1 3
1 2 3
2 4
1 2 3 4 5
Output:
1
1

Explanation:
Test Case 1: The given tree is
  1
 / \
2   3
K = 1, given node = 3; the 1st ancestor of 3 is 1. (Test Case 2: K = 2, node = 4 in the tree 1 2 3 4 5; the 2nd ancestor of 4 is again 1.)
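One common way to implement kthAncestor() is to record the root-to-node path with a DFS and then index K steps back from the target. This is a sketch of that approach, not the official editorial solution:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def kth_ancestor(root, k, target):
    """Return the data of the k-th ancestor of `target`, or -1 if none exists."""
    def path_to(node, acc):
        # collect every node on the root-to-target path into acc
        if node is None:
            return False
        if node.data == target:
            return True
        acc.append(node)
        if path_to(node.left, acc) or path_to(node.right, acc):
            return True
        acc.pop()
        return False

    path = []
    if not path_to(root, path) or k > len(path):
        return -1
    return path[-k].data
```

The DFS is O(N); for repeated queries on a static tree, binary lifting (precomputed 2^i-th ancestors) would answer each query in O(log N) instead.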
http://appliedmechanics.asmedigitalcollection.asme.org/article.aspx?articleid=1417003
TECHNICAL PAPERS

# Residual Elastic Strains in Autofrettaged Tubes: Variational Analysis by the Eigenstrain Finite Element Method

Author and Article Information:

Alexander M. Korsunsky, Department of Engineering Science, University of Oxford, Parks Road, Oxford OX1 3PJ, UK (alexander.korsunsky@eng.ox.ac.uk)

Gabriel M. Regino, Department of Engineering Science, University of Oxford, Parks Road, Oxford OX1 3PJ, UK

Footnote: In the case of elastic–ideally plastic deformation, closer analysis of autofrettage in thick tubes shows that the plastic strain distribution has the form $\varepsilon^{*}(r) = A - D/r^{2}$. This distribution, however, may also be closely approximated by a parabolic distribution.

J. Appl. Mech 74(4), 717-722 (Aug 15, 2006) (6 pages) doi:10.1115/1.2711222
History: Received September 30, 2005; Revised August 15, 2006

## Abstract

Autofrettage is a treatment process that uses plastic deformation to create a state of permanent residual stress within thick-walled tubes by pressurizing them beyond the elastic limit. The present paper presents a novel analytical approach to the interpretation of residual elastic strain measurements within slices extracted from autofrettaged tubes. The central postulate of the approach presented here is that the observed residual stress and residual elastic strains are secondary parameters, in the sense that they arise in response to the introduction of permanent inelastic strains (eigenstrains) by plastic deformation. The problem of determining the underlying distribution of eigenstrains is solved here by means of a variational procedure for optimal matching of the eigenstrain finite element model to the observed residual strains reported in the literature by Venter, 2000, J. Strain Anal., 35, p. 459. The eigenstrain distributions are found to be particularly simple, given by one-sided parabolas. The relationship between the measured residual strains within a thin slice and those in a complete tube is discussed.
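For orientation, the elastic stress field that the autofrettage pressure acts against in a thick-walled tube of inner radius a and outer radius b is given by the classical Lamé solution. A minimal numerical sketch using those standard textbook formulas (context only, not taken from the paper):

```python
def lame_stresses(r, a, b, p):
    # Elastic radial and hoop stresses at radius r in a thick-walled
    # tube (inner radius a, outer radius b) under internal pressure p.
    k = p * a ** 2 / (b ** 2 - a ** 2)
    sigma_r = k * (1.0 - b ** 2 / r ** 2)      # radial stress
    sigma_theta = k * (1.0 + b ** 2 / r ** 2)  # hoop stress
    return sigma_r, sigma_theta
```

The boundary conditions serve as a check: the radial stress equals -p at the bore (r = a) and vanishes at the free outer surface (r = b), while the hoop stress is tensile and largest at the bore, which is why yielding (and hence the eigenstrain) starts there.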
## Figures

- Figure 1: Schematic illustration for the description of axisymmetric deformation of a thick-walled tube of internal radius a and external radius b under internal pressure p. Parameter c indicates the radius of the elastic–plastic boundary, and q is the pressure transmitted across this interface.
- Figure 2: A possible arrangement of autofrettaged tube slices with respect to the incident and diffracted beams. The dashed lines indicate the incident and diffracted beams; the arrow shows the scattering vector that indicates the orientation of the strain component being measured (radial in the present example).
- Figure 3: Radial residual elastic strain in specimen B: experimental measurements (markers) and eigenstrain model prediction (continuous curve).
- Figure 4: Hoop residual elastic strain in specimen B: experimental measurements (markers) and eigenstrain model prediction (continuous curve).
- Figure 5: Radial residual elastic strain in specimen C: experimental measurements (markers) and eigenstrain model prediction (continuous curve).
- Figure 6: Hoop residual elastic strain in specimen C: experimental measurements (markers) and eigenstrain model prediction (continuous curve).
- Figure 7: Eigenstrain profile in specimen B: the distribution determined by variational eigenstrain analysis (markers) and a parabolic fit (dashed curve).
- Figure 8: Eigenstrain profile in specimen C: the distribution determined by variational eigenstrain analysis (markers) and a parabolic fit (dashed curve).
https://phvu.net/2016/07/20/k-means-is-a-special-case-of-gaussian-mixtures/
# K-means is a special case of Gaussian Mixtures? A while ago I had a coffee chat with a former colleague at Arimo Inc., where we somehow talked about K-means and its properties. At some point I confidently claimed that K-means is just a special case of Gaussian mixture models, with all clusters having the same identity covariance matrix. I had no proof for that, but deep down I believed it was true, and my intuition assured me of it (c’mon, the K-means training algorithm is almost exactly the EM algorithm used to train GMMs). I was even tempted to sit down and write the maths out, but we went on discussing other stuff and forgot about K-means. Without a proof, I failed to convince my colleague, but the question still hung in my head. Plus, after the slides I made several years ago, I planned to write something serious about GMMs and their relatives, but have never managed to. Last night the question somehow popped up again. For god’s sake, I can’t handle too many questions simultaneously in my head, so I have to sort this thing out (since it seems to be the easiest one, compared to the others). The answer is actually written in Bishop’s book, section 9.3.2, which is excerpted below There we go. Take a GMM, make the covariance matrices of all the components $\displaystyle \epsilon \mathbf{I}$, where $\displaystyle \epsilon$ is a fixed constant shared by all the components, and take the limit as $\displaystyle \epsilon \rightarrow 0$: then we get K-means. Things get interesting when you don’t take the limit, in which case it becomes the Fuzzy C-means algorithm. If you take the limit but the covariance matrix is not constrained to be identity, then you get the elliptical K-means algorithm. Although my gut feeling was wrong (it missed the limit detail), at the end of the day, learning new things every day is a rewarding experience.
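To see the hardening effect concretely, here is a small NumPy sketch (not from the post; all names are mine) of the GMM E-step with a shared spherical covariance $\epsilon \mathbf{I}$: as $\epsilon$ shrinks, the soft responsibilities collapse onto the nearest center, which is exactly the K-means assignment rule.

```python
import numpy as np

def responsibilities(x, centers, eps):
    """E-step responsibilities for a GMM with equal weights and a
    shared spherical covariance eps * I (done in log space for stability)."""
    d2 = np.sum((centers - x) ** 2, axis=1)   # squared distances to each center
    log_r = -d2 / (2.0 * eps)                 # log of the unnormalized Gaussian
    log_r -= log_r.max()                      # shift to avoid underflow
    r = np.exp(log_r)
    return r / r.sum()

centers = np.array([[0.0, 0.0], [3.0, 0.0]])
x = np.array([1.0, 0.0])                      # closer to the first center

for eps in [10.0, 1.0, 0.1, 0.01]:
    print(eps, responsibilities(x, centers, eps))
# As eps -> 0 the responsibility vector approaches (1, 0): the soft
# E-step assignment hardens into K-means' nearest-center rule.
```

With `eps = 10` the point is shared almost evenly between the two clusters; with `eps = 0.01` it belongs to the nearest center with responsibility essentially 1, matching the limit argument from Bishop.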
http://mathhelpforum.com/calculus/79849-implict-functions.html
# Math Help - Implicit functions

1. ## Implicit functions

When doing these I am unsure of where to place dy/dx, for example in this problem $x^2+xy-y^3=xy^2$

2. Originally Posted by sk8erboyla2004: When doing these I am unsure of where to place dy/dx, for example in this problem $x^2+xy-y^3=xy^2$

If you are using implicit differentiation, take the derivative with respect to x of both sides. When you come across something with a y, use the chain rule. You should end up with something that has $\frac{dy}{dx}$ in it, and then make $\frac{dy}{dx}$ the subject. $\frac{d}{dx}(x^2 + xy - y^3) = \frac{d}{dx}(xy^2)$ $\frac{d}{dx}(x^2) + \frac{d}{dx}(xy) - \frac{d}{dx}(y^3) = \frac{d}{dx}(xy^2)$ $2x + x\frac{dy}{dx} + y - 3y^2\frac{dy}{dx} = y^2 + 2xy\frac{dy}{dx}$ Now just make $\frac{dy}{dx}$ the subject.
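To finish the problem (the last algebraic step, which the reply leaves to the reader): collect all the $\frac{dy}{dx}$ terms on one side and factor.

$2x + y - y^2 = 2xy\frac{dy}{dx} + 3y^2\frac{dy}{dx} - x\frac{dy}{dx}$

$\frac{dy}{dx} = \frac{2x + y - y^2}{2xy + 3y^2 - x}$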
https://www.physicsforums.com/threads/lap-fillet-weld-bending.226683/
# Lap fillet weld bending 1. Apr 5, 2008 ### Fantastic Fox I have to design a lap fillet weld joining a plate to the top flange of a beam. There will be a force acting down on the plate, so the weld will be in bending. How do I calculate the stress on the weld? l l <- Forces acting on the plate and beam ----------------- The plate is the length of the beam 2. Apr 5, 2008 ### PhanthomJay For cover plate weld design, you have to calculate the longitudinal shear at the intersection of the cover plate and top flange of the beam. Does tau=VQ/It sound familiar? 3. Apr 5, 2008 ### Fantastic Fox I haven't used that equation before, so I have a few questions: Is t the thickness of the flange or the breadth of the flange? Or, is it the thickness of the flange or the breadth of the plate? When I have calculated tau, where do I go after that? Force per inch of weld direct vertical shear = f = P/A? Then calculate force per inch of weld due to bending too? Add them, and divide by allowable force per inch of leg? tau = shear load / (throat thickness)(length) to calculate t? Thanks Last edited: Apr 5, 2008 4. Apr 5, 2008 ### PhanthomJay yow, it's been 20 years since I designed my last weld. Too much managing and scheduling, and too little engineering. My recollection is that you want the shear flow, VQ/I, at the plate/beam interface (VQ/Ib gives you the shear stress in the beam at this interface, where b is the width of the flange, not its thickness, but this is irrelevant for the weld design.) So calculate 'I' for the built-up member about its neutral axis, and Q (the moment area) of the plate (Q = area of cover plate times the distance from its centroid to the neutral axis), and your result will be in kips per inch; usually you will have 2 welds, one on either side of the plate, so divide the result by 2, and that's the load per weld, and size its thickness accordingly (usually it's a minimum size fillet weld, because shear stress is not that big near the top of the section).
BTW, the weld resists the bending load in shear, so this is weld shear stress, not bending stress. EDIT: I just noted your query about adding the shear flow stress to the vertical shear V/A stress. The answer is NO. V/A shear stress is the average of the shear stress across the entire section. You don't want the average, you want the actual VQ/I at the interface. So just calculate it, and size the weld to take that force/in, and make its length continuous over the full length of the cover plate. If the weld thickness comes out less than the minimum required by codes, it will be oversized, so instead of making it continuous, you can make the weld intermittent at so many inches on center, if you want. Last edited: Apr 6, 2008
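To make the VQ/I procedure described above concrete, here is a small Python sketch of the arithmetic; every number is a made-up placeholder for illustration, not from the thread, and real sizing must of course follow the governing weld code.

```python
# Shear-flow sizing for a cover-plate fillet weld (hypothetical numbers).
V = 40.0          # kips, vertical shear at the section considered
I = 1200.0        # in^4, moment of inertia of the built-up section
A_plate = 6.0     # in^2, cover-plate cross-sectional area
d = 9.0           # in, distance from plate centroid to the neutral axis

Q = A_plate * d               # in^3, first moment of the cover plate
q = V * Q / I                 # kip/in, total shear flow at the interface
q_per_weld = q / 2.0          # two fillet welds, one on each side of the plate

print(f"shear flow = {q:.2f} kip/in, per weld = {q_per_weld:.2f} kip/in")
# -> shear flow = 1.80 kip/in, per weld = 0.90 kip/in
```

Each weld is then sized so its allowable force per inch (from the applicable code) exceeds `q_per_weld`, which will often be governed by the minimum fillet size, as noted above.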
https://codeforces.com/problemset/problem/815/C
C. Karen and Supermarket

time limit per test: 2 seconds
memory limit per test: 512 megabytes
input: standard input
output: standard output

On the way home, Karen decided to stop by the supermarket to buy some groceries. She needs to buy a lot of goods, but since she is a student her budget is still quite limited. In fact, she can only spend up to b dollars. The supermarket sells n goods. The i-th good can be bought for c_i dollars. Of course, each good can only be bought once. Lately, the supermarket has been trying to increase its business. Karen, being a loyal customer, was given n coupons. If Karen purchases the i-th good, she can use the i-th coupon to decrease its price by d_i. Of course, a coupon cannot be used without buying the corresponding good. There is, however, a constraint with the coupons. For all i ≥ 2, in order to use the i-th coupon, Karen must also use the x_i-th coupon (which may mean using even more coupons to satisfy the requirement for that coupon). Karen wants to know the following. What is the maximum number of goods she can buy, without exceeding her budget b?

Input

The first line of input contains two integers n and b (1 ≤ n ≤ 5000, 1 ≤ b ≤ 10^9), the number of goods in the store and the amount of money Karen has, respectively. The next n lines describe the items. Specifically:

• The i-th line among these starts with two integers, c_i and d_i (1 ≤ d_i < c_i ≤ 10^9), the price of the i-th good and the discount when using the coupon for the i-th good, respectively.
• If i ≥ 2, this is followed by another integer, x_i (1 ≤ x_i < i), denoting that the x_i-th coupon must also be used before this coupon can be used.

Output

Output a single integer on a line by itself, the number of different goods Karen can buy, without exceeding her budget.
Examples

Input
6 16
10 9
10 5 1
12 2 1
20 18 3
10 2 3
2 1 5

Output
4

Input
5 10
3 1
3 1 1
3 1 2
3 1 3
3 1 4

Output
5

Note

In the first test case, Karen can purchase the following 4 items:

• Use the first coupon to buy the first item for 10 - 9 = 1 dollar.
• Use the third coupon to buy the third item for 12 - 2 = 10 dollars.
• Use the fourth coupon to buy the fourth item for 20 - 18 = 2 dollars.
• Buy the sixth item for 2 dollars.

The total cost of these goods is 15, which falls within her budget. Note, for example, that she cannot use the coupon on the sixth item, because then she should have also used the fifth coupon to buy the fifth item, which she did not do here. In the second test case, Karen has enough money to use all the coupons and purchase everything.
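One possible solution sketch (my own, not an official editorial): the coupon prerequisites form a tree rooted at item 1, so a subtree knapsack DP over (number of goods bought, whether this node's coupon is used) answers the question. If a node's coupon is unused, no coupon in its subtree can be used either, since every coupon requires its parent's coupon.

```python
import sys

def max_goods(n, b, goods):
    """goods[i] = (c, d, x): price, discount, 0-based prerequisite coupon
    (x = -1 for item 0). Returns the max number of goods buyable within b."""
    children = [[] for _ in range(n)]
    for i, (_, _, x) in enumerate(goods):
        if x >= 0:
            children[x].append(i)
    INF = float("inf")

    def merge(a, bb):
        # knapsack-style merge of two "min cost for j goods" tables
        out = [INF] * (len(a) + len(bb) - 1)
        for i, av in enumerate(a):
            if av == INF:
                continue
            for j, bv in enumerate(bb):
                if bv < INF and av + bv < out[i + j]:
                    out[i + j] = av + bv
        return out

    def dfs(v):
        c, d, _ = goods[v]
        dp0 = [0, c]         # v's coupon unused => no coupon anywhere below v
        dp1 = [INF, c - d]   # v's coupon used   => v is bought at a discount
        for u in children[v]:
            c0, c1 = dfs(u)
            dp0 = merge(dp0, c0)
            dp1 = merge(dp1, [min(p, q) for p, q in zip(c0, c1)])
        return dp0, dp1

    sys.setrecursionlimit(max(sys.getrecursionlimit(), n + 100))
    dp0, dp1 = dfs(0)
    return max(j for j in range(n + 1) if min(dp0[j], dp1[j]) <= b)

# the two samples from the statement (coupon indices shifted to 0-based)
ex1 = [(10, 9, -1), (10, 5, 0), (12, 2, 0), (20, 18, 3 - 1), (10, 2, 3 - 1), (2, 1, 5 - 1)]
ex2 = [(3, 1, -1), (3, 1, 0), (3, 1, 1), (3, 1, 2), (3, 1, 3)]
print(max_goods(6, 16, ex1))  # 4
print(max_goods(5, 10, ex2))  # 5
```

The merge is quadratic in the subtree sizes, giving O(n^2) overall, which fits n ≤ 5000 (a competitive submission would use an iterative DFS or a raised recursion limit for deep chains).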
https://physics.stackexchange.com/questions/591677/solving-expectation-value-of-product-of-position-and-momentum/591686
# Solving expectation value of product of position and momentum I have already worked out the expectation value of the product in the opposite order $$\langle x\,p_x\rangle$$. I'm now trying to work out the expectation value $$\langle p_x \, x\rangle$$. I've been trying to work it out from $$\int \psi^* \left( -i\hbar\, \frac{\partial}{\partial x}\right)x \,\psi\, dx$$ I can't seem to get the given answer: $$\langle p_x \,x\rangle^*-i\hbar$$ I was hoping someone could show me the intermediate steps, as I think I must be making mistakes in my integration by parts. The way you have written your equations can be confusing because I think you are missing some brackets to clarify the terms. But assuming that I understood what you mean, I actually think that if you already have the expectation value $$\langle\hat{x}\hat{p}\rangle$$, you do not need to perform a separate calculation to get the expectation value $$\langle\hat{p}\hat{x}\rangle$$. You can directly use the commutator: $$[\hat{x},\hat{p}]=i\hbar\Longrightarrow\hat{x}\hat{p}-\hat{p}\hat{x}=i\hbar\Longrightarrow\hat{p}\hat{x}=\hat{x}\hat{p}-i\hbar.$$ It then follows that $$\langle\hat{p}\hat{x}\rangle=\langle\hat{x}\hat{p}\rangle-i\hbar$$. Is that * by the expectation value meant to be an equals sign? If so, I think you are trying to obtain the commutator of the position and momentum operators, not the expectation value. That given answer definitely isn't correct; the commutator is given by the square brackets: $$[A,B] = AB-BA$$ $$[x,p_x]\psi = -[p_x, x]\psi = i\hbar\psi.$$ Computing the integral you can actually see this. $$-i\hbar\int \psi^*\frac{\partial}{\partial x} (x\psi)\, dx = -i \hbar \int\psi^* \left(\psi + x\frac{\partial \psi}{\partial x}\right)dx$$ $$= -i\hbar\int \psi^*\psi\, dx + \int \psi^* x \left(-i\hbar\frac{\partial}{\partial x} \psi\right) dx = -i\hbar + \langle x\,p_x \rangle,$$ where the last step assumes $$\psi$$ is normalized.
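The identity $$\langle\hat{p}\hat{x}\rangle=\langle\hat{x}\hat{p}\rangle-i\hbar$$ is also easy to check numerically. A quick sketch (with $$\hbar = 1$$ and a Gaussian wave packet of my own choosing, not from the question):

```python
import numpy as np

# Grid and a Gaussian wave packet with a momentum kick (hbar = 1).
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2 + 1j * 0.7 * x)

def integral(f):
    return np.sum(f) * dx  # Riemann sum; fine for smooth, decaying integrands

psi = psi / np.sqrt(integral(np.abs(psi) ** 2))  # normalize

def ddx(f):
    return np.gradient(f, dx)  # second-order finite-difference derivative

xp = integral(np.conj(psi) * x * (-1j) * ddx(psi))   # <x p>
px = integral(np.conj(psi) * (-1j) * ddx(x * psi))   # <p x>

print(xp - px)  # tends to i*hbar = 1j as the grid is refined
```

The difference comes out imaginary and equal to i to within the discretization error, matching the commutator argument above.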
https://arbital.greaterwrong.com/p/reflective_consistency?l=2rb
# Reflective consistency A decision system is “reflectively consistent” if it can approve the construction of similar decision systems. For example, if you have an expected utility satisficer (it either takes the null action, or an action with expected utility greater than $$\theta$$) then this agent can self-modify to any other design which also either takes no action, or approves a plan with expected utility greater than $$\theta.$$ A satisficer might also approve changing itself into an expected utility maximizer (if it expects that this self-modification itself leads to expected utility at least $$\theta$$) but it will at least approve replacing itself with another satisficer. On the other hand, a causal decision theorist given a chance to self-modify will only approve the construction of something that is not a causal decision theorist. A property satisfies the stronger condition of reflective stability when decision systems with that property only approve their own replacement with other decision systems with that property. For example, a paperclip maximizer will under ordinary circumstances only approve code changes that preserve the property of maximizing paperclips, so “wanting to make paperclips” is reflectively stable and not just reflectively consistent. Parents: • Vingean reflection The problem of thinking about your future self when it’s smarter than you.
https://zbmath.org/?q=an:0953.34068
# zbMATH — the first resource for mathematics

Eigenvalue problems for nonlinear differential equations on a measure chain.
(English) Zbl 0953.34068

Summary: Values of $\lambda$ are determined for which there exist positive solutions to the second-order differential equation on a measure chain, $$u^{\Delta\Delta}(t)+\lambda a(t) f(u(\sigma(t)))= 0,\quad t\in [0,1],$$ satisfying either the conjugate boundary conditions $u(0)= u(\sigma(1))= 0$ or the right focal boundary conditions $u(0)= u^\Delta(\sigma(1))= 0$, where $a$ and $f$ are positive valued, and both $\lim_{x\to 0^+} {f(x)\over x}$ and $\lim_{x\to \infty}{f(x)\over x}$ exist.

##### MSC:
- 34L05 General spectral theory for OD operators
- 34B24 Sturm-Liouville theory
- 34B15 Nonlinear boundary value problems for ODE

##### References:
[1] R. P. Agarwal and M. Bohner, Basic calculus on time scales and some of its applications, preprint. · Zbl 0927.39003
[2] Aulback, B.; Hilger, S.: Linear dynamic processes with inhomogeneous time scale. Math. Res. 59 (1990) · Zbl 0719.34088
[3] Deimling, K.: Nonlinear functional analysis. (1985) · Zbl 0559.47040
[4] Eloe, P. W.; Henderson, J.: Positive solutions and nonlinear (k,n-k) conjugate eigenvalue problems. Differential Equations Dynam. Systems 6, 309-317 (1998) · Zbl 1003.34018
[5] Erbe, L. H.; Hilger, S.: Sturmian theory on measure chains. Differential Equations Dynam. Systems 1, 223-246 (1993) · Zbl 0868.39007
[6] Erbe, L. H.; Peterson, A.: Green’s functions and comparison theorems for differential equations on measure chains. Dynam. Contin. Discrete Impuls. Systems 6, 121-137 (1999) · Zbl 0938.34027
[7] L. H. Erbe and A. Peterson, Positive solutions for a nonlinear differential equation on a measure chain, preprint. · Zbl 0963.34020
[8] Hilger, S.: Analysis on measure chains — a unified approach to continuous and discrete calculus. Results Math. 18, 18-56 (1990) · Zbl 0722.39001
[9] Henderson, J.; Wang, H.: Positive solutions for nonlinear eigenvalue problems. J. Math. Anal. Appl. 208, 252-259 (1997) · Zbl 0876.34023
[10] Kaymakcalan, B.; Lakshmikantham, V.; Sivasundaram, S.: Dynamical systems on measure chains. (1996) · Zbl 0869.34039
[11] Krasnosel’skii, M. A.: Positive solutions of operator equations. (1964)
[12] S. Lauer, Positive Solutions of Boundary Value Problems for Nonlinear Difference Equations, Ph.D. dissertation, Auburn University, 1997. · Zbl 0883.39003
https://www.rdocumentation.org/packages/stats/versions/3.2.5/topics/integrate
# integrate

##### Integration of One-Dimensional Functions

Adaptive quadrature of functions of one variable over a finite or infinite interval.

Keywords: utilities, math

##### Value

A list of class "integrate" with components:

- value: the final estimate of the integral.
- abs.error: estimate of the modulus of the absolute error.
- subdivisions: the number of subintervals produced in the subdivision process.
- message: "OK" or a character string giving the error message.
- call: the matched call.

##### Note

Like all numerical integration routines, these evaluate the function on a finite set of points. If the function is approximately constant (in particular, zero) over nearly all its range it is possible that the result and error estimate may be seriously wrong.

When integrating over infinite intervals do so explicitly, rather than just using a large number as the endpoint. This increases the chance of a correct answer -- any function whose integral over an infinite interval is finite must be near zero for most of that interval.

For values at a finite set of points to be a fair reflection of the behaviour of the function elsewhere, the function needs to be well-behaved, for example differentiable except perhaps for a small number of jumps or integrable singularities.

f must accept a vector of inputs and produce a vector of function evaluations at those points. The Vectorize function may be helpful to convert f to this form.

##### Source

Based on QUADPACK routines dqags and dqagi by R. Piessens and E. deDoncker--Kapenga, available from Netlib.

##### References

R. Piessens, E. deDoncker--Kapenga, C. Uberhuber, D. Kahaner (1983) Quadpack: a Subroutine Package for Automatic Integration; Springer Verlag.
##### Aliases

- integrate
- print.integrate

##### Examples

```r
library(stats)
integrate(dnorm, -1.96, 1.96)
integrate(dnorm, -Inf, Inf)

## a slowly-convergent integral
integrand <- function(x) {1/((x+1)*sqrt(x))}
integrate(integrand, lower = 0, upper = Inf)

## don't do this if you really want the integral from 0 to Inf
integrate(integrand, lower = 0, upper = 10)
integrate(integrand, lower = 0, upper = 100000)
integrate(integrand, lower = 0, upper = 1000000, stop.on.error = FALSE)

## some functions do not handle vector input properly
f <- function(x) 2.0
try(integrate(f, 0, 1))
integrate(Vectorize(f), 0, 1)                     ## correct
integrate(function(x) rep(2.0, length(x)), 0, 1)  ## correct

## integrate can fail if misused
integrate(dnorm, 0, 2)
integrate(dnorm, 0, 20)
integrate(dnorm, 0, 200)
integrate(dnorm, 0, 2000)
integrate(dnorm, 0, 20000)  ## fails on many systems
integrate(dnorm, 0, Inf)    ## works
```

Documentation reproduced from package stats, version 3.2.5, License: Part of R 3.2.5
https://answers.ros.org/answers/340499/revisions/
# Revision history [back]

As you said, the pose is determined by both orientation and position. This is exactly what they have done in the tutorial:

target_pose1.position.x = 0.28;
target_pose1.position.y = -0.2;
target_pose1.position.z = 0.5;

These 3 lines define the position.

target_pose1.orientation.w = 1.0

This line sets the orientation (quaternion). They haven't defined the remaining 3 components of the quaternion, so they are left at their default zero values. If you want, you can define the remaining components as per your requirements:

target_pose1.orientation.x = ...
target_pose1.orientation.y = ...
target_pose1.orientation.z = ...
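If you need a non-trivial orientation, the four quaternion components are usually computed from roll/pitch/yaw angles. A standalone sketch in plain Python (in real ROS code you would use a helper such as tf's quaternion_from_euler rather than rolling your own):

```python
import math

def quaternion_from_euler(roll, pitch, yaw):
    """Quaternion (x, y, z, w) from ZYX roll/pitch/yaw angles in radians."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    x = sr * cp * cy - cr * sp * sy
    y = cr * sp * cy + sr * cp * sy
    z = cr * cp * sy - sr * sp * cy
    w = cr * cp * cy + sr * sp * sy
    return x, y, z, w

print(quaternion_from_euler(0.0, 0.0, 0.0))        # identity rotation
print(quaternion_from_euler(0.0, 0.0, math.pi))    # 180-degree yaw
```

The identity rotation gives (0, 0, 0, 1), which is exactly the w = 1.0, x = y = z = 0 pose used in the tutorial; the four values returned are what you assign to orientation.x, .y, .z, .w.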
https://issues.apache.org/jira/browse/SQOOP-3066
# Introduce an option + env variable to enable/disable SQOOP-2737 feature ## Details • Type: Improvement • Status: Resolved • Priority: Critical • Resolution: Fixed • Affects Version/s: 1.4.6 • Fix Version/s: • Component/s: None • Labels: None ## Description After SQOOP-2737, several users found that their Sqoop commands were failing because the original commands were not phrased with case-sensitive table/column/schema names in mind. There has also been another outcome of this feature: the "--split-by" option no longer accepts Oracle functions (e.g. MOD(col_name,4)), because after correct escaping+quoting the expression is handled by Oracle as a column name instead of an expression to evaluate. My goal here is to introduce an option to turn on/off the (fully proper and industry-standard) escaping implemented in SQOOP-2737, and also to add environment-variable support for it, so that users would have a backward-compatible fallback plan without changing their commands. I also plan to implement an argument for supporting split-by with database functions/expressions, so that the escaping and the split-by-expression features would become independent. ## Attachments 1. SQOOP-3066.patch 40 kB Attila Szabo ## People • Assignee: Attila Szabo Reporter: Attila Szabo
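This is not Sqoop's actual implementation, but a minimal Python sketch (helper name is mine) of the failure mode described above: once identifiers are wrapped in Oracle-style double quotes, an expression passed to --split-by stops being an expression and becomes a literal column name.

```python
def escape_identifier(name, quote='"'):
    # Naive Oracle-style identifier escaping: wrap in double quotes and
    # double any embedded quote characters. A sketch, not Sqoop's code.
    return quote + name.replace(quote, quote * 2) + quote

# A plain column name survives escaping with its case preserved:
col = escape_identifier("MixedCaseCol")
# But an expression gets quoted wholesale, so the database now looks for
# a column literally named MOD(col_name,4) instead of evaluating MOD:
expr = escape_identifier("MOD(col_name,4)")
```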
2019-06-25 08:06:16
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9202246069908142, "perplexity": 14089.523832434703}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999814.77/warc/CC-MAIN-20190625072148-20190625094148-00323.warc.gz"}
https://hobbitsadventure.wordpress.com/2012/11/05/paying-myself-to-study-an-adventure-into-a-nerdy-world/
In my ample free time, I’ve been studying to retake the MCAT in April.  But it is quite boring, and playing video games is a lot more appealing. Speaking of video games, last week I started to play Legend of Zelda: Spirit Tracks again.  I’ve had the game for a few years, but never finished it.  In the game you travel by train, and you can trade various treasures for new train parts.  Before I beat the final boss, I wanted to collect all the train parts for the sake of OCD completion.  But collecting the necessary treasures often entails too many hours of repetitive searching and minigames that quickly lose their novelty. I was playing a minigame that I got so good at that I didn’t need to pay attention to it anymore, and thus had lots of brainpower I could devote to thinking about why I was doing this to myself.  Collecting treasures in a Zelda game is so tedious, but I push myself to do it.  Wouldn’t it be nice to push yourself to study even when it gets tedious? Then I observed 2 things: 1)  There is some reward for performing repetitive, tedious things.  Treasures → train parts. 2)  Since your reward (treasures) is to an extent based on chance, it makes it a bit of a gamble, which is known to be addictive. So I came up with a reward system to provide incentives for studying. If I achieve my daily goal of studying, I will pay myself my own money to spend on buying music (yes, I know it’s my own money, but my non-rational side won’t care), based on the formula $p = rand[0,2].$ And at any time I can take the money I’ve collected and gamble it according to $g(p,y) = \mu p^{\mu y+1}$, where $y=rand[0,1]$ and $\mu$ is the gamble constant (I set it to 0.45). In English, it means I earn anywhere from 0 to 2 dollars (each amount is equally likely to occur).  And $g(p,y)$ means my gamble payoff varies depending on what $y$ comes out as, which depends on chance. I busted out my probability notes from college, and I did some math to figure out how this would work out.  
To spare those who don’t share my excitement for math, I put the math as an appendix to this post. In summary, the math shows that on a given day, I can expect to earn $1. And if I choose to gamble my money p, I can expect to earn $\frac{p^{1.45}-p}{\ln{p}}$, which approximately equals p at p = $24.  So if I’m smart, I need to gamble only when I have saved up at least $24. *** I had way too much fun coming up with this model, solving it mathematically, and writing a program in Python to generate the payoffs (click here to download my .py file). In fact, this was so much fun that I spent like 3.5 hrs doing this instead of studying for the MCAT… oh the irony of life. And of course, on the first day of achieving my study goals, my program “randomly” decided to reward me with $0.00. Simeon Koh *** <Mathematical Analysis of Simeon’s Study Incentive for Cool People Who Like Math> If p~Unif[0,2] (p is uniformly distributed from 0 to 2), then the density f(p) is 1/2 for 0≤p≤2 and 0 everywhere else. So the expected (i.e. average) value of p is $E[p] =\int_{-\infty}^{\infty}p\,f(p) \, dp$ $=\int_{0}^{2}p\cdot\tfrac{1}{2} \, dp$ $=\frac{1}{4}p^2 \bigg|_{0}^{2}$ $=\frac{1}{4}(4)-0$ $=1$. Similarly, since the density f(y) = 1 for 0≤y≤1 and 0 everywhere else, the expected value of g is $E[g] =\int_{-\infty}^{\infty}g(p,y)\,f(y) \, dy$ $=\int_{0}^{1}\mu p^{\mu y+1} \, dy$ $=\int_{u=1}^{u=\mu+1} p^{u} \, du, \text{ if } u=\mu y+1 \text{ and } du=\mu \, dy$ $=\frac{p^u}{\ln{p}} \bigg|_{1}^{\mu+1}$ $=\frac{p^{\mu+1}-p}{\ln{p}}$.
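The reward scheme above is easy to simulate. A small Python sketch (function names are mine, with μ = 0.45 as in the post), including the closed-form expected gamble payoff derived in the appendix:

```python
import math
import random

MU = 0.45  # the gamble constant chosen in the post

def daily_payout(rng=random):
    # p ~ Unif[0, 2]; by the appendix its expected value is exactly $1/day.
    return rng.uniform(0.0, 2.0)

def gamble(p, rng=random):
    # Payoff g(p, y) = mu * p**(mu*y + 1) with y ~ Unif[0, 1].
    y = rng.uniform(0.0, 1.0)
    return MU * p ** (MU * y + 1)

def expected_gamble(p):
    # Closed form from the appendix: E[g] = (p**(mu+1) - p) / ln(p).
    return (p ** (MU + 1) - p) / math.log(p)

# Break-even check: at about $24 the expected gamble payoff equals the stake.
breakeven = expected_gamble(24.0)
```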
2017-07-24 20:42:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 16, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4341016709804535, "perplexity": 1767.399963719816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424910.80/warc/CC-MAIN-20170724202315-20170724222315-00625.warc.gz"}
http://physics.stackexchange.com/questions/15508/the-born-oppenheimer-approximation-and-muonic-molecules
# The Born-Oppenheimer approximation and muonic molecules Does the Born-Oppenheimer approximation fail for muonic molecules (i.e. molecules where one or more electrons are replaced with muons)? - Excellent question :-) – David Z Oct 8 '11 at 17:43 Depends on your definition of "fails". The accuracy of the Born-Oppenheimer approximation is determined by the smallness of the electron/nucleus mass ratio. For hydrogen this ratio is $\approx 1/1800$. Replace the electron mass by the muon mass, which is about 200 times larger, and you get $\approx 1/9$, which gives you a rough estimate for the relative accuracy in the case of muonic molecules. - Which means the approximation still works, but convergence is much slower. You need more calculation loops. – Georg Oct 8 '11 at 12:21 @Georg: Well, most of computational chemistry never goes beyond the 0th order of Born-Oppenheimer; so $1/9$ is really 200 times worse news than $1/1800$. Of course, it all depends on the particular problem. – Slaviks Oct 8 '11 at 12:34 Is it a power expansion or something exponential like $1+e^{-\frac{a}{\epsilon}}+...$, $\epsilon = m_e/m_{nucl}$ (why is it called an "adiabatic" approximation)? – Vladimir Kalitvianski Oct 9 '11 at 13:15 @VladimirKalitvianski: BO treats nuclei as static and allows separation of (fast) electron dynamics and (slow) nucleus dynamics. Going beyond BO requires taking into account nucleus-electron entanglement and thus greatly expands the Hilbert space that needs to be approximated. See more on wikipedia. – Slaviks Oct 9 '11 at 16:05 I did not find the answer. Is it a series like $1+a\epsilon+...$ or not? Where does that $1/9$ stand? – Vladimir Kalitvianski Oct 9 '11 at 17:55
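The arithmetic behind the accepted answer is a one-liner; a quick Python check using particle masses in electron-mass units (rounded CODATA values):

```python
# Rough particle masses in units of the electron mass (rounded CODATA values).
M_PROTON_OVER_ME = 1836.15
M_MUON_OVER_ME = 206.77

# Ordinary hydrogen: the small parameter controlling Born-Oppenheimer accuracy.
ratio_electron = 1.0 / M_PROTON_OVER_ME         # about 1/1836
# Muonic hydrogen: the muon is roughly 200 times heavier than the electron,
# so the "small" parameter grows to about 1/9.
ratio_muon = M_MUON_OVER_ME / M_PROTON_OVER_ME  # about 1/9
```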
2015-11-27 22:57:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.881765604019165, "perplexity": 785.1794251553175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398450659.10/warc/CC-MAIN-20151124205410-00351-ip-10-71-132-137.ec2.internal.warc.gz"}
https://iesc.io/608/S19/ex05/power3
# Power III Spring 2019 The questions below are due on Sunday March 10, 2019; 11:59:00 PM. You are not logged in. Note that this link will take you to an external site (https://oidc.mit.edu) to authenticate, and then you will be redirected back to this page. Back to Exercise 05 ## 1) Setup Let's assume we have a minimal electronic system that comprises: • A microcontroller • A battery and battery-management board • A communications module • A sensor The details of what the modules do (what they sense, how they communicate, etc.) are not important. What is important is the power they use in different states. ### 1.1) The Battery and Battery Management Board The battery and battery management board can be lumped together into a single object that can provide a nominal voltage of 3.3V with a capacity of 410 mAh. ### 1.2) The Microcontroller The microcontroller runs at 3.3V and has two states: • Running: Standard operation state which is required for actively controlling the communication module and/or making sensor measurements. The current consumed in this state is 7 mA. • Sleep: Low-power state. We cannot run the communication module or make sensor measurements from this state. Basically, if the microcontroller is sleeping, then the communication module must be sleeping as well. The current consumed in this state is 200 μA. The microcontroller can immediately switch between these two states with no transitional power deviations (i.e. can go from consuming 200 μA to 7 mA). The microcontroller has 2000 bytes of onboard storage available for holding data from the sensor. ### 1.3) The Sensor The sensor can be queried by the microcontroller for measurements using an SPI protocol. The sensor runs on 3.3V. The sensor board is purple. The sensor is always in the ON state, and when in that state it consumes 6 mA. Regardless of the amount of data in a measurement, the microcontroller can request and receive a measurement from the sensor in 1 millisecond. 
### 1.4) The Communication Module This device is responsible for communication. It runs on 3.3V and has the following states: • Wake: Standard operation state, consuming 5 mA. The other components can do other work while the communication module is in this state. • Sleep: Low-power maintenance state requiring 300 μA. The device cannot send data from this state. • Send: This is a state where data is being sent. When in it, it consumes 76 mA. The behavior of this state is dictated by certain rules: The communication module has nothing to do with collecting a measurement from the sensor. It can be in sleep mode while the microcontroller is taking a measurement from the sensor. The duration in the Send state is based on how long the message being sent is. No matter how many bytes the message is, every complete transmission requires 5 ms up front and 3 ms at the end of the message. In the middle of these two fixed durations, the duration to send the message can vary. Every 100 bytes sent requires an additional 1 ms. This is not a continuous ratio, however, but instead a step function because of some requirements of the communication module. 1 to 100 bytes requires 1 ms, 101 to 200 bytes requires 2 ms, and so on. This pattern exists up to a 2,000 byte message, at which point the communication module can no longer send something that long. In this situation the message needs to be broken down into multiple transmissions. How long in Send state for 50 byte message (in milliseconds)? How long in Send state for 51 byte message (in milliseconds)? How long in Send state for 151 byte message (in milliseconds)? In addition, the communication module cannot necessarily change instantaneously between states. Upon moving from Sleep to Wake, the device must remain in Wake for 20ms before it can move to the Send state (for data transmission). 
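The Send-state timing rules above amount to a step function; a short Python sketch (the helper name is mine, not part of the exercise):

```python
import math

def send_duration_ms(nbytes):
    # Time spent in the Send state for a single transmission of nbytes:
    # 5 ms preamble + 3 ms trailer + 1 ms per started block of 100 bytes.
    if not 1 <= nbytes <= 2000:
        raise ValueError("messages over 2000 bytes must be split up")
    return 5 + 3 + math.ceil(nbytes / 100)
```

For example, send_duration_ms(50) and send_duration_ms(51) both give 9 ms (both fall in the 1-to-100-byte step), while send_duration_ms(151) gives 10 ms.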
During the 20ms between the communication module waking up and (perhaps) sending data, the microcontroller can perform other tasks, such as processing sensor readings. ## 2) Situations Let's analyze optimal power performance in a variety of conditions: ### 2.1) Situation One Let's assume each measurement from our sensor is 32 bytes long and our current application requires a measurement to be collected every 50 milliseconds. The maximum latency we will tolerate between a measurement and it being transmitted to the base station is 50 milliseconds (meaning once we make a measurement from our sensor, we'll only be happy if that gets sent within 50ms via the communication module. Otherwise we fail). How long will the system last if we optimize for power efficiency in hours? ### 2.2) Situation Two Let's assume each measurement from our sensor is 32 bytes long and our current application requires a measurement to be collected every 500 milliseconds. The maximum latency we will tolerate between a measurement and it being transmitted to the base station is also 500 milliseconds. How long will the system last if we optimize for power efficiency in hours? ### 2.3) Situation Three Let's assume each measurement from our sensor is 32 bytes long and our current application requires a measurement to be collected every 50 milliseconds. The maximum latency we will tolerate between a measurement and it being transmitted to the base station is 500 milliseconds. How long will the system last if we optimize for power efficiency in hours? ### 2.4) Situation Four Let's assume each measurement from our sensor is 32 bytes long and our current application requires a measurement to be collected every 50 milliseconds. There is no time pressure on when to report data to our base station, so you should report the data as rarely as possible. Note the microcontroller can only hold 2000 bytes of data and you cannot send fractions of a measurement to our base station. 
How long will the system last if we optimize for power efficiency in hours? ### 2.5) Situation Five Let's assume each measurement from our sensor is 64 bytes long and our current application requires a measurement to be collected every 50 milliseconds. There is no time pressure on when to report data to our base station, so you should report the data as rarely as possible. Note the microcontroller can only hold 2000 bytes of data and you cannot send fractions of a measurement to our base station. How long will the system last if we optimize for power efficiency in hours? ### 2.6) Situation Six Let's assume each measurement from our sensor is 32 bytes long and our current application requires a measurement to be collected once every second. There is no time pressure on when to report data to our base station, so you should report the data as rarely as possible. How long will the system last if we optimize for power efficiency in hours? Back to Exercise 05 This page was last updated on Sunday March 03, 2019 at 03:01:41 PM (revision 2efb58c).
2019-05-20 08:35:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23399856686592102, "perplexity": 1563.3130453822778}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255837.21/warc/CC-MAIN-20190520081942-20190520103942-00073.warc.gz"}
https://math.stackexchange.com/questions/3576961/homotopy-type-of-simplicial-complexes-and-different-notions-of-maps-of-simplicia
# Homotopy type of simplicial complexes and different notions of maps of simplicial complexes $$\textbf{Definitions}$$ A simplicial complex is a set $$V$$, called its set of vertices, together with a subset $$\Sigma_V \subset 2^V$$ such that the sets in $$\Sigma_V$$ cover $$V$$ and $$\Sigma_V$$ is closed under taking arbitrary subsets, i.e. if $$A \subset B \in \Sigma_V$$ then $$A \in \Sigma_V$$. A simplicial complex is also a topological space with set of points $$V$$ and topology $$\Sigma_V$$. For two simplicial complexes with vertex sets $$V$$ and $$W$$, call a function of sets $$f: V \rightarrow W$$ a continuous map if it is continuous w.r.t. the topologies $$\Sigma_V$$ and $$\Sigma_W$$, and a simplicial map if the image of a set in $$\Sigma_V$$ is always in $$\Sigma_W$$. You can also associate a topological space $$\mid V\mid$$ to a simplicial complex, called the geometric realization of $$V$$. $$\textbf{Questions}$$ Are there any examples of simplicial maps that aren't continuous maps? What about continuous maps that aren't simplicial? Does it matter if we only restrict to finite vertex sets? This question was inspired by Are the homotopy groups of the space $$(V,\Sigma_V)$$ equal to the homotopy groups of $$\mid V \mid$$? Are they weakly equivalent? Homotopy equivalent? This particular question was inspired by the recent post How many homotopy types do you get from three points? I can't 100% understand the answer because he talks about simplicial complexes being homeomorphic to the nerve of some poset, which is a simplicial set? • What do you mean by "A simplicial complex is also a topological space with set of points $V$ and topology $\Sigma$"? The family $\Sigma$ need not be a topology, as unions of simplices need not be simplices. – guidoar Mar 11 '20 at 3:24 • Oh right... You're 100% correct. 
@Guido – Noel Lundström Mar 11 '20 at 19:57 • I'm trying to come up with a functor $F:\text{SimpComplex} \rightarrow \textbf{Top}$ such that the underlying set of points of $FV$ is the set of vertices of $V$ and such that $FV$ has the same homotopy groups as $\mid V \mid$, but maybe this isn't as easy as I thought? – Noel Lundström Mar 11 '20 at 20:21 • @NoelLundström: No such functor exists. For instance, there is a simplicial complex with three vertices with nontrivial $\pi_1$, but there is no space with three points with nontrivial $\pi_1$. There is a functor almost like that which instead sends a simplicial complex to a space whose underlying set is its set of simplices (with the topology corresponding to the inclusion order on the simplices). – Eric Wofsey Mar 11 '20 at 20:41 • Where can I find more information on this functor you are describing? Sounds interesting... Is there something similar for a simplicial set $X$ as well? Maybe define an ordering on $\sqcup_i X_i$ with $X_m \ni \phi \leq \sigma \in X_n$ if there exists a $\theta: [m] \rightarrow [n]$ with $X\theta (\sigma) = \phi$? – Noel Lundström Mar 11 '20 at 20:54 I think that the confusion comes from two different meanings of the term "simplicial complex". The pair $$X=(V,\sum_V)$$ that you've defined is typically called an abstract simplicial complex, and in general it is not a topological space. The only case when $$(V,\sum_V)$$ is a topological space is when $$\sum_V=2^V$$, because that's the only situation when $$V\in\sum_V$$. But indeed, every abstract simplicial complex $$X$$ induces a geometric realization $$|X|$$ which is a (concrete) simplicial complex. And every abstract simplicial function $$f:X\to Y$$ (i.e. $$f$$ maps $$\sum_X$$ to $$\sum_Y$$) indeed induces a continuous simplicial map $$|f|:|X|\to|Y|$$. It is not hard to see that not every continuous map $$g:|X|\to|Y|$$ arises in that way, not even up to homotopy, although up to homotopy $$g$$ can be approximated by simplicial maps. 
I can't 100% understand the answer because he talks about simplicial complexes being homeomorphic to the nerve of some poset, which is a simplicial set? The nerve of some poset is indeed a (concrete) simplicial complex. For a given poset $$P$$ you associate an $$n$$-simplex with every chain $$a_1<\cdots<a_n$$ in $$P$$. This is a topological space and so we can talk about homeomorphism. And this is the context of the answer you referred to. There is an abstract version as well. If $$P$$ is a poset then we can associate an abstract simplicial complex with it by taking $$\sum\nolimits_P=\big\{\{a_1,\ldots,a_n\}\subseteq P\ \big|\ a_1<\cdots<a_n\big\}$$ and setting $$A(P)=\big(P, \sum\nolimits_P\big)$$ In that situation the nerve of $$P$$ is simply $$|A(P)|$$. But let's stay in the abstract context. We can ask whether every abstract simplicial complex $$X$$ is isomorphic to $$A(P)$$ for some poset $$P$$. By "isomorphism" we understand an abstract simplicial map $$f:X\to A(P)$$ which has an abstract simplicial inverse. But this doesn't touch any notion of topology.
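For finite vertex sets the abstract notions above are easy to experiment with. A small Python sketch (representations and names are mine) that builds abstract simplicial complexes as sets of frozensets and checks whether a vertex map is simplicial:

```python
from itertools import combinations

def close_downward(faces):
    # The abstract simplicial complex spanned by the given maximal faces:
    # take all subsets of each face (including the empty simplex).
    complex_ = set()
    for face in faces:
        elems = list(face)
        for r in range(len(elems) + 1):
            complex_.update(frozenset(c) for c in combinations(elems, r))
    return complex_

def is_simplicial(f, sigma_X, sigma_Y):
    # f is a dict sending vertices of X to vertices of Y; the map is
    # simplicial iff the image of every simplex of X is a simplex of Y.
    return all(frozenset(f[v] for v in s) in sigma_Y for s in sigma_X)

# X: boundary of a triangle (three edges, no 2-face) on vertices 0, 1, 2.
sigma_X = close_downward([(0, 1), (1, 2), (0, 2)])
# Y: a single edge on vertices "a", "b"; Z: two isolated vertices, no edge.
sigma_Y = close_downward([("a", "b")])
sigma_Z = close_downward([("a",), ("b",)])

f = {0: "a", 1: "b", 2: "a"}             # collapse vertices 0 and 2
ok = is_simplicial(f, sigma_X, sigma_Y)  # every edge image is a simplex of Y
bad = is_simplicial(f, sigma_X, sigma_Z) # {a, b} is not a simplex of Z
```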
2021-03-06 02:57:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 46, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.897773027420044, "perplexity": 197.13394304084656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178374217.78/warc/CC-MAIN-20210306004859-20210306034859-00063.warc.gz"}
https://cs.stackexchange.com/questions/35558/if-tn1-tn-lfloor-sqrtn1-rfloor-forall-n-geq-1-what-is-tm2
# If $T(n+1)=T(n)+\lfloor \sqrt{n+1} \rfloor$ $\forall n\geq 1$, what is $T(m^2)$? $T(n+1)=T(n)+\lfloor \sqrt{n+1} \rfloor$ $\forall n\geq 1$ $T(1)=1$ What is the value of $T(m^2)$ for $m \geq 1$? Clearly you cannot apply the Master theorem because the recurrence is not of the form $T(n)=aT(\frac{n}{b})+f(n)$. So I tried back substitution: $T(n)=T(n-1)+\lfloor\sqrt{n}\rfloor$ $T(n-1)=T(n-2)+\lfloor\sqrt{n-1}\rfloor$ therefore, $T(n)=T(n-2)+\lfloor\sqrt{n-1}\rfloor+\lfloor\sqrt{n}\rfloor$ $T(n)=T(n-3)+\lfloor\sqrt{n-2}\rfloor+\lfloor\sqrt{n-1}\rfloor+\lfloor\sqrt{n}\rfloor$ . . $T(n)=T(n-k)+\lfloor\sqrt{n-(k-1)}\rfloor+...+\lfloor\sqrt{n-1}\rfloor+\lfloor\sqrt{n}\rfloor$ . . and with $k=n-1$, $T(n)=T(1)+\lfloor\sqrt{2}\rfloor+...+\lfloor\sqrt{n-1}\rfloor+\lfloor\sqrt{n}\rfloor$ I'm stuck here, and the answer is given as $T(m^2)=\frac{m}{6}(4m^2 - 3m + 5)$. How do I solve this and reach the answer? Your problem is that you're ignoring the floors. Make sure that you know what $\lfloor x \rfloor$ means. It is not hard to check that $$T(n) = \sum_{k=1}^n \lfloor \sqrt{k} \rfloor.$$ Therefore \begin{align*} T(m^2-1) &= \sum_{k=1}^{m^2-1} \lfloor \sqrt{k} \rfloor \\ &= \sum_{r=1}^{m-1} \sum_{\ell=r^2}^{(r+1)^2-1} \lfloor \sqrt{\ell} \rfloor \\ &= \sum_{r=1}^{m-1} \sum_{\ell=r^2}^{(r+1)^2-1} r \\ &= \sum_{r=1}^{m-1} [(r+1)^2-r^2] r \\ &= \sum_{r=1}^{m-1} (2r+1)r \\ &= \sum_{r=1}^{m-1} 4\binom{r}{2} + 3\binom{r}{1} \\ &= 4\binom{m}{3} + 3\binom{m}{2}. \end{align*} Therefore \begin{align*} T(m^2) &= 4\binom{m}{3} + 3\binom{m}{2} + m \\ &= \frac{4m(m-1)(m-2)}{6} + \frac{3m(m-1)}{2} + m \\ &= \frac{(4m^3-12m^2+8m) + (9m^2-9m) + (6m)}{6} \\ &= \frac{4m^3-3m^2+5m}{6}. \end{align*} • @Yuval_Filmus: Why did you choose the upper limit of the outer summation to be $m-1$ instead of anything else? 
– Siddharth Thevaril Dec 21 '14 at 3:13 For a slightly different way to look at this problem, consider your original equation, $$T(n) = T(1)+\lfloor\sqrt2\rfloor+\lfloor\sqrt3\rfloor+\dotsb+\lfloor\sqrt n\rfloor$$ Now the key here is to group the terms with the same values of $\lfloor\sqrt k\rfloor$: \begin{align} T(n) &= (T(1)+\lfloor\sqrt2\rfloor+\lfloor\sqrt3\rfloor)\\ &+(\lfloor\sqrt4\rfloor+\lfloor\sqrt5\rfloor+\lfloor\sqrt6\rfloor+\lfloor\sqrt7\rfloor+\lfloor\sqrt8\rfloor)\\ &+(\lfloor\sqrt9\rfloor+\lfloor\sqrt{10}\rfloor+\lfloor\sqrt{11}\rfloor+\lfloor\sqrt{12}\rfloor+\lfloor\sqrt{13}\rfloor+\lfloor\sqrt{14}\rfloor+\lfloor\sqrt{15}\rfloor)\\ &+\dotsc \end{align} and observe that each summand will have $2k+1$ terms, since the difference $(k+1)^2-k^2 = 2k+1$: $$T(n) = (1\cdot3)+(2\cdot5)+(3\cdot 7)+(4\cdot 9)+\dotsb$$ so we'll have, with $n=m^2$ \begin{align} T(m^2) &= (1\cdot3)+(2\cdot5)+\dotsb+\left(\left\lfloor\sqrt{(m-1)^2}\right\rfloor+\dotsb+\left\lfloor\sqrt{m^2-1}\right\rfloor\right)+\left\lfloor\sqrt{m^2}\right\rfloor\\ &=\sum_{k=1}^{m-1}k(2k+1)+m =\sum_{k=1}^{m-1}(2k^2+k)+m\\ &=2\sum_{k=1}^{m-1}k^2+\sum_{k=1}^{m-1}k+m\\ &=2\frac{(m-1)(m)(2(m-1)+1)}{6}+\frac{(m-1)(m)}{2}+m\\ &=\frac{m}{6}(4m^2-3m+5) \end{align} as required.
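Both derivations are easy to sanity-check numerically; a short Python sketch (function names are mine) comparing the recurrence against the closed form:

```python
import math

def T(n):
    # T(1) = 1 and T(n) = T(n-1) + floor(sqrt(n)), evaluated directly.
    total = 1
    for k in range(2, n + 1):
        total += math.isqrt(k)
    return total

def closed_form(m):
    # The claimed value T(m^2) = m(4m^2 - 3m + 5)/6, always an integer.
    return m * (4 * m * m - 3 * m + 5) // 6

# The two agree for every m checked, e.g. T(4) = 5 and T(9) = 16.
match = all(T(m * m) == closed_form(m) for m in range(1, 30))
```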
2019-11-15 23:26:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 1627.7881578750382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668716.22/warc/CC-MAIN-20191115222436-20191116010436-00228.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/dcdsb.2009.12.401
# American Institute of Mathematical Sciences September  2009, 12(2): 401-414. doi: 10.3934/dcdsb.2009.12.401 ## Mathematical models of subcutaneous injection of insulin analogues: A mini-review 1 Department of Mathematics, University of Louisville, Louisville, KY 40292, United States 2 Department of Cellular and Physiological Sciences; Department of Surgery, University of British Columbia, Vancouver, BC, Canada Received  December 2008 Revised  May 2009 Published  July 2009 In the last three decades, several models relevant to the subcutaneous injection of insulin analogues have appeared in the literature. Most of them model the absorption of insulin analogues in the injection depot and then compute the plasma insulin concentration. The most recent systemic models directly simulate the plasma insulin dynamics. These models have been and/or can be applied to the technology of the insulin pump or to the coming closed-loop systems, also known as the artificial pancreas. In this paper, we selectively review these models in detail and at point out that these models provide key building blocks for some important endeavors into physiological questions of insulin secretion and action. For example, it is not clear at this time whether or not picomolar doses of insulin are found near the islets and there is no experimental method to assess this in vivo. This is of interest because picomolar concentrations of insulin have been found to be effective at blocking beta-cell death and increasing beta-cell growth in recent cell culture experiments. Citation: Jiaxu Li, James D. Johnson. Mathematical models of subcutaneous injection of insulin analogues: A mini-review. Discrete & Continuous Dynamical Systems - B, 2009, 12 (2) : 401-414. doi: 10.3934/dcdsb.2009.12.401 [1] Jiaxu Li, Yang Kuang. Systemically modeling the dynamics of plasma insulin in subcutaneous injection of insulin analogues for type 1 diabetes. Mathematical Biosciences & Engineering, 2009, 6 (1) : 41-58. 
https://www.rocketryforum.com/threads/sadam-rat-a-tat-tat.76201/
Johnnie Well-Known Member Like the rat that he is, our brave sons, daughters, fathers, and mothers have succeeded in pulling a RAT from a hole in the cellar of a house in Tikrit. God bless all those brave souls that protect our freedoms, and may they all be home soon... BlueNinja Well-Known Member As I am typing this, I am watching a report on this by NBC. You beat me to posting.... Rocketmaniac Well-Known Member Originally posted by Johnnierkt God bless all those brave souls that protect our freedoms, and may they all be home soon... Johnnie Well-Known Member RAT-A-tat-tat was merely to emphasize the "RAT". I do not wish him dead; I wish him to rot within a concrete cell and die of old age, merely to face his inevitable fate for eternity...HELL n3tjm Papa Elf Oh... Sorry... I should have seen that... It looked like machine gun fire to me. You don't want to wish anybody to Hell... I know he has been a very bad boy... and he will (and should) receive his punishment... but there is a chance and opportunity for him to repent in the eye of God. Johnnie Well-Known Member You don't want to wish anybody to Hell... I know he has been a very bad boy... and he will (and should) receive his punishment... but there is a chance and opportunity for him to repent in the eye of God. Amen, and without getting too religious, I humbly agree with you...I hope he can unseal his already set destiny. Milo TRF BoD Member You don't have to wish him dead. The Iraqis will see to that. The US has agreed to hand him over to them for trial for crimes he committed from the 60's until the war started. I have a feeling he will be put to death. Stones Well-Known Member What Milo said. The Iraqis won't mess around with handing out justice. Well-Known Member I'm amazed he was captured alive. I figured he had been blown to bits and ashes during the bombing campaign. I would never have believed he was alive unless our boys dragged him from a hole like they did.
I still think Osama is rotting in a cave somewhere in Afghanistan, but I was wrong about Saddam, so anything is possible. God bless our soldiers and go US Army! cydermaster Well-Known Member Saddam shouldn't be executed. That's the easy option for him. Loss of freedom, for the rest of his natural life, would be much harder for him to bear, psychologically. How about shipping over the perspex box David Blaine was in, and making it Saddam's cell for the rest of his life, suspended 50ft above a town square in Baghdad? I'm sure the locals will have fun 'disposing' of all their rotten vegetables by chucking them at the box. Rather like a modern version of the stocks in medieval times. DPatell Well-Known Member Stick him in any normal prison. All the prisoners will take it upon themselves to make it an enjoyable experience for him...and the guards would just look the other way. Whatever it is, I just want to see that animal get a dose of his own medicine... arthur dent Well-Known Member Good riddance, he will get what he deserves... Let's hope that Iraq will become a little safer for all our service men and women this Christmas now that the rat has been caught in his trap. Neil Well-Known Member Yeah. He will get what he deserves all right. I think any prison will do ("Azkaban prison" with all those dementors would do him up right), and the other inmates will take it from there.
http://digital.library.unt.edu/explore/collections/UNTETD/browse/?fq=untl_institution%3AUNT&fq=str_degree_level%3ADoctoral&fq=str_degree_department%3ADepartment+of+Management&display=brief
## UNT Theses and Dissertations: Doctoral dissertations, Department of Management (UNT Libraries). Results 1 - 50 of 76

Absorptive Capacity: An Empirical Examination of the Phenomenon and Relationships with Firm Capabilities

The field of strategic management addresses challenges that firms encounter in an attempt to remain competitive. The ability to explain variation in firm success through examination of knowledge flows has become a prominent focus of research in the strategic management literature. Specifically, researchers have sought to further examine how firms convert knowledge, a phenomenon conceptualized as absorptive capacity. Absorptive capacity is the firm's ability to acquire, assimilate, transform, and exploit knowledge. Few studies have captured the richness and multi-dimensionality of absorptive capacity, and it remains to be understood how the dimensions of the phenomenon convert knowledge. Furthermore, how absorptive capacity influences the firm remains to be understood. To address these research gaps, this dissertation seeks to (1) determine how absorptive capacity converts knowledge, and (2) determine how absorptive capacity influences firm capabilities. The research questions are investigated using structural modeling techniques to analyze data collected from software-industry firms. The findings offer contributions to the absorptive capacity and capability literatures. For example, absorptive capacity is hypothesized to consist of complex relationships among its internal dimensions. However, findings of this study suggest the relationships among the dimensions are linear in nature. This finding is in line with the theoretical foundations of and early literature on absorptive capacity but contrary to recent conceptualizations, which suggests relationships among the dimensions are more closely related to the theoretical origins of absorptive capacity.
Additionally, to examine how absorptive capacity influences the firm, a capability-based perspective is used to hypothesize the influence of absorptive capacity on firm capabilities. Findings suggest absorptive capacity positively influences each dimension of firm capabilities (e.g., operational, customer, and innovation capabilities); thus, absorptive capacity influences the firm by altering firm capabilities. Given the richness of the findings, numerous fields are likely to benefit from this investigation. Through an examination of absorptive capacity and capabilities, this study contributes to the understanding of the absorptive capacity phenomenon and offers insight into how the phenomenon influences the firm. Furthermore, practical implications are offered for managers interested in enhancing firm competitiveness. digital.library.unt.edu/ark:/67531/metadc115064/

Authentic Transformational Leadership and Implicit Leadership Theories.

Cognitive Complexity in Group Performance and Satisfaction

Corporate Entrepreneurship: Strategic and Structural Correlates and Impact on the Global Presence of United States Firms

Corporate entrepreneurship, its correlates, and its impact on the global presence of firms were examined through 439 United States companies, represented in all geographic realms of the world. Executives responded to a lengthy survey of organizational characteristics which enabled corporate entrepreneurship and its dimensions--innovation, proactiveness, and risk taking--to be examined in firms with varying global presence. Risk factors were assigned to countries and realms from the averaged rankings of three published risk-forecasting services. Maximum risk country, maximum risk geographic realm, average risk of countries, average risk of geographic realms, number of countries, and number of geographic realms were differentially weighted to equalize scales and combined into a composite global presence scale.
Strategy-related variables--competitive aggressiveness and adaptiveness--dominated other organizational attributes in explaining corporate entrepreneurship, and corporate entrepreneurship dominated other variables in explaining global presence, according to correlation and multiple regression analysis. Although no variables correlated strongly with measures of global presence, corporate entrepreneurship consistently had significant positive correlations across all six measures of global presence and the composite global presence scale. In forward stepwise multiple regressions, corporate entrepreneurship was the first variable entered into the prediction equation for five of the six measures of global presence; only when the dependent variable was the number-of-countries measure of global presence did scanning load before corporate entrepreneurship. Of the dimensions of corporate entrepreneurship, risk taking had the weakest correlations with measures of global presence, although risk was the theoretical basis for the first four measures of global presence; the risk taking dimension of corporate entrepreneurship represents executives' perceptions of risk, whereas global presence was derived from published risk rankings of countries. Environmental dynamism and heterogeneity, although not hostility, correlated with corporate entrepreneurship; however, neither environmental element showed a systematic relationship with global presence. Overall, corporate entrepreneurship, driven primarily by strategy-related variables, influenced the global presence of firms. Corporate entrepreneurship did not influence performance. digital.library.unt.edu/ark:/67531/metadc278559/ Cultural Diversity and Team Performance: Testing for Social Loafing Effects The concept of social loafing is important with regard to organizational effectiveness particularly as organizations are relying on teams as a means to drive productivity. 
The composition of those teams is likely to reflect the current movement of racial and ethnic minorities in the work place. The primary purpose of this research was to determine the role cultural diversity plays in enhancing performance and thereby eliminating social loafing. The research study is significant because 1) it is among the first to use culturally diverse work groups while examining the social loafing phenomenon, and 2) the groups were intact project teams, rather than ad-hoc groups commonly found in social loafing experiments. It was anticipated that the members of culturally homogeneous groups would engage in social loafing when their individual efforts were "buried." However, subjects in both culturally diverse and culturally homogeneous groups resisted social loafing behaviors. Additional statistical analysis revealed that as group orientation increased, performance levels increased as well. Group orientation, then, appears to be a more powerful determinant of performance than group composition. It is expected that the time these groups had together and the performance feedback opportunities provided them, prior to the experiment, contributed significantly to these results. Future research suggestions were made that could help establish a causal relationship. digital.library.unt.edu/ark:/67531/metadc278980/ Customer Induced Uncertainty and Its Impact on Organizational Design How firms facing environmental uncertainty should organize their activities remains an important and challenging question for today's managers and organizational researchers. Proponents of contingency theory have argued that organizations must adjust their activities to fit the level of environmental uncertainty to ensure long-term survival. Although much work has been done on contingency theory, it is clear that our understanding of uncertainty is far from complete. 
One important aspect of today's organizations is their focus on service, mass customization, and continuous innovation. This focus often results in the customer being brought either into the organization or at least into closer contact with it. Even though the literature provides ample evidence of the increasing customer focus, it has yet to empirically explain how the complications of customer-organizational interactions might create uncertainty for contemporary organizations. The traditional measure of uncertainty still considers customers as an environmental factor causing demand uncertainty while ignoring the complex nature of customer and organizational encounters. Seeking to further refine the concept of uncertainty and focusing on contemporary business phenomena, this study develops measures of aspects of customer induced uncertainty and examines their relationships with three organizational design variables. Specifically, this study explains the complicated nature of customer-organizational encounters that creates organizational uncertainty. Also, this study develops three operational measurement instruments for the three aspects of customer induced uncertainty. Finally, this study shows specific relationships between aspects of customer induced uncertainty and specific organizational design variables. This study conducted a mail survey of middle level managers. With a sample size of 118, the measurement instruments were shown to have validity and reliability using factor analysis and Cronbach's alpha. Regression analyses indicate the presence of specific rather than general relationships between customer induced uncertainty variables and organizational design variables. Regression results suggested that the relationships between customer induced uncertainty variables and design variables depended on the specific combination.
For example, Customer acquisitiveness was negatively related to formalization, whereas Customer importance was positively related to professionalism. Results also suggested a possible positive relationship between decentralization and customer induced ambiguity. Although not without limitations, this study improves our understanding of contemporary environmental uncertainty. Moreover, it provides preliminary measurement instruments of customer induced uncertainty variables for numerous future studies. Overall, this study is a preliminary step toward further understanding of the uncertainty-design contingencies of contemporary and future organizations. digital.library.unt.edu/ark:/67531/metadc2214/

Determinants of Small Firm Performance: the Importance of Selected Managerial Personality Traits, Perceived Environmental Uncertainty, Scanning Activities, and Managerial Goal Setting Activities

Much of the previous research on organizational performance deals with larger businesses. As such, the owner/managers of small firms and researchers interested in small businesses have had to work with planning models which were not formulated with small businesses in mind. Therefore, the general purpose of this study is to help correct this deficiency and add to the body of knowledge concerning the contributions specific factors make toward increasing the performance of small firms. Specifically, selected managerial personality traits, managerial perceived environmental uncertainty, managerial scanning habits, and managerial goal setting activities are utilized to develop three models. The three models are used to determine the relationship the factors have to each other and the contribution the variables make toward the performance of the firm. The firms included in this study are located in a South Central metropolitan area. The firms have between 2 and 100 employees, sales of less than 3 million dollars, and have been in operation 2 years or longer.
This study utilizes regression analysis and path analysis to determine the effects the factors have on each other and their contribution to the firm's performance. The Statistical Package for the Social Sciences (SPSSx) is utilized to run the regression analysis. An Analysis of Linear Structural Relationships by the Method of Maximum Likelihood (LISREL) is utilized for the path analysis. Using path analysis, the third model demonstrates a total coefficient of determination for structural equations of 0.09. However, only two of the four factors have a t value of 2.0 or greater. The study also indicates the personality trait of dogmatism is inversely related to managerial scanning (-.349, p < .01). Perceived environmental uncertainty is negatively correlated with performance (-.215, p < .05). None of the remaining factors demonstrated a significant relationship to the firm's performance. digital.library.unt.edu/ark:/67531/metadc331570/

Determination of the Relationship Between Ethical Positions and Intended Behavior Among Managers

This study was conducted to determine the relationship between managers' ethical positions and their intended behavior. digital.library.unt.edu/ark:/67531/metadc279005/

Dominant Decision Cues in Labor Arbitration; Standards Used in Alcohol and Drug Cases

During the past twenty years, extensive research has been conducted concerning the judgmental processes of labor arbitrators. Previous research, sometimes referred to as policy capturing, attempted to identify the criteria or standards used by arbitrators to support their decisions. Much of the research was qualitative. Due to the categorical nature of the dependent variables, log-linear models such as logit regression have been used to examine the decisional relationships in more recent studies. The decision cues used by arbitrators in 249 published alcohol- and drug-related arbitration cases were examined.
The justifications for arbitrators' decisions were fitted into Carroll Daugherty's "seven tests" of just cause. The dominant cues were proof of misconduct, the appropriateness of the penalty, and the business necessity of management's action. Foreknowledge of the rule by the grievant and the consequences of a violation, equal treatment of the grievant, and an appropriate investigation by management were also important decision cues. In general, grievants in alcohol and drug arbitration cases fared as well as grievants in any other disciplinary arbitrations. However, when the cases were analyzed based on the legal status of the drug, illicit drug users were at a considerable disadvantage. digital.library.unt.edu/ark:/67531/metadc331930/ Effectiveness in Company-sponsored Foundations : A Utilization of the Competing Values Framework The purpose of this study was to determine the criteria used by foundation directors in assessing the effectiveness of contribution programs in company sponsored foundations. Quinn and Rohrbaugh's Competing Values Approach of organizational effectiveness was used as the theoretical framework for the study. The Competing Values Approach is an integrative effectiveness model which clusters eight criteria of effectiveness into four theoretical models of organizational effectiveness. digital.library.unt.edu/ark:/67531/metadc277615/ The Effects of Intergroup Competition and Noncompetition on the Decision Quality of Culturally Diverse and Culturally Non-Diverse Groups The primary purpose of this study was to explore the challenges and benefits associated with cultural diversity within groups. The research hypotheses were proposed to test the effects of cultural diversity on group performance and group processes by comparing culturally diverse and culturally homogeneous groups under conditions of intergroup competition and noncompetition. 
This experiment was conducted using 500 upper-level undergraduates enrolled in the principles of management course for the fall semester. digital.library.unt.edu/ark:/67531/metadc277879/ The Effects of the Conflict Settlement Process on the Expressed Degree of Organizational Commitment The purpose of this research was to study the effect of the conflict settlement process on the degree of expressed organizational commitment of employees in a collective bargaining setting. The research was done in a basic industry in northern Alabama. The instrument included the Organizational Commitment Questionnaire (OCQ) developed by Mowday, Porter, and Steers. Demographic variables measured were education, age, and sex. Main effects variables were tenure; union membership; and self-described experience with and feeling toward grievance/arbitration as a category 1 grievant, category 2 grievant, witness, and supervisor. Data were analyzed with hierarchical multiple regression. No statistically significant results were found. Limitations included the economic climate of the region and the industrial relations climate of the company. digital.library.unt.edu/ark:/67531/metadc331265/ Effects of Venture Team Demographic Characteristics on Team Interpersonal Process Effectiveness in Computer Related Venture Teams In order to remain competitive, firms must be able to merge diverse, differentiated people into teams. In comparison to solo ventures, venture teams not only offer a broader base of physical and financial resources and varying points of view, but also positively influence the profitability, growth, and survivability potential of new ventures. Despite the growing importance and potential benefits offered by venture teams, relatively little is known about assembling and maintaining effective venture teams in the field of entrepreneurship. 
More specifically, information is needed to understand what composition and combination of demographic characteristics of team members would contribute to the effectiveness and success of a venture team. In this study, the relationship between venture team demographic characteristics and team effectiveness (which is defined in terms of the interpersonal process of venture team members in their group activities) is investigated. The demographic characteristics examined include average age, age heterogeneity, average level of education, educational background heterogeneity, gender heterogeneity, and functional background heterogeneity. A field study, involving face-to-face and telephone interviews with the venture teams, is used to gather data from 40 computer-related venture teams in a large Midwest U.S. city. The venture teams are identified through the local Chambers of Commerce, peer referrals, and library research. Information is gathered on demographics and team interpersonal process effectiveness using a pre-validated instrument. Data are analyzed using regression analysis. The results indicate that average age negatively and significantly relates to team interpersonal process effectiveness. Furthermore, average level of education positively and significantly relates to team interpersonal process effectiveness. The other demographic variables, age heterogeneity, educational background heterogeneity, gender heterogeneity, and functional background heterogeneity, do not produce significant relationships. digital.library.unt.edu/ark:/67531/metadc278275/

An Emotional Business: the Role of Emotional Intelligence in Entrepreneurial Success

Successful entrepreneurial activity is important for a healthy economy and can be a major source of job creation. While the concept of entrepreneurship has been around for quite some time, researchers continue to explore the factors that underlie entrepreneurial performance.
Specifically, researchers have sought to further examine why some entrepreneurial ventures are more successful than others. The concept of emotional intelligence (EI) has gained the attention of researchers and practitioners alike. Practitioners have realized that employees can no longer be perceived as biological machines that are capable of leaving their feelings, norms, and attitudes at home when they go to work. Researchers are embracing the concept of emotional intelligence because of its relationship with efficiency, productivity, sales, revenues, quality of service, customer loyalty, employee recruitment and retention, employee commitment, employee health and satisfaction, and morale. While there is considerable evidence documenting the effects of emotional intelligence on leadership performance, job performance in large firms, and educational performance, very little research has examined how emotional intelligence affects entrepreneurial performance and the variables that account for this relationship. Individuals in entrepreneurial occupations face business situations that necessitate unique skills and abilities in social interactions. Emotional intelligence has implications for entrepreneurial situations and social interactions such as negotiation, obtaining and organizing resources, identifying and exploiting opportunities, managing stress, obtaining and maintaining customers, and providing leadership. The primary purpose of this study is to investigate emotional intelligence in the context of entrepreneurship. In addition, the study will shed light on the mediating effects of individual competencies, organizational tasks, and the environmental culture and climate. The results of the study provide insights for emotional intelligence researchers, entrepreneurship researchers, individuals with entrepreneurial aspirations, academic institutions, as well as government and financial entities that provide resources to new ventures.
digital.library.unt.edu/ark:/67531/metadc115117/

An empirical investigation of manufacturing flexibility and organizational performance as moderated by strategic integration and organizational infrastructure.

The purpose of this study is to empirically investigate four research questions related to manufacturing flexibility. 1) What are the components of manufacturing flexibility? 2) Is there a relationship between manufacturing flexibility and organizational performance? 3) Do integrated strategies strengthen the relationship between manufacturing flexibility and organizational performance? 4) Are there organizational characteristics that strengthen the relationship between manufacturing flexibility and organizational performance? This study used a cross-sectional survey design to collect data from manufacturing organizations in multiple industries. Organizational performance was quantified using common manufacturing measures. Strategic integration and organizational infrastructure were also measured. Data were collected using a self-administered questionnaire. Factor analysis, correlation analysis, and regression were used to analyze the data. The results indicate the variables and expected relationships exist as hypothesized. This study contributes to the manufacturing flexibility body of knowledge by identifying relationships between the manufacturing flexibility components, performance, strategic integration, and organizational infrastructure. The instrument development in this study is of particular value as there are few rigorously developed and validated instruments to measure the manufacturing flexibility components and performance. Understanding these relationships will help practitioners make better decisions in manufacturing organizations as well as enable application of the concepts in this study to other contexts such as service organizations.
digital.library.unt.edu/ark:/67531/metadc11065/ An Empirical Investigation of Personal and Situational Factors That Relate to the Formation of Entrepreneurial Intentions New entrepreneurial organizations emerge as a result of careful thought and action. Therefore, entrepreneurship may be considered an example of planned behavior. Previous research suggests that intentions are the single best predictor of planned behavior. Given the significance of intentions, the purpose of this study was to investigate the relationships between the personal characteristics of the entrepreneur and perceived environmental factors, and entrepreneurial intentions. digital.library.unt.edu/ark:/67531/metadc279175/ An Empirical Investigation of Personality and Situational Predictors of Job Burnout Empirical research exploring the complex phenomenon of job burnout is still considered to be in its infancy stage. One clearly established stream of research, though, has focused on the antecedents of the three job burnout components: emotional exhaustion, depersonalization, and personal accomplishment. In particular, situational characteristics have received a great deal of attention to date. Four situational factors: (1) role ambiguity, (2) role conflict, (3) quantitative role overload, and (4) organizational support were included in this analysis to test their significance as predictors of job burnout. Another set of antecedents that has received far less attention in job burnout research is personal dispositions. Individual differences, most notably personality traits, may help us understand why some employees experience burnout whereas others do not, even within the same work environment. Four personality characteristics: (1) self-esteem, (2) locus of control, (3) communal orientation, and (4) negative affectivity were included to test their significance as predictors of job burnout. An on-site, self-report survey instrument was used. 
A sample of 149 human service professionals employed at a large government social services department voluntarily participated in this research. The main data analysis techniques used to test the research hypotheses were canonical correlation analysis and hierarchical analysis of sets. While role ambiguity showed no significant associations with any of the three job burnout components, the remaining situational factors had at least one significant association. Among all the situational characteristics, quantitative role overload was the strongest situational predictor of emotional exhaustion and depersonalization, while organizational support was the strongest situational predictor of personal accomplishment. The personality predictor set as a whole showed a significant relationship with each of the job burnout components, providing strong proof that dispositional effects are important in predicting job burnout. Among all the personality characteristics, negative affectivity was the strongest personality predictor of emotional exhaustion and depersonalization, while communal orientation was the strongest personality predictor of personal accomplishment. Comparisons between the personality and situational predictor sets revealed that personality characteristics were the stronger predictor for all three of the job burnout components. No interactions among the situational and personality predictors proved significant. digital.library.unt.edu/ark:/67531/metadc278937/ An Empirical Investigation of the Effectiveness of Using Assigned, Easy Goals to Strengthen Self-efficacy Perceptions and Personal Goals in Complex Task Performance The perception of self-efficacy is a central cognitive construct in explaining motivation. Assigned goals are established in the literature as affecting self-efficacy, but only a few researchers investigated their effects in complex tasks. 
One stream of research revealed the positive effects of easy goals on performance in a complex task without regard to self-efficacy perceptions. In the present study, the focus was on the effects of assigned, easy goals on self-efficacy and personal goals in complex task performance. It was expected that easy goals would be superior to moderate or impossible goals because the complexity and uncertainty of the task distorts subjects' perceptions of goal difficulty. digital.library.unt.edu/ark:/67531/metadc278537/ An Empirical Investigation of the Interaction Effects of Leader-Member Locus of Control on Participation in Strategic Decision Making The purpose of this study was to test for a relationship between locus of control and participation in strategic decision making. The research model included the variables of gender, locus of control, job-work involvement and preference for participative environment as possible influences on team member participation in strategic decision making. Another feature of the model was the proposed three-way interaction effect on member participation. This interaction included member job-work involvement, member preference for participation and leader locus of control. digital.library.unt.edu/ark:/67531/metadc278461/ Environmental Scanning Behavior in Physical Therapy Private Practice Firms: its Relationship to the Level of Entrepreneurship and Legal Regulatory Environment This study examined the effects of entrepreneurship level and legal regulatory environment on environmental scanning in one component of the health services industry, private practice physical therapy. Two aspects of scanning served as dependent variables: (1) extent to which firms scrutinized six environmental sectors (competitor, customer, technological, regulatory, economic, social-political) and (2) frequency of information source use (human vs. written). Availability of information was a covariate for frequency of source use. 
Three levels of entrepreneurship were determined by scores on the Covin and Slevin (1986) entrepreneurship scale. Firms were placed in one of three legal regulatory categories according to the state in which the firm delivered services. A structured questionnaire was sent to 450 randomly selected members of the American Physical Therapy Association's Private Practice Section. Respondents were major decision makers, e.g., owners, chief executive officers. The sample was stratified according to three types of regulatory environment. A response rate of 75% was achieved (n = 318) with equal representation from each stratum. All questionnaire subscales exhibited high internal reliability and validity. The study used a 3x3 factorial design to analyze the data. Two multivariate analyses were conducted, one for each dependent variable set. Results indicated that "high" entrepreneurial level firms scanned the technological, competitor and customer environmental sectors to a significantly greater degree than "middle" or "low" level groups, regardless of type of legal regulatory environment. Also, "high" level firms were found to use human sources to a significantly greater degree than did lower level groups. Empirical evidence supporting Miles and Snow's (1978) proposition that "high" level entrepreneurial firms (prospectors) monitor a wider range of environmental conditions when compared to "low" level (defender) firms was presented. The results also confirmed that market and technological environments were scanned most often. Finally, the results added to the construct validity of the Covin and Slevin entrepreneurship scale and provided evidence of its generalizability to small businesses. digital.library.unt.edu/ark:/67531/metadc331736/ Environmental Scanning Practices of Manufacturing Firms in Nigeria The purpose of this study was to examine scanning practices in a developing country by looking at the scanning behavior of executives of Nigerian manufacturing firms. 
Specifically, this study examined the decision maker's perception of environmental uncertainty (PEU), the frequency and degree of interest with which decision makers scan each sector of the environment, the frequency of use of various sources of information, the number of organizational adjustments made in response to actions of environmental groups, and the obstacles encountered in collecting information from the environment. digital.library.unt.edu/ark:/67531/metadc277815/ An Evaluation of Backpropagation Neural Network Modeling as an Alternative Methodology for Criterion Validation of Employee Selection Testing Employee selection research identifies and makes use of associations between individual differences, such as those measured by psychological testing, and individual differences in job performance. Artificial neural networks are computer simulations of biological nerve systems that can be used to model unspecified relationships between sets of numbers. Thirty-five neural networks were trained to estimate normalized annual revenue produced by telephone sales agents based on personality and biographic predictors using concurrent validation data (N=1085). Accuracy of the neural estimates was compared to OLS regression and a proprietary nonlinear model used by the participating company to select agents. digital.library.unt.edu/ark:/67531/metadc277752/ An Examination of the Similarities and Differences Between Transformational and Authentic Leadership and Their Relationship to Followers' Outcomes Examining Curvilinearity and Moderation in the Relationship between the Degree of Relatedness of Individual Diversification Actions and Firm Performance Corporate diversification continues to be an important phenomenon in the modern business world. 
More than thirty years of research on diversification suggests that the degree of relatedness among a firm's business units is a factor that can affect firm performance, but the true effect of diversification relatedness on firm performance is still inconclusive. The purpose of this dissertation is to shed more light on this inconclusive association. However, attention is focused on the performance implications of individual diversification actions (e.g., acquisitions and joint ventures) rather than on the overall performance of firms with different levels of diversification. A non-experimental, longitudinal analysis of secondary data was conducted on over 450 unique acquisitions and on more than 210 joint ventures. Results suggest that even when individual diversification actions rather than entire business portfolios are examined, an inverted curvilinear association between diversification relatedness and performance is likely to emerge. This pattern is observed in both acquisitions and joint ventures. However, the association between diversification relatedness and performance in acquisitions is moderated by the level of industry adversity, though factors such as corporate coherence and heterogeneous experience do not moderate the association between diversification relatedness and performance. This study augments the body of knowledge on diversification and adds refinement to the traditional curvilinear finding regarding relatedness. By studying acquisitions and joint ventures independently, the results reveal differences in both slope and inflection points that suggest the relative impact of relatedness may vary depending on the mode of diversification. 
digital.library.unt.edu/ark:/67531/metadc67967/ Explicating the Managerial Processes of Dynamic Capabilities and Investigating How the Reconceptualized Construct Influences the Alignment of Ordinary Capabilities In the last three decades, strategic management scholars have explored the organization’s need to reconfigure its capabilities to leverage opportunities in a changing environment. The first objective of this study was to identify the underlying elements of the managerial processes of dynamic capabilities, and to offer a reconceptualization of the dynamic capabilities construct. The second objective of this investigation was to determine how the reconceptualized dynamic capabilities construct could influence the alignment of ordinary capabilities. Findings from this investigation indicate that organizational processes and managerial processes are unique components of dynamic capabilities. In addition, these organizational processes were found to be significantly and positively correlated with the alignment of ordinary capabilities. Furthermore, managerial processes were found to moderate the relationship between organizational processes and one type of ordinary capability alignment (i.e. innovation-operations capability alignment). Taken together, the findings of this study support the notion that dynamic capabilities are context specific, and that understanding how they influence the organization’s ability to change is complex. The developments and findings in this study offer a reconceptualized and empirically tested framework for the capability alignment process, thereby providing a more comprehensive picture of the underlying processes. digital.library.unt.edu/ark:/67531/metadc700096/ High Risk Occupations: Employee Stress and Behavior Under Crisis The purpose of this study is to analyze the relationships between stress and outcomes including organizational citizenship behavior (OCB), job satisfaction, and burnout in high-risk occupations. 
Moreover, how personality, emotions, coping, and leadership influence this relationship is investigated. Data were collected from 379 officers in 9 police organizations located in the Southern and Southwest United States. The primary research question addressed within this dissertation is: What is the relationship between stress and behavioral and affective outcomes in high-risk occupations as governed by coping, leadership, and crisis? The majority of the hypothesized relationships were supported, and inconsistencies center on methodological and theoretical factors. Findings indicate that occupational stressors negatively influence individuals in high-risk occupations. Moreover, crisis events exacerbate these influences. The use of adaptive coping strategies is most effective under conditions of low stress, but less so under highly stressful circumstances. Similarly, transformational leader behaviors most effectively influence how individuals in high-risk occupations are affected by lower, but not higher levels of stress. Profiles of personality characteristics and levels of emotional dissonance also influence the chosen coping strategies of those working in high-risk occupations. Prescriptively, it is important to understand the influences among the variables assessed in this study, because negative outcomes in high-risk occupations are potentially more harmful to workers and more costly to organizations. Thus, this dissertation answers the research question, but much work in this area remains to be done. digital.library.unt.edu/ark:/67531/metadc84269/ Hostile Environment: A Discriminant Model of the Perceptions of Working Women This study examines the problem of operationally defining "hostile environment" sexual harassment, ruled a type of disparate treatment actionable under Title VII of the Civil Rights Act by the United States Supreme Court on June 19, 1986. 
Although the Equal Employment Opportunity Commission defines a hostile environment as an "intimidating, hostile, or offensive work environment," there is no consensus as to what is "offensive" behavior. An extensive review of the literature yielded various attempts to define and ascertain the magnitude of sexual harassment, but the fact that the actual percentages varied indicates that this is a difficult issue to measure. As perception by the victim is the key, this study surveyed 125 working women from all over the United States to determine their perceptions of behaviors that constitute sexual harassment. Discriminant analysis was then used to correctly classify 95% of the women according to their perceptions of having experienced sexual harassment. Using tests for proportions, three hypotheses were found significant. Women who have been sexually harassed are more likely to view sexual harassment as a major problem. Older men are more likely to have their behavior perceived as sexual harassment. In addition, women who have experienced acts such as staring, flirting, or touching in the workplace are more likely to perceive those acts as sexual harassment. The hypotheses deemed not statistically significant yielded interesting results. Younger women are not more likely to be harassed than older women. Neither are single or divorced women more likely to experience sexual harassment. All women, regardless of age, marital status, or geographic location, are vulnerable to sexual harassment. Of importance are which variables contributed the most to the women's perceptions of sexual harassment. None of the demographic variables was found significant, but the women perceived that they had been sexually harassed if sexual remarks, touching, sexual propositions, or staring were directed toward them in the workplace. Thus, these acts were perceived as constituting a hostile environment. 
digital.library.unt.edu/ark:/67531/metadc331130/ The Impact of Social Capital and Dynamic Capabilities on New Product Development: An Investigation of the Entertainment Software Industry Businesses today face intense international competition, a heightened pace of development, and shortened product life cycles. As a result, many researchers recommend firms collaborate and partner with other firms to succeed. With over a decade of research examining alliances and inter-firm collaboration, we know a great deal about the benefits and outcomes firms realize through collaboration. An important gap exists, however, in our understanding of the effect of partnering firms on collaborative outputs. This study attempts to address this gap by examining the success of collaborative new product development outputs. This was a quasi-experimental study using archival, time-series data. Hypotheses were tested at the project level, defined as the product output from the collaborative development effort. Predictors were developed at both the firm and dyadic levels. Several findings emerged from this research. The primary finding is that the roles of alliance partners affect which capability and capital benefits accrue. Firms functioning as a publisher benefit from increases in relevant experience. Firms functioning as a developer benefit from working in areas in which they have experience, but largely to the extent that the developer also generalizes their capabilities. One implication emerging from the capability findings suggests a need for configurational capability research. From a social capital conception, developers with high network centrality have a negative impact on the perceived quality of the final software product. Developers also benefit from embeddedness: products developed by developers in constrained networks outperformed products developed by developers in brokered networks. 
digital.library.unt.edu/ark:/67531/metadc9016/ The Impact on the Buyer-Seller Relationship of Firms Using Electronic Data Interchange This research investigated whether the buyer-seller interorganizational relationship (IOR) differed between a firm and two classes of customers. The first class used electronic data interchange (EDI) with the firm and the second class used the traditional paper-based purchasing system. IOR characteristics included reputation, skill, direct power, indirect power, reciprocity, and efficiency. digital.library.unt.edu/ark:/67531/metadc277684/ Incumbent Response to Radical Technological Innovation: the Influence of Competitive Dynamics on Strategic Choice Prior research on incumbent firm response to radical technological innovation identifies firm, technology, and environmental factors associated with incumbents’ performance after a technology shift. What remains unexplored are factors affecting choice of response made before a technological shift occurs. Such ex ante choices are important intermediate outcomes affecting long-term performance outcomes. Competitive considerations may be influential inputs in choice processes because technological innovation is often related to competitive strategy. The resulting research question for this study is: What role do competitive considerations play in incumbent firms’ ex ante strategic choices in response to potentially radical technological innovations? Findings from a survey of key informants in the electronics industry whose firms face a potential technological disruption (n=120) suggest that incumbents’ response choices are affected by competitor-related orientations and by perceptions of relative strength of their strategic assets. Limited support is found for a moderating effect of perceptions of the competitive environment. The results of this study extend theory on incumbent response to radical technological change by shedding light on the influence of competitor interdependence. 
Findings also suggest the importance of strategic choice as an intermediate variable in understanding incumbents’ long-term performance. Research examining choice factors at varied stages of a technology’s diffusion can further advance understanding of the evolving nature of strategic response choices and the effects they have on long-term performance. digital.library.unt.edu/ark:/67531/metadc804843/ The Influence of Change in Organizational Size, Level of Integration, and Investment in Technology on Task Specialization Major changes in organizational structural paradigms have been occurring. Recent journal articles propose that the older philosophies of expanding organizations and increasing internal specialization, espoused in earlier articles, are no longer viable means to enhance competitiveness. Downsizing, rightsizing, and business process reengineering have all been used as methods of accomplishing organizational work force reduction (OWFR) and enhancing organizational posture. It has been established that as organizations grow, specialization increases. Causes for OWFR have not been established, nor have effects upon structure been studied. Previous structural factor studies have focused upon organizations engaged in end-game strategies and were conducted during periods of internal and economic growth. This study evaluates the impacts of OWFR and its relationship to the structural factor of specialization during a non-munificent economic period. Three independent variables, dis-integration, change in the number of employees, and change in technology, were used as measures to determine whether specialization decreased when organizations downsized. The dependent variable, specialization, was obtained through a pre-tested questionnaire. The three independent variables were obtained using the Compustat data base as a secondary source of information. The Compustat data was verified using data from Compact Disclosure. 
Questionnaires were mailed to fifty-one fully integrated oil companies. Forty were returned after three mailings yielding a response rate of seventy-eight percent. The unit of analysis for the data collected was the firm. The data were analyzed using multiple regression to determine the strength of the relationship between the variables. Results indicate a significant relationship between two of the independent variables and the dependent variable: dis-integration and specialization and change in the number of employees and specialization. Findings were insignificant for the third independent variable and the dependent variable: change in technology and specialization. Analysis of the quantitative results and the qualitative responses of the participants show that dis-integration and a change in the number of employees are both useful for measuring structural change for organizations engaged in organizational work force reduction. digital.library.unt.edu/ark:/67531/metadc278514/ The Influence of Interorganizational Trust, Individualism and Collectivism, and Superordinate Goal of JIT/TQM on Interorganizational Cooperation: An Exploratory Analysis of Institutions in Mexico Since their introduction to the United States from Japan in the 1980s, inter-organizational cooperation practices between buyers and suppliers have provided lower costs, shorter development and production cycles, and higher levels of quality and productivity. Many studies of interorganizational cooperation have relied on transaction cost economics frameworks, which ignore cultural differences. Few studies have analyzed inter-organizational cooperation in Mexico, a less-developed country (LDC) with a cultural and industrial environment different from the U.S. This study is concerned with the influence of interorganizational trust, individualism and collectivism (indcol), and the superordinate goal of just-in-time/total quality management (JIT/TQM) on inter-organizational cooperation. 
digital.library.unt.edu/ark:/67531/metadc278619/ Influence of Significant Other and Locus of Control Dimensions on Women Entrepreneur Business Outcomes The personality characteristic locus of control internality is widely accepted as a trait possessed by women entrepreneurs. Recent research also suggests the presence of a coexisting attribute of similar strength, characterized as influence of a significant other. The presence of one personality characteristic implying perception of self-directed capability, together with indication of need for external assistance, poses a theoretical paradox. The study's purpose was to determine the nature and extent of direct and interactive effects which these and related variables had on entrepreneur return on investment. It was hypothesized that dimensions of significant other, as operationalized for this research, would support internality of locus of control and also modify constraining effects of educational and experiential disadvantage which the literature cites as pertinent to women entrepreneurs. This was nonexperimental, exploratory research of correlational cross-sectional design which examined hypothesized variable linkages. A convenience sample from a women's entrepreneur networking group was surveyed. Significant other elements were derived from factor analysis, resulting in four common dimensions. These factors, together with Rotter's Locus of Control instrument scores, reports on levels of education and experience, and hypothesized interactions, were independent variables. Hierarchical multiple regression was used to test a proposed path model. Two interpretable four-factor solutions derived from significant other variables were tested in two models. Although neither model attained overall significance, individual variables were directionally as hypothesized, and locus of control and certain factoral dimensions attained bivariate significance. 
Significant other factors appear to influence locus of control through statistical suppression as they interact with other variables. Results point toward a possibility that significant others who most affect female entrepreneur performance are those who give specific advice and aid, rather than moral support. Further research to explore what seems a strong relationship between return on investment and locus of control internality is recommended. digital.library.unt.edu/ark:/67531/metadc332235/ Institutionalization of Ethics: a Cross-Cultural Perspective Business ethics is a much debated issue in contemporary America. As many ethical improprieties gained widespread attention, organizations tried to control the damage by institutionalizing ethics through a variety of structures, policies, and procedures. Although the institutionalization of ethics has become popular in corporate America, there is a lack of research in this area. The relationship between the cultural dimensions of individualism/collectivism, power distance, uncertainty avoidance, and masculinity/femininity and the perceptions of managers regarding the institutionalization of ethics is investigated in this study. This research also examined whether managers' level of cognitive moral development and locus of control influenced their perceptions. Data collection was performed through a mail survey of managers in the U.S. and India. Out of the 174 managers of American multinationals who responded to the survey, 86 were Americans and 88 were Indians. Results revealed that managers' perceptions were influenced by the four cultural dimensions. Managerial perceptions regarding the effectiveness of codes of ethics and the influence of referent groups varied according to their nationality. 
But managers from both countries found implicit forms of institutionalizing ethics, such as organizational systems, culture, and leadership, to be more effective in raising the ethical climate of organizations than explicit forms such as codes of ethics, ethics officers, and ethics ombudspeople. The results did not support the influence of moral reasoning level and locus of control type on managerial perceptions. The results suggested that in order for ethics institutionalization efforts to be successful, there must be a fit or compatibility between the implicit and explicit forms of institutionalizing ethics. The significance of this study rests on the fact that it enriched our understanding of how national culture affects managerial perceptions regarding the institutionalization of ethics. This is the first comparative study between U.S. managers and Indian managers that examines the variables, both explicit and implicit, which influence how ethical values are cultivated and perpetuated in organizations. digital.library.unt.edu/ark:/67531/metadc278977/ Interorganizational Relationships: The Effects of Organizational Efficacy on Member Firm Performance Relationships between the collective actors within interorganizational relationships are a growing area of research in management. Interorganizational networks continue to be a popular mechanism used by organizations to achieve greater performance. Organizations develop competencies to work with other organizations, but the confidence of these organizations to use these strengths for a competitive advantage has yet to be empirically examined. The purpose of this study is to examine organizational efficacy, how competencies may relate to that efficacy, and the relationship of efficacy with performance. 
The goal of this study is to observe the relationship among trust, dependence, information quality, continuous quality improvement, and supplier flexibility with organizational efficacy. In addition, the relationship between organizational efficacy and performance is also observed. There are two primary research questions driving this study. First, what is the relationship between trust, dependence, information quality, continuous quality improvement, supplier flexibility and organizational efficacy? Second, what is the relationship between organizational efficacy and performance? The theories supporting the hypotheses generated from these questions include theories such as social cognitive theory, quality improvement, and path-goal theory. Data collected from the suppliers of a large university support the hypotheses. Regression analysis and structure coefficients were used to analyze the data. Results indicate that both research question one and research question two are supported. In addition, the theoretical model as a whole, which indicates a mediating relationship, was examined and discussed. This study contributes to both academic and practice by examining efficacy in an interorganizational setting. In addition, as organizations better understand the relationship between competencies and confidence, they will better know how to collectively work to achieve greater results with more attention being placed on monitoring the relationship in order to experience more desired outcomes. Limitations of the current study and opportunities for future research are also discussed. digital.library.unt.edu/ark:/67531/metadc5313/ The Introduction of Robotic Technology: Perceptions of the Work Force of an Aerospace Defense Company This dissertation examines the effect that the introduction of an advanced manufacturing technology, specifically robotics, has on the work force of an aerospace defense company. In this endeavor, there are two main objectives. 
First, this study determines whether workers feel that their jobs are threatened by the introduction of robotic technology. Second, the research compares the degree to which workers from different labor types feel this threat. A review of the literature reveals that the technical factors involving manufacturing technology have been thoroughly examined and discussed, but the effect that they have on the work force has been somewhat neglected. This dissertation develops ten hypotheses to ascertain the perceived threat to job security for workers within an aerospace defense company. This study is based on an employee survey that examined the employee's perceived threat to job security posed by the introduction of robotics. The primary research was obtained from employees within an aerospace defense company through the use of questionnaires in a three-phase approach. The first phase utilized a pretest that sampled the questionnaire prior to the company-wide solicitation. The second phase administered the questionnaire to the three labor types within the work force. Phase three consisted of data reduction and the comparison of the primary data to the research hypotheses. The results of the study concluded that workers closer to the robotic technology (hands-on employees) felt more threatened about their job security than workers more removed from the technology (support personnel and management). It was further found that the hands-on workers felt that the major factor that led to the introduction of robots was the desire to lower labor costs, while support personnel and managers felt that the major factor was the desire to increase productivity. Additional hypotheses tested in this study include the effect that robots have on the perceptions of the work force toward the company's employment level, worker apprehension and reaction, training, safety, health, and competition. 
digital.library.unt.edu/ark:/67531/metadc330596/ Introduction of Self-Managed Work Teams at a Brownfield Site: a Study of Organization-Based Self-Esteem and Performance This empirical study is aimed at understanding the patterns of relationships among the organization structure of self-managed work teams in terms of three sets of constructs: 1. organization-based self-esteem; 2. consequent behaviors of intrinsic work motivation, general job satisfaction, organization citizenship, and organization commitment; and 3. performance. The primary significance of this study is that it adds to the pool of empirical knowledge in the field of self-managed work team research. The significance of this study to practicing managers is that it can help them make better-informed decisions on the use of the self-managed work team structure. This study was a sample survey composed of five standardized questionnaires using a five-point Likert-type scale, open-ended questions, and demographic questions. Unstructured interviews supplemented the structured survey and served as a means of triangulating the results. The variables were analyzed using regression analysis for the purpose of path analysis. The site was a manufacturing plant structured around self-managed work teams. The population was full-time, first-line production employees. digital.library.unt.edu/ark:/67531/metadc277664/ An Investigation of the Relationship between Work Value Congruence in a Dyad and Organizational Commitment as Mediated by Organizational Influences An Investigation of the Relationship Between World-Class Quality System Components and Performance Within the past two decades U.S. companies have experienced increased competition from foreign companies. In an effort to combat this competition, many U.S. companies focused on quality as a solution to the problem. Researchers agree this emphasis on quality systems has changed the way many managers conduct business.
Yet, no studies have identified which components of world-class quality systems, if any, contribute most to changes in performance. The purpose of this study is to empirically investigate three research questions pertaining to world-class quality systems: (1) What are the components of world-class quality systems? (2) Does a relationship exist between world-class quality system components and improved organizational performance? (3) Which world-class quality system components contribute most to changes in performance? The theoretical foundation for investigating these relationships is developed from Galbraith's (1977) information processing model of organization design. An extensive literature review resulted in the identification of seven components common to world-class quality systems: management involvement, customer involvement, employee involvement, supplier involvement, product/service design, process management, and continuous improvement. The literature suggests implementation of these components leads to changes in performance in such areas as productivity, throughput time, and quality output. A cross-sectional field study was used to gather data to answer the research questions. In this study, each component of world-class quality systems is measured as an independent variable. Changes in productivity, throughput time, and quality output are measured as dependent variables. Factor analyses, correlation analyses, and hierarchical regression analyses are used to test the relationships. The target population was ISO 9000 certified companies located in the United States. The results indicated that management's involvement and employees' involvement are positively correlated with change in performance. The results also show that a positive relationship exists between the use of world-class quality system components and change in performance.
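The hierarchical regression analyses mentioned here (and in several abstracts below) test nested models by entering predictor sets in a fixed a priori order and examining the increase in R² at each step. A minimal sketch of that procedure on synthetic data; the variable names are illustrative, not taken from the study:

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an ordinary least squares fit of y on X (intercept added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

rng = np.random.default_rng(0)
n = 200
mgmt = rng.normal(size=n)   # hypothetical "management involvement" scores
empl = rng.normal(size=n)   # hypothetical "employee involvement" scores
perf = 0.6 * mgmt + 0.4 * empl + rng.normal(scale=0.5, size=n)

# Enter predictor sets in an a priori order; the increase in R^2 at each
# step estimates the incremental contribution of the newly entered set.
r2_step1 = r_squared(mgmt.reshape(-1, 1), perf)
r2_step2 = r_squared(np.column_stack([mgmt, empl]), perf)
print(f"step 1 R^2 = {r2_step1:.3f}")
print(f"step 2 R^2 = {r2_step2:.3f} (delta R^2 = {r2_step2 - r2_step1:.3f})")
```

In practice the increment in R² would be tested with an F statistic before interpreting the newly entered set; the sketch only shows the bookkeeping.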
digital.library.unt.edu/ark:/67531/metadc277953/ An investigation of the relationships between job characteristics, satisfaction, and team commitment as influenced by organization-based self-esteem within a team-based environment Access: Use of this item is restricted to the UNT Community. Team-based management is a popular contemporary method of redesigning jobs in order to more effectively utilize the human potential of employees. The use of such management techniques should result in increased satisfaction and team commitment; however, many research studies have failed to demonstrate increases in affective outcomes on the part of the employee. The research question examined in this study is, "What specific job dimensions and situational factors result in higher levels of satisfaction and team commitment?" The Job Characteristics Model (Hackman & Oldham, 1975) provided a basis for this study. The model was designed for individual contributors and has not been extensively used in team research. As expected, it was found that within a team-based environment higher levels of the five core job dimensions of skill variety, task identity, task significance, autonomy, and job feedback were associated with increased satisfaction and team commitment. Organization-based self-esteem was found to mediate the relationship between the five core job dimensions and the affective outcome variables. Contrary to expectations, however, it was found that consultative team members experienced higher levels of satisfaction and commitment than substantive team members. In addition, consultative team members reported higher levels of two core job dimensions, skill variety and task significance, and scored higher on the overall Job Diagnostic Survey than did substantive team members.
These findings have significant implications for companies undergoing organizational redesign and question whether those companies should implement advanced levels of employee involvement activities if the organizational goal is to increase satisfaction and commitment. The study employed a survey research design in which data were collected using a self-report questionnaire. A heterogeneous sample of 183 team members participating in either a consultative or a substantive team from four different companies in nine locations provided the data for this field survey. Multivariate analyses, including hierarchical set regression, were used to test the hypotheses. digital.library.unt.edu/ark:/67531/metadc2589/ Just-In-Time Purchasing and the Buyer-Supplier Relationship: Purchasing Performance Implications Using a Transaction Cost Analytic Framework The just-in-time purchasing literature resoundingly endorses long-term, cooperative buyer-supplier relationships. Significant anecdotal and descriptive evidence indicates that such relationships are rare in practice, raising questions as to the performance consequences of this gulf between theory and practice. Using an accepted theoretical model of the buyer-supplier relationship, transaction cost economics, this study examined the purchasing performance implications of the nature of the buyer-supplier relationship under just-in-time exchange. The focal purpose of the study was to examine the performance consequences of crafting long-term, cooperative relationships. The research design employed was a cross-sectional field study, involving a static-group comparison, implemented through the use of a mail survey. A dual-stage cluster sample of eight hundred purchasing managers and professionals employed in the two-digit Standard Industrial Classification (SIC) Code 36, Electronic and Other Electrical Equipment and Components, was provided by the National Association of Purchasing Management (NAPM).
The questionnaire was pretested and the substantive validity of the measurement scales assessed. Scales were purified via correlational and reliability analyses. Criterion-related and construct validity were established via correlational, exploratory factor, and confirmatory factor analyses. The three hypotheses of the study, involving extant tests of the association between the nature of the buyer-supplier relationship and purchasing performance (i.e., as reflected by transaction costs), were tested via analysis of covariance (ANCOVA) models. All three hypotheses were supported by the data to varying degrees. The confirmation of the theoretical model of the study provides empirical evidence to researchers and practitioners as to the superiority, in exchange efficiency terms, of cooperative relationships under conditions of just-in-time exchange. It may not be presumed, however, that cooperative exchange will enhance efficiency in all exchange environments. digital.library.unt.edu/ark:/67531/metadc278318/ Leader Emergence and Effectiveness in Virtual Workgroups: Dispositional and Social Identity Perspectives Linkage of Business and Manufacturing Strategies as a Determinant of Enterprise Performance: an Empirical Study in the Textile Industry The Manager as a Source of Departmental Power in a Manufacturing Company The purpose of this study is to explore the relationship between position-related sources of power and person-related sources of power in organizations. The subject is the power of an organizational sub-unit compared to other units. Theory on the structural sources of power is well established in the literature. The question in this study is whether the individual manager, the person, is another major source of power for the organizational unit. A major objective of the study is to fill this gap in the literature on power in organizations. 
A secondary objective of this study is to see if one can rank the individual position-related sources of power and person-related sources of power, identified through a literature review, within each group in terms of their relative importance. This is an exploratory, descriptive study explaining the "what is" of the relationship between position and person sources of power in a manufacturing company. Results indicate that there is a two-way relationship between manager power and department power, and that one can rank order the sources of power in terms of their contribution to a department's or manager's power. Power is defined in this study as the ability to get things done. digital.library.unt.edu/ark:/67531/metadc332300/ The Occupationally Injured Employee: Emotional and Behavioral Outcomes from Psychosocial Stressors This research explores whether a firm's psychosocial stressors contribute to strains or outcomes important to the organization. The psychosocial stressors chosen for study include: role conflict and ambiguity, workload (qualitative and quantitative), participative decision making, autonomy, and security. Dependent variables were the emotional strains of job satisfaction and job commitment. The dependent variables for behavioral strains included injury, lost days, workers' compensation claims, and absenteeism. Three moderators: age, gender, and social support were evaluated for interaction effects. The study sampled 77 occupationally injured and 81 non-injured employees from one medium-sized Army community hospital. This study uses multivariate hierarchical multiple set regression as its principal analytical method.
The hierarchical procedure orders the sets into an a priori hierarchy and enters each set sequentially from the hierarchy, evaluating the increase in $R^2$. The results suggest that psychosocial stressors are significant variables to consider when investigating workers' emotional and behavioral strains. For example, age, participation, and satisfaction were found statistically significant in differentiating between the occupationally injured and the non-injured samples. The study also found that ambiguity, participation, and autonomy influenced emotional strains. Additionally, age and social support appear to moderate the relationship between some psychosocial factors and emotional and behavioral strains. Age moderated only the relationships with emotional strains, while social support moderated both emotional and behavioral strains. Further, social support was found to have a main effect on the emotional strains of satisfaction and commitment, but not on any behavioral ones. Age was found to have a direct effect on the behavioral strain of workers' compensation claims. Finally, although not statistically significant when entered as a set and evaluated using the statistical analysis techniques in this study, a relationship between age and workers' compensation claims and another between qualitative workload and absenteeism were suggested. The economic and human costs associated with occupational injury are staggering. These findings suggest that attention to psychosocial factors within the control of the employer can promote good management outcomes, improve employees' quality of work life, and contain costs. digital.library.unt.edu/ark:/67531/metadc277759/ Optimal design of Dutch auctions with discrete bid levels. Access: Use of this item is restricted to the UNT Community. The theory of auctions has become an active research area spanning multiple disciplines such as economics, finance, marketing and management science.
But a close examination of it reveals that most of the existing studies deal with ascending (i.e., English) auctions in which it is assumed that the bid increments are continuous. There is a clear lack of research on optimal descending (i.e., Dutch) auction design with discrete bid levels. This dissertation aims to fill this void by considering single-unit, open-bid, first price Dutch auctions in which the bid levels are restricted to a finite set of values, the number of bidders may be certain or uncertain, and a secret reserve price may be present or absent. These types of auctions are most attractive for selling products that are perishable (e.g., flowers) or whose value decreases with time (e.g., air flight seats and concert tickets) (Carare and Rothkopf, 2005). I began by conducting a comprehensive survey of the current literature to identify the key dimensions of an auction model. I then zeroed in on the particular combination of parameters that characterize the Dutch auctions of interest. As a significant departure from the traditional methods employed by applied economists and game theorists, a novel approach is taken by formulating the auctioning problem as a constrained mathematical program and applying standard nonlinear optimization techniques to solve it. In each of the basic Dutch auction model and its two extensions, interesting properties possessed by the optimal bid levels and the auctioneer's maximum expected revenue are uncovered. Numerical examples are provided to illustrate the major propositions where appropriate. The superiority of the optimal strategy recommended in this study over two commonly-used heuristic procedures for setting bid levels is also demonstrated both theoretically and empirically. Finally, economic as well as managerial implications of the findings reported in this dissertation research are discussed. 
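The dissertation's constrained-program formulation is not reproduced in the abstract, but the general shape of "choose a finite set of bid levels to maximize expected revenue" can be sketched under strong simplifying assumptions: bidder valuations iid uniform on [0, 1], and a bidder accepting the first descending level at or below their valuation. Everything below is an illustration of that toy model, not the study's actual model:

```python
import numpy as np
from scipy.optimize import minimize

def expected_revenue(levels, n_bidders):
    """Expected revenue of a descending auction with discrete bid levels.

    Toy model: valuations iid U(0, 1); the sale occurs at level l_k when
    the maximum valuation falls in [l_k, l_{k-1}), with l_0 = 1.
    """
    l = np.sort(np.asarray(levels, dtype=float))[::-1]   # descending order
    upper = np.concatenate([[1.0], l[:-1]])
    prob = upper ** n_bidders - l ** n_bidders           # P(sale at l_k)
    return float((l * prob).sum())

n_bidders, n_levels = 5, 4
x0 = np.linspace(0.9, 0.5, n_levels)                     # heuristic start
res = minimize(lambda l: -expected_revenue(l, n_bidders), x0,
               bounds=[(0.0, 1.0)] * n_levels, method="L-BFGS-B")
best = np.sort(res.x)[::-1]
print("optimized levels:", np.round(best, 3))
print("expected revenue:", round(expected_revenue(best, n_bidders), 4))
```

Sorting inside the objective enforces the descending-level constraint implicitly, which keeps the problem a plain bound-constrained program; a faithful treatment would model strategic waiting by bidders rather than the myopic acceptance rule assumed here.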
digital.library.unt.edu/ark:/67531/metadc28450/ Organizational Commitment: A Cross-National Comparison of Arab and Non-Arab Employees in Saudi Petrochemical Companies Individuals with different personal demographics and job-based factors have different attitudes and behaviors, which can influence their levels of commitment to their organizations. These differences in organizational commitment increase as cultural backgrounds differ more significantly. Personal demographics and job-related factors are reliable predictors of employees' commitment to their employing organizations. The purpose of this study was to empirically investigate whether there is a difference in the level of employees' commitment to Saudi petrochemical companies on the basis of differences in their personal demographics and job-related factors. digital.library.unt.edu/ark:/67531/metadc277912/ Post-Implementation Evaluation of Enterprise Resource Planning (ERP) Systems The purposes of this dissertation were to define enterprise resource planning (ERP) systems, assess the varying performance benefits flowing from different ERP system implementation statuses, and investigate the impact of critical success factors (CSFs) on the ERP system deployment process. A conceptual model was developed and a survey instrument constructed to gather data for testing the hypothesized model relationships. Data were collected through a cross-sectional field study of Indian production firms considered pioneers in understanding and implementing ERP systems. The sample data were drawn from a target population of 900 firms belonging to the Confederation of Indian Industry (CII). The production firms in the CII member directory represent a well-balanced mix of firms of different sizes, production processes, and industries. The conceptual model was tested using factor analysis, multiple linear regression analysis, and univariate ANOVA.
The results indicate that the contributions of different ERP system modules vary with different measures of changes in performance and that a holistic ERP system contributes to performance changes. The results further indicate that the contributions of CSFs vary with different measures of changes in performance and that CSFs and the holistic ERP system influence the success achieved from deployments. Also, firms that emphasize CSFs throughout the ERP implementation process achieve greater performance benefits as compared to those that focus on CSFs only during the initial ERP system deployment. Overall, the results of the study support the relationships hypothesized in the conceptual model. digital.library.unt.edu/ark:/67531/metadc6081/ Predicting Small Business Executives' Intentions to Comply with the Americans with Disabilities Act of 1990 Using the Theories of Reasoned Action and Planned Behavior and the Concept of Offender Empathy This study attempted to determine whether the theories of reasoned action (TRA) and planned behavior (TPB), as well as a relatively new construct called offender empathy, could help to predict the intentions of small business executives (SBEs) to comply with the employment provisions of the Americans with Disabilities Act (ADA) of 1990. digital.library.unt.edu/ark:/67531/metadc277842/ Predicting the Use of External Labor Arrangements: A Transaction Costs Perspective Firms' use of external labor arrangements (ELAs), such as temporary, contract, and seasonal workers, has become increasingly prevalent over the last two decades. Despite the increasing importance of this phenomenon, little is known about firms' reasons for using ELAs. Most research to date has been exploratory, using qualitative methods or archival data not well suited to the constructs. The result of this research has been a long and often contradictory list of proposed antecedents of ELA use.
In this study, I tested the ability of the transaction costs theory to predict when firms will fill a given job using an ELA rather than a permanent employment relationship. According to this theory, three characteristics of the job will determine whether the job will be filled using an ELA: transaction-specific investment, likelihood of repetition, and uncertainty of performance. Firms will be less likely to staff a given job using an ELA when the job requires investment in idiosyncratic skills, when the firm is likely to require a person with that set of skills regularly, and when performance in that job is difficult to measure. digital.library.unt.edu/ark:/67531/metadc277753/
https://www.physicsforums.com/threads/complex-function.223548/
Complex function?

1. Mar 22, 2008 mkbh_10

1. The problem statement, all variables and given/known data

Locate and name the singularity of the function sin(√z)/√z.

2. Relevant equations

3. The attempt at a solution

At z = 0 it gives the 0/0 form, so should I apply L'Hôpital's rule and then proceed?

2. Mar 22, 2008 Hootenanny, Staff Emeritus

There is no need to consider the fraction as an entire entity; instead, one can calculate the order of the numerator and the denominator independently and then combine them to find the order of the quotient. Hence, start by determining the order of the numerator and denominator separately.

3. Mar 22, 2008 mkbh_10

Determining the order of the numerator and denominator?

4. Mar 22, 2008 Hootenanny, Staff Emeritus

One can determine the order of a function at a point by finding the order of the lowest derivative which is non-vanishing at that point. For example, the function

$$f(x) = x^2$$

has order 2 at x = 0 since

$$f(0)=0 \;\;,\;\;f^\prime(0) = 0 \;\;,\;\;f^{\prime\prime}(0)=2\neq0$$

Do you follow?

Last edited: Mar 22, 2008

5. Mar 22, 2008 mkbh_10

The above function has order 1 at z = 0. Then what?

6. Mar 22, 2008 Hootenanny, Staff Emeritus

Correct. So, if a function has a singularity of order one, what type of singularity is it?

7. Mar 22, 2008 mkbh_10

I don't know.

8. Mar 22, 2008 Hootenanny, Staff Emeritus

A function with a positive order at a given point means that the Laurent series of the function at that point has no principal part, which means the singularity is ________.
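The blank the thread leaves for the reader can be checked symbolically: expanded in powers of z, sin(√z)/√z has no principal part and a finite limit at z = 0, so the singularity is removable. A quick sympy check (an illustration added here, not part of the original thread):

```python
import sympy as sp

z = sp.symbols('z')
f = sp.sin(sp.sqrt(z)) / sp.sqrt(z)

# sin(w) = w - w**3/6 + w**5/120 - ... with w = sqrt(z),
# so f = 1 - z/6 + z**2/120 - ...: no negative powers of z.
print(sp.series(f, z, 0, 3))
print(sp.limit(f, z, 0))   # finite limit at z = 0 -> removable singularity
```

Defining f(0) = 1 extends the function to an entire function of z, which is exactly what a removable singularity means.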