https://www.shotgunworld.com/threads/801-piston-question.128008/
#### Rob
I've checked past posts, but I'm still not clear. It would seem to me that for the gun to be set up for light loads, the "light/std" arrow should be pointed toward the muzzle.
I think the confusion comes from the fact that the manual doesn't mention the arrows.
To put it another way, my piston is in the same position as in the pic on page 14 of the manual.
Really need help here; I have a round of sporting on Tuesday.
best regards,
Rob
#### KEMOSABE
My piston doesn't have an arrow; instead it has an M on it, for magnum. If the M is closer to the muzzle, it is set for magnum loads. If the M is closer to the trigger, it is set for light or field loads. I would think in your case the arrow represents the M. Hope this helps. William
#### Rob
I just checked, and the magnum end is pointed toward the muzzle, but in this position the magnum arrow is pointed back toward the trigger. The strange part is that the arrows point toward each other, like this:
light magnum
---------> <----------
std. heavy
Now see why I'm confused?
Rob
#### Rob
light magnum
------> <---------
std heavy
Viewed this way, the words magnum and heavy are upside down. Surely this would indicate that this is the light/std configuration, since this is readable upright and the arrow is pointed toward the muzzle.
Italians!...go figure....
Rob
https://www.physicsforums.com/threads/having-trouble-differentiating-exponential-equations.309566/

Homework Help: Having trouble differentiating exponential equations
1. Apr 24, 2009
Draggu
1. The problem statement, all variables and given/known data
L(t)=15(0.5^(t/26))
Find the rate of change of L when t = 60.
2. Relevant equations
3. The attempt at a solution
L'(t) = (15/26)(1/2)^(t/26)ln(1/2)
L'(60)=(15/26)(1/2)^(60/26)ln(1/2)
= -0.08
Did I do this right? If I did it wrong, please say where, I am having great trouble understanding this.
2. Apr 24, 2009
Cyosis
Yes you did it correctly. You say you have great trouble understanding "this". What is "this" exactly, how to differentiate an exponent?
3. Apr 24, 2009
Draggu
I guess I was just lucky to be honest, since I did the question with a friend. My problem is applying the chain rule, and knowing where/when to put ln.
Here's another that I have not finished yet (don't know how):
Flow of lava from a volcano is modelled by
l(t)= 12(2-0.8^t)
l is the distance from the crater in km
t is the time in hours
How fast is the lava travelling down the hillside after 4 hours?
Well, first I need to differentiate l(t), then solve for t.
Can somebody give me hints for what to do with the t? I'm not quite sure..
4. Apr 24, 2009
Dick
0.8^t=e^(ln(0.8)*t). The derivative of e^(ct) with respect to t is what? Use the chain rule. Powers of constants are just exponentials.
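Written out, that hint gives:

$$\frac{d}{dt}\,0.8^t = \frac{d}{dt}\,e^{\ln(0.8)\,t} = \ln(0.8)\,e^{\ln(0.8)\,t} = \ln(0.8)\cdot 0.8^t$$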
5. Apr 24, 2009
jhae2.718
Use this derivative shortcut for exponential functions:
dy/dx a^u = ln(a)*a^u*du   // du is the derivative of u; this is where you use the chain rule
For a = e, dy/dx e^u = e^u*du
You are supposed to find the rate at which lava flows down the hill at 4 hours. Since t is the time in hours and l(t) is the distance of the lava from the crater in km, you need to find l'(4) to get the rate. It is not necessary to "solve" for t: t = 4.
If you were told the rate and were supposed to find the time, t, you would use algebra to solve for t, much as any other equation. Since l'(t) is an exponential function, you would have to use logarithms and the property that log(a^b)=b log(a), or that log(a)/log(b)=log_b(a).
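For instance, with an assumed rate of 1.5 km/h (a made-up number, just to illustrate), solving for t goes:

$$-12\ln(0.8)\,(0.8)^t = 1.5 \;\Rightarrow\; (0.8)^t = \frac{1.5}{-12\ln(0.8)} \;\Rightarrow\; t = \frac{\ln\!\left(\frac{1.5}{-12\ln(0.8)}\right)}{\ln(0.8)} \approx 2.6\ \text{hours}$$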
6. Apr 25, 2009
HallsofIvy
An exponential is about the easiest function to differentiate!
(e^x)' = e^x and (a^x)' = a^x ln(a).
More generally, by the chain rule, (a^(f(x)))' = a^(f(x)) ln(a) f'(x).
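Applied to the first problem in this thread, the general rule gives:

$$L(t)=15\left(\tfrac{1}{2}\right)^{t/26}\quad\Rightarrow\quad L'(t)=15\left(\tfrac{1}{2}\right)^{t/26}\ln\!\left(\tfrac{1}{2}\right)\cdot\frac{1}{26},\qquad L'(60)\approx -0.08$$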
7. Apr 25, 2009
Draggu
l(t)= 12(2-0.8^t)
so l'(t) = 12(2-0.8^(t) ln 0.8)
If that's true, it doesn't seem to get me anywhere.
8. Apr 25, 2009
Hootenanny
Staff Emeritus
That's not quite right, why is there still a 2 there?
9. Apr 25, 2009
Draggu
Ah yes.
so l'(t) = 12[-0.8^(t)] ln(0.8)
..
..
So when I sub in 4 for t, I get 1.1. Is this correct?
10. Apr 25, 2009
Hootenanny
Staff Emeritus
Indeed you do.
11. Apr 25, 2009
Draggu
"how fast is the lava travelling down hillside after 4hours"
One more question, just needs clarifying.
For example, if I had (1/2)^(t/138)
the derivative would be (1/138)(1/2)^(t/138) ln(1/2)?
I'm having trouble differentiating an exponent with a denominator.
12. Apr 25, 2009
Hootenanny
Staff Emeritus
Apologies, I was looking at your OP.
13. Apr 25, 2009
jhae2.718
That derivative would be correct. With a denominator, remember the chain rule: dy/dx = dy/dt * dt/dx.
To put it simply, you multiply the derivative of the function (the a^u * ln(a) part) by the derivative of the exponent.
So for a derivative with a denominator, you would have a^(x/b) * ln(a) * (1/b).
Examples: dy/dx 2^(x/20)=2^(x/20)*ln(2)*1/20
dy/dx 3.4^[(x^2)/85]=3.4^[(x^2)/85]*ln(3.4)*(2x)/85
dy/dx 3-5^[(x^4)/220]=-5^[(x^4)/220]*ln(5)*[(x^3)/55]
Does that help?
https://socialsci.libretexts.org/Courses/Foothill_College/Book%3A_Introduction_to_Psychology_2020/09%3A_Emotions_and_Motivations/9.02%3A_Positive_Emotions-_The_Power_of_Happiness

# 9.2: Positive Emotions- The Power of Happiness
• Anonymous
• LibreTexts
Learning Objectives
1. Understand the important role of positive emotions and happiness in responding to stress.
2. Understand the factors that increase, and do not increase, happiness.
Although stress is an emotional response that can kill us, our emotions can also help us cope with and protect ourselves from it. The stress of the Monday through Friday grind can be offset by the fun that we can have on the weekend, and the concerns that we have about our upcoming chemistry exam can be offset by a positive attitude toward school, life, and other people. Put simply, the best antidote for stress is a happy one: Think positively, have fun, and enjoy the company of others.
You have probably heard about the “power of positive thinking”—the idea that thinking positively helps people meet their goals and keeps them healthy, happy, and able to effectively cope with the negative events that occur to them. It turns out that positive thinking really works. People who think positively about their future, who believe that they can control their outcomes, and who are willing to open up and share with others are healthier people (Seligman & Csikszentmihalyi, 2000).
The power of positive thinking comes in different forms, but they are all helpful. Some researchers have focused on optimism, a general tendency to expect positive outcomes, finding that optimists are happier and have less stress (Carver & Scheier, 2009). Others have focused on self-efficacy, the belief in our ability to carry out actions that produce desired outcomes. People with high self-efficacy respond to environmental and other threats in an active, constructive way—by getting information, talking to friends, and attempting to face and reduce the difficulties they are experiencing. These people too are better able to ward off their stresses in comparison to people with less self-efficacy (Thompson, 2009).
Self-efficacy helps in part because it leads us to perceive that we can control the potential stressors that may affect us. Workers who have control over their work environment (e.g., by being able to move furniture and control distractions) experience less stress, as do patients in nursing homes who are able to choose their everyday activities (Rodin, 1986). Glass, Reim, and Singer (1971) found that participants who believed that they could stop a loud noise experienced less stress than those who did not think that they could, even though the people who had the option never actually used it. The ability to control our outcomes may help explain why animals and people who have higher status live longer (Sapolsky, 2005).
Suzanne Kobasa and her colleagues (Kobasa, Maddi, & Kahn, 1982) have argued that the tendency to be less affected by life’s stressors can be characterized as an individual difference measure, related to both optimism and self-efficacy, known as hardiness. Hardy individuals are those who are more positive overall about potentially stressful life events, who take more direct action to understand the causes of negative events, and who attempt to learn from them what may be of value for the future. Hardy individuals use effective coping strategies, and they take better care of themselves.
Taken together, these various coping skills, including optimism, self-efficacy, and hardiness, have been shown to have a wide variety of positive effects on our health. Optimists make faster recoveries from illnesses and surgeries (Carver et al., 2005). People with high self-efficacy have been found to be better able to quit smoking and lose weight and are more likely to exercise regularly (Cohen & Pressman, 2006). And hardy individuals seem to cope better with stress and other negative life events (Dolbier, Smith, & Steinhardt, 2007). The positive effects of positive thinking are particularly important when stress is high. Baker (2007) found that in periods of low stress, positive thinking made little difference in responses to stress, but that during stressful periods optimists were less likely to smoke on a day-to-day basis and to respond to stress in more productive ways, such as by exercising.
It is possible to learn to think more positively, and doing so can be beneficial. Antoni et al. (2001) found that pessimistic cancer patients who were given training in optimism reported more optimistic outlooks after the training and were less fatigued after their treatments. And Maddi, Kahn, and Maddi (1998) found that a “hardiness training” program that included focusing on ways to effectively cope with stress was effective in increasing satisfaction and decreasing self-reported stress.
The benefits of taking positive approaches to stress can last a lifetime. Christopher Peterson and his colleagues (Peterson, Seligman, Yurko, Martin, & Friedman, 1998) found that the level of optimism reported by people who had first been interviewed when they were in college during the years between 1936 and 1940 predicted their health over the next 50 years. Students who had a more positive outlook on life in college were less likely to have died up to 50 years later of all causes, and they were particularly likely to have experienced fewer accidental and violent deaths, in comparison to students who were less optimistic. Similar findings were found for older adults. After controlling for loneliness, marital status, economic status, and other correlates of health, Levy and Myers found that older adults with positive attitudes and higher self-efficacy had better health and lived on average almost 8 years longer than their more negative peers (Levy & Myers, 2005; Levy, Slade, & Kasl, 2002). And Diener, Nickerson, Lucas, and Sandvik (2002) found that people who had cheerier dispositions earlier in life had higher income levels and less unemployment when they were assessed 19 years later.
## Finding Happiness Through Our Connections With Others
Happiness is determined in part by genetic factors, such that some people are naturally happier than others (Braungart, Plomin, DeFries, & Fulker, 1992; Lykken, 2000), but also in part by the situations that we create for ourselves. Psychologists have studied hundreds of variables that influence happiness, but there is one that is by far the most important. People who report that they have positive social relationships with others—the perception of social support—also report being happier than those who report having less social support (Diener, Suh, Lucas, & Smith, 1999; Diener, Tamir, & Scollon, 2006). Married people report being happier than unmarried people (Pew, 2006)[1], and people who are connected with and accepted by others experience less depression, higher self-esteem, and less social anxiety and jealousy than those who feel more isolated and rejected (Leary, 1990).
Social support also helps us better cope with stressors. Koopman, Hermanson, Diamond, Angell, and Spiegel (1998) found that women who reported higher social support experienced less depression when adjusting to a diagnosis of cancer, and Ashton et al. (2005) found a similar buffering effect of social support for AIDS patients. People with social support are less depressed overall, recover faster from negative events, and are less likely to commit suicide (Au, Lau, & Lee, 2009; Bertera, 2007; Compton, Thompson, & Kaslow, 2005; Skärsäter, Langius, Ågren, Häggström, & Dencker, 2005).
Social support buffers us against stress in several ways. For one, having people we can trust and rely on helps us directly by allowing us to share favors when we need them. These are the direct effects of social support. But having people around us also makes us feel good about ourselves. These are the appreciation effects of social support. Gençöz and Özlale (2004) found that students with more friends felt less stress and reported that their friends helped them, but they also reported that having friends made them feel better about themselves. Again, you can see that the tend-and-befriend response, so often used by women, is an important and effective way to reduce stress.
## What Makes Us Happy?
One difficulty that people face when trying to improve their happiness is that they may not always know what will make them happy. As one example, many of us think that if we just had more money we would be happier. While it is true that we do need money to afford food and adequate shelter for ourselves and our families, after this minimum level of wealth is reached, more money does not generally buy more happiness (Easterlin, 2005). For instance, as you can see in Figure 9.2.11, even though income and material success have improved dramatically in many countries over the past decades, happiness has not. Despite tremendous economic growth in France, Japan, and the United States between 1946 and 1990, there was no increase in reports of well-being by the citizens of these countries. Americans today have about three times the buying power they had in the 1950s, and yet overall happiness has not increased. The problem seems to be that we never seem to have enough money to make us “really” happy. Csikszentmihalyi (1999) reported that people who earned $30,000 per year felt that they would be happier if they made $50,000 per year, but that people who earned $100,000 per year said that they would need $250,000 per year to make them happy.
These findings might lead us to conclude that we don’t always know what does or what might make us happy, and this seems to be at least partially true. For instance, Jean Twenge and her colleagues (Twenge, Campbell & Foster, 2003) have found in several studies that although people with children frequently claim that having children makes them happy, couples who do not have children actually report being happier than those who do.
Psychologists have found that people’s ability to predict their future emotional states is not very accurate (Wilson & Gilbert, 2005). For one, people overestimate their emotional reactions to events. Although people think that positive and negative events that might occur to them will make a huge difference in their lives, and although these changes do make at least some difference in life satisfaction, they tend to be less influential than we think they are going to be. Positive events tend to make us feel good, but their effects wear off pretty quickly, and the same is true for negative events. For instance, Brickman, Coates, and Janoff-Bulman (1978) interviewed people who had won more than $50,000 in a lottery and found that they were not happier than they had been in the past, and were also not happier than a control group of similar people who had not won the lottery. On the other hand, the researchers found that individuals who were paralyzed as a result of accidents were not as unhappy as might be expected.
How can this possibly be? There are several reasons. For one, people are resilient; they bring their coping skills into play when negative events occur, and this makes them feel better. Secondly, most people do not continually experience very positive, or very negative, affect over a long period of time, but rather adapt to their current circumstances. Just as we enjoy the second chocolate bar we eat less than we enjoy the first, as we experience more and more positive outcomes in our daily lives we habituate to them and our life satisfaction returns to a more moderate level (Small, Zatorre, Dagher, Evans, & Jones-Gotman, 2001).
Another reason that we may mispredict our happiness is that our social comparisons change when our own status changes as a result of new events. People who are wealthy compare themselves to other wealthy people, people who are poor tend to compare with other poor people, and people who are ill tend to compare with other ill people. When our comparisons change, our happiness levels are correspondingly influenced. And when people are asked to predict their future emotions, they may focus only on the positive or negative event they are asked about, and forget about all the other things that won’t change. Wilson, Wheatley, Meyers, Gilbert, and Axsom (2000) found that when people were asked to focus on all the more regular things that they will still be doing in the future (working, going to church, socializing with family and friends, and so forth), their predictions about how something really good or bad would influence them were less extreme.
If pleasure is fleeting, at least misery shares some of the same quality. We might think we can’t be happy if something terrible, such as the loss of a partner or child, were to happen to us, but after a period of adjustment most people find that happiness levels return to prior levels (Bonanno et al., 2002). Health concerns tend to put a damper on our feeling of well-being, and those with a serious disability or illness show slightly lowered mood levels. But even when health is compromised, levels of misery are lower than most people expect (Lucas, 2007; Riis et al., 2005). For instance, although disabled individuals have more concern about health, safety, and acceptance in the community, they still experience overall positive happiness levels (Marinić & Brkljačić, 2008). Taken together, it has been estimated that our wealth, health, and life circumstances account for only 15% to 20% of life satisfaction scores (Argyle, 1999). Clearly the main ingredient in happiness lies beyond, or perhaps beneath, external factors.
## Key Takeaways
• Positive thinking can be beneficial to our health.
• Optimism, self-efficacy, and hardiness all relate to positive health outcomes.
• Happiness is determined in part by genetic factors, but also by the experience of social support.
• People may not always know what will make them happy.
• Material wealth plays only a small role in determining happiness.
## Exercises and Critical Thinking
1. Are you a happy person? Can you think of ways to increase your positive emotions?
2. Do you know what will make you happy? Do you believe that material wealth is not as important as you might have thought it would be?
[1] Pew Research Center (2006, February 13). Are we happy yet? Retrieved from pewresearch.org/pubs/301/are-we-happy-yet.
## References
Antoni, M. H., Lehman, J. M., Klibourn, K. M., Boyers, A. E., Culver, J. L., Alferi, S. M.,…Kilbourn, K. (2001). Cognitive-behavioral stress management intervention decreases the prevalence of depression and enhances benefit finding among women under treatment for early-stage breast cancer. Health Psychology, 20(1), 20–32.
Argyle, M. (1999). Causes and correlates of happiness. In D. Kahneman, E. Diener, & N. Schwarz (Eds.), Well being: The foundations of hedonic psychology. New York, NY: Russell Sage Foundation.
Ashton, E., Vosvick, M., Chesney, M., Gore-Felton, C., Koopman, C., O’Shea, K.,…Spiegel, D. (2005). Social support and maladaptive coping as predictors of the change in physical health symptoms among persons living with HIV/AIDS. AIDS Patient Care & STDs, 19(9), 587–598. doi:10.1089/apc.2005.19.587
Au, A., Lau, S., & Lee, M. (2009). Suicide ideation and depression: The moderation effects of family cohesion and social self-concept. Adolescence, 44(176), 851–868. Retrieved from Academic Search Premier Database.
Baker, S. R. (2007). Dispositional optimism and health status, symptoms, and behaviors: Assessing ideothetic relationships using a prospective daily diary approach. Psychology and Health, 22(4), 431–455.
Bertera, E. (2007). The role of positive and negative social exchanges between adolescents, their peers and family as predictors of suicide ideation. Child & Adolescent Social Work Journal, 24(6), 523–538. doi:10.1007/s10560-007-0104-y.
Bonanno, G. A., Wortman, C. B., Lehman, D. R., Tweed, R. G., Haring, M., Sonnega, J.,…Nesse, R. M. (2002). Resilience to loss and chronic grief: A prospective study from preloss to 18-months postloss. Journal of Personality and Social Psychology, 83(5), 1150–1164.
Braungart, J. M., Plomin, R., DeFries, J. C., & Fulker, D. W. (1992). Genetic influence on tester-rated infant temperament as assessed by Bayley’s Infant Behavior Record: Nonadoptive and adoptive siblings and twins. Developmental Psychology, 28(1), 40–47.
Brickman, P., Coates, D., & Janoff-Bulman, R. (1978). Lottery winners and accident victims: Is happiness relative? Journal of Personality and Social Psychology, 36(8), 917–927.
Carver, C. S., & Scheier, M. F. (2009). Optimism. In M. R. Leary & R. H. Hoyle (Eds.), Handbook of individual differences in social behavior (pp. 330–342). New York, NY: Guilford Press.
Carver, C. S., Smith, R. G., Antoni, M. H., Petronis, V. M., Weiss, S., & Derhagopian, R. P. (2005). Optimistic personality and psychosocial well-being during treatment predict psychosocial well-being among long-term survivors of breast cancer. Health Psychology, 24(5), 508–516.
Cohen, S., & Pressman, S. D. (2006). Positive affect and health. Current Directions in Psychological Science, 15(3), 122–125.
Compton, M., Thompson, N., & Kaslow, N. (2005). Social environment factors associated with suicide attempt among low-income African Americans: The protective role of family relationships and social support. Social Psychiatry & Psychiatric Epidemiology, 40(3), 175–185. doi:10.1007/s00127-005-0865-6.
Csikszentmihalyi, M. (1999). If we are so rich, why aren’t we happy? American Psychologist, 54(10), 821–827.
Diener, E., Nickerson, C., Lucas, R., & Sandvik, E. (2002). Dispositional affect and job outcomes. Social Indicators Research, 59(3), 229. Retrieved from Academic Search Premier Database.
Diener, E., Tamir, M., & Scollon, C. N. (2006). Happiness, life satisfaction, and fulfillment: The social psychology of subjective well-being. In P. A. M. VanLange (Ed.), Bridging social psychology: Benefits of transdisciplinary approaches. Mahwah, NJ: Lawrence Erlbaum Associates.
Diener, E., Suh, E. M., Lucas, R. E., & Smith, H. L. (1999). Subjective well-being: Three decades of progress. Psychological Bulletin, 125(2), 276–302.
Dolbier, C. L., Smith, S. E., & Steinhardt, M. A. (2007). Relationships of protective factors to stress and symptoms of illness. American Journal of Health Behavior, 31(4), 423–433.
Easterlin, R. (2005). Feeding the illusion of growth and happiness: A reply to Hagerty and Veenhoven. Social Indicators Research, 74(3), 429–443. doi:10.1007/s11205-004-6170-z
Gençöz, T., & Özlale, Y. (2004). Direct and indirect effects of social support on psychological well-being. Social Behavior & Personality: An International Journal, 32(5), 449–458.
Glass, D. C., Reim, B., & Singer, J. E. (1971). Behavioral consequences of adaptation to controllable and uncontrollable noise. Journal of Experimental Social Psychology, 7(2), 244–257.
Kobasa, S. C., Maddi, S. R., & Kahn, S. (1982). Hardiness and health: A prospective study. Journal of Personality and Social Psychology, 42(1), 168–177.
Koopman, C., Hermanson, K., Diamond, S., Angell, K., & Spiegel, D. (1998). Social support, life stress, pain and emotional adjustment to advanced breast cancer. Psycho-Oncology, 7(2), 101–110.
Leary, M. R. (1990). Responses to social exclusion: Social anxiety, jealousy, loneliness, depression, and low self-esteem. Journal of Social and Clinical Psychology, 9(2), 221–229.
Levy, B., Slade, M., & Kasl, S. (2002). Longitudinal benefit of positive self-perceptions of aging on functional health. Journals of Gerontology Series B: Psychological Sciences & Social Sciences, 57B(5), P409. Retrieved from Academic Search Premier Database.
Levy, B., & Myers, L. (2005). Relationship between respiratory mortality and self-perceptions of aging. Psychology & Health, 20(5), 553–564. doi:10.1080/14768320500066381.
Lucas, R. (2007). Long-term disability is associated with lasting changes in subjective well-being: Evidence from two nationally representative longitudinal studies. Journal of Personality & Social Psychology, 92(4), 717–730. Retrieved from Academic Search Premier Database.
Lykken, D. T. (2000). Happiness: The nature and nurture of joy and contentment. New York, NY: St. Martin’s Press.
Maddi, S. R., Kahn, S., & Maddi, K. L. (1998). The effectiveness of hardiness training. Consulting Psychology Journal: Practice and Research, 50(2), 78–86.
Marinić, M., & Brkljačić, T. (2008). Love over gold—The correlation of happiness level with some life satisfaction factors between persons with and without physical disability. Journal of Developmental & Physical Disabilities, 20(6), 527–540. doi:10.1007/s10882-008-9115-7
Peterson, C., Seligman, M. E. P., Yurko, K. H., Martin, L. R., & Friedman, H. S. (1998). Catastrophizing and untimely death. Psychological Science, 9(2), 127–130.
Riis, J., Baron, J., Loewenstein, G., Jepson, C., Fagerlin, A., & Ubel, P. (2005). Ignorance of hedonic adaptation to hemodialysis: A study using ecological momentary assessment. Journal of Experimental Psychology/General, 134(1), 3–9. doi:10.1037/0096-3445.134.1.3
Rodin, J. (1986). Aging and health: Effects of the sense of control. Science, 233(4770), 1271–1276.
Sapolsky, R. M. (2005). The influence of social hierarchy on primate health. Science, 308(5722), 648–652.
Seligman, M. E. P., & Csikszentmihalyi, M. (2000). Positive psychology: An introduction. American Psychologist, 55(1), 5–14.
Skärsäter, I., Langius, A., Ågren, H., Häggström, L., & Dencker, K. (2005). Sense of coherence and social support in relation to recovery in first-episode patients with major depression: A one-year prospective study. International Journal of Mental Health Nursing, 14(4), 258–264. doi:10.1111/j.1440-0979.2005.00390.x
Small, D. M., Zatorre, R. J., Dagher, A., Evans, A. C., & Jones-Gotman, M. (2001). Changes in brain activity related to eating chocolate: From pleasure to aversion. Brain, 124(9), 1720–1733.
Thompson, S. C. (2009). The role of personal control in adaptive functioning. In S. J. Lopez & C. R. Snyder (Eds.), Oxford handbook of positive psychology (2nd ed., pp. 271–278). New York, NY: Oxford University Press.
Twenge, J. M., Campbell, W. K., & Foster, C. A. (2003). Parenthood and marital satisfaction: A meta-analytic review. Journal of Marriage and Family, 65(3), 574–583.
Wilson, T. D., & Gilbert, D. T. (2005). Affective forecasting: Knowing what to want. Current Directions in Psychological Science, 14(3), 131–134.
Wilson, T. D., Wheatley, T., Meyers, J. M., Gilbert, D. T., & Axsom, D. (2000). Focalism: A source of durability bias in affective forecasting. Journal of Personality and Social Psychology, 78(5), 821–836.
This page titled 9.2: Positive Emotions- The Power of Happiness is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by Anonymous.
https://petsc.org/release/docs/manualpages/Sys/PetscBT/

# PetscBT#
PETSc bitarrays, efficient storage of arrays of boolean values
## Synopsis#
typedef char *PetscBT;
## Notes#
The following routines do not have their own manual pages:
PetscBTCreate(m,&bt) - creates a bit array with enough room to hold m values
PetscBTDestroy(&bt) - destroys the bit array
PetscBTMemzero(m,bt) - zeros the entire bit array (sets all values to false)
PetscBTSet(bt,index) - sets a particular entry as true
PetscBTClear(bt,index) - sets a particular entry as false
PetscBTLookup(bt,index) - returns the value
PetscBTLookupSet(bt,index) - returns the value and then sets it true
PetscBTLookupClear(bt,index) - returns the value and then sets it false
PetscBTLength(m) - returns number of bytes in array with m bits
PetscBTView(m,bt,viewer) - prints all the entries in a bit array
PETSc does not check error flags on PetscBTLookup(), PetscBTLookupSet(), and PetscBTLength() because error checking would cost hundreds more cycles than the operation itself.
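A minimal usage sketch assembled from the routines listed above (assuming a recent PETSc where these helpers return error codes checkable with PetscCall(); PetscBTLookup() returns the stored value directly):

```c
#include <petscsys.h>
#include <petscbt.h>

int main(int argc, char **argv)
{
  PetscBT  bt;
  PetscInt m = 100; /* number of boolean values to store */

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  PetscCall(PetscBTCreate(m, &bt));   /* all m entries start as false */
  PetscCall(PetscBTSet(bt, 12));      /* entry 12 -> true */
  if (PetscBTLookup(bt, 12)) {        /* returns the value; no error flag */
    PetscCall(PetscBTClear(bt, 12));  /* entry 12 -> false again */
  }
  PetscCall(PetscBTMemzero(m, bt));   /* reset the whole array to false */
  PetscCall(PetscBTDestroy(&bt));

  PetscCall(PetscFinalize());
  return 0;
}
```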
https://www.gamedev.net/blogs/entry/2259703-resources-handling-part-1/

# Resources handling - part 1
Over the years, I've developed many resource handling systems; none of them has survived.
At first, every new system looked like the right one, the smartest ever.
After a few weeks of real use, they all showed their weaknesses.
So, for the new engine, I wrote a brand new resource handling system.
First, what is a resource? To me, a resource is something that:
1) can be loaded, unloaded and re-loaded at any time, without having to always specify some fancy parameters. I want to define a resource only once and, from there on, be able to refer to it with a simple handle.
2) can be shared. This means that a texture, for example, can be used by model1, model2, and whoever else needs it. Obviously, the resource is unique, so it should not be loaded/created twice.
3) when I need it, I immediately get it. No wait time at all. I call getResource("name") and I want a result now. This does not mean that the resource will always be immediately available; it could be not yet loaded, for example, but that does not matter. I want its unique handle immediately, plus a status that tells me whether the resource is ready or not.
4) can be updated at runtime. Once the resource is updated, everybody sees it in its new updated state. So if I update a texture, then model1 and model2 both see the updated texture without any special handling needed.
4a) as a bonus, if a resource is updated and then unloaded, next time it's loaded it should be loaded in its updated state
5) it can have sub-resources, which are normal resources that are loaded along with the master resource. For example, if a material resource has 2 textures, when I load the material, both textures should also be loaded, without any special handling needed.
Looking at the list of requirements, it's clear why I failed with all my previous attempts; it's not easy to fulfill these goals all together.
The resource system I'm developing for the current engine seems a good candidate. I finished developing it just 3 days ago, and so far it has worked fine. It's still a work in progress and will probably be adjusted in the future, but I can already see the power in it (muahahahah).
How does it work?
There's a central hub that I call resourceHub (resHub) that acts as the main interface from the app to the resources.
No, it's not a singleton, nor a global. Ideally you can have as many hubs as you want; they won't interact with each other.
The resHub can have many families. Every family can have many providers.
//family shape
resHub.addFamily ("shape", 4, &famShape);
resHub.addProvider (famShape, geom::shape::Provider::InitParams(), &providerShape_gosgeom);

//family image
resHub.addFamily ("image", 128, &famImage);
resHub.addProvider (famImage, image::Provider::InitParams(), &providerImage_gosimage);
resHub.addProvider (famImage, gpu::ImageProvider::InitParams(gpu, providerImage_gosimage), &providerImage_gpuimage);

//family shader
resHub.addFamily ("shader", 128, &famShader);
resHub.addProvider (famShader, gpu::TextShaderProvider::InitParams(), &providerShader_txt);
A bit scary, isn't it?
What this code does is add 3 families to the hub: family shape, family image and family shader.
Then it adds a geom::shape::Provider to the shape family, adds an image::Provider and a gpu::Image::Provider to the image family, and so on.
What this means is that an image resource can exist in 2 different formats: an image format and a gpu::Image format.
A shape resource can exist only in a geom::shape format (but you can add more providers later if you need additional formats).
More on formats later...
addResource() wants a family (either a name or a familyID obtained from addFamily()), a "resource name", and fills a resID with the resource's unique ID.
The resource name is also (part) of the filename that a provider will use to try to load the resource itself.
The provider will typically add one or more extensions to the filename.
Take as an example the texture "checker_512". If an image::Provider is asked to load it, it will look for a file named "checker_512.image.gosimage".
The same resource, loaded with the gpu::image::Provider, will result in a "checker_512.image.gpu" filename.
Generally speaking, a resource filename is composed of resource_name.resource_family.provider_parameters.
getResource() wants a resourceID (obtained from addResource()), a providerID (obtained from addProvider()) and a pointer to a buffer that will eventually point to the loaded resource.
If eResStatus_loading, you can't use the resource; the resBuffer will be NULL, but you know that the resource is being loaded.
If eResStatus_loaded, you're done; the resBuffer will point to valid data and you are free to use the resource as you wish. You are guaranteed that the resource will stay loaded until the next sync-point (more on this later).
If eResStatus_loadError, a load() started previously failed. At this point you know there's no way to load it from the HD, so you can eventually create it manually and update() it with valid data (more on this later... again), or mark it as noHope, which will prevent any further load() from occurring.
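In use, the polling pattern looks roughly like this (a sketch reconstructed from the calls described in this post; the eResStatus type name and useTexture() are stand-ins, not the engine's actual code):

```cpp
ResID texID;
resHub.addResource ("image", "checker_512", &texID);

const void *resBuffer = NULL;
eResStatus status = resHub.getResource (texID, providerImage_gosimage, &resBuffer);

if (status == eResStatus_loading)
{
    // not ready yet: resBuffer is NULL, ask again later (e.g. next frame)
}
else if (status == eResStatus_loaded)
{
    // resBuffer points to valid data, guaranteed at least until the next sync-point
    useTexture (resBuffer); // stand-in for whatever the caller does with it
}
else // eResStatus_loadError
{
    // the load from disk failed: create the data manually and update(),
    // or mark the resource as noHope to prevent further load() attempts
}
```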
The key point here is that with the same resource ID (an unsigned 32-bit value), I can ask for different formats.
For example:

```cpp
ResID resID;
resHub.addResource ("image", "checker", &resID);
//let's say that now resID = 12345, I can:
//ask for the gosimage format
resHub->getResource (resID, providerImage, ..);
//and also the gpu format
resHub->getResource (resID, providerGPUImage, ..);
```
This comes in handy in many ways.
For example, the render queue takes a textureID as a parameter.
Now, the gpu needs a gpu.image (which is a texture loaded in gpu memory and ready to be used by the gpu, i.e. an ID3D11Texture2D*).
To get the gpu.image I call getResource (textureID, providerGPUImage).
If this fails and returns eNotLoaded or eLoadError, I can load the "real" image (a dds file for example) by calling getResource (textureID, providerImage) and then create the gpu.image using the "real" image just loaded.
The same ID, many formats.
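As a sketch, that fallback reads roughly like this (names taken from this post where available; createGPUImageFrom() is a made-up helper standing in for the real creation code):

```cpp
const void *gpuImage = NULL;
if (resHub.getResource (textureID, providerGPUImage, &gpuImage) != eResStatus_loaded)
{
    const void *image = NULL;
    if (resHub.getResource (textureID, providerImage, &image) == eResStatus_loaded)
    {
        // build the gpu-side texture from the "real" image just loaded
        createGPUImageFrom (textureID, image); // made-up helper
    }
}
```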
This comes in handy with shaders too. The same ID can reference a text file with the plain source code, a pre-compiled binary, or a text file to be compiled with a set of #define directives, and so on.
I find this feature very useful. The resource creation parameters are, in a sense, stored in the provider. Switching provider will change the way a resource is viewed and/or created.
I think that's enough for now; way too many lines of text. See you next time with part 2.
https://insidedarkweb.com/wordpress-development/is-moving-wp-config-outside-the-web-root-really-beneficial/

# Is moving wp-config outside the web root really beneficial?
One of the most common security best practices these days seems to be moving wp-config.php one directory higher than the vhost’s document root. I’ve never really found a good explanation for that, but I’m assuming it’s to minimize the risk of a malicious or infected script within the webroot from reading the database password.
But, you still have to let WordPress access it, so you need to expand open_basedir to include the directory above the document root. Doesn’t that just defeat the entire purpose, and also potentially expose server logs, backups, etc to attackers?
Or is the technique only trying to prevent a situation where wp-config.php would be shown as plain text to anyone requesting http://example.com/wp-config.php, instead of being parsed by the PHP engine? That seems like a very rare occurrence, and it wouldn't outweigh the downsides of exposing logs/backups/etc to HTTP requests.
Maybe it’s possible to move it outside the document root in some hosting setups without exposing other files, but not in other setups?
Conclusion:
After a lot of back-and-forth on this issue, two answers have emerged that I think should be considered the authoritative ones. Aaron Adams makes a good case in favor of moving wp-config, and chrisguitarguy makes a good case against it. Those are the two answers you should read if you're new to the thread and don't want to read the entire thing. The other answers are either redundant or inaccurate.
WordPress Development Asked on November 11, 2021
An eternity later, and WordPress still puts wp-config.php by default in its root directory, accessible to the web, without even adding .htaccess rules to prevent access to it. All the shared hosts which have a one-click WordPress install most likely do the same. The result is that most WordPress sites are configured like that, and I don't believe I have ever heard anyone say "my site was hacked because wp-config.php was in the root directory".
To use the information contained in the file you need access to the DB server, probably by adding scripts to some app server. If you run a VPS, this means that if an attacker has such an ability it is "game over" for you in any case; on shared hosting they probably isolate DB access per user, therefore it is not a trivial thing to do even in that setting.
The result is that the WordPress 5.2+ health info will not give a suggestion to move the file, and I have never heard of a security plugin that suggests it.
So long-term practical experience shows that while it is theoretically better to do it, it is mostly security theater.
The real problem with moving wp-config.php one directory above is that it essentially prevents another WordPress from being installed in the same directory as the first one, something that many people do. The solution is to still have your wp-config.php in the default location, but add to it code that loads the actual configuration from a different file which is located outside of the web root and probably named in a way which is not generic but site specific.
The problem with that is that many WordPress tutorials do not even mention the possibility of having wp-config.php in another place, and people who come after you will have a WTF moment trying to figure out how to follow instructions which ask them to add a define to the wp-config.php file.
Answered by Mark Kaplun on November 11, 2021
Sorry to bump an old post, but is there not just an obvious solution to all this? We know there are some security benefits from moving the wp-config.php file out of the WordPress root directory. Some would argue that the benefits are minimal; others would not.
On the flip side, there can be some drawbacks to moving the file out of its default location, such as breaking some plugins that do not have functionality to look for the wp-config.php file in other locations.
The most obvious thing to me is to create a secret-info.php file outside of the WordPress root directory which contains variables for all your usernames and passwords, i.e.
$userName = "user";
$databasePassword = "12345";
Leave the wp-config.php file in the default WordPress root directory and remove the username and password values from wp-config.php, but leave everything else. Then simply reference the $userName and $databasePassword variables by requiring secret-info.php in wp-config.php, i.e.
require('PATH-TO-FILE/secret-info.php');
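Put together, the two files might look like this (the paths here are illustrative; DB_USER and DB_PASSWORD are the standard WordPress constants that normally hold these values directly):

```php
<?php
// secret-info.php -- stored OUTSIDE the web root, e.g. in /home/user/secure/
$userName         = 'user';
$databasePassword = '12345';
```

```php
<?php
// wp-config.php -- stays in the WordPress root, with the secrets removed
require '/home/user/secure/secret-info.php'; // illustrative path

define( 'DB_USER',     $userName );
define( 'DB_PASSWORD', $databasePassword );
// ... the rest of the standard wp-config.php settings ...
```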
Seems the obvious thing to do; am I missing something here?
Answered by MikeMoy on November 11, 2021
The answer to this question is yes and to say otherwise is probably irresponsible.
# Long answer: a real-world example
Allow me to provide a very real example, from my very real server, where moving wp-config.php outside the web root specifically prevented its contents from being captured.
## The bug:
Take a look at this description of a bug in Plesk (fixed in 11.0.9 MU#27):
Plesk resets subdomain forwarding after syncing subscription with hosting plan (117199)
Sounds harmless, right?
Well, here's what I did to trigger this bug:
1. Set up a subdomain to redirect to another URL (e.g. site.staging.server.com to site-staging.ssl.server.com).
2. Changed the subscription's service plan (e.g. its PHP configuration).
When I did this, Plesk reset the subdomain to defaults: serving the contents of ~/httpdocs/, with no interpreters (e.g. PHP) active.
And I didn't notice. For weeks.
## The result:
• With wp-config.php in the web root, a request to /wp-config.php would have downloaded the WordPress configuration file.
• With wp-config.php outside the web root, a request to /wp-config.php downloaded a completely harmless file. The real wp-config.php file could not be downloaded.
Thus, it's obvious that moving wp-config.php outside the web root can have bona fide security benefits in the real world.
# How to move wp-config.php to any location on your server
WordPress will automatically look one directory above your WordPress installation for your wp-config.php file, so if that's where you've moved it, you're done!
But what if you've moved it somewhere else? Easy. Create a new wp-config.php in the WordPress directory with the following code:
<?php
/** Absolute path to the WordPress directory. */
if ( !defined('ABSPATH') )
define('ABSPATH', dirname(__FILE__) . '/');
/** Location of your WordPress configuration. */
require_once(ABSPATH . '../phpdocs/wp-config.php');
(Be sure to change the above path to the actual path of your relocated wp-config.php file.)
If you run into a problem with open_basedir, just add the new path to the open_basedir directive in your PHP configuration:
open_basedir = "/var/www/vhosts/example.com/httpdocs/;/var/www/vhosts/example.com/phpdocs/;/tmp/"
That's it!
# Addressing arguments to the contrary
Every argument against moving wp-config.php outside the web root seems to hinge on false assumptions.
## Argument 1: If PHP is disabled, they're already in
The only way someone is going to see that contents of [wp-config.php] is if they circumvent your servers PHP interpreter… If that happens, you're already in trouble: they have direct access to your server.
FALSE: The scenario I describe above is the result of a misconfiguration, not an intrusion.
## Argument 2: Accidentally disabling PHP is rare, and therefore insignificant
If an attacker has enough access to change the PHP handler, you're already screwed. Accidental changes are very rare in my experience, and in that case it'd be easy to change the password.
FALSE: The scenario I describe above is the result of a bug in a common piece of server software, affecting a common server configuration. This is hardly "rare" (and besides, security means worrying about the rare scenario).
Changing the password after an intrusion hardly helps if sensitive information was picked up during the intrusion. Really, do we still think WordPress is only used for casual blogging, and that attackers are only interested in defacement? Let's worry about protecting our server, not just restoring it after somebody gets in.
## Argument 3: Denying access to wp-config.php is good enough
You can restrict access to the file via your virtual host config or .htaccess – effectively limiting outside access to the file in the same way that moving outside the document root would.
FALSE: Imagine your server defaults for a virtual host are: no PHP, no .htaccess, allow from all (hardly unusual in a production environment). If your configuration is somehow reset during a routine operation – like, say, a panel update – everything will revert to its default state, and you're exposed.
If your security model fails when settings are accidentally reset to defaults, you probably need more security.
Why would anybody specifically recommend fewer layers of security? Expensive cars don't just have locks; they also have alarms, immobilizers, and GPS trackers. If something's worth protecting, do it right.
## Argument 4: Unauthorized access to wp-config.php is no big deal
The database information is really the only sensitive stuff in [wp-config.php].
FALSE: The authentication keys and salts can be used in any number of potential hijacking attacks.
Even if database credentials were the only thing in wp-config.php, you should be terrified of an attacker getting their hands on them.
## Argument 5: Moving wp-config.php outside the web root actually makes a server less secure
You still have to let WordPress access [wp-config.php], so you need to expand open_basedir to include the directory above the document root.
FALSE: Assuming wp-config.php is in httpdocs/, just move it to ../phpdocs/, and set open_basedir to include only httpdocs/ and phpdocs/. For instance:
open_basedir = "/var/www/vhosts/example.com/httpdocs/;/var/www/vhosts/example.com/phpdocs/;/tmp/"
(Remember to always include /tmp/, or your user tmp/ directory, if you have one.)
# Conclusion: configuration files should always always always be located outside the web root
If you care about security, you should move wp-config.php outside your web root.
There are a lot of badly written themes and plugins out there which allow attackers to inject code (remember the security issue with TimThumb). If I were an attacker, why should I search for the wp-config.php? Simply inject this code:
var_dump( DB_NAME, DB_USER, DB_PASSWORD );
You can try to hide your wp-config.php, but as long as WordPress makes all the sensitive information globally accessible, there is no benefit to hiding it.
The bad part of wp-config.php is not that it holds sensitive data. The bad part is that it defines the sensitive data as globally accessible constants.
Update
I want to clarify the problems with define() and why it is a bad idea to define sensitive data as a global constant.
There are a lot of ways to attack a website. Script injection is only one of them.
Assume the server has a vulnerability that lets an attacker access a memory dump. The attacker will find in that dump the values of all variables. If you define a globally accessible constant, it has to stay in memory until the script ends. If you create a variable instead of a constant, there is a good chance that the garbage collector will overwrite (or free) the memory after the variable is no longer needed.
A better way to protect sensitive data is to delete it immediately after use:
$db_con = new stdClass();
$db_con->db_user  = 'username';
$db_con->password = 'password';
$db_con->host     = 'localhost';

$db_handler = new Database_Handler( $db_con );

$db_con = null;

After using the sensitive data, assigning null overwrites the data in memory. An attacker has to get the memory dump just at the moment when $db_con contains the sensitive data, and that is a very short time in the example above (if the class Database_Handler does not keep a copy of it).
Answered by Ralf912 on November 11, 2021
I just want to clarify, for the sake of argument, that moving your wp_config.php file does not necessarily mean you have to move it only to the parent directory. Let's say you have a structure like /root/html, where html contains the WP installation and all of your HTML content. Instead of moving wp_config.php to /root, you could move it to something like /root/secure ... which is both outside the html directory and also not in the server root directory. Of course, you would need to make sure that php can run in this secure folder as well.
Since WP cannot be configured to look for wp_config.php in a sibling folder like /root/secure, you have to take an additional step. I left the wp_config.php in /root/html, and cut out the sensitive portions (database login, salt, table prefix) and moved them to a separate file called config.php. Then you add the PHP include command to your wp_config.php, like this: include('/home/content/path/to/root/secure/config.php');
This is essentially what I've done in my setup. Now, based on the above discussion, I am still evaluating whether it is necessary or even a good idea. But I just wanted to add that the above configuration is possible. It does not expose your backups and other root files, and so long as the secure folder is not set up with its own public URL, it is not browsable.
Furthermore, you can limit access to the secure folder by creating an .htaccess file in there with:
order deny,allow
deny from all
allow from 127.0.0.1
Answered by Michael on November 11, 2021
Yes, there are security benefits from isolating your wp-config.php from the root directory of your site.
1- If your PHP handler gets broken or modified in some way, your DB information will not be exposed. And yes, I saw this happen a few times on shared hosts during server updates. Yes, the site will be broken during that period, but your passwords will be intact.
2- Best practices always recommend isolating configuration files from data files. Yes, it is hard to do that with WordPress (or any web app), but moving it up provides a bit of isolation.
3- Remember the PHP-CGI vulnerability, where anyone could pass ?-s to a file and view the source. http://www.kb.cert.org/vuls/id/520827
In the end, those are small details, but they do help to minimize risk. Especially if you are in a shared environment, where anyone can access your database (all they need is a user/pass).
But don't let small distractions (premature optimizations) get in the way of what is really necessary to get a site properly secured:
1- Keep it always updated
3- Restrict access (via permissions). We have a post about it here:
http://blog.sucuri.net/2012/08/wordpress-security-cutting-through-the-bs.html
thanks,
Answered by Sucuri on November 11, 2021
The biggest thing is that wp-config.php contains some sensitive information: your database username/password, etc.
So the idea: move it outside the document root, and you don't have to worry about anything. An attacker will never be able to access that file from an external source.
Here's the rub, however: wp-config.php never actually prints anything to the screen. It only defines various constants that are used throughout your WP install. Thus the only way someone is going to see the contents of that file is if they circumvent your server's PHP interpreter -- they get the .php file to render as just plain text. If that happens, you're already in trouble: they have direct access to your server (and probably root permissions) and can do whatever they like.
I'm going to go ahead and say there's no benefit to moving wp-config outside the document root from a security perspective -- for the reasons above and these:
1. You can restrict access to the file via your virtual host config or .htaccess -- effectively limiting outside access to the file in the same way that moving outside the document root would
2. You can ensure the file permissions are strict on wp-config to prevent any user without sufficient privileges from reading the file even if they gain (limited) access to your server via SSH.
3. Your sensitive information, database settings, are only used on a single site. So even if an attacker gained access to that information, the only site it would affect would be the WordPress install to which the wp-config.php file belongs. More importantly, that database user only has permissions to read and write to that WP install's database and nothing else -- no permission to grant other users privileges. In other words, if an attacker gains access to your database, it's simply a matter of restoring from a backup (see point 4) and changing the database user.
4. You back up often. Often being a relative term: if you post 20 articles every day, you'd better back up every day or every few days. If you post once a week, backing up once a week is likely sufficient.
5. You have your site under version control (like this), which means even if an attacker gained access, you can easily detect code changes and roll them back. If an attacker has access to wp-config, they've probably messed with something else.
6. The database information is really the only sensitive stuff in wp-config, and because you're careful about it (see point 3 and 4), it's not a huge deal. Salts and such can be changed any time. The only thing that happens is that it invalidates logged in users' cookies.
To me, moving wp-config out of the document root reeks of security by obscurity -- which is very much a straw man.
Answered by chrisguitarguy on November 11, 2021
Apart from the security benefits, it also allows you to keep your WordPress instance under version control while keeping the core WordPress files as a submodule/external. This is how Mark Jaquith has set up his WordPress-Skeleton project. See https://github.com/markjaquith/WordPress-Skeleton#assumptions for details.
Answered by Emyr Thomas on November 11, 2021
I think Max's answer is knowledgeable, and that's one side of the story. The WordPress Codex has more advice:
Also, make sure that only you (and the web server) can read this file (it generally means a 400 or 440 permission).
If you use a server with .htaccess, you can put this in that file (at the very top) to deny access to anyone surfing for it:
<files wp-config.php>
order allow,deny
deny from all
</files>
Note that setting 400 or 440 permissions on wp-config.php may prevent plugins from writing to or modifying it. A genuine case, for example, would be caching plugins (W3 Total Cache, WP Super Cache, etc.). In that case, I'd go with 600 (the default permission for files in the /home/user directory).
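On a typical Linux host that would look something like this (the path is a placeholder for wherever your install actually lives):

chmod 600 /var/www/html/wp-config.php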
Answered by its_me on November 11, 2021
Definitely YES.
When you move wp-config.php outside the public directory you protect it from being read in a browser if the PHP handler gets maliciously (or accidentally!) changed.
Reading your DB login/password is only possible when the server is badly infected through the fault of a careless administrator. In that case, hold the administrator accountable and move to a better-maintained, more reliable host, though that may be more expensive.
Answered by Max Yudin on November 11, 2021
https://montecarlonet.org/node/80771 | arXiv:2105.11399
FERMILAB-PUB-21-218-T
IPPP/20/101
MCNET-21-08
KA-TP-08-2021, OUTP-21-14P
ZU-TH 22/21
CERN-TH-2021-081
by: Buckley, A. (Glasgow U.) et al.
Abstract:
The data taken in Run II at the LHC have started to probe Higgs boson production at high transverse momentum. Future data will provide a large sample of events with boosted Higgs boson topologies, allowing for a detailed understanding of electroweak Higgs boson plus two-jet production, and in particular the vector-boson fusion mode (VBF). We perform a detailed comparison of precision calculations for Higgs boson production in this channel, with particular emphasis on large Higgs boson transverse momenta, and on the jet radius dependence of the cross section. We study fixed-order predictions at NLO and NNLO QCD, and compare the results to NLO plus parton shower (NLOPS) matched calculations. The impact of the NNLO corrections on the central predictions is mild, with inclusive scale uncertainties of the order of a few percent, which can increase with the imposition of kinematic cuts. We find good agreement between the fixed-order and matched calculations in non-Sudakov regions, and the various NLOPS predictions also agree well in the Sudakov regime. We analyze backgrounds to VBF Higgs boson production stemming from associated production, and from gluon-gluon fusion. At high Higgs boson transverse momenta, the $\Delta y_{jj}$ and/or $m_{jj}$ cuts typically used to enhance the VBF signal over background lead to a reduced efficiency. We examine this effect as a function of the jet radius and using different definitions of the tagging jets. QCD radiative corrections increase for all Higgs production modes with increasing Higgs boson $p_T$, but the proportionately larger increase in the gluon fusion channel results in a decrease of the gluon-gluon fusion background to electroweak Higgs plus two jet production upon requiring exclusive two-jet topologies. We study this effect in detail and contrast in particular a central jet veto with a global jet multiplicity requirement.
publ_date:
Tuesday, May 25, 2021 | 2021-09-21 10:44:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43807244300842285, "perplexity": 2095.542823618445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057202.68/warc/CC-MAIN-20210921101319-20210921131319-00306.warc.gz"} |
https://mathematica.stackexchange.com/questions/238758/change-mesh-density-of-graphics3d-object-made-of-triangles | # Change mesh density of Graphics3D object made of Triangles
I am new to mesh discretisation on Mathematica. I have a Graphics3D object made up of Triangles, that I would like to convert into a MeshRegion object using DiscretizeGraphics (see https://reference.wolfram.com/language/ref/DiscretizeGraphics.html).
In particular, I would like to control the mesh density. The above link tells me to use the MaxCellMeasure option, but it doesn't seem to make any difference to my graphics!
Thus,
Table[DiscretizeGraphics[g,
MaxCellMeasure -> {"Area" -> m}], {m, {0.3, 0.01, 0.001}}]
gives:
As you can see, the meshing is unchanged. It doesn't matter if I replace "Area" by "Volume" or "Length".
Can someone please tell me how to do this properly? Is this happening because my Graphics is already made up of triangles?
Using one of the solutions recommended here, I applied DiscretizeRegion with the MaxCellMeasure option to the meshed object produced by DiscretizeGraphics:
mr = DiscretizeGraphics[g]; | 2021-05-09 04:37:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32649269700050354, "perplexity": 1452.480308161583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988955.89/warc/CC-MAIN-20210509032519-20210509062519-00417.warc.gz"} |
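Presumably followed by something along these lines (the 0.01 target cell area is just an example value):

DiscretizeRegion[mr, MaxCellMeasure -> {"Area" -> 0.01}]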
https://www.physicsforums.com/threads/how-much-air.524109/ | # How much air?
1. Aug 24, 2011
### Painter1
This is a real world problem. We have a vessel sunk in about 50' of water and need to raise it. We have devised a way to pump the water out and pump air in to replace the water. It is important that we keep the differential pressure very close to zero, and running a "snorkel" tube is not an option.
So we need to replace the water with air while keeping the inside pressure the same. How much air do I need to put in?
I am going to pump the air in with a compressor that puts out 115 psi at 170 cfm and pump the water out with a submersible at 20 gals per minute. The air will need to be pumped in in short bursts, so we need to know how long to hold the compressor air valve open (putting air in) for every minute the water pump operates.
Your help is most welcome. Thanks
2. Aug 24, 2011
### HallsofIvy
That depends upon how large and how heavy the boat is. You will need to replace enough of the water in the boat with air so that the total weight of the boat, the water still in it, and the air is less than the weight of the same volume of water.
3. Aug 24, 2011
### DaveC426913
You should not do this.
This is extremely dangerous. You are dealing with very large forces. And you are not experienced.
I am an open water diver and this is way over my head. This is technical diving certification stuff. And technical diving is one of the most dangerous professions in the world.
If you found yourself pushed up by that boat a mere 6 feet, you could die from embolism. It has happened to extremely well-seasoned divers.
This is why professionals do this and how non-professionals get killed.
4. Aug 24, 2011
### Staff: Mentor
EDIT -- Thread re-opened for a bit. It may be closed, depending on how the safety questions are addressed...
Last edited: Aug 24, 2011
5. Aug 24, 2011
### Staff: Mentor
Painter, to Dave's points, can you please give us more details about the diving experience and certifications of your crew? Have you done anything like this before? How big is the vessel? Are you going to pump up an air bag inside the vessel, or are you going to rely on some air-tight compartment that can stably lift the vessel? How are you planning on monitoring the operation under water?
6. Aug 24, 2011
### Staff: Mentor
....and why would you need to pump water out of the vessel? It's gonna wanna leave on its own when you pump air in!
7. Aug 24, 2011
### DaveC426913
I think that was his point about pumping air in.
Man, this is not the place for amateurs!
Lock this thread before he gets himself killed and they come after PF.
8. Aug 24, 2011
### Staff: Mentor
No, the OP refers to a water pump in addition to the air compressor.
We'll monitor it...
9. Aug 24, 2011
### K^2
First of all, 115 PSI is WAY overkill for this. You should need a little over 20 PSI to pump air to 50', and you should not use much more than that for safety reasons.
That said, you obviously don't know what you're doing. Hire professionals.
10. Aug 24, 2011
### mrspeedybob
That's what I was wondering.
The closest I've come to raising a boat was watching the MythBusters do it with ping-pong balls, so I don't know what I'm talking about, but why does pumping air in not just displace the water, pushing it out through doors and windows and such? It seems like if the air was added slowly then pressures should stay in equilibrium.
I'm hoping someone with relevant experience can educate us.
11. Aug 24, 2011
### DaveC426913
Frankly, I'm having trouble imagining how even in principle they're going to pump water out except by pumping air in to displace it. What will be left? Vacuum?
If you started a water pump and didn't have air flowing in, well, the pump will just cavitate. It makes no sense.
I guess having a water pump in conjunction with an air pump might take some pressure off the air pump...
12. Aug 24, 2011
### K^2
That'd make some sense, if the air pump wasn't outputting 5x the pressure they need to pump the air down.
I doubt the thing is completely water tight. It will just take in more water as you pump it out.
If it was not for the dangers involved, I would very much like to see what happens when these guys try to put their plan into action.
13. Aug 25, 2011
### JeffKoch
Sounds suspiciously like a homework problem rephrased. It's difficult to imagine someone trying to actually do something like this, and be so totally inexperienced that he would need to post this question on a forum.
14. Aug 25, 2011
### FireStorm000
It wouldn't be terribly difficult to derive equations for this; it's just ideal gas law, but I'll echo what everyone else said. Bad idea more than likely. You sound like you've got about as much clue what you're doing as I do, and I'm smart enough not to try this.
I can post up an equation for you IF you address the concerns listed so far.
Also, it might be enlightening to try this in small scale, with a weighted 2L bottle as your boat, a small pump attached to the mouth and a very small compressor (or just a straw to blow into). It's probably safe to play around in small scale, but just make sure that boat is worth your life if you choose to go through with this for real.
15. Aug 25, 2011
### jack action
First, I'm no specialist in this kind of work and I would like to start by seconding FireStorm000's idea of doing a scale down experiment. Especially since we don't know what kind of vessel we're talking about, its size and condition.
All that being said, I don't understand the big deal about this or I'm missing something. Basic physics tells me the water pressure at 50' is around 20 psi ($\rho$gh), so the air pump pressure doesn't need to be much higher. Using a regulator, you can increase the air pressure until the water pressure is achieved. Then by pumping the air inside the vessel with a pressure slightly above the outside water pressure, the water will get out of the vessel by whatever outlet there is. The cfm needed by the pump only depends on the size of this outlet and the rate desired.
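For reference, a quick back-of-the-envelope check of that number, taking fresh water and 50' ≈ 15.2 m:

$$P = \rho g h \approx (1000 \,\mathrm{kg/m^3})(9.81 \,\mathrm{m/s^2})(15.2 \,\mathrm{m}) \approx 1.5 \times 10^5 \,\mathrm{Pa} \approx 21.6 \,\mathrm{psi}$$

(gauge), which is where the "little over 20 psi" figure above comes from.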
Once there is enough air, the vessel begins rising and as it rises, you must adjust your regulator pressure to match the decreasing water pressure. If pressure differential is critical, you might want to control your rising rate by using some kind of physical restraints. Restraints will probably be a good idea also to make sure the vessel does not turn over on itself, flipping over and losing all the air inside, which would mean a sudden, rapid drop. That would be the most dangerous part of the lifting process in my point of view. Inflating a balloon inside the vessel might be a good idea if one doesn't want to take the chance of losing the air.
Again, not an expert opinion, just my two cents, and do a scale down experiment first.
16. Aug 25, 2011
### Integral
Staff Emeritus
Just not the way it will work. The boat will NOT begin to slowly rise; it will hang in the mud until it has sufficient buoyancy to break free. Once free of the mud it will shoot up VERY quickly.
Since you have no idea as to the integrity of the hull, you will have no idea what its orientation will be when (or if) it reaches the surface. I am kinda on Dave's side here; anyone underwater and near this wreck as it breaks free is putting themselves at risk.
17. Aug 25, 2011
### DaveC426913
That is an excellent point that I should have thought of.
Buoyancy increases as it rises, meaning every foot the thing rises it becomes more buoyant. (It is going from 9 atm to 1; the air will expand by that amount.)
It will break the surface like a breaching whale.
All that aside, there is no issue about the trick in principle - a small scale test will tell you nothing you don't already know. The problem is the real life danger.
18. Aug 25, 2011
### cjl
I fully agree about the danger, but at that depth, it's only going from about 2.5 atm (absolute) to 1. That doesn't change the fact that it should only be attempted by professionals though.
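As a quick sanity check via Boyle's law (treating the temperature as roughly constant): $P_1 V_1 = P_2 V_2$, so $V_2/V_1 = P_1/P_2 \approx 2.5$ -- any trapped air expands to roughly 2.5 times its volume by the surface, which is exactly why the lift grows as the wreck rises.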
19. Aug 25, 2011
### DaveC426913
What's a factor of 4 between friends...
20. Aug 25, 2011
### sophiecentaur
To maintain the same amount of lift on its way to the surface, you would need to produce a volume around the highest point on the wreck and make it airtight but with a large hole underneath. (Effectively, an inverted bucket) This volume would need to be such that it contains the same weight of water as the boat's weight - or a little bit more. If you then pump air and fill this void, water will be displaced and the boat will rise up. As it rises, the air inside will expand but escape through the bottom - maintaining the same volume of displaced water and hence, the same amount of lift so the boat won't accelerate upwards uncontrollably. It's, afaik, normal to use lifting bags, which self vent out of the bottom and work in the way described above. Basically, you need a constant volume system and not a constant pressure system.
What do you intend to do with the wreck, once it has reached the surface? There is always the risk that it will roll when it reaches the surface and go right down again. (How big is it?) | 2018-11-14 20:35:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3235907256603241, "perplexity": 949.0673644341947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742263.28/warc/CC-MAIN-20181114191308-20181114212738-00021.warc.gz"} |
https://www.physicsforums.com/threads/differential-form-notation-help.822416/ | # Differential Form - Notation Help
Tags:
1. Jul 8, 2015
### Mistake Not...
Hi there,
The page says it is a differential form. Can anyone explain the notation for me or provide a link or two to documents or pages which explain this notation?
Thank you very much,
Geoff
2. Jul 8, 2015
### gleem
$C_{ij} = \partial f_i / \partial q_j$
and
$C_i = \partial f_i / \partial t$, where $f_i$ is the $i$th constraint and $q_j$ is the $j$th coordinate.
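So, presumably, the form on that page is each constraint written out in Pfaffian (differential) form, i.e. the total differential $df_i$ set to zero: $\sum_j C_{ij}\,dq_j + C_i\,dt = 0$.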
3. Jul 8, 2015
### lavinia
You can interpret the dq's and dt as small increments in the q's and in t.
Formally, a differential 1-form is a linear function on tangent vectors that varies smoothly from one tangent space to the next.
4. Jul 8, 2015
### stedwards
I'm quite confused.
Why are there lower indices on the $q^i$ coordinate differentials?
I would also expect to see an equal number of indices on $c$.
Last edited: Jul 8, 2015 | 2018-05-24 22:08:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8968982100486755, "perplexity": 3100.0547190635216}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866870.92/warc/CC-MAIN-20180524205512-20180524225512-00293.warc.gz"} |
https://publications.aap.org/pediatrics/article/136/4/809/73899/Influenza-Immunization-for-All-Health-Care | The purpose of this statement is to reaffirm the American Academy of Pediatrics’ support for a mandatory influenza immunization policy for all health care personnel. With an increasing number of organizations requiring influenza vaccination, coverage among health care personnel has risen to 75% in the 2013 to 2014 influenza season but still remains below the Healthy People 2020 objective of 90%. Mandatory influenza immunization for all health care personnel is ethical, just, and necessary to improve patient safety. It is a crucial step in efforts to reduce health care–associated influenza infections.
Health care–associated influenza is a common and serious public health problem, contributing significantly to patient morbidity and mortality and creating a financial burden on health care systems.1,4 Immunization (used interchangeably with vaccination in this statement) of health care personnel (HCP) annually is a matter of patient safety and is crucial in efforts to reduce health care–associated influenza infections. Optimal prevention of influenza in the health care setting depends on the vaccination of at least 90% of HCP, which is consistent with the national Healthy People 2020 target for annual influenza vaccination among HCP.5 Although increasing, overall immunization rates for this group remain consistently below this goal.6
Mandatory influenza immunization programs for all HCP should be implemented nationwide. During the 2013 to 2014 influenza season, 36% of all HCP and 58% of HCP working in hospitals reported an influenza vaccination requirement at their institution.6 Mandating influenza vaccine for all HCP is ethical, just, and necessary.7,9 Because individuals are embedded in societies and populations, their risk of illness cannot be considered in isolation from the disease risk of the population to which they belong.10 Employees of health care institutions are obligated to honor the requirement of causing no harm and to act in the best interests of the health of their patients.11 Medical exemptions to required influenza immunization (eg, life-threatening allergic reaction after receiving an influenza vaccine or severe allergy to a vaccine component) should be kept at a minimum to ensure high coverage rates and granted only on an individual basis. Rigorous standards, such as requiring counseling, detailing the benefits of influenza vaccination, and insisting on a signed affidavit stating an acceptable reason for opting out, will place a higher burden on nonadherent HCP and would make it more difficult for HCP to impose unnecessary risks on their patients.12 Granting specific medical exemptions is constitutionally required, but states do not have to grant philosophical or religious opt-outs.12 Consistent policies also must be developed for management of exempted HCP during influenza season. For example, although scientific evidence supporting the medical concept of unvaccinated employees wearing a mask is limited,13 some institutions have required such an approach throughout the influenza season.
Influenza is a major public health concern. Each year in the United States, more than 200 000 hospitalizations are associated with the influenza virus.14 The number of annual influenza-associated deaths has ranged from a low of about 3000 to a high of about 49 000 in recent decades.14 Serious morbidity and mortality can result from influenza infection in any person of any age. Rates of serious influenza-related illness and death are highest among children younger than 2 years old, seniors 65 years and older, and people of any age with medical conditions that place them at increased risk of having complications from influenza, such as pregnant women and people with underlying chronic cardiopulmonary, neuromuscular, and immunodeficient conditions. Hospital-acquired influenza has been shown to have a particularly high mortality rate, with a median of 16% among all patients and a range of 33% to 60% in high-risk groups such as transplant recipients and patients in the ICU.1 Transmission from an infected, previously healthy child or adult begins as early as 1 day before the onset of symptoms and persists for up to 7 days; infants and immunocompromised people may shed virus even longer. Some infected people remain asymptomatic yet contagious.15
Immunization is the most effective way to prevent influenza, so the vaccine is universally recommended by the American Academy of Pediatrics (AAP), Centers for Disease Control and Prevention (CDC), and American Academy of Family Physicians for everyone 6 months old and older.16,17 A 2010 meta-analysis of randomized clinical trial results among healthy adults 16 through 65 years of age suggested that when vaccine and circulating influenza virus strains were well matched, efficacy against influenza symptoms was 73% (95% confidence interval, 54%–84%) whereas efficacy was 44% (95% confidence interval, 23%–59%) when they were not well matched.17 However, in the 2014 to 2015 season, early data estimated overall vaccine effectiveness to be much lower, at 19%.18 Vaccine effectiveness can vary based on the match of circulating virus with vaccine strains, vaccine product, previous influenza vaccination, and age and immune status of patients. The influenza vaccine still remains the best available preventive measure. Many people at high risk of influenza and its associated complications are in frequent, close contact with HCP because of their need to seek medical services. Therefore, immunization of HCP is a crucial step in efforts to protect those at risk for health care–associated influenza, similar to the concept of cocooning, in which immunization of parents, caregivers, and other close contacts of children is intended to reduce their risk of contagion. It is important not to rely solely on influenza immunization of HCP for prevention of nosocomial transmission. Other infection precaution controls are necessary, such as use of masks and hand hygiene and careful evaluation of sick employees, even if no fever is present.19
Influenza vaccination of HCP has the potential to reduce both morbidity and mortality among patients. Ahmed et al20 systematically reviewed the evidence surrounding this concept, using the Grading of Recommendations Assessment, Development, and Evaluation framework. With pooled results of 4 cluster randomized trials conducted in 116 long-term care facilities, they estimated a 29% reduction in all-cause death and a 42% reduction in influenza-like illness. In addition, pooled results of 4 observational studies conducted in 234 long-term care facilities and 1 hospital-based setting indicated significant protective associations for influenza-like illness and for laboratory-confirmed influenza. On the basis of these findings, the authors graded the quality of the evidence for the effect of HCP vaccination on mortality and influenza cases in patients as “moderate” and “low,” respectively. The authors concluded that the benefits of immunizing HCP outweigh possible harms and can increase patient safety.20 In contrast, a 2013 Cochrane review concluded that there were no accurate data supporting the vaccination of health care workers to prevent laboratory-confirmed influenza in residents 60 years and older in long-term care facilities.21 Specifically, the authors did not find a significant decrease in respiratory illness or in deaths related to respiratory illness.
Annual influenza epidemics account for 610 660 life-years lost, 3.1 million days of hospitalization, and 31.4 million outpatient visits.22 Influenza in the United States generates a cost burden estimated to be $87 billion per year.23 The bulk of this cost is a result of medical care in outpatient and inpatient settings, work absenteeism, and mortality. A retrospective cohort study found that unvaccinated HCP had a larger increase in absenteeism attributable to all-cause illness during the influenza season than vaccinated HCP.24 The fiscal benefit of reduced absenteeism from vaccination was more than $1 million, whereas the cost of introducing a new policy requiring staff in clinical areas to be vaccinated or wear a mask was minimal by comparison.24
Impaired on-the-job productivity (known as presenteeism) also contributes significantly to the total economic burden caused by illness. Presenteeism accounted for 18% to 60% of costs for the top 10 health conditions affecting US employers and for approximately two-thirds of lost productivity costs related to the common cold.23 Similar to absenteeism, it is a major contributor to the economic burden associated with influenza and is a threat to patient safety.23 Although 86% of HCP report their intent to leave work if they have an influenza-like illness, 59% report having worked in the past with a fever or influenza-like symptoms.25 Furthermore, healthy adults who receive the influenza immunization have 25% fewer upper respiratory infections, 44% fewer physician visits, and 43% fewer sick days off, saving an average of $47 per person annually, highlighting the cost-effectiveness of immunization against influenza.23 A decision-analytic computational simulation model that determined the cost/benefit ratio of employer-sponsored workplace immunization from the employer's perspective found cost savings across diverse occupational groups in all seasonal influenza scenarios.26
The growing understanding of the effect of influenza on all age and risk groups prompted the Advisory Committee on Immunization Practices of the CDC to expand annual influenza immunization recommendations to include all people 6 months and older starting in 2010.27 This universal recommendation is especially important for HCP and people in training for health care professions, such as physicians, nurses, workers in hospital and outpatient care settings, medical emergency response workers, and employees of nursing homes and longer-term care facilities.17 HCP who are pregnant or breastfeeding also should receive the influenza vaccine.
The Advisory Committee on Immunization Practices began recommending influenza immunization for HCP in the early 1980s.28 Despite this long-standing recommendation, overall immunization rates for HCP never exceeded 50% before the 2008 to 2009 influenza season.29 Coverage has gradually improved in recent years, reaching a high of 75% during the 2013 to 2014 season, but it still remains below the Healthy People 2020 objective of 90%.6 Influenza vaccination coverage among acute care hospital-based HCP in 2013 to 2014 reached a level of 81.8%, with the highest proportion among those directly employed by the health care facility (86.1%) and the lowest among licensed independent practitioners who are affiliated but not directly employed by it (61.9%).30 Just over one quarter of states reached the Healthy People 2020 objective of 90%.30
In the past, efforts to increase immunization rates among HCP have focused primarily on voluntary programs, which attempt to increase rates by ensuring that the vaccine is conveniently available and free of charge and providing influenza prevention education and incentives or rewards to increase participation. A more comprehensive approach involves the use of signed declination statements coupled with education about risks and benefits of being immunized. However, use of declination statements in 22 hospitals demonstrated only a modest increase in influenza immunization.31 It is difficult to assess the overall effectiveness of declination statements, because the language and context can vary between programs, and multiple strategies to prevent influenza are often initiated simultaneously.32 Although these efforts may lead to an immediate increase in immunization rates, it appears that sustainability of high immunization rates in health care settings can be achieved only through a mandated policy. Despite many organizations' efforts to increase influenza immunization rates with the use of voluntary campaigns, influenza coverage within such organizations remains below the Healthy People 2020 objective of 90%, ranging from 65% to 77% since 2010. In contrast, coverage among HCP who have reported a mandatory influenza vaccination requirement has exceeded 94% each year.6 In 1 study, more than half of unvaccinated HCP stated that they would have been vaccinated had it been required by their employer.33
Voluntary programs have proved ineffective, in part because HCP have misconceptions about the risks and benefits of the influenza vaccine. In a cohort of HCP providing direct patient care, the most commonly reported barriers to vaccination were concerns about vaccine safety and effectiveness and low perceived susceptibility to influenza. Furthermore, 17% of unvaccinated participants falsely believed that the vaccine could cause influenza.33 The Joint Commission found that the reasons HCP decline immunization include fear of getting influenza-like illness from the vaccine, fear of adverse effects, perceived low or no likelihood of developing influenza disease, and concern about exposure to thimerosal.34 With the use of live-attenuated influenza virus (LAIV) vaccine, some HCP expressed concern that the vaccine virus could be shed to vulnerable patients, infecting them with the influenza virus.
Although LAIV recipients shed vaccine virus, much lower amounts are shed than during natural infection, transmission is unlikely to occur, and the duration of shedding is less in adults than in children (ie, 0–4 days vs 5–9.8 days, respectively).35 Serious illness has not been reported among unvaccinated, otherwise healthy people who have been infected inadvertently with virus from LAIV vaccine.17 HCP immunized with LAIV may continue to work in most units of a hospital, including the NICU and general oncology wards, if they use standard infection control techniques.16 These findings highlight the importance of educating HCP about the risks, benefits, and basic principles of influenza vaccination. Given the ineffectiveness of voluntary programs in increasing rates of HCP influenza immunization and the effectiveness of influenza immunization in decreasing infection among those most vulnerable to severe complications from influenza, implementation of mandatory programs around the country is a crucial step in efforts to improve patient safety.
Mandatory influenza immunization of HCP is a matter of patient safety. In a prospective surveillance study of laboratory-confirmed influenza among hospitalized adults in a network of Canadian hospitals from 2006 to 2012, 17.3% of influenza cases were health care associated.2 Transmission is possible because HCP work when they are mildly symptomatic or ill, putting their co-workers and patients at risk.36 A serosurvey conducted in 4 acute care hospitals in the United Kingdom revealed that 23% of HCP had serologic evidence of influenza virus infection during a single influenza season; the majority reported mild illness or subclinical infection.37 HCP can transmit influenza virus to patients and co-workers. Two landmark studies highlight the negative effect HCP infected with influenza can have on their patients.
• In a NICU, 19 of 54 (35%) infants were infected with influenza A as a result of health care-associated transmission; 6 became ill and 1 died. Only 15% of staff survey respondents in this NICU had received influenza vaccine (67% of physicians and 9% of nurses). Of respondents who had an influenza-like illness in the preceding 4 months, half occurred during the outbreak period, and only 14% reported taking time off work because of illness; these data suggest that symptomatic personnel had a role in transmission.3
• During an outbreak of influenza in a bone marrow transplant unit, there were 7 cases of health care–associated influenza; 6 patients developed pneumonia, and 2 patients died.4 Five staff members developed influenza-like illness during the outbreak. Surveys revealed a vaccination rate of 12% among unit staff. The hospital took measures during the next influenza season to implement a multifaceted voluntary education program aimed at improving immunization rates. But even with these aggressive measures, 42% of the staff on the bone marrow transplant unit remained unimmunized the next year.4
Mandatory immunization is not a novel concept. All states have laws requiring certain vaccines for school entry or attendance. Many health care facilities currently require specific vaccines and a tuberculin skin test as conditions for working in certain areas of the institution or for employment.36 However, implementation of mandatory influenza immunization programs for HCP continues to be controversial to some who argue that a mandatory program violates civil liberties.
The US Supreme Court ruled in 1905 in Jacobson v Massachusetts that states have the power to require immunization if it is necessary for public health or safety of the people. The power of states to enforce immunization requirements or other public health initiatives is constitutionally permissible when all of the following conditions are met. The intervention (ie, influenza vaccination) must
• Be a public health necessity
• Have been proven to be effective
• Not be "gratuitously onerous or unfair"
• Not pose a health risk to the subject
For example, school immunization laws are judicially sanctioned, emphasizing that mandatory immunization programs have long existed without infringing on constitutional rights.38
Mandatory influenza vaccination policies are increasingly common in the United States. During the 2013 to 2014 influenza season, 36% of all HCP and 58% of HCP working in hospitals reported such a requirement at their institutions.6 Of those required, 98% received the vaccine; coverage rates were greater than 96% for all occupational settings, including hospitals, ambulatory care offices, and long-term care facilities.6 Nationally, more than 500 health care facilities and systems have implemented influenza vaccination requirements for HCP.39 A recent report estimated increases in influenza vaccination coverage after implementation of a mandatory vaccination program. More than 200 nationally representative US hospitals were surveyed. On average, coverage increased by 14.7% in a single season; in contrast, institutions with voluntary policies have rarely reported single-season increases of greater than 10%. Most hospitals that reported postrequirement coverage of greater than 90% were those that terminated HCP who refused vaccination.40
The following examples each resulted in a substantial increase in employee immunization rates, demonstrating success with the implementation of a mandatory program.
• BJC Healthcare, a large nonprofit health care organization with approximately 26 000 employees, implemented a mandatory influenza immunization program in 2008 after voluntary models failed to increase rates to greater than 80%.41 BJC made influenza immunization a condition of employment as a patient safety initiative. Employees could be granted medical or religious exemptions on review by an occupational medicine professional. The result was an immunization rate of 98.4% for the organization. Only 8 employees refused to be vaccinated, and their employment was terminated.41
• Seattle's Virginia Mason Medical Center implemented a mandatory influenza vaccination program in 2005. HCP who were granted an accommodation for medical or religious reasons were required to wear masks during the influenza season. The institution reported 97.6% coverage among its employees in the first year. For the remainder of a 5-year study period, vaccination rates of greater than 98% were sustained. In comparison, vaccination rates in the years before the study period ranged from 29.5% to 54.0%.42
• The National Institutes of Health Clinical Center passed a mandatory influenza immunization policy in 2008. The policy required that employees who had patient contact be immunized or complete an online declination statement specifying the reason for refusal (eg, concern about adverse effects or believing that the vaccine was ineffective). The policy achieved 100% participation in that all 2754 employees who were identified to have direct patient contact were either immunized or formally declined vaccination. Compared with vaccination rates of 40% to 60% from previous years, the organization achieved an immunization rate of 88% (2424) among employees with patient contact.43
• Hospital Corporation of America, which includes 163 hospitals, 112 outpatient centers, and 368 physician practices in 20 states, put a mandatory policy into effect in late 2009. The policy required all employees in contact with patients to either receive the annual influenza vaccine or wear a surgical mask in patient areas. Before the policy, vaccination rates in Hospital Corporation of America facilities varied from 20% to 74%. This mandatory policy offered influenza vaccine to 140 599 HCP; 96% of these employees complied.44
• University of California Irvine Healthcare instituted a mandatory vaccination program beginning in the 2009 to 2010 season, after a series of less successful vaccination campaigns that began in 2006. Voluntary programs, which used mobile carts, mandatory declination, and peer-to-peer vaccination efforts, increased rates from 44% to 63%. The mandatory vaccination campaign, which required unvaccinated HCP to wear a mask during the influenza season, increased coverage to greater than 90%.45
It is certainly possible to implement a mandatory influenza vaccination policy that is supported by the majority of the affected staff. Among a sample of HCP in the United States, almost 60% agreed that HCP should be required to be vaccinated for seasonal influenza.46 Support was significantly higher among HCP who were already subject to employer-based influenza vaccination requirements; a mandate was supported by 77% and 95%, respectively, of HCP covered by vaccination requirements with and without penalties for noncompliance.46 Support was also much higher among those who perceived seasonal influenza as a serious threat to their own health and to the health of people around them, those who agreed that the vaccine is effective in protecting them and their contacts, and those who agreed that the vaccine is safe.46 Increased educational outreach regarding the safety and efficacy of the influenza vaccine and additional communication of HCP vaccination as a patient safety issue therefore should be expected to increase staff support for influenza vaccination requirements.
Widespread support for influenza vaccination of HCP also exists among patient caregivers, according to a cross-sectional survey of parents and guardians of hospitalized children during the 2011 to 2012 season.47 Independent of their feelings about the safety and efficacy of the influenza vaccine, most (88%) believed that HCP should be vaccinated, and 76% thought that vaccination should be required.47 In addition, an increasing number of professional organizations have released their own statements in support of mandatory influenza vaccination for health care personnel, including the CDC, American Academy of Family Physicians, American Hospital Association, Society for Healthcare Epidemiology of America, Infectious Diseases Society of America, Pediatric Infectious Diseases Society, Association for Professionals in Infection Control and Epidemiology, Inc, and American Public Health Association.48,52
Compared with employer-based requirements, state-based or even county-wide vaccination requirements are more reliable and efficient in increasing coverage of HCP.
This approach creates a uniform policy and takes the burden off individual facilities to develop, implement, and defend management decisions related to mandatory programs.53 As of November 2014, fewer than half of all states have influenza vaccination requirements for HCP, and the scope of the requirements varies widely.53,54 For instance, some states require only that employers offer the vaccine to HCP, whereas others require HCP to be vaccinated or declare in writing that they have declined vaccination.54,55 Recently, some state-level requirements have incorporated stricter policies for unvaccinated HCP, such as requiring them to wear masks during patient care.54,55
Beginning in the 2012 to 2013 influenza season, Rhode Island mandated statewide annual influenza vaccinations for HCP.56 All HCP in licensed health care facilities in the state are now required to either receive the vaccine or formally decline by December 15 each year. Unvaccinated HCP must wear a surgical face mask during patient contact when influenza is declared widespread; those who fail to comply face a $100 fine per violation.55 As a result of these regulations, the proportion of immunized HCP in Rhode Island increased dramatically, from 70% in the 2011 to 2012 season to 87% in the 2012 to 2013 season.56 In a qualitative evaluation, the majority of facilities reported that HCP had mostly positive or compliant attitudes toward its revised policy.55 Successful implementation was facilitated by early and regular communication from the state health department and the facilities' ability to adapt their existing influenza vaccination programs to incorporate provisions of the new regulations.55
In contrast, an evaluation of California’s 2006 influenza vaccination law for HCP found that hospital employees were no more likely to be vaccinated than their counterparts in other states.57 This is not surprising, given that California’s law imposes a permissive state-level requirement. Although the statute requires hospital employees to be vaccinated or sign a declination statement, it does not require masking of unvaccinated HCP or include penalties for noncompliance. Therefore, permissive state-level requirements may not be sufficient to increase coverage of HCP.57
The US Constitution supports a medical exemption requirement but not religious and philosophical opt-outs.12 The regulations of New York State’s mandatory program highlight the details that compel individuals to be vaccinated to protect the public from seasonal and pandemic influenza.58 Although some argue that mandatory influenza vaccination violates an individual’s right to make decisions about his or her own health and well-being, employees of health care institutions are obligated to honor the requirement of causing no harm and to act in the best interests of the health of their patients.11 Although some have suggested that medical and religious exemptions be granted on an individual basis,41,59 the US Constitution requires the granting of medical exemptions but not religious exemptions, so mandating influenza immunization for HCP can be ethically justified. Three criteria that a public health intervention must meet to justify mandatory status have been proposed.60
• There should be clear medical value from the intervention to the individual. The positive effects of the influenza vaccine on the health of the person immunized are well known.
• The public health benefit of the mandatory intervention must be clear to justify the infringement on personal liberties. Populations staying at or frequenting hospitals are especially vulnerable to increased health risks from influenza. HCP are obliged to take preventive measures to protect patients when they join the profession. The effects on the health of patients and on the loss of days worked by personnel have been sufficiently demonstrated.
• A mandate must be considered the only option. Current rates of influenza immunization remain suboptimal among HCP, despite decades-long recommendations using myriad other strategies. When other approaches have failed, a mandate is a reliable way to achieve improvement. “If it is possible to obtain herd immunity through education, insurance coverage, public outreach and so on, then a mandate would not be needed and should not be used.”60 To satisfy a mandate, each health care facility should design, implement, and evaluate a program tailored to fit its particular needs.
To maximize success in implementing a mandatory policy, relevant factors include the following:
• Having full support of health care leadership.
• Customizing the plan for each institution. The policy must be tailored to the geographic setting, educational resources, financial assets, local culture, and potential language barriers.
• Making vaccine free to all HCP.
• Publicizing the program to HCP at all levels by
• Communicating program details regularly
• Making presentations about influenza prevention and the program
• Holding question-and-answer sessions
• Creating a volunteer team of staff HCP to offer education (and vaccines, if possible) to fellow HCP with concerns
• Offering convenient times and locations for education and immunization administration, preferably within the institution. Vaccinators should adapt to accommodate HCP schedules, including
• Expanding available hours to receive the vaccine
• Increasing the number of locations where the vaccine is given
• Offering the vaccine at various venues and gathering places within the institution
• Using a universal form with defined acceptable medical and religious exemptions. This procedure is more effective, concrete, and uniform than requiring a physician’s note.
• Creating a clear institutional policy for management of employees who are exempted from immunization.
These recommendations for the prevention and control of influenza in HCP will have considerable effect on clinical practice. Therefore, the AAP has developed implementation guidance on supply, payment, coding and liability issues; these documents can be found at http://redbook.solutions.aap.org/selfserve/ssPage.aspx?SelfServeContentId=vaccine-policy-guidance.
This policy statement reaffirms the AAP's support for a mandatory influenza immunization policy for all HCP. Vaccine effectiveness is unpredictable from year to year because of various factors such as the match of circulating virus with vaccine strains, vaccine product, previous influenza vaccination, and age and immune status of patients. Despite this variability, the influenza vaccine remains the best available preventive measure. Presenteeism should not be condoned, because vaccination is not expected to prevent all cases of influenza. However, even in years with suboptimal vaccine efficacy, millions of cases of influenza are prevented and influenza-related hospitalizations and complications are reduced. Because health care workers are exposed to the most vulnerable populations, prevention of even some fraction of influenza cases in health care workers is an advantage for patients. Unfortunately, there is only modest published evidence for the effect of HCP influenza vaccination policies on patient outcomes20 because of the low number of randomized controlled trials with nonintervention groups.
Mandatory influenza immunization programs for HCP benefit the health of employees, their patients, and members of the community. The influenza vaccine is safe, effective, and cost-effective. Health care organizations must work to assuage common fears and misconceptions about the influenza virus and the vaccine. Immunizing all HCP will serve as an example to patients, highlighting the safety and effectiveness of annual immunization. HCP fail to lead by example if they recommend universal immunization, including influenza vaccine, to their patients but do not require it of themselves. Furthermore, unvaccinated HCP feed public distrust and fear of vaccines.11
Health care–associated influenza creates a financial burden on health care systems and contributes to patient morbidity and mortality. Voluntary programs have failed to increase immunization rates to acceptable levels. Large health care organizations have implemented highly successful mandatory annual influenza immunization programs without significant problems. Mandating influenza vaccine for all HCP nationwide is ethical, just, and necessary.7,9 For the prevention and control of influenza, we must continue to put the health and safety of the patient first.
Henry H. Bernstein, DO, MHCM, FAAP
Jeffrey R. Starke, MD, FAAP
Carrie L. Byington, MD, FAAP, Chairperson
Yvonne A. Maldonado, MD, FAAP, Vice Chairperson
Elizabeth D. Barnett MD, FAAP
H. Dele Davies, MD, FAAP
Kathryn M. Edwards, MD, FAAP
Ruth Lynfield, MD, FAAP
Flor M. Munoz, MD, FAAP
Dawn L. Nolt, MD, FAAP
Ann-Christine Nyquist, MD, MSPH, FAAP
Mobeen H. Rathore, MD, FAAP
Mark H. Sawyer, MD, FAAP
William J. Steinbach, MD, FAAP
Tina Q. Tan, MD, FAAP
Theoklis E. Zaoutis, MD, MSCE, FAAP
Dennis L. Murray, MD, FAAP
Gordon E. Schutze, MD, FAAP
Rodney E. Willoughby, MD, FAAP
Henry H. Bernstein, DO, MHCM, FAAP – Red Book Online Associate Editor
Michael T. Brady, MD, FAAP – Red Book Associate Editor
Mary Anne Jackson, MD, FAAP – Red Book Associate Editor
David W. Kimberlin, MD, FAAP – Red Book Editor
Sarah S. Long, MD, FAAP – Red Book Associate Editor
H. Cody Meissner, MD, FAAP – Visual Red Book Associate Editor
Rebecca J. Schneyer, BA
Catherina Yang, BA
Patriot Yang, BA
Tiffany L. Wang
Doug Campos-Outcalt, MD, MPA – American Academy of Family Physicians
Karen M. Farizo, MD – US Food and Drug Administration
Marc A. Fischer, MD, FAAP – Centers for Disease Control and Prevention
Bruce G. Gellin, MD – National Vaccine Program Office
Richard L. Gorman, MD, FAAP – National Institutes of Health
Natasha B. Halasa, MD, MPH, FAAP – Pediatric Infectious Disease Society
Joan L. Robinson, MD – Canadian Paediatric Society
Marco Aurelio Palazzi Safadi, MD – Sociedad Latinoamericana de Infectologia Pediatrica (SLIPE)
Jane F. Seward, MBBS, MPH, FAAP – Centers for Disease Control and Prevention
Geoffrey R. Simon, MD, FAAP – Committee on Practice Ambulatory Medicine
Jeffrey R. Starke, MD, FAAP – American Thoracic Society
Jennifer M. Frantz, MPH
Abbreviations:
• AAP: American Academy of Pediatrics
• CDC: Centers for Disease Control and Prevention
• HCP: health care personnel
• LAIV: live-attenuated influenza virus
This document is copyrighted and is property of the American Academy of Pediatrics and its Board of Directors. All authors have filed conflict of interest statements with the American Academy of Pediatrics. Any conflicts have been resolved through a process approved by the Board of Directors. The American Academy of Pediatrics has neither solicited nor accepted any commercial involvement in the development of the content of this publication.
Policy statements from the American Academy of Pediatrics benefit from expertise and resources of liaisons and internal (AAP) and external reviewers. However, policy statements from the American Academy of Pediatrics may not reflect the views of the liaisons or the organizations or government agencies that they represent.
The guidance in this report does not indicate an exclusive course of treatment or serve as a standard of medical care. Variations, taking into account individual circumstances, may be appropriate.
All policy statements from the American Academy of Pediatrics automatically expire 5 years after publication unless reaffirmed, revised, or retired at or before that time.
FUNDING: No external funding.
POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.
## References

1. Salgado CD, Farr BM, Hall KK, Hayden FG. Influenza in the acute hospital setting [erratum in Lancet Infect Dis. 2002;2(6):383]. Lancet Infect Dis. 2002;2(3):145-155.
2. Taylor G, Mitchell R, McGeer A, et al; Canadian Nosocomial Infection Surveillance Program. Healthcare-associated influenza in Canadian hospitals from 2006 to 2012. Infect Control Hosp Epidemiol. 2014;35(2):169-175.
3. Cunney RJ, Bialachowski A, Thornley D, Smaill FM, Pennie RA. An outbreak of influenza A in a neonatal intensive care unit. Infect Control Hosp Epidemiol. 2000;21(7):449-454.
4. Weinstock DM, Eagan J, Malak SA, et al. Control of influenza A on a bone marrow transplant unit. Infect Control Hosp Epidemiol. 2000;21(11):730-732.
6. Black CL, Yue X, Ball SW, et al; Centers for Disease Control and Prevention (CDC). Influenza vaccination coverage among health care personnel—United States, 2013–14 influenza season. MMWR Morb Mortal Wkly Rep. 2014;63(37):805-811.
7. Galanakis E, Jansen A, Lopalco PL, Giesecke J. Ethics of mandatory vaccination for healthcare workers. Euro Surveill. 2013;18(45):20627.
8. Lee LM. Adding justice to the clinical and public health ethics arguments for mandatory seasonal influenza immunisation for healthcare workers. J Med Ethics. 2015;41(8):682-686.
9. Lantos JD, Jackson MA. Vaccine mandates are justifiable because we are all in this together. Am J Bioeth. 2013;13(9):1-2.
10. Rose GA. The Strategy of Preventive Medicine. Oxford, England: Oxford University Press; 1992.
11. Dubov A, Phung C. Nudges or mandates? The ethics of mandatory flu vaccination. Vaccine. 2015;33(22):2530-2535.
12. Gostin LO. Law, ethics, and public health in the vaccination debates: politics of the measles outbreak. JAMA. 2015;313(11):1099-1100.
13. Aledort JE, Lurie N, Wasserman J, Bozzette SA. Non-pharmaceutical public health interventions for pandemic influenza: an evaluation of the evidence base. BMC Public Health. 2007;7:208.
14. Centers for Disease Control and Prevention. Seasonal influenza Q&A. Available at: www.cdc.gov/flu/about/qa/disease.htm. Accessed February 25, 2015.
15. Poland GA, Tosh P, Jacobson RM. Requiring influenza vaccination for health care workers: seven truths we must accept. Vaccine. 2005;23(17-18):2251-2255.
16. American Academy of Pediatrics, Committee on Infectious Diseases. Recommendations for prevention and control of influenza in children, 2015–2016. Pediatrics. 2015;136(5):
17. Centers for Disease Control and Prevention. Prevention and control of seasonal influenza with vaccines: recommendations of the Advisory Committee on Immunization Practices (ACIP)—United States, 2013–2014. MMWR Recomm Rep. 2013;62(RR-07):1-43.
18. D'Mello T, Brammer L, Blanton L, et al; Centers for Disease Control and Prevention (CDC). Update: influenza activity—United States, September 28, 2014–February 21, 2015. MMWR Morb Mortal Wkly Rep. 2015;64(8):206-212.
19. Ridgway JP, Bartlett AH, Garcia-Houchins S, et al. Influenza among afebrile and vaccinated healthcare workers. Clin Infect Dis. 2015;60(11):1591-1595.
20. Ahmed F, Lindley MC, Allred N, Weinbaum CM, Grohskopf L. Effect of influenza vaccination of healthcare personnel on morbidity and mortality among patients: systematic review and grading of evidence. Clin Infect Dis. 2014;58(1):50-57.
21. Thomas RE, Jefferson T, Lasserson TJ. Influenza vaccination for healthcare workers who care for people aged 60 or older living in long-term care institutions. Cochrane Database Syst Rev. 2013;7(7):CD005187.
22. Molinari NA, Ortega-Sanchez IR, Messonnier ML, et al. The annual impact of seasonal influenza in the US: measuring disease burden and costs. Vaccine. 2007;25(27):5086-5096.
23. Nichol KL, D'Heilly SJ, Greenberg ME, Ehlinger E. Burden of influenza-like illness and effectiveness of influenza vaccination among working adults aged 50–64 years. Clin Infect Dis. 2009;48(3):292-298.
24. Van Buynder PG, Konrad S, Kersteins F, et al. Healthcare worker influenza immunization vaccinate or mask policy: strategies for cost effective implementation and subsequent reductions in staff absenteeism due to illness. Vaccine. 2015;33(13):1625-1628.
25. Ablah E, Konda K, Tinius A, Long R, Vermie G, Burbach C. Influenza vaccine coverage and presenteeism in Sedgwick County, Kansas. Am J Infect Control. 2008;36(8):588-591.
26. Lee BY, Bailey RR, Wiringa AE, et al. Economics of employer-sponsored workplace vaccination to prevent pandemic and seasonal influenza. Vaccine. 2010;28(37):5952-5959.
27. Fiore AE, Uyeki TM, Broder K, et al; Centers for Disease Control and Prevention (CDC). Prevention and control of influenza with vaccines: recommendations of the Advisory Committee on Immunization Practices (ACIP), 2010. MMWR Recomm Rep. 2010;59(RR-8):1-62.
28. Centers for Disease Control (CDC). Prevention and control of influenza. MMWR Morb Mortal Wkly Rep. 1984;33(19):253-260, 265-266.
29. Centers for Disease Control and Prevention. Table: Self-reported influenza vaccination coverage trends 1989–2008 among adults by age group, risk group, race/ethnicity, health-care worker status, and pregnancy status, United States, National Health Interview Survey (NHIS). Available at: www.cdc.gov/flu/pdf/professionals/nhis89_08fluvaxtrendtab.pdf. Accessed March 3, 2015.
30. Lindley MC, Bridges CB, Strikas RA, et al; Centers for Disease Control and Prevention (CDC). Influenza vaccination performance measurement among acute care hospital-based health care personnel—United States, 2013–14 influenza season. MMWR Morb Mortal Wkly Rep. 2014;63(37):812-815.
31. Polgreen PM, Septimus EJ, Parry MF, et al. Relationship of influenza vaccination declination statements and influenza vaccination rates for healthcare workers in 22 US hospitals. Infect Control Hosp Epidemiol. 2008;29(7):675-677.
32. Douville LE, Myers A, Jackson MA, Lantos JD. Health care worker knowledge, attitudes, and beliefs regarding mandatory influenza vaccination. Arch Pediatr Adolesc Med. 2010;164(1):33-37.
33. Naleway AL, Henkle EM, Ball S, et al. Barriers and facilitators to influenza vaccination and vaccine coverage in a cohort of health care personnel. Am J Infect Control. 2014;42(4):371-375.
34. The Joint Commission. Providing a safer environment for health care personnel and patients through influenza vaccination. In: Strategies from Research and Practice. Oakbrook Terrace, IL: The Joint Commission; 2009:1-87.
35. Talbot TR, Crocker DD, Peters J, et al. Duration of virus shedding after trivalent intranasal live attenuated influenza vaccination in adults. Infect Control Hosp Epidemiol. 2005;26(5):494-500.
36. Pavia AT. Mandate to protect patients from health care–associated influenza [editorial]. Clin Infect Dis. 2010;50(4):465-467.
37. Wilde JA, McMillan JA, Serwint J, Butta J, O'Riordan MA, Steinhoff MC. Effectiveness of influenza vaccine in health care professionals: a randomized trial. JAMA. 1999;281(10):908-913.
38. Hodge JG Jr, Gostin LO. School Vaccination Requirements: Historical, Social, and Legal Perspectives: A State of the Art Assessment of Law and Policy. Baltimore, MD: Center for Law and the Public's Health at Johns Hopkins and Georgetown Universities; 2002.
39. Immunization Action Coalition. Influenza vaccination honor roll. Available at: www.immunize.org/honor-roll/influenza-mandates/default.asp. Accessed February 25, 2015.
40. Miller BL, Ahmed F, Lindley MC, Wortley PM. Increases in vaccination coverage of healthcare personnel following institutional requirements for influenza vaccination: a national survey of U.S. hospitals. Vaccine. 2011;29(50):9398-9403.
41. Babcock HM, Gemeinhart N, Jones M, Dunagan WC, Woeltje KF. Mandatory influenza vaccination of health care workers: translating policy to practice. Clin Infect Dis. 2010;50(4):459-464.
42. Rakita RM, Hagar BA, Crome P, Lammert JK. Mandatory influenza vaccination of healthcare workers: a 5-year study. Infect Control Hosp Epidemiol. 2010;31(9):881-888.
43. Palmore TN, Vandersluis JP, Morris J, et al. A successful mandatory influenza vaccination campaign using an innovative electronic tracking system. Infect Control Hosp Epidemiol. 2009;30(12):1137-1142.
44. Tucker ME. Mandating flu shots gets the job done. Pediatr News. 2010;44(4):16.
45. Quan K, Tehrani DM, Dickey L, et al. Voluntary to mandatory: evolution of strategies and attitudes toward influenza vaccination of healthcare personnel. Infect Control Hosp Epidemiol. 2012;33(1):63-70.
46. Maurer J, Harris KM, Black CL, Euler GL. Support for seasonal influenza vaccination requirements among US healthcare personnel. Infect Control Hosp Epidemiol. 2012;33(3):213-221.
47. Linam WM, Gilliam CH, Honeycutt M, Wisdom C, Swearingen CJ, Romero JR. Parental perceptions about required influenza immunization of pediatric healthcare personnel. Infect Control Hosp Epidemiol. 2014;35(10):1301-1303.
48. The American Academy of Family Physicians Press Release. AAFP supports mandatory flu vaccinations for health care personnel. June 2011. Available at: www.aafp.org/news/health-of-the-public/20110613mandatoryfluvacc.html. Accessed June 29, 2015.
49. American Hospital Association. AHA endorses patient safety policies requiring influenza vaccination of health care workers. July 2011. Available at: www.aha.org/advocacy-issues/tools-resources/advisory/2011/110722-quality-adv.pdf. Accessed June 29, 2015.
50. IDSA, SHEA, PIDS. IDSA, SHEA, and PIDS joint policy statement on mandatory immunization of health care personnel according to the ACIP-recommended vaccine schedule. December 2013. Available at: www.idsociety.org/uploadedFiles/IDSA/Policy_and_Advocacy/Current_Topics_and_Issues/Immunizations_and_Vaccines/Health_Care_Worker_Immunization/Statements/IDSA_SHEA_PIDS%20Policy%20on%20Mandatory%20Immunization%20of%20HCP.pdf. Accessed June 29, 2015.
51. Greene LR, Cox T, Dolan S, et al. APIC position paper: Influenza vaccination should be a condition of employment for healthcare personnel, unless medically contraindicated. January 2011. Available at: www.apic.org/Resource_/TinyMceFileManager/Advocacy-PDFs/APIC_Influenza_Immunization_of_HCP_12711.PDF. Accessed June 29, 2015.
52. American Public Health Association. APHA policy statement: annual influenza vaccination requirements for health workers. 2010. Available at: www.apha.org/policies-and-advocacy/public-health-policy-statements/policy-database/2014/07/11/14/36/annual-influenza-vaccination-requirements-for-health-workers. Accessed June 29, 2015.
53. Stewart AM, Cox MA. State law and influenza vaccination of health care personnel. Vaccine. 2013;31(5):827-832.
54. Centers for Disease Control and Prevention. State immunization laws for healthcare workers and patients: immunization administration requirements for influenza. Available at: http://www2a.cdc.gov/vaccines/statevaccsApp/AdministrationbyVaccine.asp?Vaccinetmp=Influenza. Accessed February 25, 2015.
55. Lindley MC, Dube D, Kalayil EJ, Kim H, Paiva K, Raymond P. Qualitative evaluation of Rhode Island's healthcare worker influenza vaccination regulations. Vaccine. 2014;32(45):5962-5966.
56. Kim HH, Raymond P, Washburn T, Cappelli D. Influenza vaccination coverage among healthcare workers during the 2013–14 influenza season in Rhode Island. R I Med J (2013). 2014;97(10):60-62.
57. Harris KM, Uscher-Pines L, Han B, Lindley MC, Lorick SA. The impact of influenza vaccination requirements for hospital personnel in California: knowledge, attitudes, and vaccine uptake. Am J Infect Control. 2014;42(3):288-293.
58. Stewart AM. Mandatory vaccination of health care workers. N Engl J Med. 2009;361(21):2015-2017.
59. Tucker SJ, Poland GA, Jacobson RM. Requiring influenza vaccination for health care workers. Am J Nurs. 2008;108(2):32-34.
60. Wynia MK. Mandating vaccination: what counts as a "mandate" in public health and when should they be used? Am J Bioeth. 2007;7(12):2-6.
## Competing Interests
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.
https://publications.industry.gov.au/publications/australianinnovationsystemmonitor/science-and-research/business-R-and-D/index.html
As experimental development is dedicated to producing new materials, technologies, products or processes, it is closely related to business innovation. It has previously been estimated that R&D-active Australian firms were three times more likely to introduce new-to-market goods and service innovations than non-R&D-active ones.[54] BERD currently makes up just over half (52.7 per cent) of total Gross Expenditure on R&D (GERD). It is particularly relevant to firms in technology-intensive industries such as Manufacturing but also increasingly in Professional, Scientific and Technical Services, which now represents the largest contribution to BERD. Following a notable decline in 2015-16, total BERD lifted from $16.7 billion in 2015-16 to $17.4 billion in 2017-18. The largest increase in this period occurred in overseas expenditures (up $534 million), while in Western Australia expenditures continued to fall sharply (down $490 million). In 2017-18, by field of research, the largest contribution to BERD came from Information and Computing Sciences ($6.7 billion), with Engineering second ($4.7 billion).[55]
Australia's BERD is relatively concentrated, with just four industries accounting for more than three-quarters of the $17.4 billion in total expenditure. The largest contribution in 2017-18 was in Professional, Scientific and Technical Services (29.3 per cent), which has overtaken Manufacturing (26.4 per cent) for the first time. This is especially significant given that as recently as 2011-12, Professional, Scientific and Technical Services accounted for 15.5 per cent of total BERD, compared to 24.4 per cent for Manufacturing and 22.4 per cent for Mining. Mining expenditure peaked in 2011-12 (at $4.1 billion) and has fallen to a quarter of that year's value, to $1.0 billion in 2017-18, contributing a comparatively modest 6.0 per cent to total BERD.[56]
https://www.physicsforums.com/threads/computing-the-work-of-a-turbine.715884/ | # Computing the work of a turbine.
1. Oct 11, 2013
### S. Moger
I am preparing for a re-exam. This is a problem from the exam I took, but I can't see what I did wrong and why.
1. The problem statement, all variables and given/known data
Case: A turbine that is producing work dW is powered by compressed air (treated as a diatomic ideal gas).
Known quantities:
$P_0, P_f, T_0$
Wanted quantity:
$dW$ per mole air.
It is also known that the process is adiabatic, so $dQ = 0$, and that the flow is stationary.
2. Relevant equations
3. The attempt at a solution
By the first law of thermodynamics, and the adiabatic property:
$\Delta U = dQ + dW = dW$.
The energy content of a diatomic ideal gas is given by:
$U = \frac{f}{2} nRT$, where $f = 5$ (the number of quadratic degrees of freedom)
Thus, ΔU should equal the change in energy content of the air before and after the turbine:
$\Delta U = U_0 - U_f =\frac{f}{2} nR (T_0 - T_f) = dW$.
To compute the unknown $T_f$ we again use the fact that the process is adiabatic, so the following should hold
$P_0^{-\frac{2}{f+2}} T_0 = P_f^{-\frac{2}{f+2}} T_f \iff T_f = T_0 (\frac{P_f}{P_0})^{\frac{2}{f+2}}$.
Inserting this result into the prior equation gives
$\Delta U = \frac{f}{2} R T_0 ( 1 - (\frac{P_f}{P_0})^{\frac{2}{f+2}}) = dW$ per mole (with n=1).
_________________________________
However, the solution sheet states that the term f/2=5/2 should be 7/2 (rest unchanged). I can't see why. They use a different technique as well, which I don't understand.
The correct solution:
Stationary flow implies that $H_0 = W + H_f$ by the first law of thermodynamics. The enthalpy $H=C_P T$. So, $W=C_P (T_0 - T_f)$. Furthermore,
$T_f = T_0 (\frac{P_f}{P_0})^{1-1/\gamma}$ and
$C_P = 7/2 nR$ by the properties of diatomic ideal gases.
Finally,
$W/n = \frac{7}{2} R T_0 ( 1 - (\frac{P_f}{P_0})^{1-1/\gamma})$
_________________________________
which is not what I get:
$\Delta U = \frac{f}{2} R T_0 ( 1 - (\frac{P_f}{P_0})^{\frac{2}{f+2}}) = dW$ per mole (with n=1).
Gamma is defined as $\gamma = (f+2)/f$. Also observe that their notation is W instead of dW (which here is not meant to be read as a change in work, but as a quantity of work).
Last edited: Oct 11, 2013
2. Oct 11, 2013
### Staff: Mentor
Have you learned about the form of the first law applicable to a continuous-flow open system? For steady-state operation, it says that the change in enthalpy per unit mass passing through the system is equal to the "shaft work" per unit mass passing through the system. Please go back and restudy the section in your textbook on the first law for continuous-flow open systems. This will tell you why the "correct solution" is correct.
3. Oct 12, 2013
### S. Moger
Yes I will, however, I prefer using as few formulae and extra definitions as possible, including concepts like enthalpy, even if it may make things harder computationally, unless explicitly required. I want to understand what makes my solution wrong. What am I computing and why do I get less energy than what I'm supposed to?
4. Oct 12, 2013
### S. Moger
$\Delta U = dW = - P\Delta V + dW_{Other}$
Is the $-P\Delta V$ the problem here?
Is turbine work limited to $dW_{Other}$? Would my solution hold if we had a machine where that wasn't the case?
The gas has to push away atmosphere on exit, due to increased volume? But how does that not reflect on the amount of energy you can get out of the turbine? The only way I can get less energy in dU than I get out of the turbine seems to be when the pressure of the gas is higher than that of the atmosphere. But wouldn't that be dependent on the atmospheric pressure, which could be any value?
5. Oct 12, 2013
### Staff: Mentor
You're close to having it. First of all, the equation should be $\Delta U = dW = - \Delta (PV) + dW_{Other}$. If you want to do it as a closed system, take as your closed system the contents of the turbine at any time plus a small parcel of gas about to enter the turbine. In the next instant, the small parcel of gas has entered, and another small parcel of equal mass has exited at the low pressure end. Since the system is at steady state, the internal energy of the gas within the turbine has not changed between the initial and final states. Only the internal energy of the parcel that leaves is different from the internal energy of the parcel that entered. The work done by the gas behind the inlet parcel in forcing it into the turbine is the upstream pressure times the volume of the parcel. The work done by the downstream parcel on the gas ahead in leaving the turbine is the pressure downstream times the volume of that parcel. The rest of the work is the "shaft work", which you call dWother. So the total change in internal energy per unit mass entering the system is $\Delta U = - \Delta (PV) + dW_{Other}$, where all the quantities in this equation are per unit mass entering (and leaving).
6. Oct 13, 2013
### S. Moger
Ok, so basically the gas loses more energy than is put into turbine work, due to the net expansion of the parcels of gas and them having to push away existing gas, which consumes energy?
I would have to add $-\Delta(PV)$ to what I've got in other words. Can I do this by computing $-(P_0 V_0 - P_f V_f)$?
With the enthalpy solution that part seems to disappear by the use of $H = C_P T$. Is constant pressure assumed because of the steady state?
7. Oct 15, 2013
### Staff: Mentor
Not exactly. The gas actually does more work on the turbine than the expansion work. The work done by the gas behind the entering gas in pushing it into the turbine is greater than the work by the exiting gas in pushing away the gas ahead of it. The extra work done is $(P_0 V_0 - P_f V_f)$.
One way to think about this is to replace the gas by an incompressible fluid (which doesn't expand). If the inlet pressure is higher than the outlet pressure (and, if viscous drag is negligible), the fluid causes the turbine to rotate and do work (just like blowing on a pinwheel). So, even without a gas expanding, work is done.
No. For an ideal gas, dH is always equal to $C_p dT$, irrespective of whether the pressure is constant.
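As a quick numerical sanity check of this bookkeeping (my own sketch, with made-up pressures and temperature, not values from the original problem): per mole of ideal gas $P_0 V_0 - P_f V_f = R(T_0 - T_f)$, so the enthalpy drop $\frac{7}{2}R(T_0-T_f)$ is exactly the internal-energy drop $\frac{5}{2}R(T_0-T_f)$ plus the flow work.

```python
# Sketch: verify H0 - Hf = (U0 - Uf) + (P0*V0 - Pf*Vf) per mole
# for a diatomic ideal gas. The pressures and temperature are
# arbitrary example values, not from the original exam problem.
R = 8.314            # J/(mol K)
f = 5                # quadratic degrees of freedom (diatomic)
gamma = (f + 2) / f

P0, Pf, T0 = 5e5, 1e5, 300.0            # Pa, Pa, K (made up)
Tf = T0 * (Pf / P0) ** (1 - 1 / gamma)  # adiabatic relation

dU = (f / 2) * R * (T0 - Tf)        # internal-energy drop per mole (the 5/2 answer)
flow_work = R * (T0 - Tf)           # P0*V0 - Pf*Vf = R*(T0 - Tf) per mole
dH = ((f + 2) / 2) * R * (T0 - Tf)  # enthalpy drop = shaft work per mole (the 7/2 answer)

assert abs(dH - (dU + flow_work)) < 1e-9
print(dU, flow_work, dH)  # the 7/2 answer exceeds the 5/2 answer by R*(T0 - Tf)
```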
8. Oct 21, 2013
### S. Moger
Ok, I try to visualize the turbine as two containers/parcels of gas at different pressures and with a connection between them that contains a turbine. The flow pushes the turbine, whereupon the system loses more energy or pressure than it would if there was just a connection and no turbine to push.
As soon as the pressure is (almost immediately) equalized I open both containers to let them equalize with the atmosphere, I close them and open the container that is connected with the high pressure air. During equalization with the atmosphere and during equalization with the high pressure reservoir work is done. In the first case by the gas and in the second case on the gas. The difference between them is the work I won't get as W_other? Is that correct?
I get that dH = const * dT by using PV = NkT and U = f/2 NkT as long as N is fixed. But I have trouble showing that const = dQ/dT (= C_p) without assuming that the pressure is fixed.
https://www.rdocumentation.org/packages/spatstat.core/versions/2.1-2/topics/Penttinen | spatstat.core (version 2.1-2)
# Penttinen: Penttinen Interaction
## Description
Creates an instance of the Penttinen pairwise interaction point process model, which can then be fitted to point pattern data.
## Usage
Penttinen(r)
## Arguments

r: the disc radius. Two points interact only if they are closer than 2 * r apart (see Details).
## Value
An object of class "interact" describing the interpoint interaction structure of a point process.
## Details
Penttinen (1984, Example 2.1, page 18), citing Cormack (1979), described the pairwise interaction point process with interaction factor $h(d) = e^{\theta A(d)} = \gamma^{A(d)}$ between each pair of points separated by a distance $d$. Here $A(d)$ is the area of intersection between two discs of radius $r$ separated by a distance $d$, normalised so that $A(0) = 1$.
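For concreteness, the normalised overlap area has the closed form below. This formula is not stated on the help page; it is the standard circle-circle intersection area divided by $\pi r^2$, so that $A(0)=1$ and $A(d)=0$ for $d \ge 2r$:

$$A(d) = \frac{2}{\pi}\cos^{-1}\left(\frac{d}{2r}\right) - \frac{d\sqrt{4r^2 - d^2}}{2\pi r^2}, \qquad 0 \le d \le 2r.$$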
The scale of interaction is controlled by the disc radius $r$: two points interact if they are closer than $2r$ apart. The strength of interaction is controlled by the canonical parameter $\theta$, which must be less than or equal to zero, or equivalently by the parameter $\gamma = e^\theta$, which must lie between 0 and 1.

The potential is inhibitory, i.e. this model is only appropriate for regular point patterns. For $\gamma=0$ the model is a hard core process with hard core diameter $2r$. For $\gamma=1$ the model is a Poisson process.

The irregular parameter $r$ must be given in the call to Penttinen, while the regular parameter $\theta$ will be estimated.
This model can be considered as a pairwise approximation to the area-interaction model AreaInter.
## References
Cormack, R.M. (1979) Spatial aspects of competition between individuals. Pages 151--212 in Spatial and Temporal Analysis in Ecology, eds. R.M. Cormack and J.K. Ord, International Co-operative Publishing House, Fairland, MD, USA.
Penttinen, A. (1984) Modelling Interaction in Spatial Point Patterns: Parameter Estimation by the Maximum Likelihood Method. Jyvaskyla Studies in Computer Science, Economics and Statistics 7, University of Jyvaskyla, Finland.
## See Also

ppm, ppm.object, Pairwise, AreaInter.
https://diginole.lib.fsu.edu/islandora/object/fsu:257406 | # A survey of the need for key punch operators in state government and the facilities in the Tallahassee, Florida area for training potential employees in this field
Stewart, F. (1959). A survey of the need for key punch operators in state government and the facilities in the Tallahassee, Florida area for training potential employees in this field. Retrieved from http://purl.flvc.org/fsu/fd/FSU_historic_afh5413
http://jaac.ijournal.cn/ch/reader/view_abstract.aspx?file_no=JAAC-2017-0332
Volume 8, Number 5, 2018, Pages 1555-1574

On equalities of BLUEs for a multiple restricted partitioned linear model

Yunying Huang, Bing Zheng, Guoliang Chen

Keywords: partitioned linear model, restricted models, BLUE, additive decomposition of estimation, Moore-Penrose inverse.

Abstract: For the multiple restricted partitioned linear model $\mathscr{M}=\{y, X_1\beta_1+\cdots+X_s\beta_s \mid A_1\beta_1=b_1, \cdots, A_s\beta_s=b_s, \Sigma\}$, the relationships between the restricted partitioned linear model $\mathscr{M}$ and the corresponding $s$ small restricted linear models $\mathscr{M}_i=\{y, X_i\beta_i \mid A_i\beta_i=b_i, \Sigma\},~i=1, \cdots, s$ are studied. The necessary and sufficient conditions for the best linear unbiased estimators (BLUEs) under the full restricted model to be the sums of BLUEs under the $s$ small restricted models are derived. Some statistical properties of the BLUEs are also described.
http://www.helpteaching.com/questions/Algebraic_Expressions/Grade_6 | Looking for Algebra worksheets?
Check out our pre-made Algebra worksheets!
Tweet
##### Browse Questions
You can create printable tests and worksheets from these Grade 6 Algebraic Expressions questions! Select one or more questions using the checkboxes above each question. Then click the add selected questions to a test button before moving to another page.
Previous Next
Grade 6 Algebraic Expressions CCSS: 6.EE.A.2a
3 times the quantity $m$ minus 7
1. $3(m-7)$
2. $3 \times m - 7$
3. $3-m$
4. $3 \times m7$
Grade 6 Algebraic Expressions CCSS: 6.EE.A.2a
Combine all like terms.
3x + 9y + 10y
1. 13x + 9y
2. 12y + 10y
3. 3x + 19y
Which expression is equivalent to 15x + 20y?
1. 35y
2. 5(3x + 4y)
3. 5x - 20y
4. 3xy + 4xy
Which expression is equal to y + 12 + 5y + 20 + 7x?
1. 10y
2. 6x + 4y + 30
3. 12y + 10x
4. 6y + 7x + 32
What is the simplified form of $2x + 5x + 4x$?
1. $11 + 3x$
2. $7x$
3. $11x$
4. $7x + 4x$
https://www.i2m.univ-amu.fr/events/mod-p-points-on-shimura-varieties-of-parahoric-level/ | Mod p points on Shimura varieties of parahoric level
Pol Van Hoften
King's College London math department
https://nms.kcl.ac.uk/pol.van_hoften/
Date(s): 01/04/2021, 16:00 - 17:00
The conjecture of Langlands-Rapoport gives a conjectural description of the mod p points of Shimura varieties, with applications towards computing the (semi-simple) zeta function of these Shimura varieties. The conjecture was proven by Kisin for abelian type Shimura varieties at primes of (hyperspecial) good reduction, after having constructed smooth integral models. For primes of (parahoric) bad reduction, Kisin and Pappas have constructed a good integral model and the conjecture was generalised to this setting by Rapoport. In this talk I will discuss recent results towards the conjecture for these integral models, under minor hypotheses, building on earlier work of Zhou. Along the way we will see irreducibility results for various stratifications on special fibers of Shimura varieties, including irreducibility of central leaves and Ekedahl-Oort strata.
http://mathoverflow.net/questions/56464/fixed-points-of-group-endomorphisms | # Fixed points of Group Endomorphisms
Suppose $G$ is a finitely presented group with generators $a_1, \ldots, a_n$. Suppose $f \colon G \to G$ is a group endomorphism specified by defining $f(a_1), \ldots, f(a_n)$. As expected, we define a fixed point of $f$ to be any element $g \in G$ such that $f(g) = g$ and, as $f(\mathop{id}) = \mathop{id}$, we say that $\mathop{id}$ is the trivial fixed point.
For example, let $G = \langle a | \rangle$ and $f$ and $g$ be defined by $f(a) = \mathop{id}$ and $g(a) = a^2$. Note in both cases $f$ and $g$ have no non-trivial fixed points and for this particular group we can determine that an endomorphism $f$ has a non-trivial fixed point if and only if $f(a) = a$.
For what groups is it possible to determine whether or not any given endomorphism has a non-trivial fixed point?
I am particularly interested in the question of:
Is $\langle a, b, c | \rangle$ such a group?
For the free group, an algorithm is given here: Sykiotis, Mihalis. Fixed points of symmetric endomorphisms of groups. Internat. J. Algebra Comput. 12 (2002), no. 5, 737-745.
This provides an answer to a related question: can one determine whether an endomorphism of a free group maps a cyclic subgroup into a conjugate? Perhaps it can be improved to answer your second question exactly.
By the Combination Theorem for hyperbolic groups (Bestvina and Feighn), we have the following.
Theorem: Let $F$ be a finitely generated free group and let $\phi:F\to F$ be an endomorphism. The following are equivalent:
1. $\phi$ maps a non-trivial cyclic subgroup into a conjugate;
2. the ascending HNN extension $\Gamma_\phi=F*_\phi$ is not word-hyperbolic.
Now, Panos Papasoglu described an algorithm that confirms if a given presentation defines a word-hyperbolic group. His algorithm doesn't terminate if the group is not word-hyperbolic.
On the other hand, given $g\in F$ and an integer $k$, the solution to the conjugacy problem in $F$ determines whether or not $\phi(g)$ is conjugate to $g^k$. Therefore, a naive enumeration of elements of $F$ and integers will eventually determine if $\phi$ maps a non-trivial cyclic subgroup into a conjugate.
Running these two procedures in parallel, one eventually determines if $\phi$ maps a cyclic subgroup into a conjugate. Of course, this algorithm is completely impractical. It would be interesting to know if a more efficient algorithm exists.
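A minimal sketch of that "run in parallel" (dovetailing) step, with the two procedures abstracted as generators. Here hyperbolicity_check and conjugacy_search are hypothetical stand-ins, not implementations of Papasoglu's algorithm or of the conjugacy enumeration:

```python
def dovetail(proc_a, proc_b):
    """Interleave two semi-decision procedures, one step at a time.

    Each argument is a generator that yields None while it is still
    undecided and yields a non-None answer once it has halted.
    At least one of the two is assumed to halt eventually.
    """
    while True:
        a = next(proc_a)
        if a is not None:
            return a
        b = next(proc_b)
        if b is not None:
            return b

# Hypothetical usage: hyperbolicity_check(presentation) halts iff the
# ascending HNN extension is word-hyperbolic; conjugacy_search(F, phi)
# halts iff some phi(g) is conjugate to a power g^k. Exactly one of the
# two halts, so dovetail(...) always terminates with the answer.
```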
https://mathematica.stackexchange.com/posts/110905/revisions

(Edit: I mean 'differentiate' as in telling the difference between two things, not as in f'(x).)

I have n series of numbers, let's say two for example, with some "empty" spaces between the numbers, and I want to add them element-wise. The problem is, I need to differentiate between the numbers, which can include 0, and the empty spaces, so I cannot use 0 as a placeholder in the empty spaces.

For example, using 'E' as the empty space character (edit: oops, forgot E was a built-in symbol; it can be something else), 'adding' the following two series (in reality, they'd be much longer and there would be more than two series being added):

{-2, E, E, 0, E, -1, E}
{-1, E, 0, E, E, 1, 5}

I want to get

{-3, E, 0, 0, E, 0, 5}

So basically, the rules are: in the ith position of the series (1D Tables) being added:

- If all the members are 'E', then put 'E' in the ith position of the result series
- Otherwise, add the numbers like normal, ignoring the Es

So now that I've defined what I'm wanting to do, can anyone think of an elegant way to implement it? Only thing I can think of is by defining my own custom adding function, but it seems like there should be a more clever way to do it? Thanks.
https://www.hackerearth.com/zh/practice/algorithms/graphs/maximum-flow/tutorial/
# Maximum flow
In graph theory, a flow network is defined as a directed graph involving a source ($S$), a sink ($T$), and several other nodes connected with edges. Each edge has an individual capacity, which is the maximum flow that the edge allows.
Flow in the network should follow the following conditions:
• For any non-source and non-sink node, the input flow is equal to output flow.
• For any edge($E_i$) in the network, $0 \le flow(E_i) \le Capacity(E_i)$.
• Total flow out of the source node equals the total flow into the sink node.
• Net flow in the edges follows skew symmetry, i.e. $F(u,v) = -F(v,u)$, where $F(u,v)$ is the flow from node $u$ to node $v$. Consequently, the net flow between two nodes is found by summing up the flows in both directions.
Maximum Flow:
It is defined as the maximum amount of flow that the network allows from source to sink. Multiple algorithms exist for solving the maximum flow problem. Two major ones are the Ford-Fulkerson algorithm and Dinic's algorithm; they are explained below.
Ford-Fulkerson Algorithm:
It was developed by L. R. Ford, Jr. and D. R. Fulkerson in 1956. A pseudocode for this algorithm is given below,
Inputs required are network graph $G$, source node $S$ and sink node $T$.
function: FordFulkerson(Graph G,Node S,Node T):
Initialise flow in all edges to 0
while (there exists an augmenting path(P) between S and T in residual network graph):
Augment flow between S to T along the path P
Update residual network graph
return
An augmenting path is a simple path from source to sink, i.e. one that does not include any cycles, passing only through edges with positive residual capacity. The residual network graph indicates how much more flow is allowed in each edge of the network graph. If no augmenting path is possible from $S$ to $T$, then the flow is maximum. The result, i.e. the maximum flow, will be the total flow out of the source node, which also equals the total flow into the sink node.
A demonstration of working of Ford-Fulkerson algorithm is shown below with the help of diagrams.
Implementation:
• An augmenting path in residual graph can be found using DFS or BFS.
• Updating the residual graph includes the following steps (refer to the diagrams for better understanding; a code sketch follows this list):
• For every edge in the augmenting path, the minimum residual capacity along the path is subtracted from all the edges of that path.
• An edge of the same amount is added in the reverse direction between every pair of successive nodes in the augmenting path.
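Putting these steps together, here is a minimal Python sketch of Ford-Fulkerson using BFS for the path search (the Edmonds-Karp variant). It is an illustration rather than the tutorial's own code; the adjacency-matrix representation and the small example network are assumptions made here.

```python
# Ford-Fulkerson with BFS path search (Edmonds-Karp). The graph is an
# adjacency matrix of capacities; cap[u][v] is updated in place and so
# doubles as the residual network.
from collections import deque

def max_flow(cap, s, t):
    n = len(cap)
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:          # no augmenting path: flow is maximum
            return flow
        # bottleneck capacity along the path
        bottleneck = float('inf')
        v = t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v])
            v = u
        # augment: subtract along the path, add reverse edges
        v = t
        while v != s:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
            v = u
        flow += bottleneck

# Example: S=0, A=1, B=2, T=3
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # 5
```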
The complexity of the Ford-Fulkerson algorithm cannot be stated precisely, as it depends entirely on which augmenting paths are chosen. For example, in the network shown below, if the paths chosen alternate between $S-A-B-T$ and $S-B-A-T$ each time, the algorithm can take a very long time. If, instead, only the paths $S-A-T$ and $S-B-T$ are chosen, the same maximum flow is generated far more quickly.
Dinic's Algorithm
In 1970, Y. A. Dinitz developed a faster algorithm for calculating maximum flow over networks. It combines the construction of level graphs and residual graphs with the search for augmenting paths and blocking flows.
A level graph is one in which each node is labelled with its shortest distance (in number of edges) from the source.
A blocking flow is a flow in which every source-to-sink path in the level graph contains at least one saturated edge, so no further flow can be pushed without rebuilding the level graph.
Residual graphs and augmenting paths were discussed previously.
Pseudocode for Dinic's algorithm is given below.
Inputs required are network graph G, source node S and sink node T.
function: DinicMaxFlow(Graph G,Node S,Node T):
Initialize flow in all edges to 0, F = 0
Construct level graph
while (there exists an augmenting path in level graph):
find blocking flow f in level graph
F = F + f
Update level graph
return F
Updating the level graph includes removing edges that have reached full capacity, and removing dead-end nodes other than the sink. A demonstration of the working of Dinic's algorithm is shown below with the help of diagrams.
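For concreteness, a compact Python sketch of Dinic's algorithm is given below (again an illustration, not the tutorial's code): a BFS builds the level graph, and a DFS with per-node edge iterators sends a blocking flow. The same adjacency-matrix example as in the Ford-Fulkerson sketch above is reused.

```python
# Dinic's algorithm: repeatedly build a level graph with BFS, then push a
# blocking flow with DFS. cap[u][v] is the residual capacity matrix.
from collections import deque

def dinic(cap, s, t):
    n = len(cap)

    def bfs_levels():
        level = [-1] * n
        level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if level[v] == -1 and cap[u][v] > 0:
                    level[v] = level[u] + 1
                    q.append(v)
        return level if level[t] != -1 else None

    def dfs(u, pushed, level, it):
        if u == t:
            return pushed
        while it[u] < n:                 # iterator skips dead edges for good
            v = it[u]
            if cap[u][v] > 0 and level[v] == level[u] + 1:
                d = dfs(v, min(pushed, cap[u][v]), level, it)
                if d > 0:
                    cap[u][v] -= d
                    cap[v][u] += d
                    return d
            it[u] += 1
        return 0

    flow = 0
    while True:
        level = bfs_levels()
        if level is None:                # sink unreachable: maximum reached
            return flow
        it = [0] * n
        while True:
            pushed = dfs(s, float('inf'), level, it)
            if pushed == 0:              # blocking flow found for this level graph
                break
            flow += pushed

cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(dinic(cap, 0, 3))  # 5
```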
Contributed by: Vinay Kumar
https://search.r-project.org/CRAN/refmans/car/html/vif.html | vif {car} R Documentation
Variance Inflation Factors
Description
Calculates variance-inflation and generalized variance-inflation factors (VIFs and GVIFs) for linear, generalized linear, and other regression models.
Usage
vif(mod, ...)
## Default S3 method:
vif(mod, ...)
## S3 method for class 'lm'
vif(mod, type=c("terms", "predictor"), ...)
## S3 method for class 'merMod'
vif(mod, ...)
## S3 method for class 'polr'
vif(mod, ...)
## S3 method for class 'svyolr'
vif(mod, ...)
Arguments
mod: for the default method, an object that responds to coef, vcov, and model.matrix, such as a glm object.
type: for unweighted lm objects only, how to handle models that contain interactions: see Details below.
...: not used.
Details
If all terms in an unweighted linear model have 1 df, then the usual variance-inflation factors are calculated.
If any terms in an unweighted linear model have more than 1 df, then generalized variance-inflation factors (Fox and Monette, 1992) are calculated. These are interpretable as the inflation in size of the confidence ellipse or ellipsoid for the coefficients of the term in comparison with what would be obtained for orthogonal data.
The generalized VIFs are invariant with respect to the coding of the terms in the model (as long as the subspace of the columns of the model matrix pertaining to each term is invariant). To adjust for the dimension of the confidence ellipsoid, the function also prints $GVIF^{1/(2 \times df)}$, where $df$ is the degrees of freedom associated with the term.
Through a further generalization, the implementation here is applicable as well to other sorts of models, in particular weighted linear models, generalized linear models, and mixed-effects models.
Two methods of computing GVIFs are provided for unweighted linear models:
• Setting type="terms" (the default) behaves like the default method, and computes the GVIF for each term in the model, ignoring relations of marginality among the terms in models with interactions. GVIFs computed in this manner aren't generally sensible.
• Setting type="predictor" focuses in turn on each predictor in the model, combining the main effect for that predictor with the main effects of the predictors with which the focal predictor interacts and the interactions; e.g., in the model with formula y ~ a*b + b*c, the GVIF for the predictor a also includes the b main effect and the a:b interaction regressors; the GVIF for the predictor c includes the b main effect and the b:c interaction; and the GVIF for the predictor b includes the a and c main effects and the a:b and a:c interactions (i.e., the whole model), and is thus necessarily 1. These predictor GVIFs should be regarded as experimental.
Specific methods are provided for ordinal regression model objects produced by polr in the MASS package and svyolr in the survey package, which are "intercept-less"; VIFs or GVIFs for linear and similar regression models without intercepts are generally not sensible.
Value
A vector of VIFs, or a matrix containing one row for each term, and columns for the GVIF, df, and $GVIF^{1/(2 \times df)}$, the last of which is intended to be comparable across terms of different dimension.
Author(s)
John Fox jfox@mcmaster.ca and Henric Nilsson
References
Fox, J. and Monette, G. (1992) Generalized collinearity diagnostics. JASA, 87, 178–183.
Fox, J. (2016) Applied Regression Analysis and Generalized Linear Models, Third Edition. Sage.
Fox, J. and Weisberg, S. (2018) An R Companion to Applied Regression, Third Edition, Sage.
Examples
vif(lm(prestige ~ income + education, data=Duncan))
vif(lm(prestige ~ income + education + type, data=Duncan))
vif(lm(prestige ~ (income + education)*type, data=Duncan),
type="terms") # not recommended
vif(lm(prestige ~ (income + education)*type, data=Duncan),
type="predictor")
[Package car version 3.1-0]
https://mathoverflow.net/questions/289824/relation-between-degree-of-root-of-determinant-polynomial-and-rank-of-the-matrix | # Relation between degree of root of determinant polynomial and rank of the matrix
Let $A=[a_{ij}]$ be an $n \times n$ matrix with $a_{ij}=f_{ij}(x_1,...,x_m)$ where $f_{ij}(x_1,...,x_m)$ is a polynomial in $m$ variables over a finite field $\mathbb{F}_q$.
Let $rank(A)=n$.
Now suppose that $(x_i-cx_j)$ divides $determinant(A)$ with $c \in \mathbb{F}_q$, and that $rank(A)=n-d$ when $x_i=cx_j$. Does it then follow that $(x_i-cx_j)^d$ divides $determinant(A)$?
The question can be put more generally, where $x_i-cx_j$ is replaced by an irreducible polynomial $g(x_1,...,x_m)$ and we assume $rank(A)=n-d$ under the condition $g(x_1,...,x_m)=0$; i.e., we calculate the rank of $A$ in the quotient ring $\mathbb{F}_q[x_1,...,x_m]/\langle g(x_1,...,x_m)\rangle$.
I know that something like above is true for eigenvalues under specific circumstances which is a special case of above. I want to know what happens in general (other than the eigenvalue case).
Thanks
From your more general question I infer that you want to look at the coset of your matrix in the quotient (not at evaluation at specific $x_1,\ldots,x_m\in\mathbb{F}_q$).
Without loss of generality, $c=0$ and $i=n$ (else do a linear change of variables).
Your assumption is that if you consider the coset of this matrix modulo $x_n$, it is a matrix of rank $n-r$ with entries in $\mathbb{F}_q[x_1,\ldots,x_{n-1}]$. If you don't do evaluations, computing ranks can be done over the field $\mathbb{F}_q(x_1,\ldots,x_{n-1})$, where you can make the last $r$ rows of that coset matrix equal to zero by elementary row operations. Lifting that to $\mathbb{F}_q(x_1,\ldots,x_{n-1})[x_n]$, this means that all entries in each of the last $r$ rows of the transformed matrix are divisible by $x_n$, so the determinant is divisible by $x_n^r$.
I suppose a similar argument would work for an arbitrary irreducible polynomial where you should localise outside the ideal generated by that polynomial instead of looking at $\mathbb{F}_q(x_1,\ldots,x_{n-1})[x_n]$. | 2019-11-13 13:23:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9710052013397217, "perplexity": 93.04958747009303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667260.46/warc/CC-MAIN-20191113113242-20191113141242-00515.warc.gz"} |
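A minimal sanity check of the claim (an added example, not part of the thread): take $n=2$, $m=1$, $g=x_1$ (the case $c=0$), and
$$A=\begin{pmatrix} x_1 & 0 \\ 0 & x_1 \end{pmatrix}.$$
Setting $x_1=0$ gives the zero matrix, so the rank drops to $0=n-2$, i.e. $d=2$; and indeed $determinant(A)=x_1^2$ is divisible by $x_1^2$, in line with the lifting argument above.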
http://www.chegg.com/homework-help/definitions/approximation-29 | # Definition of Approximation
Approximation is a method used to numerically evaluate a function using a series of sums based on the function. Approximation can be used to find solutions to ordinary differential equations. Approximating series can be convergent (the limit of the partial sums has a finite value) or divergent (the partial sums grow without bound toward ±∞). The two primary considerations when approximating a function as a series are whether it converges and where. In general, for two approximations $a$ and $b$ whose series are convergent, the following statements are true: $\sum c\,a = c \sum a$, where $c$ is a constant that can be factored out, and $\sum (a \pm b) = \sum a \pm \sum b$; that is, the sum or difference of the series of $a$ and the series of $b$ equals the series of the sum or difference of $a$ and $b$.
https://math.stackexchange.com/questions/2277018/suppose-x-0-prove-that-frac-sqrtxx1-leq-frac12 | # Suppose $x > 0$. Prove that $\frac{\sqrt{x}}{x+1} \leq \frac{1}{2}$. [duplicate]
Suppose $x > 0$. Prove that $$\frac{\sqrt{x}}{x+1} \leq \frac{1}{2}$$
Hey everyone, here is a simple proof question, but I had a hard time getting started on this proof. Please give me some ideas for how to deal with it. Thanks.
• Are you familiar with the concept of the derivative in calculus? – Franklin Pezzuti Dyer May 11 '17 at 20:40
• Given $x>0$ one has $\frac{\sqrt{x}}{x+1}\leq \frac{1}{2}$ if and only if $2\sqrt{x}\leq x+1$, if and only if $4x \leq x^2+2x+1$, if and only if ... – JMoravitz May 11 '17 at 20:40
If $x>0$ then
$$\frac{\sqrt{x}}{x+1} \leq \frac{1}{2}\Leftrightarrow 2\sqrt{x}\le x+1\Leftrightarrow x-2\sqrt{x}+1\ge 0\Leftrightarrow (\sqrt{x}-1)^2\ge 0$$
Start by assuming, for contradiction, that there is some $x>0$ for which $$\frac{\sqrt x}{x+1} \gt \frac{1}{2}$$ so that for some other positive number $b$, $$\frac{\sqrt x}{x+1} = \frac{1}{2}+b$$ Then we can proceed using algebra to elicit a contradiction: $${\sqrt x} = (\frac{1}{2}+b)(x+1)$$ $$x = (\frac{1}{2}+b)^2(x+1)^2$$ Let us set the quantity $(\frac{1}{2}+b)^2$ equal to $c$. Then $c$ is a positive number that is greater than $\frac{1}{4}$, and $$x = c(x+1)^2$$ $$x = c(x^2+2x+1)$$ $$cx^2+(2c-1)x+c=0$$ Now we use the quadratic formula: $$x=\frac{1-2c\pm \sqrt{(2c-1)^2-4c^2}}{2c}$$ $$x=\frac{1-2c\pm \sqrt{4c^2-4c+1-4c^2}}{2c}$$ $$x=\frac{1-2c\pm \sqrt{1-4c}}{2c}$$ However, since $c$ is greater than $\frac{1}{4}$, $1-4c$ is negative and $\sqrt{1-4c}$ is imaginary, showing that there are no possible values of $x$ satisfying this.
For $x=1$, $\frac{x}{(x+1)^2}=\frac{1}{4}$. (Note that $\frac{x}{(x+1)^2}$ is the square of $\frac{\sqrt{x}}{x+1}$, so bounding it by $\frac{1}{4}$ proves the claim.)
For $x\geq1$, the denominator of $\frac{x}{(x+1)^2}$ increases faster than the numerator so the fraction is decreasing. So for $x\geq1$, $\frac{x}{(x+1)^2}\leq\frac{1}{4}$.
For $0< x\leq1$, write $x=\frac{1}{X}$ for some $X\geq1$. Then $\frac{x}{(x+1)^2}=\frac{1/X}{(1/X+1)^2}=\frac{X}{(X+1)^2}$, and by the previous case this is at most $\frac{1}{4}$. So for $0< x\leq1$ we also have $\frac{x}{(x+1)^2}\leq\frac{1}{4}$, and this exhausts all cases.
Assume $x > 0$. The claim is equivalent to $$2x^{1/2} \le x+1$$ Since both sides are positive, squaring each side is reversible:
$$4x \le x^2 + 2x + 1$$
Subtract $4x$ from each side:$$0 \le x^2 - 2x + 1$$
Factor: $$0 \le (x-1)^2$$ Since any real number squared is non-negative, this is a true statement, and because every step is reversible, the proof is done.
It is equivalent to prove that
$$\frac {x+1}{\sqrt {x}}\geq 2$$
or
$$\sqrt {x}+\frac {1}{\sqrt {x}}\geq 2$$
or if $t=\sqrt {x}$, $$f (t)=t+\frac {1}{t}\geq 2.$$
Now $f'(t)=1-\frac{1}{t^2}$.
Its minimum on $t>0$ is attained at $t=1$.
Thus, $$f(t)\geq f(1)=2.$$
https://projecteuclid.org/euclid.aoap/1186755241 | ## The Annals of Applied Probability
### Dynamic importance sampling for queueing networks
#### Abstract
Importance sampling is a technique that is commonly used to speed up Monte Carlo simulation of rare events. However, little is known regarding the design of efficient importance sampling algorithms in the context of queueing networks. The standard approach, which simulates the system using an a priori fixed change of measure suggested by large deviation analysis, has been shown to fail in even the simplest network setting (e.g., a two-node tandem network).
Exploiting connections between importance sampling, differential games, and classical subsolutions of the corresponding Isaacs equation, we show how to design and analyze simple and efficient dynamic importance sampling schemes for general classes of networks. The models used to illustrate the approach include d-node tandem Jackson networks and a two-node network with feedback, and the rare events studied are those of large queueing backlogs, including total population overflow and the overflow of individual buffers.
#### Article information
Source
Ann. Appl. Probab., Volume 17, Number 4 (2007), 1306-1346.
Dates
First available in Project Euclid: 10 August 2007
Digital Object Identifier
doi:10.1214/105051607000000122
Mathematical Reviews number (MathSciNet)
MR2344308
Zentralblatt MATH identifier
1144.60022
#### Citation
Dupuis, Paul; Sezer, Ali Devin; Wang, Hui. Dynamic importance sampling for queueing networks. Ann. Appl. Probab. 17 (2007), no. 4, 1306--1346. doi:10.1214/105051607000000122. https://projecteuclid.org/euclid.aoap/1186755241
http://gmatclub.com/forum/a-group-of-8-friends-want-to-play-doubles-tennis-how-many-65641.html?fl=similar | Find all School-related info fast with the new School-Specific MBA Forum
It is currently 29 Nov 2015, 11:06
GMAT Club Daily Prep
Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
Events & Promotions
Events & Promotions in June
Open Detailed Calendar
A group of 8 friends want to play doubles tennis. How many different ways can the group be divided into 4 teams of 2 people?
A group of 8 friends want to play doubles tennis. How many different ways can the group be divided into 4 teams of 2 people?
I read the C(8,2) * C(6,2) * C(4,2) * C(2,2)/4! approach
What if we modify the question in the following way:
Total friends: 8
We need: 3 teams of 2, 2 and 4
What would the approach be in this case?
Re: PS: teams - 17 Jun 2008, 12:25
Quote:
Total friends: 8
We need: 3 teams of 2, 2 and 4
What would the approach be in this case?
C(8,2) * C(6,2) * C(4,4) / 3!
Why would it be incorrect?
The first team is a combination of 2 things taken from 8, or C(8,2); then the next is a team of 2 taken from 6 available choices, C(6,2); and the final team is a team of 4 taken from the 4 remaining choices, C(4,4). We have a total of 3 teams, and we do not want to treat the order of the teams differently, so we divide by 3!. Dividing by 3! removes the number of ways the teams can be ordered, because otherwise we would count the same teams in a different order as a different combination, which is not what the question asks for.
Re: PS: teams - 17 Jun 2008, 18:48
jallenmorris wrote: (explanation quoted above)
Good explanation Jallenmorris..

Re: PS: teams - 18 Jun 2008, 06:25
C(8,2) * C(6,2) * C(4,4) / 3!
For the explanation to the answer see my post above.
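A brute-force enumeration makes both counts easy to check; the Python sketch below is an addition, not part of the thread. Note that the symmetry divisor only covers interchangeable teams of the same size: for four 2-person teams the divisor is 4!, but for team sizes 2, 2, and 4 only the two 2-person teams can be swapped, so direct enumeration gives C(8,2)*C(6,2)*C(4,4)/2! = 210 rather than the /3! suggested above.

```python
# Brute-force check of both team-splitting counts via direct enumeration.
from itertools import combinations

people = range(8)

# Four teams of 2: each unordered partition stored as a set of frozensets.
quads = set()
for t1 in combinations(people, 2):
    r1 = [p for p in people if p not in t1]
    for t2 in combinations(r1, 2):
        r2 = [p for p in r1 if p not in t2]
        for t3 in combinations(r2, 2):
            t4 = tuple(p for p in r2 if p not in t3)
            quads.add(frozenset(map(frozenset, (t1, t2, t3, t4))))
print(len(quads))   # 105 = C(8,2)*C(6,2)*C(4,2)*C(2,2) / 4!

# Teams of sizes 2, 2 and 4.
splits = set()
for t1 in combinations(people, 2):
    r1 = [p for p in people if p not in t1]
    for t2 in combinations(r1, 2):
        t4 = tuple(p for p in r1 if p not in t2)
        splits.add(frozenset(map(frozenset, (t1, t2, t4))))
print(len(splits))  # 210 = C(8,2)*C(6,2)*C(4,4) / 2!
```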
Re: PS: teams - 18 Jun 2008, 20:07
jallenmorris wrote:
C(8,2) * C(6,2) * C(4,4) / 3!
For the explanation to the answer see my post above.
I see the explanation but no answer.
Factorials were someone's attempt to make math look exciting!!!
Re: PS: teams - 26 Jun 2008, 05:21
I don't see why it shouldn't be a simple 8C2.
I think you are all wrong.
Re: PS: teams - 26 Jun 2008, 05:41
First of all, which question are you referring to? There was the original question and then a subsequent, modified question. The second question was 3 teams: first = 2 people, second = 2 people, and third = 4 people (a total of 8 people).
Can you tell us which question you're talking about so we can explain ourselves?
rino wrote:
I don't see why it shouldn't be a simple 8C2.
I think you are all wrong.
Re: PS: teams - 26 Jun 2008, 06:25
Jallenmorris, u rock..!...
nice explanation man...!!...
https://www.gamedev.net/forums/topic/190627-newbie-problem-that-i-just-cant-solve/
# Newbie problem that I just can't solve
As part of a project in a computer class I am taking (EAST Lab), I am programming a math calculation program to do useful mathematical calculations for a teacher or student. This is not the final version, just a DOS-console prototype. However, I have run into a problem I've never come across before. I am relatively new to coding, and although I know a bit about it, I'm still failing pretty badly in a few spots, apparently. The problem I'm having comes in the averaging function. I can average integers, but it will not give me a decimal, so I tried changing the ints to floats instead. This is really just a shot in the dark... I've never done this before. Anyway, I seem to be having a very ugly problem that gives me 4 errors every time I try to compile. Here's the function:

    int AverageNums()
    {
        float fNumstoAvg = 0.0f;
        float i = 1.0f;
        system("cls");
        cout << "Average -- Average some numbers.";
        cout << "\n\nHow many numbers would you like to average? ";
        cin >> fNumstoAvg;
        if (fNumstoAvg > 255.0f)
        {
            cout << "\nYou cannot average any more numbers than 255.\n\n";
            system("cls");
            return 0;
        }
        cout << "\n\n";
        float fNumsIAvg[255];
        for (i = 1.0f; i <= fNumstoAvg; i++)
        {
            cout << "What is number " << i << "? ";
            cin >> fNumsIAvg[i];
        } // end for
        float fSum = 0.0f;
        for (i = 1.0f; i <= fNumstoAvg; i++)
        {
            fSum = fSum + fNumsIAvg[i];
        }
        float fAveraged = 0.0f;
        fAveraged = fSum / fNumstoAvg;
        cout << "\n\n";
        cout << "Averaged together, the total is " << fAveraged << ".\n\n";
        system("pause");
        system("cls");
        return 0;
    }

I am aware of a couple of stupid mess-ups, such as having an int function return 0 for no apparent reason, when I could just make it a void function and have it simply return. However, the problem seems to be in the addition for-loop, which contains "fSum = fSum + fNumsIAvg[i];". Also, a problem seems to exist in the "cin >> fNumsIAvg[i];" line... the first two errors relate to this line, and the last two relate to the line in the above paragraph. The errors are:

1. error C2108: subscript is not of integral type
2. error C2679: binary '>>' : no operator defined which takes a right-hand operand of type 'float *' (or there is no acceptable conversion)
3. error C2108: subscript is not of integral type
4. error C2111: pointer addition requires integral operand

Anyone have any ideas what my lacking mind is missing?
Quote:
error C2108: subscript is not of integral type
If you're going to use i as a subscript (array) it can't be a float... or you'll have to cast it to an int (array[(int)i]). I don't recommend that.
quote:
error C2679: binary '>>' : no operator defined which takes a right-hand operand of type 'float *' (or there is no acceptable conversion)
you can't use cin with a float... sorry
I'd suggest going back to integers... you only really need to make fAveraged a float... you could also make fSum a float so you don't have to worry about integer division
You cannot have a float in a for loop. It must be an integer. Change i to an int.
Most \$19.99 calculators will do all of that already.
well, AverageNum() returns int ...
quote:
Original post by Anonymous Poster
You cannot have a float in a for loop.
Of course you can.
    for (float loop = 9.99f; loop > 0.01f; loop -= 0.01f)
    {
        // bla bla
    }
quote:
Original post by TempusElf
quote:
error C2679: binary '>>' : no operator defined which takes a right-hand operand of type 'float *' (or there is no acceptable conversion)
you can't use cin with a float... sorry
Nonsense. Of course you can use cin with a float. Not, however, with a float pointer ...
Almost forgot, this should work:
    int AverageNums()
    {
        int fNumstoAvg = 0;
        int i = 0;                          // integral loop index, so it can subscript the array
        system("cls");
        cout << "Average -- Average some numbers.";
        cout << "\n\nHow many numbers would you like to average? ";
        cin >> fNumstoAvg;
        if (fNumstoAvg > 255)
        {
            cout << "\nYou cannot average any more numbers than 255.\n\n";
            system("cls");
            return 0;
        }
        cout << "\n\n";
        float fNumsIAvg[255];
        for (i = 0; i < fNumstoAvg; i++)    // 0-based indexing stays within the array bounds
        {
            cout << "What is number " << i << "? ";
            cin >> fNumsIAvg[i];            // reading into a single float element is fine
        } // end for
        float fSum = 0.0f;
        for (i = 0; i < fNumstoAvg; i++)
        {
            fSum = fSum + fNumsIAvg[i];
        }
        float fAveraged = 0.0f;
        fAveraged = fSum / fNumstoAvg;      // float / int promotes to float division
        cout << "\n\n";
        cout << "Averaged together, the total is " << fAveraged << ".\n\n";
        system("pause");
        system("cls");
        return 0;
    }
Here's a revamped version because I'm bored: (also, use [ source ] tags for big code snippets)
    void average()
    {
        long total = 0;
        int numberOfNumbers = 0;
        cout << "How many numbers would you like to average? ";
        cin >> numberOfNumbers;
        for (int i = 0; i < numberOfNumbers; i++)
        {
            cout << "Enter number " << i << ": ";
            int temp = 0;
            cin >> temp;
            total += temp;                              // accumulate in a long
        }
        float result = ((float)total / (float)numberOfNumbers);  // cast for float division
        cout << "result: " << result;
    }
https://www.semanticscholar.org/paper/Ordinal-utility-models-of-decision-making-under-Manski/69c856d34aaa35980ebfdbeb116a00f37ba70cb0 | # Ordinal utility models of decision making under uncertainty
@article{Manski1988OrdinalUM,
title={Ordinal utility models of decision making under uncertainty},
author={Charles F. Manski},
journal={Theory and Decision},
year={1988},
volume={25},
pages={79-104}
}
• C. Manski
• Published 1 July 1988
• Economics
• Theory and Decision
This paper studies two models of rational behavior under uncertainty whose predictions are invariant under ordinal transformations of utility. The 'quantile utility' model assumes that the agent maximizes some quantile of the distribution of utility. The 'utility mass' model assumes maximization of the probability of obtaining an outcome whose utility is higher than some fixed critical value. Both models satisfy weak stochastic dominance. Lexicographic refinements satisfy strong dominance. The…
90 Citations
### Quantile Maximization in Decision Theory
This paper introduces a model of preferences, in which, given beliefs about uncertain outcomes, an individual evaluates an action by a quantile of the induced distribution. The choice rule of
### Dynamic Quantile Models of Rational Behavior
• Economics
Econometrica
• 2019
This paper develops a dynamic model of rational behavior under uncertainty, in which the agent maximizes the stream of future τ‐quantile utilities, for τ ∈ (0,1). That is, the agent has a
### Do people maximize quantiles?
• Economics
Games Econ. Behav.
• 2022
Payoff quantiles have been used for decision making in banking and investment (in the form of Value-at-Risk) and in the mining, oil and gas industries (in the form of "probabilities of exceeding" a
### Decision analysis using targets instead of utility functions
• Economics
• 2000
Abstract.A common precept of decision analysis under uncertainty is the choice of an action which maximizes the expected value of a utility function. Savage's (1954) axioms for subjective expected
### Partial Prescriptions for Decisions with Partial Knowledge
This paper concerns the prescriptive function of decision analysis. I suppose that an agent must choose an action yielding welfare that varies with the state of nature. The agent has a welfare
### Robust ordinal regression for decision under risk and uncertainty
• Economics
• 2016
We apply the Robust Ordinal Regression (ROR) approach to decision under risk and uncertainty. ROR is a methodology proposed within multiple criteria decision aiding (MCDA) with the aim of taking into
### Static and dynamic quantile preferences
• Economics
Economic Theory
• 2021
This paper axiomatizes static and dynamic quantile preferences. Static quantile preferences specify that a prospect should be preferred if it has a higher $\tau$-quantile, for some $\tau \in (0,1)$.
### Portfolio Selection in Quantile Decision Models
• Economics
SSRN Electronic Journal
• 2019
This paper develops a model for optimal portfolio allocation for an investor with quantile preferences. The investor chooses optimal allocation of weights to maximize the τ-quantile of the utility of
https://www.physicsforums.com/threads/moved-to-homework-area-sorry-kiyoshi7.707625/ | # Moved to homework area sorry.-kiyoshi7
1. Aug 28, 2013
### kiyoshi7
Moved to homework area sorry.
-kiyoshi7
[Attached files: two images of the problem (32.1 KB and 14.8 KB, the second named 002.jpg), not reproduced here.]
2. Aug 29, 2013
### tiny-tim
similarity
3. Aug 29, 2013
### symbolipoint
Study triangle similarity and supplementary angles from the standard high school geometry course. The triangle problem with the single square relies on supplementary angles where the upper right and left vertices of the square meet the two sides of the larger triangle. You can conclude that all three of the smaller triangles are similar, and so the ratios of their corresponding sides are equal. A proportion can be set up for the two smaller left- and right-hand triangles; note too that all three of the smaller triangles are RIGHT triangles.
https://www.atmos-meas-tech.net/12/5039/2019/ | Journal topic
Atmos. Meas. Tech., 12, 5039–5054, 2019
https://doi.org/10.5194/amt-12-5039-2019
Atmos. Meas. Tech., 12, 5039–5054, 2019
https://doi.org/10.5194/amt-12-5039-2019
Research article 19 Sep 2019
Research article | 19 Sep 2019
# Use of spectral cloud emissivities and their related uncertainties to infer ice cloud boundaries: methodology and assessment using CALIPSO cloud products
Hye-Sil Kim1, Bryan A. Baum2, and Yong-Sang Choi1
• 1Department of Climate and Energy Systems Engineering, Ewha Womans University, Seoul, Korea
• 2Science and Technology Corporation, Madison, Wisconsin, USA
Correspondence: Yong-Sang Choi (ysc@ewha.ac.kr)
Abstract
Satellite-imager-based operational cloud property retrievals generally assume that a cloudy pixel can be treated as being plane-parallel with horizontally homogeneous properties. This assumption can lead to high uncertainties in cloud heights, particularly for the case of optically thin, but geometrically thick, clouds composed of ice particles. This study demonstrates that ice cloud emissivity uncertainties can be used to provide a reasonable range of ice cloud layer boundaries, i.e., the minimum to maximum heights. Here ice cloud emissivity uncertainties are obtained for three IR channels centered at 11, 12, and 13.3 µm. The range of cloud emissivities is used to infer a range of ice cloud temperature and heights, rather than a single value per pixel as provided by operational cloud retrievals. Our methodology is tested using MODIS observations over the western North Pacific Ocean during August 2015. We estimate minimum–maximum heights for three cloud regimes, i.e., single-layered optically thin ice clouds, single-layered optically thick ice clouds, and multilayered clouds. Our results are assessed through comparison with CALIOP version 4 cloud products for a total of 11873 pixels. The cloud boundary heights for single-layered optically thin clouds show good agreement with those from CALIOP; biases for maximum (minimum) heights versus the cloud-top (base) heights of CALIOP are 0.13 km (−1.01 km). For optically thick and multilayered clouds, the biases of the estimated cloud heights from the cloud top or cloud base become larger (0.30/−1.71 km, 1.41/−4.64 km). The vertically resolved boundaries for ice clouds can contribute new information for data assimilation efforts for weather prediction and radiation budget studies. Our method is applicable to measurements provided by most geostationary weather satellites including the GK-2A advanced multichannel infrared imager.
1 Introduction
Satellite sensors provide data daily that are essential for determining global cloud properties, including cloud height–pressure–temperature, thermodynamic phase (ice or liquid water), cloud optical thickness, and effective particle size. These variables are essential for understanding the net radiation of the Earth and the impact of clouds (L'Ecuyer et al., 2019). In particular, cloud heights at the top and base levels are necessary to determine upwelling and downwelling infrared (IR) radiation (Slingo and Slingo, 1988; Baker, 1997; Harrop and Hartmann; 2012). Additionally, cloud heights are used to derive atmospheric motion vectors that are important for most global data-assimilation systems (Bouttier and Kelly, 2001), affecting the accuracy of the global model forecast (Lee and Song, 2018). However, in most operational retrievals of cloud properties, only a single cloud height is inferred for a given pixel, or field of view. The goal of this study is to develop an algorithm to infer cloud height boundaries for semitransparent ice clouds using only IR measurements for its applicability of global data regardless of solar illumination. Where this study could provide the most benefit is for the case where an ice cloud is geometrically thick but optically thin.
Although our approach will be applied to geostationary satellites in future work, the algorithm is developed for the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor for two reasons: (1) our resulting cloud temperatures can be compared to those from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) active lidar version 4 products for verification, and (2) further comparison can be made to the MODIS Collection 6 cloud products. The approach adopted in our study for the inference of ice cloud height has a basis in the work of Inoue (1985), who developed this approach using only the split-window channels on the Advanced Very High Resolution Radiometer (AVHRR). The goal of the Inoue (1985) approach was to improve the inference of cloud temperatures for semitransparent ice clouds. Heidinger and Pavolonis (2009) further improved this approach and generated a 25-year climatology of ice cloud properties from AVHRR analysis.
For satellite-based cloud height retrievals based on passive IR measurements, the radiative emission level is regarded as the cloud top. When the emissivity is 1, the cloud is emitting as a blackbody and the cloud top is at, or close to, the actual cloud's upper boundary. As the emissivity decreases, the cloud top inferred from IR measurements will be lower than the actual cloud-top level. This is demonstrated in Holz et al. (2006), who compared the cloud tops from aircraft Scanning High-Resolution Interferometer Sounder (S-HIS) measurements to those from coincident measurements from the Cloud Physics Lidar (CPL). They found that the best match between the cloud tops based on the passive S-HIS measurements and the CPL occurs when the integrated cloud optical thickness is approximately 1. This implies that the differences of cloud-top heights by IR measurements from those by CALIOP are expected since the IR method reports the height where the integrated cloud optical thickness, beginning at cloud top and moving downwards into the cloud, is approximately 1 while CALIOP reports the actual cloud top to be where the first particles are encountered.
With regard to geometric differences of IR cloud tops from the actual cloud tops, optically thin but geometrically thick clouds show the largest bias since the level at which the integrated optical thickness reaches 1 is much lower than the height at which the first ice particles occur. In a review of 10 different satellite retrieval methods for cloud-top heights by IR measurements (Hamann et al., 2014), the heights inferred for optically thin clouds are generally below the cloud's mid-level height. When lower-level clouds are present below the cirrus in a vertical column, the inferred cloud height can be between the cloud layers, depending on the optical thickness of the uppermost layer.
There is a retrieval approach to infer optically thin cloud-top pressure that uses multiple IR absorption bands within the 15 µm CO2 band (e.g., Menzel et al., 2008; Baum et al., 2012), called the CO2 slicing method. These 15 µm CO2 band channels are available on the Terra–Aqua MODIS imagers, the HIRS sounders, and with any hyperspectral IR sounder (IASI, CrIS, AIRS). MODIS is the only imager where multiple 15 µm CO2 channels are available. Zhang and Menzel (2002) showed improvement of the retrieval of ice cloud height when they take into account spectral cloud emissivity that has some sensitivity to the cloud microphysics. As the goal of our work is to develop a reliable method for inferring ice cloud height from geostationary data, we are limiting this study to the use of the relevant IR channels, i.e., measurements at 11, 12, and 13.3 µm.
To complement the use of IR window channels, the addition of a single IR absorption channel, such as one within the broad 15 µm CO2 band, has been shown to improve the inference of cirrus cloud temperature (Heidinger et al., 2010). Their study shows how adding a single IR absorption channel at 13.3 µm to the IR 11 and 12 µm window channels decreases the solution space in an optimal estimation retrieval approach and leads to closer comparisons in cloud height–temperature with CALIPSO CALIOP cloud products.
Rather than inferring a single ice cloud temperature in each pixel, we infer a range of ice cloud temperatures (minimum to maximum temperature per ice cloud pixel) that correspond to uncertainties in the cloud spectral emissivity. We note that the spectral cloud emissivity, which can be obtained using measurements at 11, 12, and 13.3 µm, has some dependence on the ice cloud microphysics. The emissivities are subsequently used to estimate ranges of cloud height, which are found by converting the estimated cloud temperature ranges using a simple linear interpolation of the Numerical Weather Prediction (NWP) model profiles. Cloud boundary results are presented for three cloud categories, i.e., single-layered optically thin ice clouds, single-layered optically thick ice clouds, and multilayered clouds, and these results are assessed with measurements from a month of collocated CALIOP version 4 data. The focus area for the data analysis and resulting analyses is the western North Pacific Ocean for the month of August 2015.
The paper is organized as follows. Section 2 describes the data used in this study. Section 3 presents the methodology and the generation of the relevant look-up tables (LUTs) for the radiances and brightness temperatures used in our analyses. Section 4 provides results for the western North Pacific Ocean during August 2015 and comparisons with CALIOP. Section 5 discusses the results and Sect. 6 summarizes this paper.
2 Data
## 2.1 Study domain
The study domain is the western North Pacific Ocean (0–30° N, 120–170° E) during August of each year from 2013 to 2015. Two of these months (August 2013 and August 2014) are used for generating the LUTs, while August 2015 is used for testing and validating the current algorithm. The study domain is restricted in order to obtain a clear relationship between radiances–brightness temperatures and spectral cloud emissivity. In the western North Pacific Ocean, ice clouds are generated under diverse meteorological conditions, including frequent typhoons.
Table 1. The detailed information used to generate empirical look-up tables (LUTs) of min–max($e_c$) and min–max($\Delta e_c$). MODIS bands 31, 32, and 33 have spectral wavelength ranges of 10.78–11.28, 11.77–12.27, and 13.185–13.485 µm, respectively.
## 2.2 Aqua Moderate Resolution Imaging Spectroradiometer (MODIS)
The MODIS is a 36-channel whisk-broom scanning radiometer on the NASA Earth Observing System Terra and Aqua platforms. The Aqua platform is in a daytime ascending orbit at 13:30 LST. The MODIS sensor has four focal planes that cover the spectral range 0.42–14.24 µm. The longwave bands are calibrated with an onboard blackbody. Table 1 shows the Aqua MODIS products used in this study; these products include the Collection 6 1 km Level-1b radiance data (MYD021KM), geolocation data (MYD03), and cloud properties at 1 km resolution (MYD06). In this study, the radiances and brightness temperatures at 11, 12, and 13.3 µm (channels 31, 32, and 33, respectively) are taken from the C6 MYD021KM data. Latitude–longitude information for each granule is from C6 MYD03. The C6 MYD06 product provides cloud emissivity values in the IR window (8.5, 11, and 12 µm) and also cloud-top height (CTH), all at 1 km spatial resolution; these parameters were not included in earlier collections (Menzel et al., 2008; Baum et al., 2012). The cloud emissivities at 11 and 12 µm are used in this study.
## 2.3 CALIPSO CALIOP
The CALIPSO satellite platform carries several instruments, among which is a near-nadir-viewing lidar called CALIOP (Winker et al., 2007, 2009). CALIPSO flew in formation with NASA's Earth Observing System Aqua platform beginning in 2006 and was part of the A-Train suite of sensors. At the time of this writing, it is no longer part of the A-Train but flies in formation with CloudSat in a lower orbit. CALIOP takes data at 532 and 1064 nm. The CALIOP 532 nm channel also measures the linear polarization state of the lidar returns. The depolarization ratio contains information about aerosol and cloud properties. This study uses CALIPSO version 4 products that were released in November 2016. With the updated radiometric calibration at 532 and 1064 nm (Getzewich et al., 2018; Vaughan et al., 2019), cloud products such as cloud–aerosol discrimination and extinction coefficients show significant improvement relative to previous versions (Young et al., 2018; Liu et al., 2019). CALIPSO products are used to validate our retrievals, including CAL_LID_L2_VFM-Standard-V4, which provides cloud vertical features; CAL_LID_L2_05kmCPro-Standard-V4 and CAL_LID_L2_05kmCLay-Standard-V4, which provide cloud-top and cloud-base temperature (height); extinction coefficients; and temperature profiles (Table 1).
## 2.4 Numerical weather model product
The Global Forecast System (GFS) model is produced by the National Centers for Environmental Prediction (NCEP) of the National Oceanic and Atmospheric Administration (NOAA) (Moorthi et al., 2001). GFS provides global NWP model outputs at 0.5° resolution at 3 h forecast intervals every 6 h; these are available online (https://www.ncdc.noaa.gov/data-access/model-data/model-datasets/global-forcast-system-gfs, last access: 31 March 2019). We use two variables from the NWP products, temperature profiles and geopotential heights, provided on 26 isobaric levels; these are used for the conversion of cloud temperatures to cloud heights. The NWP fields are remapped to the resolution of the satellite imagery by linear interpolation. We use the NWP products that are closest in time to the satellite observations.
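As an illustration of this remapping step, the following minimal Python sketch bilinearly interpolates a 2-D GFS field (e.g., temperature on one isobaric level) to satellite pixel locations. All function and variable names here are ours for illustration, not from any operational package.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def remap_nwp_to_pixels(nwp_lat, nwp_lon, nwp_field, pix_lat, pix_lon):
    """Bilinearly interpolate a 2-D GFS field on a regular 0.5 deg grid
    to satellite pixel locations of arbitrary shape."""
    interp = RegularGridInterpolator(
        (nwp_lat, nwp_lon), nwp_field,
        method="linear", bounds_error=False, fill_value=np.nan)
    pts = np.column_stack([np.ravel(pix_lat), np.ravel(pix_lon)])
    return interp(pts).reshape(np.shape(pix_lat))
```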
Figure 1 The estimated clear-sky radiance map at $0.1^{\circ}\times0.1^{\circ}$ resolution for (a) 11 µm (Iclr|11) and (b) the differences of 11 µm from 12 µm ($I_{\mathrm{clr}|11}-I_{\mathrm{clr}|12}$) in units of W m−2 µm−1 sr−1. Iclr|11 and Iclr|12 are the maximum values among MODIS C6 radiances for August in three years (2013–2015) in each $0.1^{\circ}\times0.1^{\circ}$ grid box. Green-shaded contours over the map show land, which is mainly the Philippines.
## 2.5 Clear-sky maps generated from MODIS
The MODIS pixels identified as being clear sky are used to generate a gridded clear-sky map, which is another ancillary product required for our method. To simplify the generation of this map, the MODIS data with 1 km resolution are converted to 5 km resolution. Monthly composites of clear-sky radiances (Iclr) at $0.1^{\circ}\times0.1^{\circ}$ resolution are generated by choosing the maximum value among radiances for August in 3 years (2013–2015) in each $0.1^{\circ}\times0.1^{\circ}$ grid box. To confirm the reliability of the generated Iclr, we present the spatial distribution of Iclr at 11 µm (Iclr|11, Fig. 1a), which ranges from 8 to 11 W m−2 µm−1 sr−1. The largest Iclr|11 values occur over the northwestern region of the domain, whereas the smallest Iclr|11 values occur over the southeastern region. The pattern of Iclr|11 is similar to the spatial distribution of the monthly average sea surface temperature in 2015 (https://bobtisdale.wordpress.com/2015/09/08/august-2015-sea-surface-temperature-sst-anomaly-update/, last access: 31 March 2019). We also show the spatial distribution of the differences between Iclr|11 and Iclr|12 in Fig. 1b to examine the reliability of the generated Iclr|12. Note that the differences between Iclr|11 and Iclr|12 are positive over the domain because water vapor absorption is stronger at 12 µm than at 11 µm. Large differences occur in the western region, near the Philippines (green-colored contours in Fig. 1).
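A minimal sketch of how such a monthly maximum-radiance composite could be accumulated granule by granule follows; the grid origin, the names, and the clear-sky mask input are our assumptions for illustration.

```python
import numpy as np

def update_clear_sky_map(iclr_map, lat, lon, radiance, clear_mask,
                         lat0=0.0, lon0=120.0, res=0.1):
    """Keep the maximum clear-sky radiance seen so far in each
    0.1 deg x 0.1 deg grid box of the 0-30 N, 120-170 E domain."""
    ii = ((lat[clear_mask] - lat0) / res).astype(int)   # row index from latitude
    jj = ((lon[clear_mask] - lon0) / res).astype(int)   # column index from longitude
    vals = radiance[clear_mask]
    inside = (ii >= 0) & (ii < iclr_map.shape[0]) & \
             (jj >= 0) & (jj < iclr_map.shape[1])
    # Unbuffered in-place maximum so repeated hits in one box are handled.
    np.maximum.at(iclr_map, (ii[inside], jj[inside]), vals[inside])
    return iclr_map

# Usage: start from a map of -inf and update it with every August granule.
iclr_11 = np.full((300, 500), -np.inf)
```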
Figure 2 The conceptual model for (a) a plane-parallel homogeneous cloud layer with no scattering, characterized by the cloud emissivity (ec) and the cloud emissivity difference between two infrared channels (Δec) at the cloud temperature (Tc), and (b) a number of plane-parallel homogeneous cloud layers (the striped box) with a possible range of ec and Δec, such as $e_{\mathrm{c}}=[e_{\mathrm{c}}^{1},e_{\mathrm{c}}^{2},\dots,e_{\mathrm{c}}^{n}]$ and $\Delta e_{\mathrm{c}}=[\Delta e_{\mathrm{c}}^{1},\Delta e_{\mathrm{c}}^{2},\dots,\Delta e_{\mathrm{c}}^{n}]$, corresponding to a possible range of cloud temperature, $T_{\mathrm{c}}=[T_{\mathrm{c}}^{1},T_{\mathrm{c}}^{2},\dots,T_{\mathrm{c}}^{n}]$, where Iclr and B are the clear-sky radiance and the Planck function, respectively. Arrows represent upwelling radiances.
3 Methodology
## 3.1 Cloud retrieval algorithm
The basis for the retrieval algorithm is provided in Inoue (1985). Figure 2a shows the plane-parallel homogeneous cloud model with no scattering. The ice cloud layer at a given height has a corresponding ice cloud temperature (Tc) and an associated cloud emissivity (ec). The observed upwelling radiance (Iobs) at the cloud top is composed of two terms: the first depending on the upwelling clear-sky radiance (Iclr) at the cloud base and the other depending on the radiance (B(Tc)) computed for a cloud emitting as a blackbody:
$I_{\mathrm{obs}} = (1 - e_{\mathrm{c}})\,I_{\mathrm{clr}} + e_{\mathrm{c}}\,B(T_{\mathrm{c}}), \qquad (1)$
where B(Tc) is the Planck emission for a cloud computed at Tc (Liou, 2002). All terms in Eq. (1) are wavelength dependent except for the Tc. Iobs is determined from the satellite measurements, and Iclr can be found from clear-sky conditions in the imagery or computed by a radiative transfer model given a set of atmospheric profiles of temperature, humidity, and trace gases. However, ec and Tc are unknown.
Equation (1) can be rearranged to solve for the emissivity:
$e_{\mathrm{c}} = (I_{\mathrm{obs}} - I_{\mathrm{clr}})\,/\,(B(T_{\mathrm{c}}) - I_{\mathrm{clr}}). \qquad (2)$
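To make Eqs. (1) and (2) concrete, here is a minimal Python sketch of the Planck radiance and the emissivity solution; the unit convention (radiance in W m−2 µm−1 sr−1) follows the paper, and the function names are ours.

```python
import numpy as np

H = 6.626e-34    # Planck constant [J s]
C = 2.998e8      # speed of light [m s-1]
KB = 1.381e-23   # Boltzmann constant [J K-1]

def planck_radiance(wavelength_um, temp_k):
    """Planck spectral radiance B(T) in W m-2 um-1 sr-1."""
    lam = wavelength_um * 1e-6  # micrometres to metres
    b = 2.0 * H * C**2 / (lam**5 * (np.exp(H * C / (lam * KB * temp_k)) - 1.0))
    return b * 1e-6             # per metre -> per micrometre

def cloud_emissivity(i_obs, i_clr, t_c, wavelength_um):
    """Eq. (2): e_c = (I_obs - I_clr) / (B(T_c) - I_clr)."""
    return (i_obs - i_clr) / (planck_radiance(wavelength_um, t_c) - i_clr)
```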
One can relate two channels by taking a ratio of the radiances, similar to the CO2 slicing method (e.g., Menzel et al., 2008), under the assumption that the emissivities of two channels spaced closely in wavelength are the same. However, Zhang and Menzel (2002) showed improvement in the retrieval of ice cloud pressure by accounting for differences in the spectral cloud emissivity.
Inoue (1985) discusses the range of uncertainties in both Tc and ec and further suggests that use of multiple IR channels can reduce the uncertainties. To relate the effective emissivity between two channels, Inoue uses the relation of the cirrus emissivity to the optical thickness. The ec is a function of the absorption coefficient (κ) and the cloud thickness (z),
$e_{\mathrm{c}} = 1 - \exp(-\kappa z/\mu). \qquad (3)$
The term μ in Eq. (3) is the cosine of the viewing zenith angle; the quantity κz is called the optical thickness and is also wavelength dependent. Given a value for ec, Tc can be obtained from Eq. (2). The estimate of ec from an IR measurement will have inherent uncertainties due to the diversity of ice particle size distributions (i.e., cloud microphysics), sensor calibration, and in-cloud vertical inhomogeneity.
Another way to constrain these uncertainties is by using multiple IR channel measurements, specifically the spectral emissivity differences between two IR window channels (Δec). We can express the Δec between two IR channels by
$\Delta e_{\mathrm{c}} = \exp(-\kappa' z/\mu) - \exp(-\kappa z/\mu). \qquad (4)$
In Eq. (4), κ′ is the absorption coefficient at "another" IR window channel. That is, Δec is determined by (κ−κ′)z, which depends on the cloud particle size and cloud thickness (Kikuchi et al., 2006). Many studies have adopted this, or a similar, approach, applying representative relations of spectral cloud emissivity that depend on cloud type to retrieve Tc (e.g., Inoue, 1985; Parol et al., 1991; Giraud et al., 1997; Cooper et al., 2003; Heidinger and Pavolonis, 2009).
For the case of two IR channels, Inoue (1985) formulated the retrieval of the cirrus cloud temperature and effective emissivity by setting up three equations with three unknowns (specifically referring to Inoue's equations 5, 6, and 7): two equations are the same as Eq. (2) at 11 and 12 µm in this paper, and the last equation is as follows.
$e_{\mathrm{c}}|_{12} = 1 - (1 - e_{\mathrm{c}}|_{11})^{1.08}, \qquad (5)$
where ec|11 and ec|12 represent cloud emissivity for 11 and 12 µm, respectively. In Inoue (1985), the extinction coefficient ratio between the 11 and 12 µm channels is set to a constant value of 1.08. The cloud temperature is determined by assuming a cloud emissivity at one wavelength, calculating the emissivity at the other wavelength, and modifying the emissivities until a consistent cloud temperature is found for both wavelengths. The initial assumed 11 µm cloud emissivity begins with a value of 0 and increases by a value of 0.01 until Tc converges.
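A sketch of this bispectral iteration, reusing the planck_radiance helper above, is shown below. The 0.5 K convergence tolerance and the temperature search grid are our assumptions; the published method iterates until the two channels yield a consistent Tc.

```python
import numpy as np

def inoue_bispectral_tc(i_obs11, i_obs12, i_clr11, i_clr12,
                        t_grid=np.arange(190.0, 290.0, 0.1), ratio=1.08):
    """Step e_c|11 upward in 0.01 increments, derive e_c|12 from Eq. (5),
    and accept the first pair whose implied 11 and 12 um temperatures agree."""
    for ec11 in np.arange(0.01, 1.0 + 1e-9, 0.01):
        ec12 = 1.0 - (1.0 - ec11) ** ratio
        # Invert Eq. (1) for the blackbody radiance B(T_c) in each channel.
        b11 = (i_obs11 - (1.0 - ec11) * i_clr11) / ec11
        b12 = (i_obs12 - (1.0 - ec12) * i_clr12) / ec12
        tc11 = t_grid[np.argmin(np.abs(planck_radiance(11.0, t_grid) - b11))]
        tc12 = t_grid[np.argmin(np.abs(planck_radiance(12.0, t_grid) - b12))]
        if abs(tc11 - tc12) < 0.5:          # convergence tolerance (assumed)
            return 0.5 * (tc11 + tc12), ec11
    return None                              # no consistent solution found
```

In the method of this paper, the loop would instead start at min(ec|11) from the LUT, and the emissivity difference would follow Eq. (6) with the LUT-provided Δec rather than the fixed 1.08 ratio.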
The approach of Inoue (1985) for developing the spectral cloud emissivity relationship improved the accuracy of the cirrus temperature retrievals. More recent studies explored the extinction coefficient ratio between the 11 and 12 µm channels for various cloud types (Parol et al., 1991; Duda and Spinhirne, 1996; Cooper et al., 2003). Heidinger et al. (2009) use an optimal estimation method that employs extinction coefficient ratios using pairs of the 8.6, 11, 12, and 13 µm channels to infer cloud heights for GOES-16/17.
In this study, we apply a range of spectral cloud emissivity values to infer cloud temperatures rather than an optimum value. In our approach, the cloud is considered to be a number of plane-parallel homogeneous cloud layers. The cloud layer temperature ranges, Tc, are estimated as a vector of possible Tc values given a range of ec and Δec (hereafter, ec and Δec), such as $e_{\mathrm{c}}=[e_{\mathrm{c}}^{1},e_{\mathrm{c}}^{2},\dots,e_{\mathrm{c}}^{n}]$ and $\Delta e_{\mathrm{c}}=[\Delta e_{\mathrm{c}}^{1},\Delta e_{\mathrm{c}}^{2},\dots,\Delta e_{\mathrm{c}}^{n}]$, as shown in Fig. 2b. The ec and Δec in Fig. 2b describe a range of possible spectral cloud emissivity values that can simulate the measured channel radiances. Thus, this study aims to produce Tc given the ec and Δec and to examine how close the retrieved Tc values are to the actual vertical cloud structure.
The differences between this study and Inoue (1985) are summarized as follows.

- Constraints on the iteration range for cloud emissivity are provided in look-up tables (LUTs), discussed in the next section, as opposed to considering the full range of possible values from 0 to 1.
- Emissivity differences (Δec) are used, rather than a single value for the extinction coefficient ratio between two infrared channels.
- Given the range of emissivity differences (Δec provided in LUTs), we obtain a range of Tc (and hence a range of cloud heights, Hc) that can be compared to CALIPSO products.
Figure 3 A flowchart for the estimation of Tc and Hc corresponding to ec (from the light gray box; LUT values shown in Fig. 4) and Δec (from the dark gray box; LUT values shown in Fig. 5), which represent cloud microphysics uncertainty for a given cloud thickness. We denote the minimum and maximum values of a matrix A as min–max(A).
The first step in the current method (Fig. 3) is to constrain 11 µm cloud emissivity ranges (ec|11) that an ice cloud pixel can have based on the brightness temperatures. To obtain a reasonable ec|11 boundary corresponding to the ice cloud microphysical properties, the LUTs are generated to provide ec|11 ranges characterized by brightness temperature (BT) for 11 µm (BT|11) and BT differences (or BTD) between 11 and 13 µm (BTD${}_{|\mathrm{11},\mathrm{13}}$) and between 11 and 12 µm (BTD${}_{|\mathrm{11},\mathrm{12}}$) (the light gray box in Fig. 3).
The second step is to constrain the cloud emissivity differences between 11 and 12 µm for an ice cloud pixel, $\mathrm{\Delta }{e}_{\mathrm{c}|\mathrm{11},\mathrm{12}}$, which are also provided in LUTs (the dark gray box in Fig. 3) with the same input parameters as in the first step. The third step is to find Tc values satisfying the three equations, i.e., Eq. (2) at 11 µm, Eq. (2) at 12 µm, and the equation for cloud emissivity differences (Eq. 4) between 11 and 12 µm, with constraints on ec|11 and $\mathrm{\Delta }{e}_{\mathrm{c}|\mathrm{11},\mathrm{12}}$. That is, the last of the three equations in our method differs from that of Inoue's method (Eq. 5):
$e_{\mathrm{c}|11} = e_{\mathrm{c}|12} + \Delta e_{\mathrm{c}|11,12}. \qquad (6)$
The initial assumed 11 µm cloud emissivity begins with a value of min(ec|11) and increases by a value of 0.01 until Tc converges. Notice that the Tc value, an element of the set of available ice cloud temperatures Tc, depends on $\mathrm{\Delta }{e}_{\mathrm{c}|\mathrm{11},\mathrm{12}}$ in Eq. (4). That is, we obtain two Tc values as the minimum and maximum temperatures that an ice cloud pixel can have, corresponding to min–max($\mathrm{\Delta }{e}_{\mathrm{c}|\mathrm{11},\mathrm{12}}$). Finally, we estimate the cloud height ranges, Hc, relating to min–max(Tc) using a dynamical lapse rate calculated from the GFS NWP temperature profiles provided on 26 isobaric levels. The dynamical lapse rate at each grid point is calculated as the temperature difference between 200 and 400 hPa divided by the corresponding height difference. In this study, no cloud heights are allowed to be higher than the tropopause, which is provided in the GFS NWP model product.
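A minimal sketch of the temperature-to-height conversion with the 200–400 hPa dynamical lapse rate, including the tropopause clipping described above (names are ours):

```python
import numpy as np

def tc_to_height(t_c, t200, t400, z200, z400, t_trop, z_trop):
    """Convert cloud temperature [K] to height [m] with the lapse rate
    between the 200 and 400 hPa GFS levels; clip at the tropopause."""
    t_c = np.maximum(t_c, t_trop)           # no temperatures colder than tropopause
    lapse = (t200 - t400) / (z200 - z400)   # [K m-1], negative in the troposphere
    h_c = z400 + (t_c - t400) / lapse       # linear T(z) inverted for z
    return np.minimum(h_c, z_trop)          # no heights above the tropopause
```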
## 3.2 Generation of look-up tables (LUTs)
For our method, relevant information for the western North Pacific Ocean is stored in look-up tables (LUTs). The LUTs include the min–max(ec) and min–max(Δec) values for three indices: BTD${}_{|\mathrm{11},\mathrm{13}}$, BTD${}_{|\mathrm{11},\mathrm{12}}$, and BT|11. The reason for selecting these three indices is that they are linked with cloud optical thickness, cloud effective radius, and cloud temperatures, respectively. Both solar and infrared radiances have been used to investigate cloud microphysics using passive satellite measurements (e.g., Freud et al., 2008; Lensky and Rosenfeld, 2006; Martins et al., 2011). A primary benefit of using IR measurements is that the ice cloud temperature and emissivity do not depend on solar illumination, so the cloud properties are consistent between day and night.
First, the BTD${}_{|\mathrm{11},\mathrm{13}}$ is sensitive to the presence of mid- to high-level clouds and to the cloud height. While the 12 and 13.3 µm measurements are both affected by CO2 absorption, the 12 µm channel is at the wing of the broad 15 µm CO2 band and has less CO2 absorption than the 13.3 µm channel. Additionally, the peak of the weighting function for the 13.3 µm channel is in the vicinity of 700–800 hPa, so the observed radiance at 13.3 µm represents the lower-tropospheric temperature. Thus, the BT at 13.3 µm is generally colder than that of the two other IR window channels. The BTD${}_{|\mathrm{11},\mathrm{13}}$ is larger for clear-sky pixels than for ice clouds, but it depends on the degree of cloud opacity. The BTD${}_{|\mathrm{11},\mathrm{13}}$ has been applied by Mecikalski and Bedka (2006) to monitor changes in cloud thickness and height as signals of convective initiation.
Second, the BTD${}_{|\mathrm{11},\mathrm{12}}$ depends in part on the microphysics and cloud opacity, i.e., the number and distribution of the ice particles; the imaginary part of the refractive index for ice varies in the IR region under study. The BTD${}_{|\mathrm{11},\mathrm{12}}$ has been used to identify cloud type (Inoue, 1985; Pavolonis and Heidinger, 2004; Pavolonis et al., 2005). Prata (1989) used the BTD${}_{|\mathrm{11},\mathrm{12}}$ to discern volcanic ash from nonvolcanic absorbing aerosols. More recently, the BTD${}_{|\mathrm{11},\mathrm{12}}$, together with the BTD between 8.6 and 11 µm, has also been applied to infer cloud phase (Strabala et al., 1994; Baum et al., 2000, 2012).
Finally, BT|11 values can provide cloud height information, at least for optically thick clouds including low-level clouds. For optically thick clouds, the BT|11 values approximate the actual cloud temperature since at 11 µm the primary absorber is water vapor and there is generally little absorption above high-level ice clouds. As noted earlier, the BT|11 for optically thin clouds includes a contribution from upwelling radiances from the surface and lower atmosphere.
The LUTs are compiled for ec and Δec by three input parameters, i.e., BTD${}_{|\mathrm{11},\mathrm{13}}$, BTD${}_{|\mathrm{11},\mathrm{12}}$, and BT|11, from information in the C6 MODIS products. Data used in generating our LUTs are summarized in Table 1. The first step is to collect all ice cloud radiances at 11, 12, and 13.3 µm from MYD021KM over the western North Pacific Ocean during August 2013 and August 2014. Ice cloud pixels are identified by the MODIS IR cloud thermodynamic phase product in MYD06 (Baum et al., 2012), with the additional requirement that the pixels have a cloud-top temperature ≤ 260 K. The spatial and temporal domain is restricted to obtain a clear relationship between spectral cloud emissivity and the three IR parameters for the case study analyses that will be presented in Sect. 4.
Table 2Parameter ranges and discretization of parameters in the LUTs for ec (Fig. 4) and Δec (Fig. 5).
The second step is to categorize the ensemble of ice cloud pixels by the three parameters, BTD${}_{|\mathrm{11},\mathrm{13}}$, BTD${}_{|\mathrm{11},\mathrm{12}}$, and BT|11. The collected cloud pixels are separated into cloud types linked with cloud microphysical properties. We convert the radiances centered at 11, 12, and 13.3 µm to BT by the inverse Planck function and then calculate BTD${}_{|\mathrm{11},\mathrm{13}}$, BTD${}_{|\mathrm{11},\mathrm{12}}$, and BT|11 for each pixel. Subsequently, the ice cloud pixels are sorted into range bins defined for the three parameters as follows: BT|11 values in a range from 190 to 290 K in increments of 5 K, BTD${}_{|\mathrm{11},\mathrm{13}}$ values in a range from −2 to 30 K in increments of 2 K, and BTD${}_{|\mathrm{11},\mathrm{12}}$ values ranging from −1 to 10 K in increments of 0.5 K (Table 2). For example, the first category is 190 K ≤ BT|11 < 195 K, −2 K ≤ BTD${}_{|\mathrm{11},\mathrm{13}}$ < 0 K, and −1 K ≤ BTD${}_{|\mathrm{11},\mathrm{12}}$ < −0.5 K.
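Concretely, the binning can be done with fixed edges as in the sketch below (edges follow Table 2; pixels outside the ranges map to index −1 or the last index and would be masked by the caller):

```python
import numpy as np

BT11_EDGES = np.arange(190.0, 295.0, 5.0)      # 190-290 K, 5 K steps
BTD1113_EDGES = np.arange(-2.0, 32.0, 2.0)     # -2 to 30 K, 2 K steps
BTD1112_EDGES = np.arange(-1.0, 10.5, 0.5)     # -1 to 10 K, 0.5 K steps

def lut_bin_indices(bt11, btd1113, btd1112):
    """Map each ice-cloud pixel to its (i, j, k) LUT bin;
    np.digitize puts x with edges[i] <= x < edges[i+1] into bin i."""
    i = np.digitize(bt11, BT11_EDGES) - 1
    j = np.digitize(btd1113, BTD1113_EDGES) - 1
    k = np.digitize(btd1112, BTD1112_EDGES) - 1
    return i, j, k
```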
The final step is to find the possible ranges of ec and Δec in each of the bins of BTD${}_{|\mathrm{11},\mathrm{13}}$, BTD${}_{|\mathrm{11},\mathrm{12}}$, and BT|11. Here we use the cloud emissivity values at 11 and 12 µm for each ice cloud pixel provided in MYD06, for which the Scientific DataSet (SDS) names are “cloud_emiss11_1km” and “cloud_emiss12_1km”. The cloud emissivity for a single band is obtained by the following equation:
$e_{\mathrm{c}} = (I_{\mathrm{obs}} - I_{\mathrm{clr}})\,/\,(I_{\mathrm{ac}} + T_{\mathrm{ac}}\,B(T_{\mathrm{c}}) - I_{\mathrm{clr}}). \qquad (7)$
In Eq. (7), Tac and Iac are the above-cloud transmittance and the above-cloud emission (Baum et al., 2012), which are additional terms compared to the definition of cloud emissivity in the infrared window regions in this paper (Eq. 2). In spite of the different definition of Eq. (7) from Eq. (2), we use these cloud emissivity data since the differences between the two equations are small in the infrared window region. Note that the cloud emissivity data from C6 MYD06 are retrieved under the assumption of a single-layered cloud. Here the possible ranges of ec and Δec are determined as the min–max(ec) and min–max(Δec) among the cloud emissivity values allocated to the bins of the three parameters. To exclude extreme values, the min–max(ec) and min–max(Δec) are defined as the 2nd and 98th percentiles of the ec and Δec distributions when there are at least 5000 pixels available for a given bin. When there are between 500 and 5000 pixels, the 5th and 95th percentiles are chosen. In the rare case when there are between only 200 and 500 pixels, the 10th and 90th percentiles are used. Any bin with fewer than 200 ice cloud pixels is not included in the LUTs.
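The sample-size-dependent percentile rule translates directly into code; a sketch for one LUT bin:

```python
import numpy as np

def lut_min_max(emissivities):
    """Percentile-based min-max(e_c) for one LUT bin: 2/98 for >= 5000
    pixels, 5/95 for 500-5000, 10/90 for 200-500, empty below 200."""
    n = emissivities.size
    if n >= 5000:
        p_lo, p_hi = 2, 98
    elif n >= 500:
        p_lo, p_hi = 5, 95
    elif n >= 200:
        p_lo, p_hi = 10, 90
    else:
        return None                   # bin left out of the LUT
    return np.percentile(emissivities, [p_lo, p_hi])
```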
Figure 4 Look-up table values for min–max(ec) (left and right panels in colors) by BTD${}_{|\mathrm{11},\mathrm{12}}$ (x axis) and BTD${}_{|\mathrm{11},\mathrm{13}}$ (y axis) for (a) 230 K ≤ BT|11 < 235 K and (b) 270 K ≤ BT|11 < 275 K. For this look-up table, ice cloud pixels with temperatures ≤ 260 K were collected from MODIS C6 over the western North Pacific Ocean during two Augusts (2013–2014). Table 1 summarizes the data used in the look-up table, and Table 2 gives its dimensions.
Figure 4 shows examples of LUT values for ec belonging to the specific categories 230 K ≤ BT|11 < 235 K (Fig. 4a) and 270 K ≤ BT|11 < 275 K (Fig. 4b), which imply the presence of optically thick and thin ice clouds, respectively. The minimum (left panel) and maximum (right panel) values of ec are shown as colors in the space of BTD${}_{|\mathrm{11},\mathrm{12}}$ (x axis) and BTD${}_{|\mathrm{11},\mathrm{13}}$ (y axis). In Fig. 4a, the ec values range from about 0.8 to 1.1. The ec generally ranges from 0 to 1, but a nonphysical ec value over 1 might occur in the case of an overshooting cloud (from strong convection that briefly enters the lower stratosphere) that has a colder temperature than the surrounding environment (Negri and Adler, 1981; Adler et al., 1983). As for optically thin clouds, the ec values in Fig. 4b range from around 0.3 to 0.8. In general, ec values are low when cloudy pixels have large values of BTD${}_{|\mathrm{11},\mathrm{12}}$ and BTD${}_{|\mathrm{11},\mathrm{13}}$.
Figure 5 Look-up tables for min–max(Δec) (left and right panels in colors) by BTD${}_{|\mathrm{11},\mathrm{12}}$ (x axis) and BTD${}_{|\mathrm{11},\mathrm{13}}$ (y axis) for (a) 230 K ≤ BT|11 < 235 K and (b) 270 K ≤ BT|11 < 275 K. The same data as in Fig. 4 are used to generate these look-up tables, except that the cloud emissivity differences between 11 and 12 µm come from MODIS C6 (see Tables 1 and 2).
Figure 5 shows examples of LUT values of Δec for optically thick (Fig. 5a) and thin (Fig. 5b) ice clouds, as in Fig. 4. The Δec ranges from −0.12 to 0.04. The Δec shows a more complex relationship with BTD${}_{|\mathrm{11},\mathrm{12}}$ and BTD${}_{|\mathrm{11},\mathrm{13}}$ than ec does. It is notable that similar patterns of Δec are repeated for the optically thick (Fig. 5a) and thin (Fig. 5b) ice cloud clusters. One reason for this could be that Δec values are more sensitive to particle sizes, whereas ec values are more directly linked with cloud opacity (refer to Eqs. 3 and 4). The optically thin ice cloud cluster tends to be more sensitive to BTD${}_{|\mathrm{11},\mathrm{12}}$, showing larger variations in Δec than the thick ice cloud cluster.
4 Results
The current algorithm analyses are performed over the study domain, the western North Pacific Ocean, in August 2015. Note that Typhoon Goni formed on 13 August 2015, dissipated on 30 August 2015, and affected East Asia. Case studies involving Typhoon Goni scenes are provided in Sect. 4.1. Quantitative analysis and comparison of our results with CALIOP cloud products are described in Sect. 4.2.
Figure 6 (a) MODIS false color image (rotated 90° to the left) at 03:20 UTC on 19 August 2015. This scene captures part of Typhoon Goni. The heavy pink line on the image shows the CALIPSO track closest to the MODIS observation time. (b) Vertical cross section along the CALIPSO track designated by the heavy pink line in Fig. 6a. The vertical feature mask is shown as sky-blue contours (randomly and horizontally oriented ice). The red solid line shows where the layer COT (integrated Qe at 532 nm from CALIOP) reaches a value of 0.5. The green–blue and black circles are the min–max(Hc) and MODIS CTH, respectively. The gray solid (dashed) line on the right-side y axis is the column COT from CALIOP (standard deviation of 11 µm radiances from MODIS).
Table 3Data used for the tests shown in Fig. 3. Input and auxiliary data are taken from the MODIS C6 cloud products and from CALIOP v4 cloud products. The abbreviations CTT–CBT, CTH–CBH, COT, TP, and VFM refer to cloud-top and cloud-base temperature, cloud-top and cloud-base height, cloud optical thickness, temperature and pressure, and vertical feature mask. The vertical profile of the extinction coefficient at 532 nm is denoted as the Qe.
## 4.1 Comparison of min–max(Hc) with CALIPSO for three granules
### 4.1.1 A scene for single-layered optically thin ice clouds (19 August 2015, at 03:20 UTC)
Figure 6 is a scene analysis for single-layered optically thin ice clouds for a granule at 03:20 UTC on 19 August 2015. Figure 6a is a MODIS false color image that captures Typhoon Goni. Note that the image is rotated 90° to the left to simplify comparison with CALIPSO. The heavy pink line (Fig. 6a) is the south-to-north CALIPSO track at the time closest to the MODIS observation. CALIPSO made a near-eye overpass of the cyclone, so the CALIOP track measures a cross section of the cyclone from the eye wall to the outer bands. Figure 6b is a cross section from the CALIOP data (Table 3) at the time of the overpass that shows the horizontal (x axis) and vertical (left-side y axis) locations of all cloud layers. The CALIOP vertical feature mask (VFM) indicates the presence of randomly oriented ice and horizontally oriented ice (sky blue) in the scene. The right-side y axis is for two supplementary quantities shown as gray lines. The gray solid line is the CALIOP COT at 532 nm, indicating the opacity of the ice clouds. The gray dashed line is the standard deviation of the MODIS Iobs|11 (SD(Iobs|11)) along the collocated path with the CALIOP track, calculated over a 5×5 pixel array centered at each cloud pixel. The SD(Iobs|11) contains cloud feature information (Nair et al., 1998); for example, pixels at cloud edges or in fractional clouds have relatively large SD(Iobs|11). The SD(Iobs|11) values are used to filter overcast cloud pixels. The data in Fig. 6 are primarily of single-layered ice clouds with horizontal homogeneity, as demonstrated by the low values of SD(Iobs|11).
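The SD(Iobs|11) field can be computed efficiently with a moving-window identity, var = E[x²] − E[x]², as in this sketch (scipy's uniform_filter provides the 5×5 local means):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_std_5x5(i_obs11):
    """Standard deviation of the 11 um radiance over a 5x5 window
    centred on each pixel."""
    mean = uniform_filter(i_obs11, size=5, mode="nearest")
    mean_sq = uniform_filter(i_obs11**2, size=5, mode="nearest")
    var = np.maximum(mean_sq - mean**2, 0.0)    # clip round-off negatives
    return np.sqrt(var)
```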
For comparison with CALIPSO, the min–max(Tc) values are converted to max–min(Hc), and the results from our method (blue and green circles) are overlaid on the VFM in Fig. 6b. Also provided is the MODIS CTH (black circles) for reference. For these comparisons, we converted temperature to height using a dynamical lapse rate from the GFS NWP temperature profiles. When the cloud pixel temperature is colder than the tropopause temperature, it is set to that of the tropopause and is converted to the tropopause height provided by the GFS NWP. The solid red line indicates where the CALIOP COT is about 0.5. This line is a reference for the position where passive remote sensing retrievals will place the cloud (Holz et al., 2006; Wang et al., 2014), well known as the radiative emission level. The radiative emission level should be thought of as a guideline, since the matched COT values can differ depending on cloud types or algorithm methods. To determine this depth in the cloud layer, we integrated the extinction coefficient, CALIOP Qe (Table 3), from the top of the cloud downward until the COT reached about 0.5. Hereafter, we call that layer the effective emission layer (EEL). The enhancement of the EEL at approximately 15.6° N in Fig. 6b is caused by an extraordinary value of Qe provided in CALIOP v4.
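A sketch of the EEL computation from a CALIOP extinction profile, integrating downward from cloud top until the COT reaches 0.5 (the profile ordering and units are our assumptions):

```python
import numpy as np

def effective_emission_level(qe, z, tau_ref=0.5):
    """Height [km] at which the cumulative 532 nm optical thickness,
    integrated from cloud top downward, reaches tau_ref.

    qe : extinction coefficient profile [km-1], ordered top-down.
    z  : corresponding heights [km], decreasing.
    """
    dz = np.abs(np.diff(z, prepend=z[0]))   # layer thicknesses [km]
    tau = np.cumsum(qe * dz)                # COT accumulated from the top
    if tau[-1] < tau_ref:
        return np.nan                       # cloud optically thinner than tau_ref
    return z[np.searchsorted(tau, tau_ref)]
```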
Note that the max(Hc) (blue circles) is close to the top of the clouds except in the region of cloud edges and the eye of Goni. The bias between the cloud top and the max(Hc) is 0.46 km, that is, −4.5 K in terms of temperature. It is remarkable that the max(Hc) corresponding to the uncertainties of cloud emissivity tends to occur at or slightly above the cloud top as indicated by CALIPSO, higher than the EEL and the MODIS CTH. The min(Hc) (green circles) also follows the base of the cloud layer, with a bias of −1.58 km (10.6 K in temperature), slightly lower than the EEL and the MODIS CTH. These results show the feasibility of inferring single-layered ice cloud boundaries from spectral cloud emissivity and its uncertainties with IR measurements. The max–min(Hc) at the cloud edges and the edges surrounding the eye of Goni have relatively large biases from the top and base of the cloud; those regions show relatively large SD(Iobs|11) and small COT and contain multiple clouds. To sum up, the cloud heights corresponding to cloud emissivity uncertainties exhibit variations similar to the CALIOP VFM, except at cloud edges and in multiple-cloud regions.
Figure 7 (a) BT|11 image from MODIS (MYD021 C6) at 15:30 UTC on 19 August 2015. This scene captures part of Typhoon Goni. The heavy pink line on the BT|11 image shows the CALIPSO track closest to the MODIS observation time. (b) Vertical cross section along the CALIPSO track designated by the heavy pink line in Fig. 7a. The vertical feature mask is shown as sky-blue contours (randomly and horizontally oriented ice). The red solid line shows where the layer COT (integrated Qe at 532 nm from CALIOP) reaches a value of 0.5. The green–blue and black circles are the min–max(Hc) and MODIS CTH, respectively. The gray solid (dashed) line on the right-side y axis is the column COT from CALIOP (standard deviation of 11 µm radiances from MODIS).
### 4.1.2 A scene for single-layered optically thick ice clouds (19 August 2015, at 15:30 UTC)
The second case involves single-layered optically thick ice clouds (Fig. 7) at 15:30 UTC on 19 August 2015. Here we show the BT|11 image instead of an RGB image (Fig. 7a) since this is a nighttime scene. Figure 7a is also rotated 90° to the left. For this overpass, CALIOP observed clouds farther away from the center of Goni, and inspection of the cross section in Fig. 7b suggests that most of the cloud pixels are optically thick, with COT values higher than 5 (about where the CALIOP signal attenuates), and have relatively low SD(Iobs|11), as indicated by the gray solid and dashed lines in Fig. 7b. In the comparison with the CALIOP VFM, the max(Hc) tends to occur at or slightly below the cloud top as indicated by CALIPSO, still higher than the EEL and the MODIS CTH. The bias of the max(Hc) from the top of the clouds is 2.38 km (−13.2 K), which is larger than that of optically thin ice clouds. The bias of the min(Hc) from the cloud base is also larger than that of optically thin clouds, −2.69 km (19.4 K), but the min(Hc) still exhibits variation similar to the CALIOP VFM. The passive IR measurements have an upper COT limit, as shown in earlier studies (Heidinger et al., 2009, 2010). The height boundaries from our method bracket both the CALIPSO measurements and the MODIS retrievals.
Figure 8 (a) MODIS false color image (rotated 90° to the left) at 05:20 UTC on 8 August 2015. This scene captures part of Typhoon Goni. The heavy pink line on the image shows the CALIPSO track closest to the MODIS observation time. (b) Vertical cross section along the CALIPSO track designated by the heavy pink line in Fig. 8a. The vertical feature mask is shown as sky-blue and orange contours (randomly and horizontally oriented ice, and water). The red solid line shows where the layer COT (integrated Qe at 532 nm from CALIOP) reaches a value of 0.5. The green–blue and black circles are the min–max(Hc) and MODIS CTH, respectively. The gray solid (dashed) line on the right-side y axis is the column COT from CALIOP (standard deviation of 11 µm radiances from MODIS).
Figure 9Joint histograms of three cloud categories: (a) single-layered optically thin ice cloud, (b) optically thick ice cloud, and (c) multilayered cloud during August 2015. The first column shows CALIOP CTH (cloud-top height, x axis) versus max(Hc) (y axis), the second column shows CALIOP CBH (cloud-base height, x axis) versus min(Hc) (y axis).
### 4.1.3 A scene for multilayered cloud (8 August 2015, at 05:20 UTC)
The third case also involves a cross section of Goni, but this scene is more complex in that there is evidence of both multilayered and less homogeneous ice clouds on the southern boundary of the typhoon (Fig. 8a). Note that the SD(Iobs|11) on the CALIPSO track shows relatively large variances compared to the previous two cases (Fig. 8b). The CALIOP COT is omitted given the high fluctuations in the values. The CALIOP vertical feature mask (VFM) indicates the presence of randomly oriented ice and horizontally oriented ice (sky blue) as well as water (orange) cloud phase. The enhancement of the EEL at around 25.7° N in Fig. 8b is also caused by an extraordinary value of Qe provided in the CALIOP v4 product. In the region of 10–20° N, the max–min(Hc) values are often outside the boundaries of the VFM. The max(Hc) (blue circles) varies from near the second cloud layer to the top of the first cloud at the tropopause. Some of the min(Hc) (green circles) values are also outside the range of the VFM. There is more than one reason for these increased variances, including the fact that the uppermost cloud layer is optically thin (over half of all pixels have COT < 1.5) and there are indications of lower cloud layers. In the region of 20–30° N, clouds in the top layer are relatively thick (on average, COT = 3.5). In that case, the max(Hc) heights for the multilayer pixels tend to be close to the EEL, which is much lower than the top of the clouds. This is to be expected for the case of a geometrically thick but optically thin cloud. Note that the min(Hc) values for the multilayered cloud pixels sometimes reach almost to the second cloud layer, rather than staying near the first layer. Further thought needs to be given to these cases.
## 4.2 Comparison of max–min Hc with CALIPSO for August 2015
In this section, the max–min(Hc) is compared with the cloud-top and cloud-base heights (CTH–CBH) from CALIOP over the western North Pacific during August 2015. The computationally efficient method of Nagle and Holz (2009) is used to collocate the simultaneous nadir observations (SNOs) between the two satellites. Following their approach, CALIOP is projected onto MODIS.
First, we qualitatively examine the max–min(Hc) with the cloud layer vertical cross section from CALIOP–MODIS matchup files (Table 3) in Figs. 6–8. Second, we quantitatively investigate the max–min(Hc) for all ice clouds against CALIOP CTH–CBH during the month. The extinction coefficient profiles, cloud phase and their quality flags, and the number of cloud layers are extracted from CALIOP and used in this analysis (Table 3).
The matchup data are filtered as follows: only ice cloud phase pixels are chosen that have the highest quality (CALIOP QC for cloud phase = 1), CALIOP COT > 1.5, and SD(Iobs|11) from MODIS ≤ 1, which helps to remove cloud edges and fractional clouds. The relationship is investigated between the max–min(Hc) and the CALIOP CTH–CBH for three cloud regimes: (1) single-layered optically thin ice clouds, (2) optically thick ice clouds, and (3) multilayered clouds where the uppermost layer is optically thin cirrus. The CALIOP–MODIS matchup clouds are separated into single-layered and multilayered cloud groups using the number of layers found (NLF) from CALIOP (Table 3). The multilayered cloud group includes two or more cloud layers, excluding single-layered clouds. Among the single-layered cloud pixels, we define optically thin and thick cloud groups as CALIOP COT less than and greater than 3.5, respectively, referring to the ISCCP cloud classification (Rossow et al., 1985; Rossow and Schiffer, 1999).
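These filters and regime definitions can be written compactly; a sketch with our variable names (nlf, cot, qc_phase, sd_iobs11 as per-pixel matchup arrays):

```python
import numpy as np

def classify_matchups(nlf, cot, qc_phase, sd_iobs11):
    """Split CALIOP-MODIS matchups into the three cloud regimes of Fig. 9."""
    valid = (qc_phase == 1) & (cot > 1.5) & (sd_iobs11 <= 1.0)
    single = valid & (nlf == 1)
    return {
        "thin_single":  single & (cot <= 3.5),
        "thick_single": single & (cot > 3.5),
        "multilayer":   valid & (nlf >= 2),
    }
```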
Table 4 Comparison of max(Hc) (min(Hc)) to the CALIOP CTH (CALIOP CBH) for all cloud pixels and for three cloud regimes: single-layered optically thin ice cloud, optically thick ice cloud, and multilayered cloud, for August 2015. Pixel numbers (count), correlation coefficients (corr), differences of the mean values (bias), and root-mean-square differences (rmsd) are provided. Additionally, the comparison of min–max(Tc) to the CALIOP CTT–CBT is shown as numbers in round brackets.
Figure 9 shows the joint histogram of the max–min(Hc) (y axis of left and right panels) as a function of the CALIOP CTH–CBH (x axis) for single-layered optically thin ice clouds (Fig. 9a), single-layered optically thick ice clouds (Fig. 9b), and multilayered clouds (Fig. 9c). Table 4 provides all statistical quantities for Fig. 9: correlations (corr), differences of the mean values (bias), and root-mean-square differences (rmsd). Additionally, all statistical quantities in terms of temperature are in kelvin and are given in round brackets in Table 4. For single-layered clouds, the majority of max(Hc) values are scattered about the one-to-one line. The statistical values are corr = 0.61, bias = 0.13 km, and rmsd = 0.91 km for thin clouds. This implies that the maximum values of the cloud height ranges corresponding to ec and Δec are close to the cloud top for single-layered clouds as determined from CALIOP.
However, the scatter is higher for optically thick clouds, with corr = 0.65, bias = 0.30 km, and rmsd = 1.08 km (Table 4). As for the max(Hc) for multilayered clouds, the majority of scatter points are on the lower right side of the one-to-one line, with corr = 0.25, bias = 1.41 km, and rmsd = 2.64 km. The lowest correlation and the largest bias occur for multilayered clouds, as expected given the assumption of single-layered clouds in our method.
The comparisons of the min(Hc) (y axis of the right panels in Fig. 9) to the CALIOP CBH (x axis) for all cloud categories show relatively large correlations, at least 0.48. The scatter points in the three joint histograms for all cloud types are parallel to the one-to-one line but show negative biases, implying higher heights than the CALIOP CBH. As with the max(Hc), the bias of the min(Hc) increases from single-layered optically thin ice (−1.01 km) to optically thick ice (−1.71 km) and multilayered clouds (−4.64 km).
5 Discussion of results
The results in Figs. 6–9 show the comparisons of the ice cloud height ranges obtained based on the ice cloud emissivity uncertainties with both MODIS C6 products and vertical cross sections of clouds from CALIOP. We investigated minimum and maximum ice cloud heights for each cloud pixel for three cloud regimes during August 2015: (1) single-layered optically thin clouds, (2) optically thick ice clouds, and (3) multilayered clouds.
Overall, the maximum values of the estimated ice cloud height ranges for single-layered optically thin and thick ice clouds show some skill in comparison with the cloud tops from CALIOP: corr = 0.61 and 0.65 as well as bias = 0.13 and 0.30 km. In particular, we note that the upper height boundary for optically thin clouds derived from our method is very close to the geometric cloud tops. For multilayered clouds, the maximum heights are occasionally much lower than the uppermost cloud layer as observed by CALIOP, showing the highest bias at 1.41 km. Higher biases are expected in our method given the assumption of single-layered clouds in each pixel. Additionally, the skill of our method decreases when the upper cloud layer is composed of optically thin (having very low COT values) and fractional clouds; in some cases, the method cannot determine an emissivity range from the LUTs, which were generated for single-layered ice clouds.
The minimum heights for single-layered optically thin ice clouds reach near the base of the cloud, with corr = 0.83 and bias = −1.01 km. However, for optically thick and multilayered clouds, the biases become larger, reaching −4.64 km. That is, the minimum heights for thick clouds are much higher than the CALIOP-observed cloud bases. This indicates that the IR method has an optical thickness limitation and is more useful for lower optical thicknesses, as has been noted previously (e.g., Heidinger et al., 2010). Even with the large biases of the minimum heights, it is notable that the correlation coefficients between the minimum heights and the cloud base for all three cloud regimes are sufficiently large, at least 0.48.
Figure 10 The frequency of biases of mean(Hc) from mean(CALIOP Hc) as a function of CALIOP COT during August 2015. The mean(CALIOP Hc) is the average of the upper and lower cloud boundaries, defined as $0.5\cdot(\mathrm{CTH}+\mathrm{CBH})$. The mean(Hc) is the average of the cloud height boundaries from our method, defined as $0.5\cdot(\min(H_{\mathrm{c}})+\max(H_{\mathrm{c}}))$. The red dotted lines are references for single-layered optically thin (1.5 < COT ≤ 3.5) and optically thick (COT > 3.5) ice clouds in this study.
To better understand the potential biases of the current algorithm in comparison with CALIOP, we compare the mean(Hc) to the mean(CALIOP Hc), defined as $0.5\cdot(\max(H_{\mathrm{c}})+\min(H_{\mathrm{c}}))$ and as $0.5\cdot(\mathrm{CALIOP\ CTH}+\mathrm{CALIOP\ CBH})$, respectively. Figure 10 shows the frequency of occurrence of biases, that is, the mean(CALIOP Hc) minus the mean(Hc), as a function of CALIOP COT for the single-layered ice clouds during August 2015. In a comparison of the MODIS cloud mask with CALIOP, Ackerman et al. (2008) noted that the cloud mask performs best at optical thicknesses above about 0.4. The lidar has a greater sensitivity to particles in a column than passive radiance measurements. Based on this consideration, we limited our results to those pixels where COT ≥ 0.5 on the x axis of Fig. 10.
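A sketch of the bias-versus-COT statistic underlying Fig. 10 (the bin edges are our choice for illustration):

```python
import numpy as np

def bias_by_cot(h_min, h_max, cal_cbh, cal_cth, cot,
                cot_edges=np.arange(0.5, 10.5, 0.5)):
    """Mean of [mean(CALIOP Hc) - mean(Hc)] in each COT bin."""
    bias = 0.5 * (cal_cth + cal_cbh) - 0.5 * (h_max + h_min)
    idx = np.digitize(cot, cot_edges) - 1
    out = np.full(len(cot_edges) - 1, np.nan)
    for b in range(len(cot_edges) - 1):
        sel = idx == b
        if sel.any():
            out[b] = np.nanmean(bias[sel])
    return out
```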
Figure 10 illustrates that our resulting single-layered ice cloud boundaries are consistent with the CALIOP measurements, showing slightly negative biases except for the region of COT ≤ 1.5. These results suggest that our approach of applying a range of cloud emissivity values to estimate cloud boundaries has potential merit for using IR channels to produce cloud boundaries similar to those that the lidar observes, especially for optically thin but geometrically thick ice clouds, which tend to have large uncertainties (Hamann et al., 2014).
The negative biases of the mean(Hc) from CALIOP measurements are caused primarily by two factors: (1) the min(Hc) values for all cloud regimes tend to be higher than the geometric cloud base, and (2) the max(Hc) values are sometimes slightly outside the actual cloud boundaries. Perhaps this is caused in part by the conversion of temperature to height using the NWP model product. Another source of error could be that the radiances have some amount of uncertainty that was not considered in our methodology. A notable point is that the boundary heights for optically thin cirrus (1.5 < COT 3.5) show the lowest biases.
Figure 10 also reveals the weaknesses of our method. In the region of COT ≤ 1.5, the biases of mean(Hc) from CALIOP are largest and positive. This region is likely associated with fractional clouds or cloud edges. We infer that the relationship of cloud emissivity at 11 and 12 µm, the key controller in our method, might not be optimal for fractional clouds or cloud edges, resulting in lower heights.
A limitation of this study is that the LUTs are generated for spectral emissivity using IR sensor observations and level-2 products that still have errors and uncertainties. It would be interesting to extend this preliminary research by generating LUTs for spectral emissivity using CALIOP, not IR sensors. If we can obtain more diverse ice cloud emissivity in vertical cloud thickness, it could result in improvements in the resulting cloud temperatures and height ranges. Also, the LUTs based on CALIOP data/products could be used to reduce errors in inferring cloud temperatures for multilayered clouds.
6 Summary
The intent of our study is to demonstrate that ice cloud emissivity uncertainties, obtained from three IR channels generally available on various satellite-based sensors, can be used to estimate a reasonable range of ice cloud temperatures, as verified through comparison with active measurements from CALIPSO. For satellite-based retrievals with heavy data volumes, the general assumption is that the cloud in any given pixel can be treated as plane parallel, which simplifies the retrieval algorithms. However, for ice clouds, and particularly optically thin ice clouds known as cirrus, the plane-parallel assumption breaks down because cirrus tends to be optically thin but geometrically thick, in contrast with lower-level liquid water clouds. For cirrus, the inference of a single cloud-top temperature for a given measurement may not be optimal. In our approach, a range of spectral ice cloud emissivity is calculated, which is, in turn, used to infer a range of cloud temperatures. These temperatures are converted to heights and subsequently compared to active lidar measurements provided by CALIPSO CALIOP products.
This study provides a methodology to infer a range of spectral cloud emissivity for each cloud pixel. The range in emissivity represents uncertainty in the cloud microphysics to some degree. In our approach, we generate two LUTs for cloud emissivity at 11 µm and cloud emissivity differences between 11 and 12 µm using the brightness temperatures at 11, 12, and 13.3 µm. The 11 µm channel is a window channel where the primary absorption is caused by water vapor. The 12 µm channel is impacted by both H2O and CO2, while the 13.3 µm channel has more absorption by CO2 than by water vapor. The benefit of a method that relies on IR channels is that it does not depend on solar illumination, so the cloud heights can be obtained consistently between day and night.
We estimate a range of ice cloud temperature corresponding to the ice cloud emissivity uncertainty derived from the three MODIS C6 IR channels centered at 11, 12, and 13.3 µm. The focus area is the western North Pacific Ocean during August 2015. We verified the estimated ranges of ice cloud temperature for three cloud categories, i.e., single-layered optically thin ice clouds, optically thick ice clouds, and multilayered clouds, against the vertical feature mask from CALIOP. We show that the minimum–maximum values of the estimated range of ice cloud heights agree with the CALIPSO measurements fairly well for single-layered optically thin clouds. However, for optically thick and multilayered clouds, the biases of the minimum–maximum values from the cloud top and cloud base become larger.
This approach can be applied to the new geostationary satellites, such as Himawari-8 (launched in 2015), GOES-16/17 (launched in 2016 and 2017), and GK-2A (launched in 2018). The new estimates of ice cloud temperatures from base to top from geostationary IR observations could contribute to improved accuracy in weather prediction and in estimates of cloud radiative effects.
In future work, we intend to improve upon this methodology by developing look-up tables for spectral cloud emissivity uncertainty with CALIOP. Above all, a study over the global domain is required before this method can be applied to the new geostationary satellites. Further study is also required to add more infrared channels in order to resolve spectral cloud emissivity uncertainties more accurately.
Data availability
The current algorithm uses MODIS C6 data, which are available from https://earthdata.nasa.gov (last access: 31 March 2019). The ice cloud boundary products from the current algorithm are available for only one month, August 2015, through personal communication with the corresponding author of this paper.
Author contributions
HSK built, tested, and validated the algorithm and wrote the paper. BB contributed to completing the algorithm and to reviewing and editing the paper carefully. YSC provided the initial idea for the algorithm and guidance on this study. All authors were actively involved in interpreting results and discussions on the paper.
Competing interests
The authors declare that they have no conflict of interest.
Acknowledgements
We are grateful to the MODIS and CALIOP science teams for their continuous efforts in providing high-quality measurements and products. We also appreciate Carynelisa Haspel and the two anonymous referees for deliberate reviewing and fruitful comments and suggestions.
Financial support
This work was supported by the “Development of Cloud/Precipitation Algorithms” project, funded by ETRI, which is a subproject of the “Development of Geostationary Meteorological Satellite Ground Segment (NMSC-2019-01)” program funded by the National Meteorological Satellite Center (NMSC) of the Korea Meteorological Administration (KMA).
Review statement
This paper was edited by Alexander Kokhanovsky and reviewed by Carynelisa Haspel and three anonymous referees.
References
Ackerman, S. A., Holz, R. E., Frey, R., Eloranta, E. W., Maddux, B. C., and McGill, M.: Cloud detection with MODIS, Part 2: Validation, J. Atmos. Ocean. Tech., 25, 1073–1086, 2008.
Adler, R. F., Markus, M. J., Fen, D. D., Szejwach, G., and Shenk, W. E.: Thunderstorm top structure observed by aircraft overflights with an infrared radiometer, J. Appl. Meteorol. Clim., 22, 579–593, 1983.
Baker, M. B.: Cloud microphysics and climate, Science, 276, 1072–1078, 1997.
Baum, B. A., Soulen, P. F., Strabala, K. I., King, M. D., Ackerman, S. A., Menzel, W. P., and Yang, P.: Remote sensing of cloud properties using MODIS airborne simulator imagery during SUCCESS: 2. Cloud thermodynamic phase, J. Geophys. Res., 105, 11781–11792, 2000.
Baum, B. A., Menzel, P., Frey, R. A., Tobin, D., Holz, R. E., Ackerman, S., Heidinger, A., and Yang, P.: MODIS Cloud-Top Property Refinements for Collection 6, J. Appl. Meteorol. Clim., 51, 1145–1163, 2012.
Bouttier, F. and Kelly, G.: Observing-system experiments in the ECMWF 4D-Var assimilation system, Q. J. Roy. Meteor. Soc., 127, 1469–1488, 2001.
Cooper, S. J., L'Ecuyer, T. S., and Stephens, G. L.: The impact of explicit cloud boundary information on ice cloud micro- physical property retrievals from infrared radiances, J. Geophys. Res., 108, 4107, https://doi.org/10.1029/2002JD002611, 2003.
Duda, D. P. and Spinhirne, J. D.: Split-window retrieval of particle size and optical depth in contrails located above horizontally inhomogeneous ice clouds, Geophys. Res. Lett., 23, 3711–3714, 1996.
Freud, E., Strom, J., Rosenfeld, D., Tunved, P., and Swietlicki, E.: Anthropogenic aerosol effects on convective cloud microphysical properties in southern Sweden, Tellus B, 60, 286–297, 2008.
Getzewich, B. J., Vaughan, M. A., Hunt, W. H., Avery, M. A., Powell, K. A., Tackett, J. L., Winker, D. M., Kar, J., Lee, K.-P., and Toth, T. D.: CALIPSO lidar calibration at 532 nm: version 4 daytime algorithm, Atmos. Meas. Tech., 11, 6309–6326, https://doi.org/10.5194/amt-11-6309-2018, 2018.
Giraud, V., Buriez, J. C., Fouquart, Y., Parol, F., and Seze, G.: Large-scale analysis of cirrus clouds from AVHRR data: Assessment of both a microphysical index and the cloud-top temperature, J. Appl. Meteorol., 36, 664–675, 1997.
Hamann, U., Walther, A., Baum, B., Bennartz, R., Bugliaro, L., Derrien, M., Francis, P. N., Heidinger, A., Joro, S., Kniffka, A., Le Gléau, H., Lockhoff, M., Lutz, H.-J., Meirink, J. F., Minnis, P., Palikonda, R., Roebeling, R., Thoss, A., Platnick, S., Watts, P., and Wind, G.: Remote sensing of cloud top pressure/height from SEVIRI: analysis of ten current retrieval algorithms, Atmos. Meas. Tech., 7, 2839–2867, https://doi.org/10.5194/amt-7-2839-2014, 2014.
Harrop, B. E. and Hartmann, D. L.: Testing the role of radiation in determining tropical cloud-top temperature, J. Climate, 25, 5731–5747, 2012.
Heidinger, A. K. and Pavolonis, M. J.: Gazing at cirrus clouds for 25 years through a split window, Part I: Methodology, J. Appl. Meteorol. Clim., 48, 1100–1116, 2009.
Heidinger, A. K., Pavolonis, M. J., Holz, R. E., Baum, B. A., and Berthier, S.: Using CALIPSO to explore the sensitivity to cirrus height in the infrared observations from NPOESS/VIIRS and GOES-R/ABI, J. Geophys. Res., 115, D00H20, https://doi.org/10.1029/2009JD012152, 2010.
Holz, R. E., Ackerman, S. A., Antonelli, P., Nagle, F., and Knuteson, R.: An improvement to the High-Spectral-Resolution CO2-slicing cloud-top altitude retrieval, J. Atmos. Ocean. Tech., 23, 653–670, 2006.
Inoue, T.: On the temperature and effective emissivity determination of semi-transparent cirrus clouds by bi-spectral measurements in the 10 µm window region, J. Meteorol. Soc. Jpn, 63, 88–99, https://doi.org/10.2151/jmsj1965.63.1_88, 1985.
Kikuchi, N., Nakajima, T., Kumagai, H., Kuroiwa, H., Kamei, A., Nakamura, R., and Nakajima, T. Y.: Cloud optical thickness and effective particle radius derived from transmitted solar radiation measurements: Comparison with cloud radar observations, J. Geophys. Res., 111, D07205, https://doi.org/10.1029/2005JD006363, 2006.
L'Ecuyer T. S. and Hang Y.: Reassessing the effect of cloud type on Earth's energy balance in the age of active spaceborne observations, Part I: Top-of-atmosphere and surface, J. Climate, 32, 6219–623, https://doi.org/10.1175/JCLI-D-18-0753.1, 2019.
Lee, S. and Song, H.-J.: Impacts of the LEOGEO Atmospheric Motion Vectors on the East Asian weather forecasts, Q. J. Roy. Meteor. Soc., 144, 1914–1925, 2018.
Lensky, I. M. and Rosenfeld, D.: The time-space exchangeability of satellite retrieved relations between cloud top temperature and particle effective radius, Atmos. Chem. Phys., 6, 2887–2894, https://doi.org/10.5194/acp-6-2887-2006, 2006.
Liou, K.-N.: An Introduction to Atmospheric Radiation, Vol. 84, access online via Elsevier, 583 pp., 2002.
Liu, Z., Kar, J., Zeng, S., Tackett, J., Vaughan, M., Avery, M., Pelon, J., Getzewich, B., Lee, K.-P., Magill, B., Omar, A., Lucker, P., Trepte, C., and Winker, D.: Discriminating between clouds and aerosols in the CALIOP version 4.1 data products, Atmos. Meas. Tech., 12, 703–734, https://doi.org/10.5194/amt-12-703-2019, 2019.
Martins, J. V., Marshak, A., Remer, L. A., Rosenfeld, D., Kaufman, Y. J., Fernandez-Borda, R., Koren, I., Correia, A. L., Zubko, V., and Artaxo, P.: Remote sensing the vertical profile of cloud droplet effective radius, thermodynamic phase, and temperature, Atmos. Chem. Phys., 11, 9485–9501, https://doi.org/10.5194/acp-11-9485-2011, 2011.
Mecikalski, J. R. and Bedka, K. M.: Forecasting convective initiation by monitoring the evolution of moving cumulus in daytime GOES imagery, Mon. Weather Rev., 134, 49–78, 2006.
Menzel, W. P., Frey, R. A., Zhang, H., Wylie, D. P., Moeller, C. C., Holz, R. E., Maddux, B., Baum, B. A., Strabala, K. I., and Gumley, L. E.: MODIS global cloud-top pressure and amount estimation: Algorithm description and results, J. Appl. Meteorol. Clim., 47, 1175–1198, 2008.
Moorthi, S., Pan, H. L., and Caplan, P.: Changes to the 2001 NCEP operational MRF/AVN global analysis/forecast system, Tech. Procedures Bull., 484, Office of Meteorology, National Weather Service, 14, 2001.
Nair, U. S., Weger, R. C., Kuo, K. S., and Welch, R. M.: Clustering, randomness, and regularity in cloud fields: 5. The nature of regular cumulus cloud fields, J. Geophys. Res., 103, 11363–11380, 1998.
Nagle, F. W. and Holz, R. E.: Computationally efficient methods of collocating satellite, aircraft, and ground observations, J. Atmos. Ocean. Tech., 26, 1585–1595, 2009.
Negri, A. J. and Adler, R. F.: Relation of satellite-based thunderstorm intensity to radar-estimated rainfall, J. Appl. Meteorol., 20, 288–300, 1981.
Parol, F., Buriez, J. C., Brogniez, G., and Fouquart, Y.: Information content of AVHRR channels 4 and 5 with respect to the effective radius of cirrus cloud particles, J. Appl. Meteorol., 30, 973–984, 1991.
Pavolonis, M. J. and Heidinger, A. K.: Daytime cloud overlap detection from AVHRR and VIIRS, J. Appl. Meteorol., 43, 762–778, 2004.
Pavolonis, M. J., Heidinger, A. K., and Uttal, T.: Daytime global cloud typing from AVHRR and VIIRS: Algorithm description, validation, and comparisons, J. Appl. Meteorol., 44, 804–826, 2005.
Prata, A. J.: Observations of volcanic ash clouds in the 10–12 µm window using AVHRR/2 data, Int. J. Remote Sens., 10, 751–761, https://doi.org/10.1080/01431168908903916, 1989.
Rossow, W. B. and Schiffer, R. A.: Advances in understanding clouds from ISCCP, B. Am. Meteorol. Soc., 80, 2261–2287, 1999.
Rossow, W., Mosher, F., Kinsella, E., Arking, A., Desbois, M., Harrison, E., Minnis, P., Ruprecht, E., Seze, G., Simmer, C., and Smith, E.: ISCCP cloud algorithm intercomparison, J. Clim. Appl. Meteorol., 24, 877–903, 1985.
Slingo, A. and Slingo, J. M.: The response of a general circulation model to cloud longwave forcing. I: Introduction and initial experiments, Q. J. Roy. Meteor. Soc., 114, 1027–1062, 1988.
Strabala, K. I., Ackerman, S. A., and Menzel, W. P.: Cloud properties inferred from 8–12 µm data, J. Appl. Meteorol., 33, 212–229, 1994.
Vaughan, M., Garnier, A., Josset, D., Avery, M., Lee, K.-P., Liu, Z., Hunt, W., Pelon, J., Hu, Y., Burton, S., Hair, J., Tackett, J. L., Getzewich, B., Kar, J., and Rodier, S.: CALIPSO lidar calibration at 1064 nm: version 4 algorithm, Atmos. Meas. Tech., 12, 51–82, https://doi.org/10.5194/amt-12-51-2019, 2019.
Wang, C., Luo, Z. J., Chen, X., Zeng, X., Tao, W.-K., and Huang, X.: A Physically Based Algorithm for Non-Blackbody Correction of Cloud-Top Temperature and Application to Convection Study, J. Appl. Meteorol., 53, 1844–1856, 2014.
Winker, D. M., Hunt, W. H., and McGill, M. J.: Initial performance assessment of CALIOP, Geophys. Res. Lett., 34, L19803, https://doi.org/10.1029/2007GL030135, 2007.
Winker, D. M., Vaughan, M. A., Omar, A. H., Hu, Y., Powell, K. A., Liu, Z., Hunt, W. H., and Young, S. A.: Overview of the CALIPSO mission and CALIOP data processing algorithms, J. Atmos. Ocean. Tech., 26, 2310–2323, 2009.
Young, S. A., Vaughan, M. A., Garnier, A., Tackett, J. L., Lambeth, J. D., and Powell, K. A.: Extinction and optical depth retrievals for CALIPSO's Version 4 data release, Atmos. Meas. Tech., 11, 5701–5727, https://doi.org/10.5194/amt-11-5701-2018, 2018.
Zhang, H. and Menzel, W. P.: Improvement in thin cirrus retrievals using an emissivity-adjusted CO2 slicing algorithm, J. Geophys. Res., 107, 4327, https://doi.org/10.1029/2001JD001037, 2002. | 2019-12-07 17:04:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 59, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5943428874015808, "perplexity": 4296.644979644637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540500637.40/warc/CC-MAIN-20191207160050-20191207184050-00268.warc.gz"} |
http://hal.in2p3.fr/in2p3-00723981 | # Centrality Dependence of Charged Particle Production at Large Transverse Momentum in Pb--Pb Collisions at $\sqrt{s_{\rm{NN}}} = 2.76$ TeV
Abstract : The inclusive transverse momentum ($p_{\rm T}$) distributions of primary charged particles are measured in the pseudo-rapidity range $|\eta|<0.8$ as a function of event centrality in Pb--Pb collisions at $\sqrt{s_{\rm{NN}}}=2.76$ TeV with ALICE at the LHC. The data are presented in the $p_{\rm T}$ range $0.15 < p_{\rm T} < 50$ GeV/$c$ for nine centrality classes from 70--80% to 0--5%. The Pb--Pb spectra are presented in terms of the nuclear modification factor $R_{\rm{AA}}$, using a pp reference spectrum measured at the same collision energy. We observe that the suppression of high-$p_{\rm T}$ particles strongly depends on event centrality. In central collisions (0--5%), the yield is most suppressed, with $R_{\rm{AA}} \approx 0.13$ at $p_{\rm T} = 6$--$7$ GeV/$c$. Above $p_{\rm T} = 7$ GeV/$c$, there is a significant rise in the nuclear modification factor, which reaches $R_{\rm{AA}} \approx 0.4$ for $p_{\rm T} > 30$ GeV/$c$. In peripheral collisions (70--80%), the suppression is weaker, with $R_{\rm{AA}} \approx 0.7$ almost independently of $p_{\rm T}$. The measured nuclear modification factors are compared to other measurements and model calculations.
Document type: Journal articles
Contributor: Emmanuelle Vernay
Submitted on : Thursday, August 16, 2012 - 9:48:36 AM
Last modification on : Wednesday, July 28, 2021 - 1:36:04 PM
### Citation
B. Abelev, N. Arbor, G. Conesa Balbastre, J. Faivre, C. Furget, et al.. Centrality Dependence of Charged Particle Production at Large Transverse Momentum in Pb--Pb Collisions at $\sqrt{s_{\rm{NN}}} = 2.76$ TeV. Physics Letters B, Elsevier, 2013, 720, pp.52-62. ⟨10.1016/j.physletb.2013.01.051⟩. ⟨in2p3-00723981⟩
https://www.bartleby.com/solution-answer/chapter-64-problem-13e-calculus-an-applied-approach-mindtap-course-list-10th-edition/9781305860919/evaluating-an-improper-integral-in-exercises-7-20-determine-whether-the-improper-integral-diverges/3aec1e21-6361-11e9-8385-02ee952b546e | Chapter 6.4, Problem 13E
### Calculus: An Applied Approach (Min...
10th Edition
Ron Larson
ISBN: 9781305860919
Textbook Problem
# Evaluating an Improper Integral In Exercises 7-20, determine whether the improper integral diverges or converges. Evaluate the integral if it converges. See Examples 1, 2, and 3. $\int_{-\infty}^{-1} e^x \, dx$
To determine
Whether the improper integral $\int_{-\infty}^{-1} e^x \, dx$ diverges or converges, and to evaluate it if it converges.
Explanation
Given Information:
The expression is provided as:
$\int_{-\infty}^{-1} e^x \, dx$
From the definition of the improper integral,
$\int_{-\infty}^{b} f(x) \, dx = \lim_{a \to -\infty} \int_{a}^{b} f(x) \, dx$
Also, the expression for the integration of an exponential is as follows:
$\int e^{ax} \, dx = \frac{e^{ax}}{a} + C$, where $a \neq 0$.
The improper integral converges if the limit exists otherwise the improper integral diverges.
Consider the provided expression:
$\int_{-\infty}^{-1} e^x \, dx$
Use the definition of the improper integral and simplify:
$\int_{-\infty}^{-1} e^x \, dx = \lim_{a \to -\infty} \int_{a}^{-1} e^x \, dx = \lim_{a \to -\infty} \left[ e^x \right]_{a}^{-1} = e^{-1} - \lim_{a \to -\infty} e^a = \frac{1}{e} - 0 = \frac{1}{e}$
Since the limit exists, the improper integral converges, and its value is $\frac{1}{e}$.
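As a quick cross-check (a SymPy sketch added here for verification, not part of the textbook solution):
import sympy as sp
x = sp.symbols('x')
print(sp.integrate(sp.exp(x), (x, -sp.oo, -1)))  # exp(-1): the integral converges to 1/e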
https://electronics.stackexchange.com/tags/digital-logic/new | # Tag Info
1
Your wire is not defined on the schematic. Typically, delays are made for half bridges to delay turn-on, to avoid short circuit or shoot-through; they depend on load inductance and FET capacitance, which I modelled here for a power FET with Ciss. The turn-ON delay is circled and OFF is assumed to be x ns with low diode resistance. You must define more details on ...
2
r1 <= r1 is not required in any synthesiser, as it is in the definition of RTL. Since the logic is written under @posedge clk, it is implicit that the previous value of the register r1 should be held in that clock cycle if r1 is not driven any value at that clock edge, i.e., in this case, if the condition cond1 is violated. This is true for the second ...
1
4’d0 means a vector constant that is 4 bits wide, decimal value of 0. In this code it is used to initialize the 4-bit reg variable q to all zeroes in the reset statement. If it were 3’d0, only the lower 3 bits would be reset. This bit-width characteristic of Verilog is important to keep in mind. Careful Verilog coding style pays close attention to how ...
0
When I asked "Is there a better design method?" that was not a rhetorical question. I'd be grateful if someone could show a better design method in an answer. This should be a better (higher speed) driver because you are not forcing the BJT to operate in saturation (it takes vital nanoseconds to come out of saturation): - If it's still not fast ...
1
As for "how is this handled in gate level simulation", I've done vlsi design in industry for 15 years and I've never seen a T flip flop since college. A TFF without reset is nonsensical since you can never know what the value is. You could conceivably make a circuit that asserts T if the output is 1 through some FSM that activates once, to put the ...
0
Using bit instead of logic in the design, output q will be initialized to 0, since bit is of 2-state type. Using this technique one can verify the logical correctness of a design. This works for simulation, but it hides the fact that the output is not initialized to a known state when the circuit is powered up. module t_ff(input bit t, clk, output bit q, ...
4
Interesting circuit, and in fact, you're very nearly there. I see three problems. If your input is really RS-232 and not TTL (as implied by your 9-pin connector), then you need a circuit to convert RS-232 levels to TTL levels. Back in the day, this would have been the 1489 RS-232 line receiver chip, but there are more modern alternatives today. Your timing ...
0
Just my thoughts, without any claim to be correct: 3 inputs: reset to restart the logic clock to mark the change of days (each day is one cycle) sick is active on the edge of clock, if the kid is sick (take setup time into account) 1 output: reward will be become active after the edge of clock, if the condition for the reward is met (it will become ...
0
Note: The circuit you are using is not the correct one. See JK latch, possible Ben Eater error? The output remains unknown when simulated without a reset. Try this one, which uses a reset. Try this code:
use IEEE.std_logic_1164.all;
entity JK_FF is
    port (J, K, CLK, reset: in std_logic; -- inputs
          Q, Q_bar: out std_logic);
end entity JK_FF;
architecture ...
0
This is how I would solve it based on my understanding of the problem, however by no means do I claim this to be the correct solution. So the kid spends $1 on the first visit, and $2 on the second visit, so total $3: open the reward box. Since it is not mentioned to close the reward box, the reward box can be assumed to be left open. If the problem is not ...
8
tl; dr: You need to understand how UARTs actually work. You're missing a lot of stuff. What you have designed thus far is a basic deserializer, with a kind of weird way of making the clock that depends on the data input. Critically, it's failing to properly frame the input data and thus pick off the bits at the right time. And, your setup needs at least 1-...
0
This was already answered in Are Verilog if blocks executed sequentially or concurrently? : "Statements within an always block are evaluated sequentially, doesn't matter if blocking or non blocking assignments are used - nonblocking assignments are simply deferred assignments, a subsequent nonblocking assignment to the same reg in the same always block ...
0
The two pieces of code are identical in simulation and synthesis; their values will be swapped. Assignments to x and y use their current RHS values and the order of assignments does not matter because the LHS updates happen after both statements have completed. If you were to print the values right after the assignment like always @(posedge clk) begin ...
0
Here is the SystemVerilog code for a reusable full-subtractor:
// By Shashank V M
module full_subtractor #(parameter WIDTH = 4)
    (input logic [WIDTH - 1: 0] a, b,
     input logic borrow_in,
     output logic borrow_out,
     output logic [WIDTH - 1: 0] result);
    assign {borrow_out, result} = a - b - borrow_in;
endmodule
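For readers without a simulator handy, here is a hypothetical Python model of the same {borrow_out, result} concatenation (my own illustration, not the answerer's code):
def full_subtractor(a, b, borrow_in, width=4):
    diff = a - b - borrow_in
    borrow_out = 1 if diff < 0 else 0
    result = diff % (1 << width)  # wrap to WIDTH bits, as the RTL concatenation does
    return borrow_out, result

assert full_subtractor(5, 7, 0) == (1, 14)  # 5 - 7 underflows: borrow set, result 0b1110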
0
the following Boolean valued function on n Boolean variables: $f(x_1, \ldots, x_n) = x_1 + \cdots + x_n \pmod 2$, where addition is over integers, mapping 'FALSE' to 0 and 'TRUE' to 1. It's basically a one-bit XOR of all the input LSBs, so the simplest way with 2-input gates is a xor tree: The minimum size of such a circuit computing f (asymptotically in n) is: n+n/2+n/4+...
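In software the same function is just an XOR-reduce; a small Python sketch (my addition, not from the answer):
from functools import reduce
from operator import xor

def parity(bits):
    return reduce(xor, bits, 0)  # same as sum(bits) % 2

assert parity([1, 0, 1, 1]) == 1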
0
The first problem is (as far as I can tell) that the NOT gates are not going in the right direction; I circled this in red. The second problem is that you are using NAND gates, not AND gates (green circle).
0
That looks like some sort of SPI-like protocol, but probably not byte-based -- so the number of bits is not a multiple of 8. You channel 0 is likely "data", channel 1 is "clock", channel 2 is "latch", and channel 3 is something like "reset" or "apply". the next steps would be: Zoom on the data to figure out ...
2
No, that will not work. The output will drive the capacitor voltage high or low relatively quickly (maybe a millisecond) and the resistor will do nothing of value. You'd need a gate or to short the capacitor with a switch + small value resistor. Ignoring the top inverter, this kind of Rube Goldberg R-C reset only works some of the time and has hazards that ...
2
It may not be damaged immediately, but it also a bad circuit and makes no sense, as the chip will have to charge and discharge the capacitor which is quite large in value. So the answer is, that is not OK to have capacitors directly on logic outputs.
2
The term has been used in different ways but typically means a setting (one or more bits) that could be implemented as a latch, switch, jumper etc. in the early days it might have been a wire with a banana plug on each end. My assumption is that it derives from electrical engineering where one would strap something to ground by attaching a metal strap ...
2
The slightly more elaborate and concise term is "configuration strap", a method of configuring a device by pulling a pin up or down, for instance to override its internal default pull-up/down. To "strap" means, in general, to fasten or secure in a specified place or position with a belt. In our case that could be to configure at a ...
0
Without seeing the full document (and perhaps not even then), only educated guesses remain about the exact implementation details (unless someone has experience with a device whose documentation used that exact jargon for a programmable connection, and can write an answer from that experience). As mentioned above, I have seen "strap" used only for ...
1
If you actually want to do addition or subtraction then use the built-in + and - operators, that is what they are there for. Still, if we want to design a multi-bit adder from first principles, we can do it by operating on vectors. So we essentially build a bunch of 1-bit adders in parallel, then wire them up to each other. module full_adder(a, b, cin, cout, ...
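Since the Verilog above is cut off by the excerpt, here is the same ripple-carry idea as a behavioral Python sketch (my illustration, not the answerer's code):
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_add(a_bits, b_bits):
    out, carry = [], 0
    for a, b in zip(a_bits, b_bits):  # LSB-first lists of 0/1
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

assert ripple_add([1, 1], [1, 0]) == ([0, 0], 1)  # 3 + 1 = 4: sum bits 00, carry out 1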
6
Sure, but you don't need to. Just write a - b wherever you would instantiate the full substractor. Your code will be more readable and synthesis hasn't had difficulties with this construct since the 90s.
2
Fused links or jumpers or "soft, firm or hard" registers are "straps" to indicate status with logic levels, such as read by BIOS for configuration of RAM and may be hard-coded in flash memory.
11
Put a register in the feedback path and yes. Common in the early 80s when TTL PROMs were cheaper than PALs (before FPGAs came along) Use a clocked register rather than a latch, to hold the ROM output. This register holds the state which forms part of the next ROM address (and may hold outputs too). Then inputs form a further part of the ROM address, so the ...
3
No, I don't see this working reliably. When the address inputs of an EEPROM change to access a new location, there will be a period of time when the outputs are changing unpredictably from the contents of the old location to the new one. Some outputs will probably change quicker than others. If some of these changing outputs are being fed straight back to ...
3
Not really a flip-flop but a state-table lookup. But the answer is yes, you can do this but there are other options that may be better. There are PALs (Programmable Array Logic) and CPLDs that might suite your application better. But if you are just doing this as an experiment or proof-of-concept, go ahead.
2
You can't implement a DLL as purely digital logic, because the feedback that varies the buffer delay is analog. But the good news is that most FPGA families have DLLs available as built-in "hard" modules. The bad news is that they generally have a limited number of outputs (less than 8), so the length of your FF chain would be similarly limited. ...
3
I strongly advise you flip the design round so that all the D-flops are clocked from the same system-wide clock resource, and that you place the delays on your data. FPGAs work very hard to distribute a clock to all parts of the chip with decent fanout and minimal skew, you want to ride that horse in the direction it's going. The fun part is going to be ...
0
Zooming in on your photo, I can see that there is flux on the board. Have you washed your board after soldering? If there is flux left on it, it acts as a conductor (sometimes). It has to be clean. Clean it using isopropanol (if you don't have isopropanol you can use alcohol, but if it's not 100% clean alcohol it can corrode your leads). Other than that, your circuit ...
1
Before I go any further, you should consider using a microcontroller for this. I know this is a really simple circuit, but unless you know the signal source will never have short pulses you could end up losing a mute press somewhere and annoying the user. I'm assuming the function here is to mute the radio when another sound source is present. A ...
2
That is expected behaviour when shift and latch clocks are tied together. Quote from TI 74HC595 datasheet : "If both clocks are connected together, the shift register always is one clock pulse ahead of the storage register."
3
Decreasing the voltage decreases the maximum frequency that can be used such that the operation of the digital system is as desired. This is because the equivalent resistance, $R_{eq}$, of the MOS transistor increases if $V_{dd} < V_T$. $V_{dd}$ is the supply voltage and $V_{T}$ is the threshold voltage. As the equivalent resistance increases the ...
0
This is something the designer of the system can choose. The CPU doesn't know about your different types of memory chips. The CPU puts an address like 0x1234 (0001 0010 0011 0100) onto the address bus, and reads the data from the data bus. It doesn't know which chip is outputting the data to the data bus. This designer has chosen that addresses 0x0000 - ...
1
No, because you have less cache than memory. If you have, let's say, 4 GB of memory and 4 MB of cache with 64-byte cache lines, then you have addresses that look like this:
1111 1111 1111 1111 1111 1111 1111 1111 <- example memory address
11 1111 1111 1111 11 <- example cache index
It's direct-mapped, which means each memory address ...
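The bit-slicing arithmetic, as a Python sketch (the 4 GB / 4 MB / 64 B numbers are the example's assumptions, not a general rule):
ADDR_BITS = 32    # 4 GB address space
OFFSET_BITS = 6   # 64-byte lines
INDEX_BITS = 16   # 4 MB / 64 B = 2**16 lines
TAG_BITS = ADDR_BITS - INDEX_BITS - OFFSET_BITS  # 10 tag bits

def split_address(addr):
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset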
2
There is no contradiction. The behaviour of the simulator is correct. If you drive the inputs of flip-flop EXACTLY at the clock edge, then those values are not guaranteed to be sampled at that clock edge. It will be sampled only at the next clock edge. At least that's what I have observed in many of the logic simulators. You can confirm this if you see that ...
0
To use k-map you must consider all the rules. What you miss is the rule saying that you should have as few groups as possible. So after covering all 1s you should not add additional groups just because they can fit. In problem 7, the group (m2,m6) is just one additional group that you do not need. To get the boolean expression of (m0,m2) for example, you ...
0
Well, if you use the conventional 'clock is active on the rising edge' you are correct, you can only do a combinatory circuit (and propagation time will be your enemy). However if you are going DDR (i.e. clocking on both rising and falling edge) technically you have your data 'in the same' clock period. Would that be useful? probably no, since the following ...
2
The only way this would work (i.e output is provided on the same cycle as the input is seen) if there are no flip-flops in the module. Well, there could be a flop triggered on the falling edge of the clock and if we split enough hairs it could be argued the output is still in the same cycle, just delayed a half cycle... But from the module and input/output ...
1
I strongly recommend adding an addendum at the bottom of your question that incorporates your comment that extends your question. You are now talking about multiplying a 4-bit binary input by 0x6. This requires at most a 7-bit result. (4 bits times 3 bits.) Referring to the sidelined discussion I gave earlier (see below), multiplying by 6 just means that two ...
2
The waveform generator can make a variety of shapes: sine, triangle, sawtooth, square, or one of your own design (that's the "arbitrary" part.) It can certainly replace a 555 as a clock source. To do that, your generator should have a setting to make logic pulses, that is square-waves that swing from 0 to some + logic voltage (e.g., 5V or 3.3V). ...
1
In a Moore machine each state is associated with a certain output or in your case a certain 6 outputs. Look at what combinations of the 6 outputs are repeated. Where the outputs are the same in your photo, that combination of outputs can be represented by the same state in a Moore machine. For example outputs of 000000 occur 3 times but can be represented by ...
-1
Well, let see, you have: State 0: Rest ... ... State 1: Brake *** *** State 2, 3, 4 Turn right ... *.. ... **. ... *** State 5, 6, 7 Turn left ..* ... .** ... *** ... State 8, 9, 1 Brake right *** *.. *** **. *** *...
0
As suggested in the comments, I'm not sure on why you used zeners, expecially on the MCU side where the GPIOs are well specified. I didn't check the computations (I'm lazy) however the golden rule with transistor output optos is to check the range of CTR: your part can do 50-600% if not binned so work with that. An 'excess' current is not a problem since it'...
0
Your solution is wrong. In pull-down network, series combination of B and C should be in parallel with {A, D, E} network since it is an OR function. Pull-up network is correct. The {A, D, E} network is correct in the pull-down network. Pull-down network and pull-up networks have to be duals of each other.
1
In the screenshot, it looks like you're summing carry_TPLH twice? If the problem you've got is format related, I know in Virtuoso IC6 and upwards that you can do (carry_TPLH + carry_TPHL)/2, but alternatively you could try the calcVal function: (calcVal("carry_TPLH" "your_test") + calcVal("carry_TPHL" "your_test"))/2
2
The rightmost strip is likely a higher metal layer than the metal strip you see in blue with diagonal lines. In this case, it is likely metal 2 (M2). By going up M2, they are able to cross over the M1 horizontal strip at the top without forming a connection. Another reason we can assume this is likely metal 2 is that it is a vertical strip. The convention ...
0
Without explicitly telling you the answer, let me add some annotations to your two diagrams: Does this help answer your question about whether #1 or #2 is the Moore machine? Also, are you asking for help in determining the final circuit? Or just which diagram is the right one?
2
I believe this is a layout for a tri-state buffer. In this circuit, both the top two PMOS transistors are in series as well as the bottom two NMOS. The middle two MOSFETs are used to turn the tri-state "ON" or "OFF" and then the outer two MOSFETs act as a normal buffer/inverter. The "z" labels seem to suggest the inverted of the ...
Top 50 recent answers are included | 2021-03-03 16:57:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48312437534332275, "perplexity": 1329.9846343235172}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178367183.21/warc/CC-MAIN-20210303165500-20210303195500-00476.warc.gz"} |
http://www.mitp.ru/en/publ/abs_vol2/abs25.html | FREQUENCY ESTIMATION PERFORMANCE BY EIGENVECTOR METHOD
G. M. Molchan
Abstract
We study the theoretical performance of the MUSIC (Multiple Signal Classification) and MN (Minimum Norm) algorithms in estimating the hidden periodicities in the presence of white noise. Both algorithms are based on the principal components of the signal correlation matrix of size $m$. An asymptotic analysis of the frequency distribution is given under the conditions that the number of observations $N\to \infty$ and $m$ is fixed or increases with $N$. Surprisingly, frequency accuracy of order $o(N^{-1})$ is impossible for $m\simeq N- c$.
Back to Computational Seismology, Vol. 2.
https://azimuthproject.org/azimuth/revision/diff/Blog+-+The+stochastic+resonance+program+%28part+1%29/66 | # The Azimuth Project Blog - The stochastic resonance program (part 1) (Rev #66, changes)
Showing changes from revision #65 to #66: Added | Removed | Changed
This page is a blog article in progress, written by David Tanzer. To see discussions of this article while it was being written, visit the Azimuth Forum. Please remember that blog articles need HTML, not Markdown.
guest post by David Tanzer
At the Azimuth Code Project, we are aiming to produce educational software that is relevant to the Earth sciences and the study of climate. Our software takes the form of interactive web pages, which allow you to experiment with the parameters of a model and view its outputs. But in order to fully understand the meaning of a program, we need to know about the concepts and theories that inform it. So we will be writing articles to explain the science, the math, and the programming behind these models.
In this two-part series, I will cover the Azimuth stochastic resonance example program, by Allan Erskine and Glyn Adgie . Here Today I we’ll will look outline at some of the math and science behind the program, and next time I’ll we’ll dissect the program. By way of introduction, I am a software developer with research training in computer science, so this is a new field area of investigation for me. Any amendments or clarifications are welcome!
### The concept of stochastic resonance
Stochastic resonance is a phenomenon in which, under certain conditions, a noise source may amplify the effect of a weak signal. This concept was used in an early hypothesis about the timing of ice-age cycles, and has since been applied to a wide range of phenomena, including neuronal detection mechanisms and patterns of traffic congestion.
Suppose that we have a signal detector whose internal, analog state is driven by an input signal, and suppose the analog states are partitioned into two regions, called the "on" states and the "off" states. So we have a digital state, which is abstracted from the analog state. With a light switch, we could take the force as the input signal, the angle as the analog state, and the up/down classification of the angle as the digital state.

Let's consider the effect of a periodic input signal on the digital state. Suppose that the wave amplitude is not big enough to change the digital state, yet large enough to drive the detector's analog state close to the digital state boundary. Then, a bit of random noise, occurring near the peak of an input cycle, may "tap" the system over to the other digital state. So there will be a phase-dependent probability of state-transitions. This relationship between signal phase and state-transition probabilities bears the stamp of the input signal. In a complex way, the noise has amplified the input signal.
But it’s a pretty funky amplifier! Here is a picture from the Azimuth library article on stochastic resonance:
Stochastic resonance has been found in the signal detection mechanisms of neurons. There are, for example, cells in the tails of crayfish that are tuned to low-frequency signals in the water, arising from the motions of predators. These signals alone do not cross the firing threshold for the neurons, but with the right amount of noise, the neurons are triggered by these signals.
See:
Stochastic resonance, Azimuth Library
Stochastic resonance in neurobiology, David Lyttle.
### Bistable stochastic resonance and Milankovitch theories of ice-age cycles
Stochastic resonance was originally defined for systems that are bistable – where each digital state is the basin of attraction for a point of stable equilibrium.

An early application of stochastic resonance was to a hypothesis, within the framework of bistable climate dynamics, about the timing of the ice-age cycles. Although it has not been confirmed, it remains of interest (1) historically, (2) because the timing of the ice-age cycles remains an open problem, and (3) because the Milankovitch hypothesis upon which it rests is an active part of the current research agenda.
In the bistable model, the two climate states are a cold, “snowball” Earth and a hot, iceless Earth. The snowball Earth is stable because it is white, and hence reflects solar energy, which keeps it frozen. The iceless Earth is stable because it is dark, and hence absorbs solar energy, which keeps it melted.
The Milankovitch hypothesis states that the drivers of climate state change are long-duration cycles in the solar energy received in the northern latitudes (called the "insolation"), which are caused by periodic changes in the Earth's orbital parameters. The north is significant because that is where the glaciers are concentrated, and so a sufficient "pulse" in the northern temperatures could initiate a state change.
Three such astronomical cycles have been identified:
• Changing of the eccentricity of the Earth’s elliptical orbit, with a period of 100 kiloyears
• Changing of the obliquity (tilt) of the Earth’s axis, with a period of 41 kiloyears
• Precession (swiveling) of the Earth’s axis, with a period of 23 kiloyears
In the stochastic resonance hypothesis, the Milankovitch signal is amplified by random events to produce climate state changes. In more recent Milankovitch theories, a deterministic forcing mechanism is used. In a theory by Didier Paillard, the climate is modeled as having three states, called interglacial, mild glacial and full glacial, and the state changes depend on the volume of ice as well as the insolation.
See:
Milankovitch cycle, Azimuth Library
Mathematics of the environment (part 10), John Baez. This gives an exposition of Paillard’s theory.
Increasing the Signal-to-Noise Ratio with More Noise, Glyn Adgie and Tim van Beek, Azimuth Blog. Subtitle: Are the Milankovitch Cycles Causing the Ice Ages?
### Bistable systems defined by a potential function
Any smooth function with two local minima can be used to define a bistable system. For instance, consider the function $V(x) = x^4/4 - x^2/2$:

To define a bistable system, construct a differential equation where the time derivative of x is set to the negative of the derivative of the potential at x:
$dx/dt = -V'(x) = -x^3 + x = x(1 - x^2)$
So, for instance, at a place where the potential graph is sloping upward as x increases, the time derivative X'(t) is negative, which sends X(t) "downhill" towards the potential minimum. As it approaches the minimum, the slope of the potential graph goes to zero, which means the motion of X(t) slows down. It asymptotically approaches rest at the minimum.
The roots of V’(x) yield stable equilibria at 1 and -1, and an unstable equilibrium at 0. The unstable equilibrium separates the basins of attraction for the stable equilibria.
### Discrete stochastic resonance
Here we describe the discrete-time model that is used in the Azimuth demo program.

We use the potential function just described, but in discrete time; the derivative is assumed to be constant over the interval between time points. The discrete-time derivative is constructed by combining the negative of the derivative of the potential function, a sampled sine wave, and a normally distributed random number. This gives us a discrete-time difference equation:
$\Delta X_t = -V'(X_t) \, \Delta t + \mathrm{SineWave}(t) + \mathrm{RandomSample}(t) = X_t (1 - X_t^2) \, \Delta t + \alpha \sin(\omega t) + \beta \, \mathrm{GaussianSample}(t)$
where $\Delta t$ is a constant, $t$ is restricted to multiples of $\Delta t$, and $\mathrm{GaussianSample}(t)$ is a sample from a normal distribution with zero mean and unit variance.
Note that this difference equation is the discrete-time counterpart to a corresponding stochastic differential equation.
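To make the difference equation concrete, here is a minimal Python sketch of it; the parameter values below are illustrative guesses, not the settings of the Azimuth demo program:

import numpy as np

dt, alpha, beta, omega = 0.1, 0.05, 0.3, 0.02  # assumed values, for illustration only
steps = 20000

x = np.empty(steps)
x[0] = -1.0  # start in the left potential well
for t in range(steps - 1):
    drift = x[t] * (1.0 - x[t]**2) * dt        # -V'(x) * Delta t
    signal = alpha * np.sin(omega * t * dt)    # the weak periodic forcing
    noise = beta * np.random.randn()           # beta * GaussianSample(t)
    x[t + 1] = x[t] + drift + signal + noise

# With a suitable beta, sign(x) hops between the wells roughly in phase
# with the forcing -- the stochastic resonance effect.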
Next time we will take a look at the Azimuth demo program: how to use it, how it works, and how to change it to make new programs.
category: blog | 2021-06-19 00:25:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6323215365409851, "perplexity": 1186.0202877447232}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487643354.47/warc/CC-MAIN-20210618230338-20210619020338-00589.warc.gz"} |
https://gmatclub.com/forum/viewtopic.php?f=141&t=102552&start&st&sk=t&sd=a&view=print | GMAT Club Forumhttps://gmatclub.com:443/forum/ If car X followed car Y across a certain bridge that is 21-mhttps://gmatclub.com/forum/if-car-x-followed-car-y-across-a-certain-bridge-that-is-21-m-102552.html Page 1 of 1
Author: niheil [ 09 Oct 2010, 12:13 ] Post subject: If car X followed car Y across a certain bridge that is 21-m

If car X followed car Y across a certain bridge that is 21 miles long, how many seconds did it take car X to travel across the bridge?
(1) Car X drove onto the bridge exactly 3 seconds after car Y drove onto the bridge and drove off the bridge exactly 2 seconds after car Y drove off the bridge.
(2) Car Y traveled across the bridge at a constant speed of 30 miles per hour.
Additional info on the problem: Source: Paper Test; Test Code: 42; Section: 2 (Data Sufficiency); Problem: 5
Author: Bunuel [ 09 Oct 2010, 12:43 ] Post subject: Re: Plz Help with Data Sufficiency problem

niheil wrote: [the question quoted above]

Let the time needed for car X to travel across the bridge be $$t_x$$ seconds and the time for car Y be $$t_y$$ seconds.
Question: $$t_x=?$$
(1) Car X drove onto the bridge exactly 3 seconds after car Y drove onto the bridge and drove off the bridge exactly 2 seconds after car Y drove off the bridge --> car X needs 1 second less to travel across the bridge than car Y --> $$t_y=t_x+1$$. Not sufficient to calculate $$t_x$$.
(2) Car Y traveled across the bridge at a constant speed of 30 miles per hour, which is $$\frac{30}{3600}=\frac{1}{120}$$ miles per second --> car Y needs $$t_y=\frac{21}{\frac{1}{120}}=21*120$$ seconds to travel across the bridge. Not sufficient to calculate $$t_x$$.
(1)+(2) $$t_y=t_x+1$$ and $$t_y=21*120$$ --> $$t_x=21*120-1$$. Sufficient.
Answer: C.
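A quick arithmetic check of the combined statements (a Python sketch, not part of the original post):
t_y = 21 / (30 / 3600)  # car Y: 21 miles at 1/120 mile per second = 2520 seconds
t_x = t_y - 1           # car X crosses 1 second faster
print(t_x)              # 2519.0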
Author: niheil [ 09 Oct 2010, 14:00 ] Post subject: Re: Plz Help with Data Sufficiency problem Awesome! Thanks again, Bunuel. I wish you could take the GMAT for me, lol.
Author: PennState08 [ 02 Dec 2010, 14:09 ] Post subject: Rate Problem

All, this was a data sufficiency problem, but I want to double-check my math in case I see it in a problem solver.
If Car X followed Car Y across a certain bridge that is $$\frac{1}{2}$$ mile long, how many seconds did it take Car X to travel across the bridge?
(1) Car X drove onto the bridge exactly 3 seconds after Car Y drove onto the bridge and drove off the bridge exactly 2 seconds after Car Y drove off the bridge.
(2) Car Y traveled across the bridge at a constant speed of 30 miles per hour.
C is the correct answer for data sufficiency, but I want to go through the problem in various ways (problem solving) to make sure my math is correct.
Provided all the data: What are the rates of each car? How many seconds would it take for Car X to catch Car Y? At what distance would Car X catch Car Y?
Rate of Car X: ~30.5 mph, or $$\frac{1}{118}$$ miles per second
Rate of Car Y: 30 mph (given), or $$\frac{1}{120}$$ miles per second
Seconds for Car X to catch Car Y: 180 seconds, or 3 minutes, or $$\frac{1}{20}$$ hour
Distance: 1.5 miles
Explanations:
Rate x Time(t) = Distance
Car Y (given at 30 mph, so find $$t$$ to solve for the rate of Car X; we need miles per second, not per hour):
$$\frac{1}{120} \cdot t = \frac{1}{2}$$, so $$t = 60$$
Car X ; 1 second for the time difference (waited 3 seconds after Y, finished 2 seconds after Y: 3 - 2 = 1):
$$x \cdot (t - 1) = \frac{1}{2}$$
$$x \cdot 59 = \frac{1}{2}$$, so $$x = \frac{1}{118}$$
Time (in seconds) for Car X to catch Car Y ; 3 = number of seconds after Y drove on:
$$\frac{1}{118}(t - 3) = \frac{1}{120}t$$
$$\frac{60}{7080}t - \frac{180}{7080} = \frac{59}{7080}t$$
$$\frac{1}{7080}t = \frac{180}{7080}$$
$$t = 180$$ seconds, so 180 seconds for Car X to catch Car Y.
Distance (in miles) for Car X to catch Car Y, substituting 180 for $$t$$ in either equation:
X) $$\frac{1}{118} \cdot (180 - 3) = 1.5$$ miles
Y) $$\frac{1}{120} \cdot 180 = 1.5$$ miles
I have all this set up correctly, right?
Author: Bunuel [ 02 Dec 2010, 14:20 ] Post subject: Re: Rate Problem Merging similar topics. The only difference is in the length of the bridge (21 miles in first question and 1/2 miles in the second one). But the answer for both of them is C. Please ask if anything remains unclear.
Author: Basshead [ 13 Oct 2020, 09:37 ] Post subject: Re: If car X followed car Y across a certain bridge that is 21-m

(1) This tells us Car X crosses the bridge 1 second faster than Car Y. We can't determine the time it took for Car X to cross the bridge.
(2) Car Y traveled across the bridge at a constant speed of 30 miles per hour. We can determine the time it takes Car Y to cross the bridge; however, this does not tell us anything about Car X.
(1 & 2) From Statement 2 we can determine the time it takes Car Y to cross the bridge. We know Car X crosses the bridge 1 second faster than Car Y. Therefore, with both statements, we can determine the time it takes Car X to cross the bridge.
Author: Hoozan [ 25 May 2021, 00:52 ] Post subject: Re: If car X followed car Y across a certain bridge that is 21-m

EducationAisle, based on (1) doesn't car X take 5 seconds longer than car Y? Why is this not correct?
Author: EducationAisle [ 25 May 2021, 02:52 ] Post subject: Re: If car X followed car Y across a certain bridge that is 21-m

Hoozan wrote: EducationAisle, based on (1) doesn't car X take 5 seconds longer than car Y? Why is this not correct?

Not really 5 seconds, Hoozan. Car X a) drove onto the bridge exactly 3 seconds after car Y drove onto the bridge and b) drove off the bridge exactly 2 seconds after car Y drove off the bridge. Notice that if X and Y had the same speed, then X would have driven off the bridge exactly 3 seconds after car Y drove off the bridge (because X drove onto the bridge exactly 3 seconds after car Y drove onto the bridge). However, since X drove off the bridge exactly 2 seconds after car Y drove off the bridge, this means that X "caught up" with Y by 1 second on the bridge. So, car X actually took 1 second less than car Y to travel the bridge.
https://www.cmi.ac.in/~pranabendu/aml22/Lec19-ex.html | In [1]:
%matplotlib inline
Reinforcement Learning (DQN) Tutorial
Based on the tutorial by:
Author: Adam Paszke https://github.com/apaszke
This tutorial shows how to use PyTorch to train a Deep Q Learning (DQN) agent on the CartPole-v0 task from the OpenAI Gym https://gym.openai.com/
You can find an official leaderboard with various algorithms and visualizations at the Gym website https://gym.openai.com/envs/CartPole-v0
The player has to decide between two actions - moving the cart left or right - so that the pole attached to it stays upright.
• The reward is +1 for every incremental timestep,
• and the environment terminates if
• the pole falls over too far,
• or the cart moves more than 2.4 units away from center.
This means better performing scenarios will run for longer duration, accumulating larger return.
Neural networks can solve the task purely by looking at the scene.
• we'll use a patch of the screen centered on the cart as the observation of the current state
• our actions are move left or move right
Strictly speaking, we will present the state as the difference between the current screen patch and the previous one. This will allow the agent to take the velocity of the pole into account from one image.
Packages
First, let's import needed packages. Firstly, we need gym https://gym.openai.com/docs for the environment (Install using pip install gym). We'll also use the following from PyTorch:
• neural networks (torch.nn)
• optimization (torch.optim)
• automatic differentiation (torch.autograd)
• utilities for vision tasks (torchvision - a separate package https://github.com/pytorch/vision).
In [2]:
!pip3 install gym[classic_control]
Requirement already satisfied: gym[classic_control] in /opt/conda/lib/python3.7/site-packages (0.26.2)
Requirement already satisfied: numpy>=1.18.0 in /opt/conda/lib/python3.7/site-packages (from gym[classic_control]) (1.21.6)
Requirement already satisfied: gym-notices>=0.0.4 in /opt/conda/lib/python3.7/site-packages (from gym[classic_control]) (0.0.8)
Requirement already satisfied: cloudpickle>=1.2.0 in /opt/conda/lib/python3.7/site-packages (from gym[classic_control]) (2.1.0)
Collecting pygame==2.1.0
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 18.3/18.3 MB 25.4 MB/s eta 0:00:00
Installing collected packages: pygame
Successfully installed pygame-2.1.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
In [3]:
import gym
import math
import random
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from collections import namedtuple, deque
from itertools import count
from PIL import Image
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision.transforms as T
import matplotlib.pyplot as plt
# if gpu is to be used
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
To save an episode as a GIF and display it later
In [4]:
import imageio
import os
from IPython.display import HTML
def save_frames_as_gif(frames, path='./', filename='gym_animation.gif'):
"""Takes a list of frames (each frame can be generated with the env.render() function from OpenAI gym)
and converts it into GIF, and saves it to the specified location.
Code adapted from this gist: https://gist.github.com/botforge/64cbb71780e6208172bbf03cd9293553
Args:
frames (list): A list of frames generated with the env.render() function
path (str, optional): The folder in which to save the generated GIF. Defaults to './'.
filename (str, optional): The target filename. Defaults to 'gym_animation.gif'.
"""
imageio.mimwrite(os.path.join(path, filename), frames, fps=15)
In [5]:
# setup the environment
env = gym.make('CartPole-v1', render_mode='rgb_array')
In [6]:
env.reset()
frame = env.render()
plt.imshow(frame)
plt.grid(False)
In [7]:
frames = []
env.reset()
total_reward = 0
for i in range(100):
action = env.action_space.sample()
next_state, reward, terminated, truncated, info = env.step(action)
done = terminated or truncated
total_reward += reward
frame = env.render()
frames.append(frame)
if done:
break
print("Game terminated after", len(frames), " steps with reward ", total_reward)
save_frames_as_gif(frames, path='./', filename='random_agent.gif')
Game terminated after 12 steps with reward 12.0
In [8]:
HTML('<img src="./random_agent.gif">')
Out[8]:
Let's compute the average reward of the random agent
In [9]:
sum_reward=0
for j in range(100):
env.reset()
frames = []
total_reward = 0
for i in range(500):
action = env.action_space.sample()
next_state, reward, terminated, truncated, info = env.step(action)
done = terminated or truncated
#reward = torch.tensor([reward], device=device)
total_reward += reward
frame = env.render()
frames.append(frame)
#print(i, action.item())
if done:
break
print("Game ", j , " terminated after", len(frames), "steps with reward", total_reward)
sum_reward += total_reward
print("Average reward", sum_reward/100)
Game 0 terminated after 29 steps with reward 29.0
Game 1 terminated after 12 steps with reward 12.0
Game 2 terminated after 11 steps with reward 11.0
Game 3 terminated after 18 steps with reward 18.0
Game 4 terminated after 9 steps with reward 9.0
Game 5 terminated after 24 steps with reward 24.0
Game 6 terminated after 29 steps with reward 29.0
Game 7 terminated after 33 steps with reward 33.0
Game 8 terminated after 13 steps with reward 13.0
Game 9 terminated after 16 steps with reward 16.0
Game 10 terminated after 14 steps with reward 14.0
Game 11 terminated after 16 steps with reward 16.0
Game 12 terminated after 34 steps with reward 34.0
Game 13 terminated after 19 steps with reward 19.0
Game 14 terminated after 10 steps with reward 10.0
Game 15 terminated after 20 steps with reward 20.0
Game 16 terminated after 31 steps with reward 31.0
Game 17 terminated after 27 steps with reward 27.0
Game 18 terminated after 12 steps with reward 12.0
Game 19 terminated after 28 steps with reward 28.0
Game 20 terminated after 21 steps with reward 21.0
Game 21 terminated after 26 steps with reward 26.0
Game 22 terminated after 22 steps with reward 22.0
Game 23 terminated after 20 steps with reward 20.0
Game 24 terminated after 24 steps with reward 24.0
Game 25 terminated after 9 steps with reward 9.0
Game 26 terminated after 14 steps with reward 14.0
Game 27 terminated after 38 steps with reward 38.0
Game 28 terminated after 27 steps with reward 27.0
Game 29 terminated after 16 steps with reward 16.0
Game 30 terminated after 22 steps with reward 22.0
Game 31 terminated after 16 steps with reward 16.0
Game 32 terminated after 25 steps with reward 25.0
Game 33 terminated after 24 steps with reward 24.0
Game 34 terminated after 11 steps with reward 11.0
Game 35 terminated after 11 steps with reward 11.0
Game 36 terminated after 68 steps with reward 68.0
Game 37 terminated after 18 steps with reward 18.0
Game 38 terminated after 10 steps with reward 10.0
Game 39 terminated after 33 steps with reward 33.0
Game 40 terminated after 15 steps with reward 15.0
Game 41 terminated after 26 steps with reward 26.0
Game 42 terminated after 52 steps with reward 52.0
Game 43 terminated after 41 steps with reward 41.0
Game 44 terminated after 14 steps with reward 14.0
Game 45 terminated after 15 steps with reward 15.0
Game 46 terminated after 17 steps with reward 17.0
Game 47 terminated after 30 steps with reward 30.0
Game 48 terminated after 13 steps with reward 13.0
Game 49 terminated after 16 steps with reward 16.0
Game 50 terminated after 17 steps with reward 17.0
Game 51 terminated after 11 steps with reward 11.0
Game 52 terminated after 11 steps with reward 11.0
Game 53 terminated after 59 steps with reward 59.0
Game 54 terminated after 13 steps with reward 13.0
Game 55 terminated after 18 steps with reward 18.0
Game 56 terminated after 12 steps with reward 12.0
Game 57 terminated after 61 steps with reward 61.0
Game 58 terminated after 32 steps with reward 32.0
Game 59 terminated after 22 steps with reward 22.0
Game 60 terminated after 57 steps with reward 57.0
Game 61 terminated after 18 steps with reward 18.0
Game 62 terminated after 18 steps with reward 18.0
Game 63 terminated after 19 steps with reward 19.0
Game 64 terminated after 39 steps with reward 39.0
Game 65 terminated after 19 steps with reward 19.0
Game 66 terminated after 13 steps with reward 13.0
Game 67 terminated after 14 steps with reward 14.0
Game 68 terminated after 13 steps with reward 13.0
Game 69 terminated after 15 steps with reward 15.0
Game 70 terminated after 9 steps with reward 9.0
Game 71 terminated after 16 steps with reward 16.0
Game 72 terminated after 17 steps with reward 17.0
Game 73 terminated after 37 steps with reward 37.0
Game 74 terminated after 25 steps with reward 25.0
Game 75 terminated after 12 steps with reward 12.0
Game 76 terminated after 17 steps with reward 17.0
Game 77 terminated after 20 steps with reward 20.0
Game 78 terminated after 17 steps with reward 17.0
Game 79 terminated after 24 steps with reward 24.0
Game 80 terminated after 17 steps with reward 17.0
Game 81 terminated after 24 steps with reward 24.0
Game 82 terminated after 17 steps with reward 17.0
Game 83 terminated after 29 steps with reward 29.0
Game 84 terminated after 39 steps with reward 39.0
Game 85 terminated after 22 steps with reward 22.0
Game 86 terminated after 13 steps with reward 13.0
Game 87 terminated after 41 steps with reward 41.0
Game 88 terminated after 30 steps with reward 30.0
Game 89 terminated after 15 steps with reward 15.0
Game 90 terminated after 94 steps with reward 94.0
Game 91 terminated after 11 steps with reward 11.0
Game 92 terminated after 14 steps with reward 14.0
Game 93 terminated after 12 steps with reward 12.0
Game 94 terminated after 27 steps with reward 27.0
Game 95 terminated after 35 steps with reward 35.0
Game 96 terminated after 19 steps with reward 19.0
Game 97 terminated after 32 steps with reward 32.0
Game 98 terminated after 29 steps with reward 29.0
Game 99 terminated after 17 steps with reward 17.0
Average reward 23.11
Replay Memory
We'll be using experience replay memory for training our DQN. It stores the transitions that the agent observes, allowing us to reuse this data later. By sampling from it randomly, the transitions that build up a batch are decorrelated. It has been shown that this greatly stabilizes and improves the DQN training procedure.
For this, we're going to need two classes:
• Transition - a named tuple representing a single transition in our environment. It essentially maps (state, action) pairs to their (next_state, reward) result, with the state being the screen difference image as described later on.
• ReplayMemory - a cyclic buffer of bounded size that holds the transitions observed recently. It also implements a .sample() method for selecting a random batch of transitions for training.
In [10]:
# the structure of the transition that we store
Transition = namedtuple('Transition', ('state', 'action', 'next_state', 'reward'))
# stores the Experience Replay buffer
class ReplayMemory(object):
def __init__(self, capacity):
self.cap = capacity
self.memory = deque([],maxlen=capacity)
def push(self, *args):
self.memory.append(Transition(*args))
def sample(self, batch_size):
return random.sample(self.memory, batch_size)
def __len__(self):
return len(self.memory)
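For example, the buffer can be exercised like this (the tensor shapes below are placeholders, not the ones used later in the tutorial):

In [ ]:
memory = ReplayMemory(10000)
dummy_state = torch.zeros(1, 3, 40, 90)   # stand-in for a screen-difference patch
dummy_action = torch.tensor([[1]])
dummy_reward = torch.tensor([1.0])
memory.push(dummy_state, dummy_action, dummy_state, dummy_reward)
if len(memory) >= 1:
    transitions = memory.sample(1)          # a list of Transition namedtuples
    batch = Transition(*zip(*transitions))  # Transition of batches, handy for training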
Now, let's define our model. But first, let's quickly recap what a DQN is.
DQN algorithm
Our environment is deterministic, so all equations presented here are also formulated deterministically for the sake of simplicity. In the reinforcement learning literature, they would also contain expectations over stochastic transitions in the environment.
Our aim will be to train a policy that tries to maximize the discounted, cumulative reward $R_{t_0} = \sum_{t=t_0}^{\infty} \gamma^{t-t_0} r_t$, where $R_{t_0}$ is also known as the return. The discount, $\gamma$, should be a constant between $0$ and $1$ that ensures the sum converges. It makes rewards from the uncertain far future less important for our agent than the ones in the near future that it can be fairly confident about.
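For a concrete feel (a tiny illustration, not from the tutorial), the return for the reward sequence [1, 1, 1] with $\gamma = 0.999$ is:
gamma = 0.999
rewards = [1.0, 1.0, 1.0]
R = sum(gamma ** t * r for t, r in enumerate(rewards))  # 1 + 0.999 + 0.999**2
print(R)  # ~2.997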
The main idea behind Q-learning is that if we had a function $Q^*: \mathrm{State} \times \mathrm{Action} \to \mathbb{R}$, that could tell us what our return would be, if we were to take an action in a given state, then we could easily construct a policy that maximizes our rewards:
$\pi^*(s) = \operatorname{argmax}_{a} Q^*(s, a)$ (1)
However, we don't know everything about the world, so we don't have access to $Q^*$. But, since neural networks are universal function approximators, we can simply create one and train it to resemble $Q^*$.
For our training update rule, we'll use the fact that every $Q$ function for some policy obeys the Bellman equation:
$Q^{\pi}(s, a) = r + \gamma \, Q^{\pi}(s', \pi(s'))$ (2)
The difference between the two sides of the equality is known as the temporal difference error, $\delta$:
$\delta = Q(s, a) - \left( r + \gamma \max_{a} Q(s', a) \right)$ (3)
To minimise this error, we will use the Smooth L1 loss, a.k.a. the Huber loss (https://en.wikipedia.org/wiki/Huber_loss). The Huber loss acts like the mean squared error when the error is small, but like the mean absolute error when the error is large - this makes it more robust to outliers when the estimates of $Q$ are very noisy. We calculate this over a batch of transitions, $B$, sampled from the replay memory:
$\mathcal{L} = \frac{1}{|B|} \sum_{(s, a, s', r) \in B} \mathcal{L}(\delta)$ (4)
$\text{where} \quad \mathcal{L}(\delta) = \begin{cases} \frac{1}{2} \delta^2 & \text{for } |\delta| \le 1, \\ |\delta| - \frac{1}{2} & \text{otherwise.} \end{cases}$ (5)
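As a sanity check (an illustrative snippet, not in the original notebook), nn.SmoothL1Loss with its default beta of 1 reproduces the piecewise definition above:
pred = torch.tensor([0.0, 0.0])
target = torch.tensor([0.5, 3.0])                      # |delta| = 0.5 and 3.0
loss = nn.SmoothL1Loss(reduction='none')(pred, target)
print(loss)  # tensor([0.1250, 2.5000]) = 0.5*0.5**2 and 3.0 - 0.5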
Q-network
Our model will be a convolutional neural network that takes in the difference between the current and previous screen patches. It has two outputs, representing $Q(s, \mathrm{left})$ and $Q(s, \mathrm{right})$ (where $s$ is the input to the network). In effect, the network is trying to predict the expected return of taking each action given the current input.
In [11]:
class DQN(nn.Module):
def __init__(self, h, w, output_size):
super(DQN, self).__init__()
self.conv1 = nn.Conv2d(3, 16, kernel_size=5, stride=2)
self.bn1 = nn.BatchNorm2d(16)
self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2)
self.bn2 = nn.BatchNorm2d(32)
self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2)
self.bn3 = nn.BatchNorm2d(32)
# Number of Linear input connections depends on output of conv2d layers
# and therefore the input image size, so compute it.
def conv2d_size_out(size, kernel_size = 5, stride = 2):
return (size - (kernel_size - 1) - 1) // stride + 1
convw = conv2d_size_out(conv2d_size_out(conv2d_size_out(w)))
convh = conv2d_size_out(conv2d_size_out(conv2d_size_out(h)))
linear_input_size = convw * convh * 32
self.lin1 = nn.Linear(linear_input_size, 50)
self.lin2 = nn.Linear(50, output_size)
# Called with either one element to determine next action, or a batch
# during optimization. Returns tensor([[left0exp,right0exp]...]).
def forward(self, x):
x = x.to(device)
x = F.relu(self.bn1(self.conv1(x)))
x = F.relu(self.bn2(self.conv2(x)))
x = F.relu(self.bn3(self.conv3(x)))
x = F.relu(self.lin1(x.view(x.size(0), -1)))
return self.lin2(x)
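A quick shape check (illustrative, assuming the roughly 3x40x90 input mentioned later and the device defined in the setup cells):
demo_net = DQN(40, 90, 2).to(device)
demo_out = demo_net(torch.zeros(2, 3, 40, 90))  # batch of 2 dummy screens
print(demo_out.shape)  # torch.Size([2, 2]) -> one Q-value per action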
Preprocess the Input
The input image from the video game display is larger than necessary, and processing it directly would be more expensive, so we trim it down.
The code below provides utilities for extracting and processing rendered images from the environment. It uses the torchvision package, which makes it easy to compose image transforms. Once you run the cell it will display an example patch that it extracted.
In [12]:
resize = T.Compose([T.ToPILImage(),
T.Resize(40, interpolation=Image.CUBIC),
T.ToTensor()])
def get_image_center(screen_width):
world_width = env.x_threshold * 2
scale = screen_width / world_width
return int(env.state[0] * scale + screen_width / 2.0) # MIDDLE OF CART
def get_screen():
# Returned screen requested by gym is 400x600x3, but is sometimes larger
# such as 800x1200x3. Transpose it into torch order (CHW).
screen = env.render().transpose((2, 0, 1))
# Cart is in the lower half, so strip off the top and bottom of the screen
_, screen_height, screen_width = screen.shape
screen = screen[:, int(screen_height*0.4):int(screen_height * 0.8)]
view_width = int(screen_width * 0.6)
cart_location = get_image_center(screen_width)
if cart_location < view_width // 2:
slice_range = slice(view_width)
elif cart_location > (screen_width - view_width // 2):
slice_range = slice(-view_width, None)
else:
slice_range = slice(cart_location - view_width // 2,
cart_location + view_width // 2)
# Strip off the edges, so that we have a square image centered on a cart
screen = screen[:, :, slice_range]
# Convert to float, rescale, convert to torch tensor
# (this doesn't require a copy)
screen = np.ascontiguousarray(screen, dtype=np.float32) / 255
screen = torch.from_numpy(screen)
# Resize, and add a batch dimension (BCHW)
return resize(screen).unsqueeze(0)
env.reset()
plt.figure()
plt.imshow(get_screen().cpu().squeeze(0).permute(1, 2, 0).numpy(), interpolation='none')
#plt.imshow(env.render(mode='rgb_array'))
plt.title('Example of extracted screen')
plt.show()
/opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:2: DeprecationWarning: CUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
/opt/conda/lib/python3.7/site-packages/torchvision/transforms/transforms.py:333: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
"Argument interpolation should be of type InterpolationMode instead of int. "
Training
Hyperparameters and utilities
This cell instantiates our model and its optimizer, and defines some utilities:
• select_action - will select an action accordingly to an epsilon greedy policy. Simply put, we'll sometimes use our model for choosing the action, and sometimes we'll just sample one uniformly. The probability of choosing a random action will start at EPS_START and will decay exponentially towards EPS_END. EPS_DECAY controls the rate of the decay.
• plot_durations - a helper for plotting the durations of episodes, along with an average over the last 100 episodes (the measure used in the official evaluations). The plot will be underneath the cell containing the main training loop, and will update after every episode.
In [13]:
def select_action(state, policy=None, train=True):
    global steps_done
    sample = random.random()
    eps_threshold = EPS_END + (EPS_START - EPS_END) * math.exp(-1. * steps_done / EPS_DECAY)
    steps_done += 1
    with torch.no_grad():
        # t.max(1) will return largest column value of each row.
        # second column on max result is index of where max element was
        # found, so we pick the action with the larger expected reward.
        action = policy(state).max(1)[1].view(1, 1)
    if train and sample <= eps_threshold:
        # explore: return a uniformly random action
        return torch.tensor([[random.randrange(n_actions)]], device=device, dtype=torch.long)
    # exploit (and always when train=False): return the greedy action
    return action
episode_durations = []
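The plot_durations helper described above is not defined in this cell; a minimal sketch consistent with that description (an assumed reconstruction, not the notebook's original code) would be:
def plot_durations():
    # plot episode durations plus a running 100-episode average
    plt.figure(2)
    plt.clf()
    durations_t = torch.tensor(episode_durations, dtype=torch.float)
    plt.title('Training...')
    plt.xlabel('Episode')
    plt.ylabel('Duration')
    plt.plot(durations_t.numpy())
    if len(durations_t) >= 100:
        means = durations_t.unfold(0, 100, 1).mean(1).view(-1)
        means = torch.cat((torch.zeros(99), means))
        plt.plot(means.numpy())
    plt.pause(0.001)  # give the figure time to update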
Training loop
Finally, the code for training our model.
Here, you can find an optimize_model function that performs a single step of the optimization. It first samples a batch, concatenates all the tensors into a single one, computes $Q(s_t, a_t)$ and $V(s_{t+1}) = \max_{a} Q(s_{t+1}, a)$, and combines them into our loss. By definition we set $V(s) = 0$ if $s$ is a terminal state. We also use a target network to compute $V(s_{t+1})$ for added stability. The target network has its weights kept frozen most of the time, but is updated with the policy network's weights every so often. This is usually a set number of steps but we shall use episodes for simplicity.
In [14]:
def optimize_model():
if len(memory) < BATCH_SIZE:
return
transitions = memory.sample(BATCH_SIZE)
# Transpose the batch (see https://stackoverflow.com/a/19343/3343043 for
# detailed explanation). This converts batch-array of Transitions
# to Transition of batch-arrays.
batch = Transition(*zip(*transitions))
# Compute a mask of non-final states and concatenate the batch elements
# (a final state would've been the one after which simulation ended)
non_final_mask = torch.tensor(tuple(map(lambda s: s is not None,
batch.next_state)), device=device, dtype=torch.bool)
non_final_next_states = torch.cat([s for s in batch.next_state
if s is not None])
state_batch = torch.cat(batch.state)
action_batch = torch.cat(batch.action)
reward_batch = torch.cat(batch.reward)
# Compute Q(s_t, a) - the model computes Q(s_t), then we select the
# columns of actions taken. These are the actions which would've been taken
# for each batch state according to policy_net
state_action_values = policy_net(state_batch).gather(1, action_batch)
# Compute V(s_{t+1}) for all next states.
# Expected values of actions for non_final_next_states are computed based
# on the "older" target_net; selecting their best reward with max(1)[0].
# This is merged based on the mask, such that we'll have either the expected
# state value or 0 in case the state was final.
next_state_values = torch.zeros(BATCH_SIZE, device=device)
next_state_values[non_final_mask] = target_net(non_final_next_states).max(1)[0].detach()
# Compute the expected Q values
expected_state_action_values = (next_state_values * GAMMA) + reward_batch
# Compute Huber loss
criterion = nn.SmoothL1Loss()
loss = criterion(state_action_values, expected_state_action_values.unsqueeze(1))
# Optimize the model
optimizer.zero_grad()
loss.backward()
for param in policy_net.parameters():
    # clamp gradients to [-1, 1] for stability
    param.grad.data.clamp_(-1, 1)
optimizer.step()
Below, you can find the main training loop. At the beginning we reset the environment and initialize the state Tensor. Then, we sample an action, execute it, observe the next screen and the reward (always 1), and optimize our model once. When the episode ends (our model fails), we restart the loop.
Below, num_episodes is set small. You should download the notebook and run a lot more episodes, such as 300+, for meaningful duration improvements.
In [15]:
BATCH_SIZE = 128
GAMMA = 0.999
EPS_START = 0.9
EPS_END = 0.05
EPS_DECAY = 200
TARGET_UPDATE = 10
# Get screen size so that we can initialize layers correctly based on shape
# returned from AI gym. Typical dimensions at this point are close to 3x40x90
# which is the result of a clamped and down-scaled render buffer in get_screen()
init_screen = get_screen()
_, _, screen_height, screen_width = init_screen.shape
# Get number of actions from gym action space
n_actions = env.action_space.n
policy_net = DQN(screen_height, screen_width, n_actions).to(device)
target_net = DQN(screen_height, screen_width, n_actions).to(device)
target_net.load_state_dict(policy_net.state_dict())  # start target as a copy of policy
target_net.eval()
optimizer = optim.RMSprop(policy_net.parameters())
memory = ReplayMemory(10000)
steps_done = 0
num_episodes = 200
for i_episode in range(num_episodes+1):
# Initialize the environment and state
env.reset()
last_screen = get_screen()
current_screen = get_screen()
state = current_screen - last_screen
for t in count():
# Select and perform an action
action = select_action(state, policy_net)
next_state, reward, terminated, truncated, info = env.step(action.item())
done = terminated or truncated
reward = torch.tensor([reward], device=device)
# Observe new state
last_screen = current_screen
current_screen = get_screen()
if not done:
next_state = current_screen - last_screen
else:
next_state = None
# Store the transition in memory
memory.push(state, action, next_state, reward)
# Move to the next state
state = next_state
# Perform one step of the optimization (on the policy network)
optimize_model()
if done:
episode_durations.append(t + 1)
break
# Update the target network, copying all weights and biases in DQN
if i_episode % TARGET_UPDATE == 0:
    target_net.load_state_dict(policy_net.state_dict())
    print("Completed Episode", i_episode)
if i_episode % 50 == 0:
print("Playing a test game after episode ", i_episode)
frames = []
env.reset()
last_screen = get_screen()
current_screen = get_screen()
state = current_screen - last_screen
total_reward = 0
for i in range(1000):
if i == 0:
action = env.action_space.sample()
action = select_action(state, policy_net, train=False)
pseudo_state, reward, terminated, truncated, info = env.step(action.item())
done = terminated or truncated
#reward = torch.tensor([reward], device=device)
total_reward += reward
# Observe new state
last_screen = current_screen
current_screen = get_screen()
if not done:
state = current_screen - last_screen
else:
break
frame = env.render()
frames.append(frame)
if done:
break
print("Game terminated after", len(frames), "steps with reward", total_reward)
print('Complete')
env.render()
env.close()
Completed Episode 0
Playing a test game after episode 0
Game terminated after 23 steps with reward 24.0
Completed Episode 10
Completed Episode 20
Completed Episode 30
Completed Episode 40
Completed Episode 50
Playing a test game after episode 50
Game terminated after 17 steps with reward 18.0
Completed Episode 60
Completed Episode 70
Completed Episode 80
Completed Episode 90
Completed Episode 100
Playing a test game after episode 100
Game terminated after 15 steps with reward 16.0
Completed Episode 110
Completed Episode 120
Completed Episode 130
Completed Episode 140
Completed Episode 150
Playing a test game after episode 150
Game terminated after 27 steps with reward 28.0
Completed Episode 160
Completed Episode 170
Completed Episode 180
Completed Episode 190
Completed Episode 200
Playing a test game after episode 200
Game terminated after 89 steps with reward 90.0
Complete
Play a game
In [16]:
sum_reward=0
for j in range(100):
env.reset()
frames = []
last_screen = get_screen()
current_screen = get_screen()
state = current_screen - last_screen
total_reward = 0
for i in range(500):
if i == 0:
action = env.action_space.sample()
action = select_action(state, policy_net, train=False)
pseudo_state, reward, terminated, truncated, info = env.step(action.item())
done = terminated or truncated
#reward = torch.tensor([reward], device=device)
total_reward += reward
# Observe new state
last_screen = current_screen
current_screen = get_screen()
if not done:
state = current_screen - last_screen
else:
break
frame = env.render()
frames.append(frame)
#print(i, action.item())
if done:
break
print("Game ", j , " terminated after", len(frames), "steps with reward", total_reward)
sum_reward += total_reward
print("Average reward", sum_reward/100)
Game 0 terminated after 88 steps with reward 89.0
Game 1 terminated after 80 steps with reward 81.0
Game 2 terminated after 84 steps with reward 85.0
Game 3 terminated after 87 steps with reward 88.0
Game 4 terminated after 93 steps with reward 94.0
Game 5 terminated after 107 steps with reward 108.0
Game 6 terminated after 88 steps with reward 89.0
Game 7 terminated after 83 steps with reward 84.0
Game 8 terminated after 97 steps with reward 98.0
Game 9 terminated after 76 steps with reward 77.0
Game 10 terminated after 85 steps with reward 86.0
Game 11 terminated after 91 steps with reward 92.0
Game 12 terminated after 102 steps with reward 103.0
Game 13 terminated after 86 steps with reward 87.0
Game 14 terminated after 103 steps with reward 104.0
Game 15 terminated after 92 steps with reward 93.0
Game 16 terminated after 93 steps with reward 94.0
Game 17 terminated after 116 steps with reward 117.0
Game 18 terminated after 97 steps with reward 98.0
Game 19 terminated after 100 steps with reward 101.0
Game 20 terminated after 101 steps with reward 102.0
Game 21 terminated after 88 steps with reward 89.0
Game 22 terminated after 95 steps with reward 96.0
Game 23 terminated after 96 steps with reward 97.0
Game 24 terminated after 101 steps with reward 102.0
Game 25 terminated after 86 steps with reward 87.0
Game 26 terminated after 88 steps with reward 89.0
Game 27 terminated after 88 steps with reward 89.0
Game 28 terminated after 87 steps with reward 88.0
Game 29 terminated after 102 steps with reward 103.0
Game 30 terminated after 94 steps with reward 95.0
Game 31 terminated after 82 steps with reward 83.0
Game 32 terminated after 108 steps with reward 109.0
Game 33 terminated after 97 steps with reward 98.0
Game 34 terminated after 103 steps with reward 104.0
Game 35 terminated after 79 steps with reward 80.0
Game 36 terminated after 47 steps with reward 48.0
Game 37 terminated after 92 steps with reward 93.0
Game 38 terminated after 110 steps with reward 111.0
Game 39 terminated after 82 steps with reward 83.0
Game 40 terminated after 80 steps with reward 81.0
Game 41 terminated after 103 steps with reward 104.0
Game 42 terminated after 101 steps with reward 102.0
Game 43 terminated after 86 steps with reward 87.0
Game 44 terminated after 89 steps with reward 90.0
Game 45 terminated after 94 steps with reward 95.0
Game 46 terminated after 85 steps with reward 86.0
Game 47 terminated after 97 steps with reward 98.0
Game 48 terminated after 88 steps with reward 89.0
Game 49 terminated after 82 steps with reward 83.0
Game 50 terminated after 100 steps with reward 101.0
Game 51 terminated after 90 steps with reward 91.0
Game 52 terminated after 93 steps with reward 94.0
Game 53 terminated after 83 steps with reward 84.0
Game 54 terminated after 86 steps with reward 87.0
Game 55 terminated after 100 steps with reward 101.0
Game 56 terminated after 93 steps with reward 94.0
Game 57 terminated after 101 steps with reward 102.0
Game 58 terminated after 82 steps with reward 83.0
Game 59 terminated after 91 steps with reward 92.0
Game 60 terminated after 106 steps with reward 107.0
Game 61 terminated after 94 steps with reward 95.0
Game 62 terminated after 82 steps with reward 83.0
Game 63 terminated after 91 steps with reward 92.0
Game 64 terminated after 105 steps with reward 106.0
Game 65 terminated after 97 steps with reward 98.0
Game 66 terminated after 102 steps with reward 103.0
Game 67 terminated after 94 steps with reward 95.0
Game 68 terminated after 103 steps with reward 104.0
Game 69 terminated after 88 steps with reward 89.0
Game 70 terminated after 87 steps with reward 88.0
Game 71 terminated after 86 steps with reward 87.0
Game 72 terminated after 93 steps with reward 94.0
Game 73 terminated after 113 steps with reward 114.0
Game 74 terminated after 95 steps with reward 96.0
Game 75 terminated after 102 steps with reward 103.0
Game 76 terminated after 93 steps with reward 94.0
Game 77 terminated after 83 steps with reward 84.0
Game 78 terminated after 98 steps with reward 99.0
Game 79 terminated after 90 steps with reward 91.0
Game 80 terminated after 87 steps with reward 88.0
Game 81 terminated after 95 steps with reward 96.0
Game 82 terminated after 85 steps with reward 86.0
Game 83 terminated after 92 steps with reward 93.0
Game 84 terminated after 98 steps with reward 99.0
Game 85 terminated after 100 steps with reward 101.0
Game 86 terminated after 89 steps with reward 90.0
Game 87 terminated after 92 steps with reward 93.0
Game 88 terminated after 83 steps with reward 84.0
Game 89 terminated after 92 steps with reward 93.0
Game 90 terminated after 83 steps with reward 84.0
Game 91 terminated after 87 steps with reward 88.0
Game 92 terminated after 107 steps with reward 108.0
Game 93 terminated after 94 steps with reward 95.0
Game 94 terminated after 86 steps with reward 87.0
Game 95 terminated after 88 steps with reward 89.0
Game 96 terminated after 142 steps with reward 143.0
Game 97 terminated after 104 steps with reward 105.0
Game 98 terminated after 117 steps with reward 118.0
Game 99 terminated after 38 steps with reward 39.0
Average reward 93.59
In [17]:
frames = []
env.reset()
total_reward = 0
last_screen = get_screen()
current_screen = get_screen()
state = current_screen - last_screen
for i in range(500):
if i == 0:
action = env.action_space.sample()
action = select_action(state, policy_net, train=False)
pseudo_state, reward, terminated, truncated, info = env.step(action.item())
done = terminated or truncated
#reward = torch.tensor([reward], device=device)
total_reward += reward
# Observe new state
last_screen = current_screen
current_screen = get_screen()
if not done:
state = current_screen - last_screen
else:
break
frame = env.render()
frames.append(frame)
if done:
break
print("Game terminated after", len(frames), "steps with reward", total_reward)
save_frames_as_gif(frames, path='./', filename='RL_agent.gif')
Game terminated after 93 steps with reward 94.0
In [18]:
HTML('<img src="./RL_agent.gif">')
Out[18]: | 2022-12-01 16:46:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 22, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33126184344291687, "perplexity": 7954.893100497657}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710829.5/warc/CC-MAIN-20221201153700-20221201183700-00001.warc.gz"} |
https://ai.stackexchange.com/tags/breadth-first-search/hot | # Tag Info
16
Dennis Soemers' answer is correct: you should use a HashSet or a similar structure to keep track of visited states in BFS Graph Search. However, it doesn't quite answer your question. You're right, that in the worst case, BFS will then require you to store 16! nodes. Even though the insertion and check times in the set will be O(1), you'll still need an ...
8
You can use a set (in the mathematical sense of the word, i.e. a collection that cannot contain duplicates) to store states that you have already seen. The operations you'll need to be able to perform on this are: inserting elements testing if elements are already in there Pretty much every programming language should already have support for a data ...
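A minimal Python sketch of what this answer describes (the is_goal and neighbors functions are illustrative placeholders; states must be hashable, e.g. tuples for the 15-puzzle):
from collections import deque

def bfs(start, is_goal, neighbors):
    visited = {start}            # set: O(1) insert and membership test
    frontier = deque([start])    # FIFO queue of states to expand
    while frontier:
        state = frontier.popleft()
        if is_goal(state):
            return state
        for nxt in neighbors(state):
            if nxt not in visited:   # skip already-seen states
                visited.add(nxt)
                frontier.append(nxt)
    return None                  # goal unreachable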
8
While the answers given are generally true, a BFS in the 15-puzzle is not only quite feasible, it was done in 2005! The paper that describes the approach can be found here: http://www.aaai.org/Papers/AAAI/2005/AAAI05-219.pdf A few key points: In order to do this, external memory was required - that is the BFS used the hard drive for storage instead of RAM....
3
Ironically the answer is "use whatever system you want." A hashSet is a good idea. However, it turns out that your concerns over memory usage are unfounded. BFS is so bad at these sorts of problems, that it resolves this issue for you. Consider that your BFS requires you to keep a stack of unprocessed states. As you progress into the puzzle, the states ...
3
The primary reason is that Breadth-First Search requires much more memory (and this probably also makes it a little bit slower in practice, due to time required to allocate memory, jumping around in memory rather than working with what's still in the CPU's caches, etc.). Breadth-First Search needs memory to remember "where it was" in all the different ...
2
Welcome to AI.SE @GundamOfOasis! Your intuition is right: this is fundamentally a problem for combinatorial search. You're also right that problems are created by the fact that not every move is valid in every state. To fix this, you need to add a function that can determine whether a given state is valid or not, in addition to the usual function that checks ...
2
The only general situation that comes to my mind where BFS could be preferred over A* is when your graph is unweighted and the heuristic function is $h(n) = 0, \forall n \in V$. However, in that case, A* (which is equivalent to UCS) behaves like BFS (except for the goal test: see section 3.4.2 of this book), i.e. it will first expand nodes at level $l$, then ...
1
There is an inherent assumption in heuristic search that the heuristic function points you in the right direction. A* largely depends on how good the heuristic function is. Two nice properties for the heuristic function are for it to be admissible and consistent. If the latter stands, I can't think of any case where BFS would outperform A*. However, this ...
1
Approaches to the Game
It is true that the board has $16!$ possible states. It is also true that using a hash set is what students learn in first-year algorithms courses to avoid redundancy and endless looping when searching a graph that may contain graph cycles. However, those trivial facts are not pertinent if the goal is to complete the puzzle in the ...
1
In general, the process of modelling a problem as a search problem consists in creating a graph which contains nodes, which represent the possible states in your problem, and edges, which represent the relations between these states (that is, you will have an edge between nodes $A$ and $B$ if it is possible to go from state $A$ to state $B$, and vice-versa, ...
1
We use the LIFO queue, i.e. stack, for implementation of the depth-first search algorithm because depth-first search always expands the deepest node in the current frontier of the search tree. The search proceeds immediately to the deepest level of the search tree, where the nodes have no successors. As those nodes are expanded, they are dropped from the ...
1
According to this article Breadth First Search (BFS) searches breadth-wise in the problem space. Breadth-First search is like traversing a tree where each node is a state which may a be a potential candidate for solution. It expands nodes from the root of the tree and then generates one level of the tree at a time until a solution is found. It is very ...
1
This is well covered in the corresponding chapters of Russell & Norvig (Ch. 3 & 4). It also depends on the distinction between TREE-SEARCH and GRAPH-SEARCH. First, note that Breadth-first search also can't handle cost functions that vary between nodes! Breadth-first search only cares about the number of moves needed to reach a state, not the total ...
Only top voted, non community-wiki answers of a minimum length are eligible | 2021-06-13 20:16:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6243297457695007, "perplexity": 466.25421683843524}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487610841.7/warc/CC-MAIN-20210613192529-20210613222529-00531.warc.gz"} |
https://zbmath.org/?q=an:0736.42015 | # zbMATH — the first resource for mathematics
Two weighted norm inequalities for Riesz potentials and uniform $$L^p$$-weighted Sobolev inequalities. (English) Zbl 0736.42015
The author proves a two-weighted norm inequality for the fractional maximal operator and the corresponding inequality for Riesz potentials of fractional order. These results also yield some inequalities coupling a weighted $$L^p$$-norm of a function and a weighted $$L^p$$-norm of its derivative.
##### MSC:
42B25 Maximal functions, Littlewood-Paley theory
46E35 Sobolev spaces and other spaces of “smooth” functions, embedding theorems, trace theorems
31B15 Potentials and capacities, extremal length and related notions in higher dimensions
31B35 Connections of harmonic functions with differential equations in higher dimensions
Full Text: | 2021-10-20 07:43:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7398826479911804, "perplexity": 1602.9663155594403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585302.91/warc/CC-MAIN-20211020055136-20211020085136-00301.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-6th-edition/chapter-5-systems-of-equations-and-inequalities-exercise-set-5-1-page-528/53 | ## College Algebra (6th Edition)
$y=x-4$ $y=-\displaystyle \frac{1}{3}x+4$
A graphical solution to a system of linear equations is a point of intersection of two lines. From the graph, we have two lines passing through the point (6,2), with equations $x-y=4\qquad$and$\quad x+3y=12$ Slope-intercept form: solve each equation for $y$: $\left[\begin{array}{lll} x-y=4 & & x+3y=12\\ -y=-x+4 & & 3y=-x+12\\ y=x-4 & & y=-\frac{1}{3}x+4 \end{array}\right]$ The system: $y=x-4$ $y=-\displaystyle \frac{1}{3}x+4$ | 2018-06-19 20:32:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6497171521186829, "perplexity": 253.59455675439546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863119.34/warc/CC-MAIN-20180619193031-20180619213031-00335.warc.gz"} |
https://cp3.irmp.ucl.ac.be/projects/madgraph/wiki/DeadCone?action=diff&version=4 | # Changes between Version 3 and Version 4 of DeadCone
Ignore:
Timestamp:
04/19/12 08:54:51 (8 years ago)
Comment:
--
### Legend:
Unmodified
v3 Compute the differential cross section for the case of massive final state using a program for symbolic calculations (such as Mathematica+FeynCalc) and compare your result with %$\frac{1}{\sigma^{LO}} \frac{d^2\sigma}{dx_1dx_2}= \frac{1}{\beta} C_F \frac{\alpha_S}{2\pi} \left[ \frac{2(x_1+x_2-1-\rho/2)}{(1-x_1)(1-x_2)} -\frac{\rho}{2} \left( \frac{1}{(1-x_1)^2}+ \frac{1}{(1-x_2)^2}\right) \left. + \frac{1}{1+\rho/2} \frac{(1-x_1)^2+(1-x_2)^2}{(1-x_1)(1-x_2)}\right]\, P, \right.$ (1) $\frac{1}{\sigma^{LO}} \frac{d^2\sigma}{dx_1dx_2}= \frac{1}{\beta} C_F \frac{\alpha_S}{2\pi} \left[ \frac{2(x_1+x_2-1-\rho/2)}{(1-x_1)(1-x_2)} -\frac{\rho}{2} \left( \frac{1}{(1-x_1)^2}+ \frac{1}{(1-x_2)^2}\right) \left. + \frac{1}{1+\rho/2} \frac{(1-x_1)^2+(1-x_2)^2}{(1-x_1)(1-x_2)}\right]\, P, \right.$ (1) where %$\rho=\frac{4 m^2}{s}\le 1\,,\qquad \beta=\sqrt{1-\rho}$, $\rho=\frac{4 m^2}{s}\le 1\,,\qquad \beta=\sqrt{1-\rho}$, and %$\sigma^{LO}= N_c (\sum_{f} Q_f^2) 4 \pi \alpha^2/(3s)$. $\sigma^{LO}= N_c (\sum_{f} Q_f^2) 4 \pi \alpha^2/(3s)$. ==== 2. ==== | 2020-05-27 07:39:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7213209867477417, "perplexity": 3012.053580249901}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347392141.7/warc/CC-MAIN-20200527044512-20200527074512-00239.warc.gz"} |
https://www.physicsforums.com/threads/what-is-tan-th-in-this-diagram.783893/ | # What is tan θ in this diagram?
< Moderator Note -- Thread moved from the technical math forums (that's why the HH Template is not shown) >
It's supposed to be a simple problem. But I can't for the life of me figure out how to go about it. I managed to find out cos θ using the cosine rule, but it is a very long expression and looks to be going in a direction opposite of the solution. cos θ is (2x^2 + 2xy + y^2 + x*sqrt(2) - y) / (2 * (2x^2 + 2xy + y^2) * (x*sqrt2)).
Any help on this would be appreciated.
Can you express tan theta in terms of two other angles you know the tangents for?
Filip Larsen
Gold Member
Perhaps you can form a right-angled triangle involving theta and the two sides? Hint: try draw the full rectangle and see what that brings you. If you can't use angle theta directly then perhaps some other angle easily derived from it ...
It's supposed to be a simple problem. But I can't for the life of me figure out how to go about it. I managed to find out cos θ using the cosine rule, but it is a very long expression and looks to be going in a direction opposite of the solution. cos θ is (2x^2 + 2xy + y^2 + x*sqrt(2) - y) / (2 * (2x^2 + 2xy + y^2) * (x*sqrt2)).
Any help on this would be appreciated.
From intersection point of pieces "x" and "y" put a normal "n" to a hypotenuze of a big triangle. Then you have:
n : x√2 = sin θ , n : y = sin α
From this you have: sin θ = (y⋅sin α)/(x√2)
Knowing that sin²α = x²/(x²+(x+y)²) and that 1+ctg²θ = 1/sin²θ, you should obtain the correct result (A).
This does seem a bit long winded. Can you expand tan(a-b) directly in terms of tan a and tan b?
This does seem a bit long winded.
1 min for drawing, 2 min for calculation, 3 min for Latex. This is how long it takes when derived from first principles.
Filip Larsen
Gold Member
By inspection of the diagram one can establish ##\tan(\pi/4-\theta) = x/(x+y)## from which it is easy to expand and solve for ##\tan(\theta)## (but here left as an exercise for the original poster).
zoki85, that is very neatly done. Turns out we didn't need the cosine rule at all.
Filip Larsen, yes it is established that tan (45 - θ) = x / ( x + y ). But after expansion, we are left with (1 - tan θ) / (1 + tan θ ) using this formula...
Thanks.
Mark44
Mentor
zoki85, that is very neatly done. Turns out we didn't need the cosine rule at all.
Filip Larsen, yes it is established that tan (45 - θ) = x / ( x + y ). But after expansion, we are left with (1 - tan θ) / (1 + tan θ ) using this formula
We are not "left with" (1 - tan θ) / (1 + tan θ ) -- we are left with an equation whose right side is this. Write the whole equation and solve it for tan θ.
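For completeness, one possible route following that hint (a sketch, not the thread's posted answer): with ##t = \tan\theta##, the equation ##\frac{1 - t}{1 + t} = \frac{x}{x + y}## gives ##(1 - t)(x + y) = x(1 + t)##, so ##y = t(2x + y)## and hence ##\tan\theta = \frac{y}{2x + y}##.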
PsychoMessiah said: | 2020-12-05 18:53:02 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8098711371421814, "perplexity": 665.147051728886}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141748276.94/warc/CC-MAIN-20201205165649-20201205195649-00138.warc.gz"} |
http://mathhelpforum.com/trigonometry/18066-trig-question.html | # Math Help - trig question
1. ## trig question
if i have cos theta = 0.4
what are the values of
cos (180-theta)
cos (360-theta)
i solve like this
cos.4 inverse = 66.42
therefore cos ( 180-66.42)
i have cos *113.57 is this answer or must i make multiplication which gets me -.4 ??
2. Originally Posted by gregorio
if i have cos theta = 0.4
what are the values of
cos (180-theta)
cos (360-theta)
i solve like this
cos.4 inverse = 66.42
therefore cos ( 180-66.42)
i have cos *113.57 is this answer or must i make multiplication which gets me -.4 ??
there are several ways to approach this
you can note that:
$\cos (180 - \theta ) = - \cos \theta$
and
$\cos (360 - \theta ) = \cos \theta$
OR
you can do it the way you did, and you would get the same answer
OR
you can use the double angle formula:
$\cos (A - B) = \cos A \cos B + \sin A \sin B$ ........which, by the way, is how we know the first two formulas i gave you
and, by the way, it is NOT cos * 113.57, it is cos(113.57). the cosine function is an argument, you cannot treat it like a term unto itself and multiply it with something | 2015-07-02 18:54:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9663962125778198, "perplexity": 2704.7494054859276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095668.34/warc/CC-MAIN-20150627031815-00129-ip-10-179-60-89.ec2.internal.warc.gz"} |
https://socratic.org/questions/what-is-the-orthocenter-of-a-triangle-with-corners-at-9-3-6-9-and-2-4 | # What is the orthocenter of a triangle with corners at (9 ,3 ), (6 ,9 ), and (2 ,4 )?
Aug 3, 2018
Ortho-centre coordinates: O(72/13, 75/13)
#### Explanation:
$A \left(9 , 3\right) , B \left(6 , 9\right) , C \left(2 , 4\right)$
Slope of $\overline{A B} = {m}_{A B} = \frac{{y}_{B} - {y}_{A}}{{x}_{B} - {x}_{A}} = \frac{9 - 3}{6 - 9} = - 2$
Slope of $\overline{C F} = {m}_{C F} = - \frac{1}{{m}_{A B}} = - \frac{1}{- 2} = \frac{1}{2}$
Equation of $\overline{C F}$ is $y - 4 = \frac{1}{2} \left(x - 2\right)$
$2 y - x = 6$ Eqn (1)
Slope of $\overline{A C} = {m}_{A C} = \frac{{y}_{C} - {y}_{A}}{{x}_{C} - {x}_{A}} = \frac{4 - 3}{2 - 9} = - \frac{1}{7}$
Slope of $\overline{B E} = {m}_{B E} = - \frac{1}{{m}_{A C}} = - \frac{1}{- \frac{1}{7}} = 7$
Equation of $\overline{B E}$ is $y - 9 = 7 \left(x - 6\right)$
$7 x - y = 33$ Eqn (2)
Solving Eqns (1) and (2), we get the ortho-centre coordinates $O \left(x , y\right)$
$\cancel{2 y} - x + 14 x - \cancel{2 y} = 6 + 66$
$x = \frac{72}{13}$
$y = \frac{150}{26} = \frac{75}{13}$
http://mathhelpforum.com/calculus/151473-proving-formula.html | # Math Help - Proving a formula ..
1. ## Proving a formula ..
Hello
Question:
Prove that if f&g are continous and inverse functions for each other and a&b are constant where b>a .. then :
$\int_a^b f(x) \, dx= b f(b) - a f(a) - \int_{f(a)}^{f(b)} g(x) \, dx$
My FAILED try:
Am thinking about a substitution which makes the f(x) be g(x)
So I substitute x=g(g(x)) ..
But this failed, since dx will be complicated
Any ideas?
2. Idea: Integration by parts.
3. I think it works for f increasing...
... but not decreasing...
I.e. subtracting as directed in your formula leaves the white region(s) inside the larger rectangle - which correspond(s) to
$\int_a^b f(x)\ \mathrm{d}x\$ in the increasing case only.
Edit: on the other hand...
Ah! Should have tried integration by parts before sounding off...
[Two balloon-calculus diagrams were embedded here as spoilers; the images are no longer available. Their key: straight continuous lines differentiate downwards (integrate up) with respect to x, and the straight dashed line does so with respect to the dashed balloon expression (the inner function of the composite which is subject to the chain rule). The diagrams illustrate lazy integration by parts, doing without u and v, and filling out the product rule from the bottom left corner instead of, as would happen in differentiation, the top one.]
Still bothered by my graphs, though...
Late edit: Thanks for exhuming this, BayernMunich, but the less said the better! I did spot my VERY SILLY graph error in the end!
4. Originally Posted by BayernMunich
Hello
Question:
Prove that if f&g are continous and inverse functions for each other and a&b are constant where b>a .. then :
$\int_a^b f(x) \, dx= b f(b) - a f(a) - \int_{f(a)}^{f(b)} g(x) \, dx$
My FAILED try:
Am thinking about a substitution which makes the f(x) be g(x)
So I substitute x=g(g(x)) ..
But this failed, since dx will be complicated
Any ideas?
Well, come to think of it, f(x) and g(x)--as they are written--cannot, strictly speaking, be inverse functions of each other because they both have the same argument x. In order for two functions to be inverses of each other the one must be the argument of the other and vice versa. I would suggest writing the two functions as y(x) and x(y) and see if that helps to keep track of things.
5. tom@ballooncalculus :
No. It works for both increasing and decreasing functions.
Integrate it by parts with dv=dx and u=f(x) then substitute y=f(x), then the formula will be proved.
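Spelling the hint out (a sketch of the steps, not General's original write-up): integrating by parts with $u = f(x)$, $dv = dx$ gives $\int_a^b f(x)\, dx = \left[ x f(x) \right]_a^b - \int_a^b x f'(x)\, dx = b f(b) - a f(a) - \int_a^b x f'(x)\, dx$. Now substitute $y = f(x)$, so that $x = g(y)$ and $dy = f'(x)\, dx$; the last integral becomes $\int_{f(a)}^{f(b)} g(y)\, dy$, which proves the formula.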
6. Originally Posted by General
tom@ballooncalculus :
No. It works for both increasing and decreasing functions.
Integrate it by parts with dv=dx and u=f(x) then substitute y=f(x), then the formula will be proved.
Yes! As I say, if I'd tried integration by parts in the first place I wouldn't have had any doubt. E.g., see the pic in the spoiler, above. (Works for me!)
So what's wrong with my second graph, I still wonder.
7. Thanks. | 2015-02-28 07:48:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9196768999099731, "perplexity": 1765.6954010396603}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936461848.26/warc/CC-MAIN-20150226074101-00213-ip-10-28-5-156.ec2.internal.warc.gz"} |
https://stacks.math.columbia.edu/tag/0GQM | Remark 102.10.6. Let $\mathcal{X}$ be an algebraic stack. Given two quasi-coherent $\mathcal{O}_\mathcal {X}$-modules $\mathcal{F}$ and $\mathcal{G}$ the tensor product module $\mathcal{F} \otimes _{\mathcal{O}_\mathcal {X}} \mathcal{G}$ is quasi-coherent, see Sheaves on Stacks, Lemma 95.15.1 part (5). Similarly, given two locally quasi-coherent modules with the flat base change property, their tensor product has the same property, see Proposition 102.8.1. Thus the inclusion functors
$\mathit{QCoh}(\mathcal{O}_\mathcal {X}) \to \textit{LQCoh}^{fbc}(\mathcal{O}_\mathcal {X}) \to \textit{Mod}(\mathcal{O}_\mathcal {X})$
are functors of symmetric monoidal categories. What is more interesting is that the functor
$Q : \textit{LQCoh}^{fbc}(\mathcal{O}_\mathcal {X}) \longrightarrow \mathit{QCoh}(\mathcal{O}_\mathcal {X})$
is a functor of symmetric monoidal categories as well. Namely, given $\mathcal{F}$ and $\mathcal{G}$ in $\textit{LQCoh}^{fbc}(\mathcal{O}_\mathcal {X})$ we obtain
$\xymatrix{ Q(\mathcal{F}) \otimes _{\mathcal{O}_\mathcal {X}} Q(\mathcal{G}) \ar[rr] \ar[rd] & & \mathcal{F} \otimes _{\mathcal{O}_\mathcal {X}} \mathcal{G} \\ & Q(\mathcal{F} \otimes _{\mathcal{O}_\mathcal {X}} \mathcal{G}) \ar[ru] }$
where the south-west arrow comes from the universal property of the north-west arrow (and the fact already mentioned that the object in the upper left corner is quasi-coherent). If we restrict this diagram to $U_{\acute{e}tale}$ for $U \to \mathcal{X}$ flat, then all three arrows become isomorphisms (see Lemmas 102.10.1 and 102.10.2 and Definition 102.9.1). Hence $Q(\mathcal{F}) \otimes _{\mathcal{O}_\mathcal {X}} Q(\mathcal{G}) \to Q(\mathcal{F} \otimes _{\mathcal{O}_\mathcal {X}} \mathcal{G})$ is an isomorphism, see for example Lemma 102.4.2.
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). | 2023-04-01 03:45:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9921910762786865, "perplexity": 407.99427593111284}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949701.0/warc/CC-MAIN-20230401032604-20230401062604-00442.warc.gz"} |
https://www.nature.com/articles/s41467-023-36146-3?error=cookies_not_supported&code=084193a8-1ffe-4741-8d36-1544ed0ccb25 | ## Introduction
Electron spins associated with nitrogen-vacancy (NV) defects in diamond are magnetic field sensors that provide high spatial resolution and sensitivity at room temperature1,2. They have been used to study nuclear magnetic resonance at the nanoscale3,4, bio-5, paleo-6, and solid-state magnetism7, and electric currents in quantum materials8,9. Most of these applications focus on detecting magnetic fields in the 0–100 megahertz (MHz) frequency range, in which a toolbox of spin-control techniques enables high sensitivity and a tunable detection frequency without requiring a specific electron spin resonance (ESR) frequency1. In contrast, NV-based sensing in the microwave regime [1–100 gigahertz (GHz)] currently relies on tuning the ESR to the frequency of interest using a magnetic bias field10. This bias field changes the properties of e.g., magnetic or superconducting samples under study11,12, for instance by altering their excitation spectrum, which limits its application in materials science. Furthermore, the field must be on the Tesla scale for operation in the 10–100 GHz range13, making the required magnets large and slow to adjust, precluding the small sensor packaging desired for technological applications.
Here, we enable broadband spin-based microwave sensing by interfacing a diamond chip containing a layer of NV sensor spins with a thin-film magnet. The central concept is that the non-linear dynamics of spin waves—the collective spin excitations of the magnetic film14—locally convert a target signal to the NV ESR frequency under the application of a pump field (Fig. 1a, b). We realize a ~1-GHz detection bandwidth at fixed magnetic bias field via four-spin-wave mixing, and microwave detection at multiple GHz above the ESR frequency via difference-frequency generation. The pump-tunable detection frequency enables characterizing the spin-wave band structure despite a multi-GHz detuning and provides insight into the non-linear spin-wave dynamics limiting the conversion process. Furthermore, the converted microwaves are highly coherent, enabling high-fidelity control of the sensor spins via off-resonant drive fields.
## Results
### Sensor platform
Our hybrid diamond-magnet sensor platform consists of an ensemble of near-surface NV spins in a diamond membrane positioned onto a thin film of yttrium iron garnet (YIG)—a magnetic insulator with low spin-wave damping14 (Fig. 1b). A stripline delivers the “two-color” signal and pump microwave fields to the YIG film, in which they excite spin waves at the signal and pump frequencies, fs and fp, respectively. The frequency-converted microwaves at the ESR frequency fNV are detected by measuring the spin-dependent NV photoluminescence under green laser excitation (“Methods” and Fig. 1c). The ESR frequency is fixed by an external magnetic bias field BNV (Fig. 1d).
### Microwave detection via four-spin-wave mixing
Our first detection protocol harnesses degenerate four-spin-wave mixing15,16,17,18,19,20—the magnetic analog of optical four-wave mixing (Fig. 2a). In the quasiparticle picture, this process corresponds to the scattering of two “pump” magnons into a “signal” magnon and an “idler” magnon at frequency $f_{\mathrm{i}} = 2 f_{\mathrm{p}} - f_{\mathrm{s}}$. This conversion enables the detection of a microwave signal that is detuned from the ESR frequency, which would be otherwise invisible in the optical response of the NV centers (Fig. 2b). By tuning the frequency of the pump, we enable the detection of signals of specific microwave frequencies (Fig. 2c).
We characterize the bandwidth of the four-wave-mixing detection scheme by measuring the NV photoluminescence contrast as a function of the microwave signal frequency and magnetic bias field. As in Fig. 2b, when the pump field is switched off, we only detect signals resonant with fNV (Fig. 2d). In contrast, when the pump is switched on, a broad band of frequencies becomes detectable (Fig. 2e). The bandwidth Δf of ~1 GHz is limited from below by the ferromagnetic resonance (FMR), the spatially homogenous spin-wave mode below which spin waves cannot be excited in our measurement geometry, and from above by the limited efficiency of our 5-micron-wide stripline to excite high-momentum spin waves. As such, the bandwidth can be extended by using narrower striplines or magnetic coplanar waveguides21.
At 14 dBm signal and pump power, consecutive mixing processes generate higher-order idler modes at discrete and equally spaced frequencies (Fig. 2f). Motivated by the success of their optical counterparts in high-precision spectrometry22, such “spin-wave frequency combs” are of great interest because of potential applications in microwave metrology20,23,24. We use the spin-wave comb to realize sensitivity to multiple microwave frequencies by detecting the n-th order idler frequency,
$$f_{\mathrm{i}}^{(n)} = (n + 1) f_{\mathrm{p}} - n f_{\mathrm{s}}$$ (1)
when it is resonant with the ESR frequency (Fig. 2f, upper inset). An increasing number of idler modes appears with increasing drive power (Supplementary Fig. 3), such that at large powers we resolve up to the n = 10th idler order (Fig. 2f, bottom inset). The shift of the idler frequency is amplified by the integer n over the shift of the signal frequency (Eq. 1), leading to a 1/n decrease in the linewidth of the NV ESR response24 (Fig. 2f) and a correspondingly enhanced ability to resolve closely spaced signal frequencies.
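As a purely illustrative example of Eq. (1) (hypothetical numbers, not taken from the experiment): for $f_{\mathrm{p}} = 3$ GHz and $f_{\mathrm{s}} = 2.9$ GHz, the comb lines sit at $f_{\mathrm{i}}^{(n)} = f_{\mathrm{p}} + n (f_{\mathrm{p}} - f_{\mathrm{s}}) = (3 + 0.1\, n)$ GHz, and a 1-MHz shift of the signal frequency moves the $n$-th line by $n$ MHz.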
### Coherent spin control via four-spin-wave mixing
In addition to enabling off-resonant quantum sensing, the idlers also provide a resource for off-resonant control of spin- or other quantum systems. The resolving of the NV’s 3-MHz hyperfine splitting in the idler-driven ESR spectrum (Fig. 3a) evidences the high coherence of the microwave field emitted by the idler spin wave, implying that the linewidth is determined by the drive rather than the spin-wave damping24. This allows driving coherent NV spin rotations (Rabi oscillations) by pulsing the pump with varying duration τ (Fig. 3b).
Remarkably, these Rabi oscillations respond to externally applied microwaves that are detuned by hundreds of MHz from the ESR frequency (Fig. 3c). Such magnon-mediated, off-resonant Rabi control is a new instrument in the toolbox of spin-manipulation techniques, providing universal off-resonant quantum control with potential applications in quantum information processing. The idler-driven Rabi frequency exceeds the signal-induced AC Stark shift25 by about an order of magnitude for the same off-resonant signal power (Supplementary Fig. 4). The decrease of the Rabi frequency with increasing detuning δf (Fig. 3c) is the combined result of a reduced spin-wave excitation efficiency at higher frequency, because the stripline is less efficient in exciting spin waves with short wavelengths (Supplementary Note 2), and a reduced spin-wave scattering strength due to the increasing momentum mismatch between signal and pump spin waves17,18,19.
Since the Rabi frequency depends linearly on the idler amplitude11, it provides insight into the magnetization dynamics in the film. As expected, the idler amplitude initially grows with increasing signal and pump power15,20, but then reaches a maximum and starts to decrease (Fig. 3d). We attribute the decrease to Suhl instabilities of the second type16: Both signal and pump modes decay into a pair of high-momentum magnons beyond a certain threshold amplitude, which drains energy from the idler mode. This interpretation is supported by a model of the four-wave interactions between the dominant two idler modes, the signal and pump modes, and the two pairs of high-momentum “Suhl” magnons (Supplementary Figs. 5 and 6). The intermode coupling is induced by exchange and dipolar interactions, as well as crystalline anisotropy, and follows from the leading-order terms in the Holstein-Primakoff expansion17. Based on the interacting eight-mode Hamiltonian we compute the steady-state dynamics of the idler mode as a function of pump and signal power (Fig. 3e, Supplementary Note 4), which qualitatively reproduces the observed power dependence in Fig. 3d.
### Microwave detection via difference-frequency generation
Our second detection protocol relies on difference-frequency generation, which enables down-conversion of GHz signals to MHz frequencies accessible to established quantum sensing techniques1. The difference frequency is generated by the longitudinal component of the magnetization under the driving of two spin waves of different frequencies26 (Fig. 4a, Supplementary Note 5). In contrast to the four-wave mixing protocol, the converted frequency does not have to lie within the spin-wave band. By tuning the ESR frequency into resonance with the difference frequency (Fig. 4b), we detect microwave signals that are detuned by several gigahertz when $f_{\mathrm{p}}-f_{\mathrm{s}}=\pm f_{\mathrm{NV}}$ (Fig. 4c). Alternatively, AC magnetometry protocols can provide difference-frequency detection with enhanced sensitivity at arbitrary bias fields1. We only observe ESR contrast when both $f_{\mathrm{s}}$ and $f_{\mathrm{p}}$ are above the FMR (Fig. 4d), confirming that the conversion is mediated by spin waves in the YIG. We anticipate the conversion process can also be applied in other magnetic materials to characterize high-frequency magnetic band structures that would otherwise be out of reach for NV magnetometry (Supplementary Note 6). Similar to Fig. 2e, the conversion is limited by the spin-wave excitation efficiency, which explains the observation of the largest ESR contrast for long-wavelength spin waves (i.e., just above the FMR).
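The resonance condition above can be summarised in a few lines; the pump and ESR frequencies used here are assumed values for illustration only:

```python
# Difference-frequency detection: ESR contrast appears when f_p - f_s = ±f_NV,
# so a fixed pump makes exactly two signal frequencies detectable.
f_NV = 2.9e9  # assumed NV ESR frequency (Hz)
f_p = 6.0e9   # assumed pump frequency (Hz)

f_s_detectable = (f_p - f_NV, f_p + f_NV)
print([f / 1e9 for f in f_s_detectable])  # [3.1, 8.9] GHz
```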
## Discussion
We demonstrated magnon-mediated, spin-based sensing of microwave magnetic fields over a gigahertz bandwidth at fixed magnetic bias field. The frequency of the pump determines the detection frequency, with a detection range that is limited only by the frequencies at which spin waves can be excited efficiently. The coherent nature of the frequency conversion enables coherent manipulation of solid-state spins via off-resonant drive fields, as demonstrated here for spins in diamond. This coherence allows combining with advanced spin-manipulation protocols such as heterodyne or dressed-state sensing27,28,29 to further enhance the detection capabilities, and opens the way for applications in hybrid quantum technologies30. Wide-field readout of NV centers in a larger sensing volume would enhance the microwave sensitivity, which is ultimately limited by thermal spin-wave noise. We envision the detection of free-space microwaves using on-chip microwave-to-spin-wave transducers31 such as stripline resonators, and the characterization of local microwave generators such as spin-torque oscillators by combining with a suitable magnetic material32 and applying a pump field. Imaging of the spatial magnetization dynamics generated by spin-wave mixing using scanning-NV magnetometry could provide insight into the spin-wave dispersion and interactions with nanoscale sensitivity2. The demonstrated hybrid diamond-magnet sensor platform enables broadband microwave characterization without requiring large magnetic bias fields and opens the way for probing high-frequency magnetic spectra of new materials, such as van-der-Waals magnets.
## Methods
### Experimental setup
The NV photoluminescence is read out using a confocal microscope described in ref. 11. The NV-YIG chip and its fabrication were described in ref. 33. It consists of a 2 × 2 × 0.05-mm3 diamond membrane with an estimated near-surface NV density of 103/μm2 placed on top of a 235-nm-thick YIG film grown using liquid phase epitaxy on a 500-μm-thick GGG substrate (Matesy GmbH). The diamond-YIG separation distance is ~2 μm, limited by small particles (such as dust) between the diamond and the YIG surfaces. The signal and pump microwaves are generated by two Rohde & Schwarz microwave sources (SGS100A), combined by a Mini-Circuits power combiner (ZFRSC-123-S+, total loss: ~ −10 dB) and amplified by an AR amplifier (30S1G6, amplification: ~44 dB). All measurements were performed at room temperature.
### NV microwave magnetometry
The four NV-center families are sensitive to microwave magnetic fields at their electron spin resonance (ESR) frequencies, which are determined by the magnetic bias field $\mathbf{B}_{\mathrm{NV}}$ via the NV spin Hamiltonian $H = D S_z^2 + \gamma\,\mathbf{B}_{\mathrm{NV}}\cdot\mathbf{S}$, with $D$ = 2.87 GHz the zero-field splitting, $\gamma$ = 28 GHz/T the electron gyromagnetic ratio and $S_{i\in\{x,y,z\}}$ the $i$th spin-1 Pauli matrix. In this work, we align the field along one of the NV orientations, such that this “on-axis” family has $|0\rangle \leftrightarrow |\pm 1\rangle$ ESR frequencies given by $D \pm \gamma B_{\mathrm{NV}}$ (with $B_{\mathrm{NV}} = |\mathbf{B}_{\mathrm{NV}}|$). For the other three “off-axis” families, the bias field is equally misaligned by ~71° due to crystal symmetry, leading to the ESR frequency plotted in Fig. 4b (labeled “Off-axis”). The photoluminescence dips were recorded using continuous-wave microwaves and non-resonant optical excitation at 515 nm. For the Rabi oscillations, we first initialize the NV spin in the $|0\rangle$ state via a ~1-μs green laser pulse, then we drive the spin using an idler pulse, and finally we read out the NV photons in the first 300–400 ns of a second laser pulse.
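For reference, the sketch below diagonalizes the spin Hamiltonian quoted above to reproduce the on-axis ESR frequencies $D \pm \gamma B_{\mathrm{NV}}$ and the ~71° off-axis case; the 20-mT field magnitude is an assumed value, not the experimental one:

```python
import numpy as np

D, gamma = 2.87e9, 28e9   # zero-field splitting (Hz), gyromagnetic ratio (Hz/T)
B_NV = 20e-3              # assumed bias-field magnitude (T)

# Spin-1 operators in the m_s = +1, 0, -1 basis
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0])

def esr(theta):
    """ESR frequencies for a field at angle theta (rad) to the NV axis."""
    H = D * Sz @ Sz + gamma * B_NV * (np.cos(theta) * Sz + np.sin(theta) * Sx)
    E = np.linalg.eigvalsh(H)          # eigenvalues in ascending order
    return E[1] - E[0], E[2] - E[0]    # transitions from the m_s = 0-like state

print(esr(0.0))                # on-axis family: D - γB and D + γB
print(esr(np.deg2rad(71.0)))   # the three equally misaligned off-axis families
```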
### Data processing
The data presented in Figs. 2f and 4d are normalized by the median of each row and column (Supplementary Fig. 2). | 2023-04-01 21:33:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5278885960578918, "perplexity": 3358.327508814128}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950247.65/warc/CC-MAIN-20230401191131-20230401221131-00649.warc.gz"} |
https://msp.org/agt/2014/14-1/p10.xhtml | #### Volume 14, issue 1 (2014)
The bumping set and the characteristic submanifold
### Genevieve S Walsh
Algebraic & Geometric Topology 14 (2014) 283–297
##### Abstract
We show here that the Nielsen core of the bumping set of the domain of discontinuity of a Kleinian group $\Gamma$ is the boundary of the characteristic submanifold of the associated $3$–manifold with boundary. Some examples of interesting characteristic submanifolds are given. We also give a construction of the characteristic submanifold directly from the Nielsen core of the bumping set. The proofs are from “first principles”, using properties of uniform domains and the fact that quasi-conformal discs are uniform domains.
##### Keywords
Kleinian group, characteristic submanifold
##### Mathematical Subject Classification 2010
Primary: 30F40, 57M60 | 2020-04-02 16:29:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5411758422851562, "perplexity": 2928.1382307727386}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506988.10/warc/CC-MAIN-20200402143006-20200402173006-00218.warc.gz"} |
http://www.amc8.net/community/threads/2011-amc-8-problem-12.7/ | # 2011 AMC 8 Problem 12
#### shanghai
##### Member
Angie, Bridget, Carlos, and Diego are seated at random around a square table, one person to a side. What is the probability that Angie and Carlos are seated opposite each other?
$$\textbf{(A) } \frac{1}{4} \qquad\textbf{(B) } \frac{1}{3} \qquad\textbf{(C) } \frac{1}{2} \qquad\textbf{(D) } \frac{2}{3} \qquad\textbf{(E) } \frac{3}{4}$$ | 2019-07-18 05:53:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5054690837860107, "perplexity": 3121.1691504892283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525500.21/warc/CC-MAIN-20190718042531-20190718064531-00288.warc.gz"} |
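A short solution sketch: fix Angie’s seat; by symmetry Carlos is then equally likely to sit in any of the three remaining seats, exactly one of which is opposite her, so

$$P = \frac{1}{3}\,,$$

which is answer $\textbf{(B)}$.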
https://library.kiwix.org/genealogy.stackexchange.com_en_all_2021-04/A/question/10043.html | ## What records might survive of nautical assessor who died on London Docks in second half of 18th century?
2
In Could John Stacy who lived/married at London and later lived at Exmouth (Devon) have been baptized in 1759 at North Petherton (Somerset)? I have been trying to identify the parents of John Stacy (1759-1831).
Today I was reading "Colonial Cameos and Genetic Gambits - the Stacy Brown Story" compiled by Albert E. Stacy (my 1st cousin twice removed) in 1986. In it he writes on page 12:
Reliable record also has it that John Stacy's father was a nautical assessor on the London Docks and was killed by a fall in the hold of a merchant ship in the course of his duty.
He does not elaborate on what reliable record that came from but later, on page 18, he writes:
My dear cousin Elfreda of Buckhurst Hill, London, recalls:
"When I was about thirteen years old (1904) we spent a holiday in Exmouth and great-aunt Agnes Leatt joined us for a week. She was an old lady then in her eighties and one day when we were walking dorn The Strand she pointed out an old-fashioned, double fronted shop with slightly bowed windows of smallish panes as the one she remembered John Stacy occupied."
Consequently, I am assuming that he may have had access to some oral history that possibly reached him intact.
I gather that a nautical assessor was someone who worked for an insurance company and assessed the damages on any ruined cargo that arrived at the docks.
Does anyone know of a record source that might help me learn more about a nautical assessor named Stacy who worked (and died) on the London Docks and presumably died in about the last quarter of the 18th century?
1
In the same book, "Colonial Cameos and Genetic Gambits - the Stacy Brown Story", Albert E. Stacy writes on page 14:
John Stacy was a boy chorister in St Paul's Cathedral Choir. He was later apprenticed to a vintner in Leadenhall Street and when so employed, his portrait was painted and presented to him by the customers of his master. The occasion coincided with his attendance at a function hosted by the then Lord Mayor of London.
John Stacy was allegedly proud of the fact that he wore his own hair and not a wig. Consequently, he had it carefully dressed for the Lord Mayor's party and so that it might not be ruffled before he sat down for the portrait on the following day, he would not lie down that night.
I think the portrait below is the portrait being discussed above but I know nothing of its provenance other than it being labelled as John Stacy in the same book.
Albert does not provide any source for the above information, but in FindMyPast I have found a London Apprenticeship Abstracts, 1442-1850 Transcription that may relate to John Stacy, as a vintner apprentice:
Stacy John son of Henry, Bermondsey, Surrey, perukemaker, deceased, to John Bates, 7 Dec 1774, Vintners' Company
This seems to be him, which contradicts the story of John Stacy's father being a nautical assessor but may corroborate that his father died young. My understanding is that a peruke maker is a wig maker and it is interesting that one of the few early stories about John Stacy seems to relate to the non-wearing of a wig. Also, John Stacy named his eldest son Henry so a father of the same name seems reasonable.
The most compelling evidence that this is the right John Stacy and father comes from:
Aldermen of the City of London: Queenhithe ward at British History Online which lists a term served by a vintner named John Bates:
January 15, 1784 John Bates, Vintner S. 1784–5.
[Sworn Jan. 27] (fn. 84) Elected by 87 votes to 49 for George Mackenzie Macauley (Bowyer).
Died May 13, 1785.
This helps tie John Stacy's master to an office close to the Lord Mayor (whose function/party is mentioned in the wig story) and to the electorate in/near Shoreditch where John Stacy lived from about 1780-1791. Leadenhall Street where John Bates was a vintner is also in/near Shoreditch.
There is an obituary to John Bates in the Gentleman's Magazine and Historical Chronicle, Part 1 on page 406
It is interesting to note that John Bates acquired his fortune at the Queen's Arms Tavern, St Paul's Churchyard which would seem to be very close to where John Stacy was reported to be a boy chorister.
I am still trying to uncover the sources that Albert E. Stacy used to write his brief account of John Stacy's early life but, with the exception of the story of John Stacy's father being a nautical assessor, it seems to now be corroborated. I suspect that the nautical assessor may have been the father of John Stacy's father-in-law John Smyth but at the moment that is little more than conjecture. | 2021-07-29 22:40:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30666324496269226, "perplexity": 6239.496956758163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153897.89/warc/CC-MAIN-20210729203133-20210729233133-00199.warc.gz"} |
https://www.groundai.com/project/associated-higgs-production-in-cp-violating-supersymmetry-probing-the-open-hole-at-the-large-hadron-collider/ | 1 Introduction
###### Abstract
A benchmark CP-violating supersymmetric scenario (known in the literature as the ‘CPX-scenario’) is studied in the context of the Large Hadron Collider (LHC). It is shown that the LHC, with low to moderate accumulated luminosity, will be able to probe the existing ‘hole’ in the $m_{h_1}$–$\tan\beta$ plane, which cannot be ruled out by the Large Electron Positron Collider data. This can be done through associated production of Higgs bosons with top quark and top squark pairs, leading to the signal dilepton + jets (including b-jets) + missing $p_T$. Efficient discrimination of such a CP-violating supersymmetric scenario from other contending ones is also possible at the LHC with a moderate volume of data.
HRI-P07-10-001
HRI-RECAPP-07-14
Associated Higgs Production in CP-violating supersymmetry: probing the ‘open hole’ at the Large Hadron Collider
Regional Centre for Accelerator-based Particle Physics
Harish-Chandra Research Institute
Department of Physics
Jadavpur University, Kolkata, India 700032
## 1 Introduction
One of the main motivations for suggesting supersymmetry (SUSY) is to remove the fine-tuning problem in the Higgs sector of the standard model. The condition of holomorphicity of the superpotential requires two Higgs doublets in the minimal SUSY extension of the standard model (SM). There the Higgs sector has a larger particle content than the SM, and the physical states in this sector comprise two neutral scalars, one pseudoscalar and one charged Higgs boson. Finding the signatures of these scalars is thus inseparably linked with the search for SUSY at the upcoming Large Hadron Collider (LHC).
Prior to the LHC, several Higgs search experiments have yielded negative results. The strongest lower bound on the smallest Higgs mass ($m_h$) from the Large Electron Positron Collider (LEP) is 114.4 GeV [1, 2]. This limit is valid for a SM-like Higgs as well as for the lightest neutral Higgs boson in the minimal supersymmetric standard model (MSSM) in the decoupling limit, i.e. the limit in which the masses of all other scalars in the Higgs sector become very large. Although smaller values of $m_h$ are allowed away from the decoupling limit, the lower bound on its mass is approximately the $Z$-mass. However, when the Higgs sector inherits some CP-violating phase through radiative corrections [3, 4], the above limit ceases to be valid. Our discussion is centred around such situations.
It is well-known by now that the lower bound on the mass of the lightest Higgs boson of the CP-conserving MSSM from LEP [2] can be drastically reduced, or may even entirely vanish, if non-zero CP-violating phases are allowed [5]. This can happen through radiative corrections to the Higgs potential, whereby the phases, if any, of the Higgsino mass parameter $\mu$ and the trilinear soft SUSY breaking parameter $A$ enter into the picture. As a result of the CP-violating phase, the neutral spinless states are no more of definite parity, and their couplings to gauge bosons as well as fermions are thus modified, depending on the magnitude of the phases. Thus there are three neutral states $h_i$ ($i$=1,2,3); the collider search limits for all of them are modified, since the squared amplitudes for production via the $h_iZZ$, $h_iW^+W^-$ and $h_ih_jZ$ couplings now consist of more than one term for each of them. Mutual cancellation among such terms can take place in certain regions of the parameter space, thus resulting in reduced production rates and consequent weakening of mass limits at collider experiments.
For example, in the context of a benchmark CP-violating scenario (often called the CPX scenario in the literature [5]), it has been found that $m_{h_1}$ as low as 50 GeV, or even smaller, cannot be ruled out by the final LEP data for low and moderate values of $\tan\beta$, where $h_1$ is the lightest neutral Higgs, and $\tan\beta$ is the ratio of the vacuum expectation values of the two Higgs doublets. In other words, a ‘hole’ is found to exist in the $m_{h_1}$–$\tan\beta$ parameter space covered by the LEP searches, the underlying reason being the reduction in the $h_1ZZ$ coupling due to the CP-violating phase(s), as mentioned above. Moreover, complementary channels such as $e^+e^-\to h_1h_2$ suffer from coupling as well as phase-space suppression within this ‘hole’, thus making it inaccessible to LEP searches. The existence of this hole has been confirmed by the analyses of the LEP data by different experimental groups [2, 5, 6], although people are not unanimous about the exact span of the hole.
The next natural step is to assess the prospect of closing the hole at Tevatron Run II or the LHC. The existing analysis on this [7], however, focuses on the discovery channels based on the conventional Higgs production and decay mechanisms employed in the context of the SM. It has been noted that although the hadron colliders can probe most of the parameter space of the CPX scenario, and can indeed go beyond some regions of the parameter space scanned by the LEP searches, the lightest Higgs boson within the aforementioned hole may still escape detection. This is because not only the $h_1ZZ$ but also the $h_1t\bar{t}$ and $h_1W^+W^-$ couplings tend to be very small within this hole. On the other hand, the relatively heavy neutral Higgs bosons couple to $ZZ$, $W^+W^-$ and $t\bar{t}$ favourably, but they can decay in non-standard channels, thus requiring a modification in search strategies. The work [8], which has compiled possible signals of the CPX scenario at the LHC, is also restricted to the production of $h_i$ ($i$=1,2,3) bosons in SM-like channels. However, it looked into more decay channels of the bosons thus produced. It has been henceforth concluded that parts of the holes in the $m_{h_1}$–$\tan\beta$ or the $M_{H^+}$–$\tan\beta$ parameter space can be plugged, although considerable portions of the hole, especially for low $\tan\beta$, may escape detection at the LHC even after accumulating 300 fb$^{-1}$ of integrated luminosity.
Thus it is important to look for other production channels for the scalars in the CPX region, especially by making use of the couplings of $h_1$ with the sparticles. It is gratifying to note in this context that the $h_1\tilde{t}_1\tilde{t}_1^*$ coupling, where $\tilde{t}_1$ is the lighter top squark, indeed leads to such a discovery channel, in cases where the $h_1t\bar{t}$, $h_1ZZ$ and $h_1W^+W^-$ couplings are highly suppressed. In fact, it has been noted that in a general CP-violating MSSM, the cross section of $\tilde{t}_1\tilde{t}_1^*h_1$ production could be dramatically larger than that obtained by switching off the CP-violating phases [9]. Since the trilinear SUSY breaking parameter $|A_t|$ is necessarily large in the CPX scenario, $\tilde{t}_1$ tends to be relatively light and may be produced at the LHC with large cross section. As a bonus, both $h_2$ and $h_3$ also couple favourably to the $t\bar{t}$ pair and can add modestly to the signal, although by themselves they fail to produce a statistically significant signal. In this work we investigate the implications of these couplings at the LHC, by concentrating on a specific signal arising from the associated production of the neutral Higgs bosons with a top pair or a pair of lighter stop squarks.
Our task, however, does not end here. While we wish to extract information on the neutral Higgs sector in the CPX scenario, other SUSY processes driven by other particles in the spectrum may yield the same final state. To make sure that one is indeed looking at the Higgs sector, one needs to isolate the Higgs-induced channels, and find event selection criteria to not only reduce the SM backgrounds but also ensure that the canonical SUSY channels do not overwhelm the Higgs signatures. In our analysis, we first introduce suitable criteria which will suppress the SM background compared to the total SUSY contribution in CPX. Next, we suggest additional discriminators for further filtering out the contributions of the lightest Higgs ($h_1$) from other SUSY channels. We finally show that if nature prefers the SM alone with $m_h \geq$ 114.4 GeV, or, alternatively, CP-conserving SUSY, the proposed signal would indeed be much smaller if our selection criteria are imposed.
The paper is organised as follows. In Section 2 we discuss the basic inputs of the CPX scenario, the resulting mass spectrum and other features they lead to. All of our subsequent numerical analysis would be in this framework, where we also use the alternative expression CPV-SUSY to mean the CPX-scenario. In Section 3 we set out to define the proposed signal, devise the event selection criteria to reduce both SM and residual SUSY backgrounds and fake events, and present the final numerical results. We summarise and conclude in Section 4.
## 2 The CPX Model: values of various parameters
As indicated in the Introduction, we adopt the so-called CPX scenario in which the LEP analyses have been performed. It has been observed [3, 4] that the CP-violating quantum effect on the Higgs potential is proportional to $\mathrm{Im}(\mu A_t)/M_{\mathrm{SUSY}}^2$, where $A_t$ is the trilinear soft SUSY breaking parameter occurring in the top squark mass matrix, and $M_{\mathrm{SUSY}}$ is the characteristic SUSY breaking scale, being of the order of the third generation squark masses. With this in mind, a benchmark scenario known as CPX was proposed [5] and its consequences were studied [10–23], in some of which steps are suggested for closing the aforementioned ‘hole’ [24, 25, 26]. In this scenario, the effects of CP-violation are maximized. The corresponding inputs that we adopt here are compatible with the “hole” left out in the analysis.
GeV, TeV
TeV,
TeV,
GeV,
where the only departure from reference [7] lies in a small tweaking of the mass ratio of the $U(1)$ and $SU(2)$ gaugino masses $M_1$ and $M_2$, aimed at ensuring gaugino mass unification at high scale. It has been checked that this difference does not affect the Higgs production or the decay rates [27]. The presence of a relatively large $|A_t|$ ensures that one of the top squarks will be relatively light. The value of the top quark mass has been taken to be 175 GeV. (The frequent shift in the central value of $m_t$, coming from Tevatron measurements, causes the size of the hole to change, although its location remains the same. However, there is little point in worrying about this uncertainty, since the very quantum corrections which are at the root of all CP-violating effects in the Higgs sector are prone to similar, if not greater, theoretical uncertainties.)
It is to be noted that the first two generation sfermion masses must be kept sufficiently heavy so that the stringent experimental bounds (for example, on the electric dipole moment of the neutron) are satisfied. Here we have not considered possible ways of bypassing such bounds, and set the masses of the first two sfermion families at 10 TeV. Thus our analysis is based on the mass spectrum shown in Table 1.
The specific choice of $M_{H^+}$ is made to obtain the mass of the lightest Higgs boson within the LEP-hole in $m_{h_1}$–$\tan\beta$ space. It should be noted that such a choice makes the remaining two neutral Higgs bosons not so heavy either. This kind of situation has a special implication in CPV-MSSM, namely, all the neutral Higgs bosons can be produced in association with a $t\bar{t}$ pair. Such production is kinematically suppressed in the CP-conserving case due to the lower bound on $m_h$.
The CPX set of parameters listed above constitutes our benchmark point number 1 (BP1) in the detailed analysis to be undertaken in the next section. We list at the end of that section the final results corresponding to six more benchmark points within the hole unprobed by current data. These points are denoted by BP2 - BP7.
## 3 Signals at the LHC
Since, in CPX-SUSY, the $h_1VV$ ($V=W,Z$) and $h_1t\bar{t}$ interactions are suppressed for the lightest neutral scalar ($h_1$), we shall have to think of some alternative associated production mechanism at the LHC. One possibility is to consider the associated production of $h_1$ with a pair of lighter stops. The large value of $|A_t|$ is encouraging in this respect. In addition, since the CPX point yields a not-so-high value of the lighter stop mass, this production mechanism is kinematically quite viable.
The cross sections for different supersymmetric associated production processes are computed with CalcHEP [28] (interfaced with the program CPSuperH [29, 30]) and listed in Table 2. As one can see, while a substantial production rate is predicted for $h_1$ produced in association with a pair of $\tilde{t}_1$'s, the corresponding cross sections for $h_2$ and $h_3$ are smaller by two orders of magnitude. This is not only because of phase-space suppression for the latter at the CPX point, but also due to the conspiracy of a number of terms in the effective interaction involved. Table 2 also reveals a complementary feature in Higgs production in association with a pair of top quarks, the underlying reason being again the multitude of terms that enter into the squared amplitudes, and the provision of their mutual cancellation in the CPX scenario. Thus we can identify, for the given set of input parameters, $\tilde{t}_1\tilde{t}_1^*h_1$ and $t\bar{t}h_{2,3}$ as the production processes that can be potentially useful in closing the hole in the parameter space.
Also indicated in Table 2 is the gluino pair production cross section in the CPX scenario for $m_{\tilde{g}} = 1$ TeV, which is a CPX input indicated earlier in this section. Later in this section, we shall explain how this process could affect our signal.
The branching fractions of the lighter scalar top and the lightest neutral Higgs boson play a crucial role in selecting the viable modes in which the signal for CPV-SUSY can be looked for. In Table 3 we present the relevant branching fractions, keeping in mind that new final states emerge whenever the branching fraction for a heavier neutral scalar decaying into two lighter ones is of sizable magnitude. In any case, it is interesting to note that not only the lightest Higgs but also $h_2$ and $h_3$ could play significant roles in signals of the Higgs sector in the CPX scenario, given the possibility of all of them being rather light.
Before we enter into the discussion of our specifically chosen signal, let us mention that, in this study, CalcHEP (interfaced to the program CPSuperH) has also been used for generating parton-level events for the relevant processes. The standard CalcHEP-PYTHIA interface [31], which uses the SLHA interface [32], was then used to pass the CalcHEP-generated events to PYTHIA [33]. Further, all relevant decay information is generated with CalcHEP and passed to PYTHIA through the same interface. All of this is required since there is no public implementation of CPV-MSSM in PYTHIA. Subsequent decays of the produced particles, hadronization and the collider analyses are done with PYTHIA (version 6.410).
We used CTEQ6L parton distribution function (PDF) [34, 35]. In CalcHEP we opted for the lowest order evaluation, which is appropriate for a lowest order PDF like CTEQ6L. The renormalization/factorization scale in CalcHEP is set at . This choice of scale results in a somewhat conservative estimate for the event rates.
As discussed earlier, the processes of primary importance for the present study are $\tilde{t}_1\tilde{t}_1^*h_1$ and $t\bar{t}h_{2,3}$. At the parton level, the lightest Higgs and both top quarks (or top squarks) dominantly decay to $b$ quarks. For our signal, the associated $W$'s (or charginos) produced in the decay of the $t$'s (or $\tilde{t}_1$'s) are required to decay into leptons with known or calculable branching ratios. These decays lead to a final state with four $b$-quarks along with other SM particles. In addition, the large branching ratios for $h_{2,3}\to h_1h_1$ can make the modest contributions from the $t\bar{t}h_{2,3}$ channels particularly rich in final-state $b$'s, which, with a finite $b$-tagging efficiency, can provide a combinatoric factor of advantage to us.
However, although $h_1$ decays dominantly into $b\bar{b}$, our simulation reveals that in a fairly large fraction of events both $b$-quarks do not lead to sufficiently hard jets with reasonable $b$-tagging efficiency. This is because of the lightness of $h_1$ in this scenario. To illustrate this, we present in Figure 1 the ordered $p_T$ distributions for the four parton-level $b$-quarks in the signal from $\tilde{t}_1\tilde{t}_1^*h_1$. It is clear from this figure that the $b$-quark with the lowest $p_T$ in a given event often falls below 40 GeV or thereabouts, the threshold which could have ensured a moderate tagging efficiency (~50%). This forces us to settle for three tagged $b$-jets in the final state, and look for
3 tagged $b$-jets + dilepton + other untagged jets + missing $p_T$.
Later in this section we will demonstrate that this feature is retained under a realistic situation, i.e. on inclusion of hadronization.
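The combinatoric advantage mentioned above can be made explicit with a short binomial estimate; the idealisation that all four $b$-jets clear the $p_T$ threshold is ours, made for illustration, with the 50% per-jet tagging efficiency taken from the text:

```python
from math import comb

eps = 0.5  # per-jet b-tagging efficiency assumed in the text

def p_at_least(k, n=4):
    """Probability that at least k of n taggable b-jets are tagged."""
    return sum(comb(n, m) * eps**m * (1 - eps)**(n - m) for m in range(k, n + 1))

print(p_at_least(3))  # >= 3 tags: 0.3125
print(p_at_least(4))  # requiring all 4 tags: 0.0625, i.e. 5x fewer events
```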
We have used PYCELL, the toy calorimeter simulation provided in PYTHIA, with the following criteria:
• the calorimeter coverage is and the segmentation is given by which resembles a generic LHC detector
• a cone algorithm with has been used for jet finding
• GeV and jets are ordered in
• leptons () are selected with GeV and
• no jet should match with a hard lepton in the event
In addition, the following set of basic (standard) kinematic cuts is incorporated throughout our analysis:
GeV GeV
where $\Delta R_{\ell j}$ and $\Delta R_{\ell\ell}$ measure the lepton-jet and lepton-lepton isolations respectively, with $\Delta R = \sqrt{(\Delta\eta)^2 + (\Delta\phi)^2}$, $\Delta\eta$ being the pseudo-rapidity difference and $\Delta\phi$ being the difference in azimuthal angle for the adjacent leptons and/or jets. Since efficient identification of the leptons is crucial for our study, we required, on top of the above set of cuts, that the hadronic activity within a cone of $\Delta R$ between two isolated leptons should be minimal, with the accumulated $p_T$ in the specified cone held below a fixed threshold. Throughout the analysis we have assumed that a $b$-jet with $p_T \geq 40$ GeV can be tagged with 50% probability. In addition, as we shall see below, some further kinematic cuts are necessary to make the proposed signal stand out.
Below the contributions to the final state from different scenarios are discussed:
• Contributions coming from the CPV-SUSY scenario, comprised of $\tilde{t}_1\tilde{t}_1^*h_1$ and $t\bar{t}h_{2,3}$, where $h_1$ could escape the LEP bound and can be as light as 50 GeV for low to moderate $\tan\beta$.
• If nature is supersymmetric but conserves CP (CPC-SUSY), contributions could dominantly come from $\tilde{t}_1\tilde{t}_1^*h$ and $t\bar{t}h$, where the appropriate LEP bound holds for $m_h$. Obviously, $m_h$ now has to be much larger than in the CPV-SUSY case. For our study, this constitutes a crucial difference between these two scenarios for a given set of masses for the gluino and the lighter top squark.
• If the SM is the only theory relevant for the LHC, then the dominant signal process is $t\bar{t}h$, where $h$ is the SM Higgs boson for which the LEP bound of 114.4 GeV is valid.
• The SM contributions coming from $t\bar{t}Z$ etc. (we thank Manas Maity for estimating this background using the calculation reported in [36]), which appear as a “common background” for all of the above three situations.
Note that in the first three scenarios the contributing processes all involve characteristic masses and/or couplings, either in the production or in the subsequent cascades. Thus observations made there directly carry crucial information on the scenario involved and hence may help discriminate it from the others.
The SM contributions in the last item of the above list are not sensitive, in any relevant way, to the details of any new physics scenario. Thus they appear as universal backgrounds to the chosen signal coming from all of the other three scenarios. The major sources in this category are (i) $t\bar{t}$ production with a jet from QCD radiation mistagged as the third $b$-jet (we assume the mistagging probability to be 1/25 [37]), (ii) $t\bar{t}b\bar{b}$ production where the semileptonic decays of the top quarks produce the hard, isolated OSD pair and (iii) $t\bar{t}Z$ production where the $Z$ decays into $b$-quarks and the leptons come from $W$-decay.
The most effective way to reduce the contribution from $t\bar{t}h$ in the SM (with $m_h$ = 120 GeV) is found to come from the missing-$p_T$ distributions. In Figure 2, we present the missing-$p_T$ distribution for our proposed signal, arising from the associated production of the lightest Higgs along with a stop squark pair. Since the plots demonstrate that the CPX signal contains more events with missing $p_T$ on the higher side (due to the massive lightest-neutralino pair in the final state), an appropriate missing-$p_T$ cut is clearly useful. Therefore, we have subjected our generated events to the additional requirement
GeV.
This is added to the basic cuts listed earlier, yielding an overall efficiency factor, denoted here by $\epsilon$, which contains the effects of all cuts described so far as well as those to be mentioned later in the text. The finally important numbers for the signal and any of the faking scenarios are thus given by the quantity $\sigma\epsilon$, $\sigma$ being the cross section for the aforementioned final state without any cuts.
In case the SM is the only relevant theory for such final states at the LHC, $t\bar{t}h$ as well as the sources of ‘common backgrounds’ will contribute to our final state. In this case, one will have to take $m_h \geq$ 114.4 GeV to be consistent with the experimental observations. The missing-$p_T$ cut effectively reduces events of both these types. Thus having enough signal events above the standard model predictions is ensured in this search strategy.
However, the same final state can receive strong contributions from processes such as gluino pair production ($\tilde{g}\tilde{g}$), followed by a cascade like
$$\tilde{g}\to t\,\tilde{t}_1^{\,*}\to t\bar{t}\,\tilde{\chi}_1^0\to b\bar{b}\,W^+W^-\,\tilde{\chi}_1^0$$
While these may add to the signal strength, there is always the possibility that fluctuations in the gluino-induced events, owing to the uncertainties of the strong interaction, will tend to submerge the channels of our real interest, namely, the associated production of the neutral Higgs bosons. In the same way, contributions from strong processes may also fake the proposed signals in CP-conserving SUSY. The next task, therefore, is to devise acceptance criteria to avoid such fake events. We take the gluino pair production process as the representative interfering channel, the contributions from squarks being small in the corresponding parameter region.
The first point to note here is that the contributions from strong processes leading to this final state usually have a higher jet multiplicity than in our case. This is evident from Figure 3, where we present the jet-multiplicity distribution at the CPX point. While the contributions from associated Higgs production peak at four jets, the overall peak lies at seven. This immediately suggests jet multiplicity as a useful acceptance criterion here, and thus we demand $n_{\mathrm{jet}} \leq 5$, thereby reducing considerably the artifacts of strong processes.
There are other SUSY processes which may tend to obfuscate the presence of a rather light Higgs boson. For example, similar final states may arise from squark pair production where the squarks decay into a quark and the second-lightest neutralino. The latter, in turn, decays into two leptons and the lightest supersymmetric particle (LSP). The number of such events, however, is negligible due to a highly suppressed squark-quark-$\tilde{\chi}_2^0$ coupling at moderate to low $\tan\beta$ values, i.e., the range of $\tan\beta$ answering to the CPX scenario. In case of faking in a CP-conserving SUSY spectrum with high $\tan\beta$, one has to study independently the $hb\bar{b}$ and $h\tau^+\tau^-$ interactions, for example, in the vector boson fusion channel [38, 39, 40, 41], where the values of the parameters can be established as different from those giving rise to the ‘hole’ in the CPX case.
The strong cascades, however, continue to remain problematic even after imposing the jet multiplicity cut, since the production cross sections are quite large and the multiplicity cut removes only about half of the events. The next suggestion thus is to use those characteristics of the events that reflect the mass (1 TeV) of the gluino in the CPX case. The obvious distributions to look at are those of the transverse momenta of the various jets, for the final states arising from associated Higgs production vis-a-vis strong processes. It is natural to expect that jets originating in gluino decays will have harder $p_T$ distributions compared to those coming from the associated Higgs productions. This is obvious on comparing the left and right panels of Figure 4, which shows the ordered $p_T$ distributions of jets arising from $\tilde{t}_1\tilde{t}_1^*h_1$ and $\tilde{g}\tilde{g}$ production in this scenario.
Thus we further impose an upper cut on the jet $p_T$, which ‘kills’ the more energetic jets from the strong production process. Together with the stipulated upper limit on jet multiplicity, this helps in enhancing the share of the associated Higgs production processes in the final state under investigation. Thus the effects of the missing-$p_T$, jet-multiplicity and maximum-$p_T$ cuts all enter into the quantity $\epsilon$ determining the final rates after all the event selection criteria are applied.
Now we are in a position to make a comparative estimate of the contributions to dilepton + jets (including three tagged b-jets) + missing $p_T$ from the various scenarios, and assess the usefulness of this channel in extracting the signature of a CP-violating SUSY scenario with light neutral scalars. Such an estimate is readily available from Tables 4 and 5.
Table 4 contains the contributions to the aforesaid final state from the CPX benchmark point 1 (BP1), CP-conserving SUSY and a standard model Higgs boson, of masses 117 and 120 GeV respectively. These are over and above the ‘common backgrounds’, which are listed in Table 5. In each case, the main contributing processes and the corresponding hard cross-sections are shown. Also displayed are the final event rates once the various cuts are imposed, where the difference made by the upper cut on jet $p_T$ is clearly brought out.
As far as the choice of parameters in CP-conserving SUSY is concerned, we have used the same values of the gluino and first two generations of squark masses as at the CPX point. It is expected that any departure of the strong-sector masses from those corresponding to the hole in the CPX case will be found out from variables such as the energy profile of jets, if any signal of SUSY is seen at the LHC. Thus other regions of the MSSM parameter space are unlikely to fake the signals of the CP-violating situation. The value of $\tan\beta$ is also kept in the region allowed by the CPX hole, and any departure from this region in a faking MSSM scenario has to show up in the branching ratios for $h\to b\bar{b}$ and $h\to\tau^+\tau^-$, using the supplementary data on the vector boson fusion channel. Finally, although some difference from the rates shown in Table 4 for CP-conserving SUSY can in principle occur due to different values of the lighter stop mass, the overall rates are not significantly different, so long as the stop squark decays dominantly into either of its standard two-body modes. Thus the choice of the CP-conserving SUSY parameters in Table 4 can be taken as representative. We have checked that for a smaller choice of the $\tilde{t}_1$ mass the number is still smaller than the CPX contribution.
It is easy to draw one’s own conclusion from these two tables about the viability of the suggested search strategy. With the selection criteria proposed in this paper (without the upper cut on jet $p_T$), the size of the signal (50 events) from the dominant processes in CPV-SUSY for only 30 fb$^{-1}$ of integrated luminosity easily dwarfs the common SM background (13 events). Moreover, the signal size is much larger than that in the CPC scenario (with comparable squark and gluino masses) or in the SM. Thus, important hints regarding the existence of new physics and its nature will be available at this stage (we assume that the gluino mass and some other important parameters will be determined from complementary experiments). The presence of the lightest Higgs boson and its not-so-heavy mates becomes clear after the upper cut on jet $p_T$, since nearly 75% of the new physics events are then induced by them. Clearly, even after imposing the upper cut on jet $p_T$, the signals can rise above the SM backgrounds at more than the 5$\sigma$ level within a moderate integrated luminosity like 30 fb$^{-1}$. This can be further magnified with the accumulation of luminosity. On the other hand, it is not too optimistic to assume that important hints will be available with only 10 fb$^{-1}$ of integrated luminosity.
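As a rough cross-check of the quoted significance, one can apply the naive $S/\sqrt{B}$ estimator to the pre-$p_T$-cut event counts stated above (50 signal and 13 background events at 30 fb$^{-1}$) and scale them with luminosity; this is only an illustration of the numbers in the text:

```python
from math import sqrt

s30, b30 = 50.0, 13.0  # signal/background events at 30 fb^-1, from the text

for lumi in (10.0, 30.0, 100.0):             # integrated luminosity in fb^-1
    s, b = s30 * lumi / 30, b30 * lumi / 30  # counts scale linearly with lumi
    print(f"{lumi:5.0f} fb^-1: S = {s:5.1f}, B = {b:5.1f}, "
          f"S/sqrt(B) = {s / sqrt(b):4.1f}")
```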
Before we end this discussion, we show the viability of this signal in other regions of the CPX hole. It has already been noted in the literature that the size and the exact location of the hole in the parameter space depend on the method of calculating the loop corrections [30, 42, 43]. However, the calculations agree qualitatively and confirm the presence of the hole. To be specific we have chosen points from the hole as presented by [6].
In Table 6 we present different sets of values of $\tan\beta$ and $M_{H^+}$, keeping the other parameters fixed at their CPX values. These correspond to six different regions of the LEP hole and are termed benchmark points 2–7 (BP2–BP7), all within the hole. The analysis for each of these points is an exact parallel of that already presented for the first benchmark point. We have computed the generic sensitivity of the LHC to the ‘hole’ corresponding to each of these benchmark points, the results being summarised in Table 7. It is clear from this Table that we always have enough events in our attempt to probe the LEP-hole even with an integrated luminosity of 30 fb$^{-1}$. As the luminosity accumulates, a statistically significant signal will be obtainable from any corner of this hole.
## 4 Summary and Conclusions
Taking a cue from the frequently discussed possibility of CP-violation in the MSSM and its phenomenological consequences at colliders, we explore a popular benchmark scenario (called the CPX scenario) of this broad framework. The study is motivated by recent analyses which reveal that LEP, in its standard Higgs searches, could not probe some of the region in the parameter space of this scenario having low $m_{h_1}$ and low to moderate $\tan\beta$ values. We concentrated on this ‘unfilled hole’ in the parameter space and studied how well the LHC could explore it.
We have found that the associated production of the lightest Higgs boson (which may evade the LEP bound and be as light as 50 GeV or smaller) and two of its ‘light’ mates along with a pair of top quarks and top squarks could be extremely useful in reaching out to this region. This is because one can now exploit modes where the involved couplings and the masses are very characteristic of the CP-violating SUSY scenario. The particular signal we choose for the study is 3 tagged $b$-jets + dilepton + untagged jets + missing transverse momentum, the total number of jets being within 5. It is shown that the entire ‘LEP-hole’ can be probed in detail in this final state with less than 50 fb$^{-1}$ of LHC data, and that the CP-violating SUSY effects cannot be faked even by a combined effect from contending scenarios like the CP-conserving MSSM and/or the standard model.
Acknowledgments: We thank Siba Prasad Das for help in the initial stages of simulation and Manas Maity for providing some important information on the calculation of the backgrounds. We also thank Subhaditya Bhattacharya, Sudhir K. Gupta, Sujoy Poddar, Alexander Pukhov and Gaurab Sarangi for helpful discussions and suggestions on the code. AD thanks Apostolos Pilaftsis for a useful private communication. PB, AKD and BM thank the Theoretical Physics Group of Indian Association for the Cultivation of Science, Kolkata, India for hospitality while the project was in progress. AD acknowledges the hospitality of Regional Centre for Accelerator-based Particle Physics (RECAPP), Harish-Chandra Research Institute during the latter part of the project. Computational work for this study was partially carried out in the cluster computing facility at Harish-Chandra Research Institute (HRI) (http://cluster.mri.ernet.in). This work is partially supported by the RECAPP, Harish-Chandra Research Institute, and funded by the Department of Atomic Energy, Government of India under the XIth 5-year Plan. AD’s work was supported by DST, India, project no SR/S2/HEP-18/2003.
## References
The feedback must be of minimum 40 characters and the title a minimum of 5 characters | 2019-10-22 04:13:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7693406343460083, "perplexity": 876.8502088627522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987798619.84/warc/CC-MAIN-20191022030805-20191022054305-00364.warc.gz"} |
https://brainmass.com/math/ordinary-differential-equations/lipschitz-continuity-initial-value-problems-odes-514920 |
# Lipschitz Continuity and Initial Value Problems for ODE's
PROBLEM 1. Find a Lipschitz constant, K, for the function f(u, t) = u^3 + t u^2 which shows that f is Lipschitz in u on the set 0 ≤ u ≤ 2, 0 ≤ t ≤ 1.
PROBLEM 2. Show that the function f(u, t) = t u^(1/2) is not Lipschitz in u on [0, 1] × [0, 2].
PROBLEM 3. Find two solutions to the initial value problem y′ = |y|^(1/2), y(0) = 0. What hypothesis of the Picard-Lindelöf Theorem is violated?
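A brief sketch of the standard answers (for orientation only): for Problem 1, a Lipschitz constant is any bound on $|\partial f/\partial u|$ over the rectangle,

$$K=\max_{0\le u\le 2,\;0\le t\le 1}\bigl|3u^{2}+2tu\bigr|=3\cdot 2^{2}+2\cdot 1\cdot 2=16,$$

while for Problem 3 the classic pair of solutions is

$$y_{1}(t)\equiv 0\qquad\text{and}\qquad y_{2}(t)=\tfrac{t^{2}}{4}\ \ (t\ge 0),$$

the violated hypothesis being Lipschitz continuity of $|y|^{1/2}$ in $y$ near $y=0$.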
© BrainMass Inc. brainmass.com March 5, 2021, 12:32 am ad1c9bdddf
https://brainmass.com/math/ordinary-differential-equations/lipschitz-continuity-initial-value-problems-odes-514920
#### Solution Preview
The Picard Lindelöf Theorem
__________________________
We need two basic facts for the three problems discussed below. The first is the definition of Lipschitz continuity that is required for the theorem that follows.
DEFINITION . Suppose the function f (y, t) is defined on [a, b] × [c, d] . Then it is Lipschitz continuous in y on this domain if
(1) |f(x, t) − f(y, t)| ≤ ...
#### Solution Summary
Lipschitz continuity and its role in the existence and uniqueness of solutions to ordinary differential equations is investigated. Three problems are solved. The first is to show that a given function is Lipschitz continuous. The second problem is to show that another given function is not Lipschitz continuous. The third problem shows that an initial value problem based on the function of the second problem fails to have unique solutions, in general. This shows that, without the Lipshitz condition, solutions to an initial value problem may fail to be unique.
\$2.49 | 2021-04-20 01:25:11 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8839628100395203, "perplexity": 534.9552843460098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038921860.72/warc/CC-MAIN-20210419235235-20210420025235-00603.warc.gz"} |
https://stats.stackexchange.com/questions/29104/how-to-perform-a-non-equi-spaced-histogram-in-r | # How to perform a non-equi-spaced histogram in R?
From the R docs for hist:
R's default with equi-spaced breaks (also the default) is to plot the counts in the cells defined by breaks. Thus the height of a rectangle is proportional to the number of points falling into the cell, as is the area provided the breaks are equally-spaced.
The default with non-equi-spaced breaks is to give a plot of area one, in which the area of the rectangles is the fraction of the data points falling in the cells.
So .. how do I get hist to plot non-equi-spaced breaks? It sounds as if it will calculate the breaks to end up with area one, but I don't see the options.
Edit: Also, what are recommended ways (in R) to do non-equi-spaced histograms? A typical case would be data that is spiky, causing all the action in one or a few cells, no matter how many are given as "breaks". Another would be two areas of activity separated by a large area of zero, meaning no matter how many breaks, all you see is flat, with two huge narrow spikes. Or perhaps worse, one area of activity, then another area of much less activity far away that causes the graph to be very wide and flat.
• This is a good question, but it appears to concern only how to get R to do something, as opposed to the statistical aspects of histograms. As such, I think it fits better on Stack Overflow than here. – gung - Reinstate Monica May 24 '12 at 16:43
• I wouldn't mind knowing best practices for non-equi-spaced bins either but it seems odd to change the question now. – dfrankow May 24 '12 at 17:03
• Not at all, change away. You should ensure that the question reflects what you want to know, so that you can get the info you need. Questions are often updated after initial posting to clarify what the OP is really after & to facilitate more appropriate answers. Also, it would make CV the appropriate place for the question IMO, should you want to keep it here. – gung - Reinstate Monica May 24 '12 at 17:16
You will notice that there is an argument breaks as a part of the function hist(), with the default set to "Sturges". You can also set your own breakpoints and use them instead of the default sturges algorithm as follows:
# 'data' here stands for any numeric vector, e.g. data <- runif(500, 0, 12)
breakpoints <- c(0, 1, 10, 11, 12)  # cell edges need not be equally spaced
hist(data, breaks = breakpoints)    # with unequal breaks, bar heights are densities (total area 1)
If you read all the way down to the bottom, there are a couple of examples with non-equidistant breaks as well.
Update: This may not be a direct answer to your question, but you could use a different approach (i.e., graph) than a histogram. Personally, I don't find histograms terribly useful. Instead you could try a kernel density plot, which I think would address the first two cases you list (I don't see how you can get out of the third). In R, the code would be: plot(density(data)).
• Looks like no default way to get reasonable non-equi breakpoints (e.g., equal-area). Thanks. – dfrankow May 24 '12 at 17:02
• .. without computing them by some other function. – dfrankow May 24 '12 at 17:08
Denby and Mallows (2009, ungated link) describe a nice approach called the 'diagonally cut histogram', and provide a function 'dhist' in their supplementary material (available at the above link).
Here is the abstract:
When constructing a histogram, it is common to make all bars the same width. One could also choose to make them all have the same area. These two options have complementary strengths and weaknesses; the equal-width histogram oversmooths in regions of high density, and is poor at identifying sharp peaks; the equal-area histogram oversmooths in regions of low density, and so does not identify outliers. We describe a compromise approach which avoids both of these defects. We regard the histogram as an exploratory device, rather than as an estimate of a density. We argue that relying on the asymptotics of integrated mean squared error leads to inappropriate recommendations for choosing bin-widths
And a figure comparing the a) cdf, b) equal area histogram, c) equal bin-width histogram and d) dhist:
Lorraine Denby, Colin Mallows. Journal of Computational and Graphical Statistics. March 1, 2009, 18(1): 21-31. doi:10.1198/jcgs.2009.0002.
One easy solution would be to use quantiles as breaks:
x <- rnorm(100)
hist(x)
hist(x, breaks = quantile(x, 0:10 / 10)) | 2020-10-20 00:50:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6555708050727844, "perplexity": 969.2718753342406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107867463.6/warc/CC-MAIN-20201019232613-20201020022613-00223.warc.gz"} |
https://studydaddy.com/question/exp-105-week-4-dqs | QUESTION
# EXP 105 Week 4 DQs
This EXP 105 Week 4 Discussion Questions paperwork contains the answers to the following points:
1. Writing is a very important component of online learning. College-level writing is more formal than most of your daily writing. Describe the difference between college-level writing and casual writing. In chapter 4 of the book, six steps for writing are explained. Which step do you need to work on the most? Explain the specific action items you can do to develop this step.
2. Critical thinking involves being able to solve a problem and examine information from several different perspectives. How do you define critical thinking? How is critical thinking used to solve a problem? Why is critical thinking an important part of your college learning experience? I define critical thinking as an in-depth thought process in which a person has to judge, decide, and solve.
*** *** Week 4 *** | 2017-08-22 09:21:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32817861437797546, "perplexity": 1662.8765972873991}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886110573.77/warc/CC-MAIN-20170822085147-20170822105147-00267.warc.gz"} |
https://en.wikipedia.org/wiki/Talk:Abuse_of_notation | # Talk:Abuse of notation
## Sound power should be "sound pressure level"
Sound power is measured in Watts and sound power level in dB. Also the text states that the A in dB(A) denotes a particular reference level, which is not correct. The A indicates A-weighting, a form of filter/frequency weighting. Both sound pressure level and sound power level can be weighted using A-weighting. Mikael Ogren (talk) 17:24, 1 March 2011 (UTC)
## Screwdriver?
I am not sure what you are trying to do with this page, in particular the opening comment is almost content-free. I always use screwdrivers to open paint tins. 8-) (and no, this is not vacuously true - I've been painting my house recently). I would think a page on abuse of notation should also describe why it is a useful thing to do, as well as describing why the examples are actual abuses of notation. (e.g. always insisting that functions and variables have distinct symbols leads to proliferation of symbols that only the most anal retentive mathematician (or painter) would delight in.) Andrew Kepert 01:45, 6 Apr 2005 (UTC)
Hi Andrew, Sounds like you could have made a better start at this than me. I used an abuse of notation recently in Combinadic so thought there should be such a page, but am still at a loss what should go into it. --J. W. McLeod 09:28, 6 Apr 2005 (UTC)
## Very common abuse?
"A very common abuse of notation is using sin2(x) instead of (sin(x))2."
It's not an abuse at all according to other things I've read - fn(x) = [f(x)]n for n not equal to -1 (for n=-1 it refers to the inverse function). The article Function (mathematics) seems to make no reference to this. Brianjd | Why restrict HTML? | 04:56, 2005 Apr 8 (UTC)
• Especially in the abstract, fn(x) = f[f(x)] However, sin2(x) is an exception to the rule. Bluap 16:48, 3 May 2005 (UTC)
• Seems like a double standard to me. So sin² means the square of the sine—i.e. the sine times itself—but sin-1 is the inverse function? OneWeirdDude (talk) 23:00, 16 October 2008 (UTC)
sin2x is an unusual notation but probably not abuse. For most functions, ${\displaystyle f^{2}(x)=f(f(x))}$, rather than ${\displaystyle [f(x)]^{2}}$. Still, all trig functions represent a common and systematic exception. This is true of all exponents, not just 2. —Preceding unsigned comment added by Eebster the Great (talkcontribs) 21:35, 2 February 2009 (UTC)
### Misuse rather than abuse?
sinᵏ(x) doesn't simplify exposition, nor does it suggest any correct intuition. It's just an arbitrary exception that creates ambiguity, so I propose we designate it a misuse of notation.
< rant >
It's being widely taught to school kids, roughly together with teaching f⁻¹ as function inverse, and creates unjustified confusion. It's a shame!
(To make it worse, f⁻¹ is usually introduced without explaining the general fᵏ notation for composition, nor its origin in the parenthesis-less "f f x" function application notation, and the wide abuse of exponentiation notation).
Presumed rationale for the exception:
* Trig functions are among the first functions that are commonly written without parentheses. This notation is commonly introduced without discussing order of operations w.r.t. exponentiation, making sin x² ambiguous.
* Repeated application of trig functions is nearly useless. However, sin⁻¹ is useful, limiting the exception to positive powers, which is ugly.
< /rant > 79.179.39.171 (talk) 22:24, 14 June 2009 (UTC)
## Have retreated from language criticized above
The present version would seem to make a better stub for this topic. I don't think there is much of a controversial nature left, but there remains enough structure so that it's pretty clear what the intended topic is. --J. W. McLeod 12:48, 10 Apr 2005 (UTC)
## John Harrison
Who is John Harrison? --Abdull 08:29, 30 May 2006 (UTC)
## Infinite limits
I don't think ${\displaystyle \lim _{x\to \infty }f(x)=\infty }$ qualifies as abuse of notation.
• If the domain and codomain under consideration are the extended real line, the limit may very well exist, and have the precise value of ${\displaystyle \infty }$, without any notational or conceptual difficulties whatsoever.
• If the domain and codomain under consideration are ${\displaystyle \mathbb {R} }$, then, as described, the limit does not exist (edit: and neither does the infinity), and, therefore, the sentence is not false but meaningless when considered merely as the sum of its parts, so the idiom (essentially bringing the extended real line into a real context) gives meaning to an otherwise meaningless sentence, rather than giving an additional meaning to a meaningful one.
Dfeuer 04:36, 30 October 2007 (UTC)
Yes, ${\displaystyle \lim _{x\to \infty }f(x)=\infty }$ has a precisely defined meaning, as our own article on limits shows. I'm getting rid of that example. -- 75.162.71.236 04:15, 8 November 2007 (UTC)
## Quantifiers or Definition vs Fact
Consider the question "f(x) = 0; Is it true that f'(x) = 0?" This can either mean "at some particular point x, f(x) = 0, in which case f' evaluated at the same point need not be 0, or it can be taken as a definition, in which for all x, f(x) = 0, and f'(x) is indeed 0 at all points. Is there a standard way of disambiguating these? —Preceding unsigned comment added by 76.113.64.59 (talk) 09:25, 17 June 2008 (UTC)
The ${\displaystyle =}$ operator means that something is always true. Without qualification, ${\displaystyle f(x)=0}$ means that ${\displaystyle f(x)}$ is ${\displaystyle 0}$ for all (valid) ${\displaystyle x}$. So ${\displaystyle {\frac {df}{dx}}=0}$ for the same domain. Xihr (talk) 10:11, 17 June 2008 (UTC)
The ${\displaystyle =}$ operator in itself implies neither "always" nor "true"; e.g., you can easily state "x = 2" or even "2 = 3". Therefore, I agree with the poster of the unsigned comment above. As long as ${\displaystyle x}$ is a free variable, the question is incomplete. However, the concept of "always", meaning "for all ${\displaystyle x}$", could also be conveyed with the equivalence operator, like this: ${\displaystyle f}$ ≡ 0. Schellhammer (talk) 16:38, 5 November 2008 (UTC)
1. The determinant formula for the vector product is just a mnemonic device to help in remembering the definition. Therefore I can't see how it is an abuse of notation. McKay (talk) 09:39, 3 June 2009 (UTC)
2. The section on O(.) reads like a personal essay. The way "=" is used in this context is imo an abuse of notation, but this needs to be cited from a suitable source. The other claim, that f(n) is just a value rather than a function, doesn't belong here as it is not specific to this notation. Also, ambiguity of notation is not at all the same as abuse of notation, so the example O(nm) doesn't belong either. McKay (talk) 09:39, 3 June 2009 (UTC)
## Inner product vs. v^T w
Many people seem to write the inner product <v,w> between two vectors as v^T w, although strictly speaking the result of the latter operation should be a 1 by 1 matrix rather than a scalar. This is not the same thing: consider an m by n matrix A with m and n both > 1. Then, like any matrix, A can be multiplied by a scalar, but not by a 1x1 matrix. I'm not aware of any mathematical operator that will take a one by one matrix and extract its element as a scalar or vice versa.
Can anyone knowledgeable either confirm or disconfirm this as a case of abuse of notation? —Preceding unsigned comment added by 131.111.20.201 (talk) 13:28, 8 June 2009 (UTC)
The inner product is always defined as a map ${\displaystyle V\times V\to \mathbb {F} }$ where ${\displaystyle \mathbb {F} }$ is the underlying field of V. You can define the standard inner product in a Euclidean space as ${\displaystyle \langle v,w\rangle =v^{T}w}$ if you look at it as a scalar rather than a 1x1 matrix. You don't need a special operator in order to use isomorphic spaces interchangeably (it's trivial to prove that R – as a vector space over R – is isomorphic to ${\displaystyle M_{1\times 1}^{\mathbb {R} }}$, i.e., the space of all 1x1 real matrices). This is exactly the same issue as writing ${\displaystyle 1\in \mathbb {C} }$ instead of ${\displaystyle (1,0)\in \mathbb {C} }$: ${\displaystyle \{(x,0)\mid x\in \mathbb {R} \}\subseteq \mathbb {C} }$ and ${\displaystyle \mathbb {R} }$ are isomorphic.
For the not too mathematically inclined I'll sum up: I don't think this qualifies as abuse of notation. —Preceding unsigned comment added by 132.66.234.217 (talk) 15:53, 10 August 2009 (UTC)
I agree. This is an example of a different but similar phenomenon from abuse of notation, which is the tendency for mathematicians to identify objects which are in some sense interchangeable. Here we see the identification of the 1x1 matrix [s] with the scalar s; another common example is the identification of a set and its characteristic function. skeptical scientist (talk) 08:07, 20 November 2009 (UTC)
There actually is an operator that will map a 1x1 matrix to a scalar. This operator is the det() determinant operator, which in the case of a 1x1 matrix is simply the number inside the matrix. 168.156.170.146 (talk) 18:00, 6 January 2012 (UTC)
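(For what it's worth, the 1x1-matrix-versus-scalar distinction is concrete computationally; a minimal sketch in R, where drop() performs exactly the extraction discussed above:)

v <- c(1, 2, 3)
w <- c(4, 5, 6)
ip <- t(v) %*% w      # a 1x1 matrix, not a scalar
dim(ip)               # 1 1
# ip * diag(2)        # fails: non-conformable arrays
drop(ip) * diag(2)    # drop() extracts the scalar 32, which can scale a matrix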
To me, au contraire, it seems an obvious but usually harmless case of abuse of notation. (Reason: The 1x1 matrix is treated as a scalar multiplier of another matrix. Technically this is impossible. That's what defines abuse of notation.) But this abuse is so simple that it would be distressingly pedantic to flag it as "abuse of notation", and therefore in a practical sense I agree with skeptical scientist. If we draw attention to an "abuse of notation", it should be a more substantial kind of abuse than this. (Personal disclosure: I was put off from certain matrix calculations for years by distress over exactly this "abuse". That's one reason I say it is abusive.) Zaslav (talk) 18:50, 3 January 2010 (UTC)
## Direct sum operator
I believe the direct sum "operator" is abuse of notation. If V, W are vector spaces, V+W gives you a new vector space. However, ${\displaystyle V\oplus W}$ does not: it's either true or false. Writing ${\displaystyle U=V\oplus W}$ is even worse, because the notation suggests you're comparing a boolean to a vector space. Also, usually operators allow you to define new spaces (or whatnot) but ${\displaystyle \oplus }$ does not. You can define ${\displaystyle U{\stackrel {\rm {def}}{=}}V+W}$ and then ask whether ${\displaystyle V\oplus W}$. Writing ${\displaystyle U{\stackrel {\rm {def}}{=}}V\oplus W}$ would be invalid. Do you agree? Itayperl (talk) 16:09, 10 August 2009 (UTC)
I'm not sure what your complaint is. According to the linked page, ${\displaystyle V\oplus W}$ refers to the vector space where the underlying set is the cartesian product of V and W, and the addition and scaling operators are defined component-wise (this is standard notation). This is clearly a vector space. On the other hand, V+W in general means nothing. The only time it has meaning is when V and W are subspaces of some larger subspace X, in which case V+W refers to the set {v+w : v in V, w in W} which happens to also be a vector subspace. I don't see any abuse of notation here; all I see are two different notations which mean different things. skeptical scientist (talk) 08:13, 20 November 2009 (UTC)
skeptical scientist is correct. ${\displaystyle \oplus }$ does not assert anything. It's a binary operator on vector spaces. Zaslav (talk) 18:53, 3 January 2010 (UTC)
Sorry about that. My textbook defined ${\displaystyle \oplus }$ differently. The definitions on Wikipedia make a lot more sense. Thank you! —Preceding unsigned comment added by 132.66.234.217 (talk) 18:17, 9 August 2010 (UTC)
## The Quotation
seems to be rather dry. Sorry for complaining when I can't offer up a superior alternative, but I'm hoping that someone finds a more colourful quote :D 118.90.20.3 (talk) 11:48, 8 October 2009 (UTC)
I disagree; the commonly accepted interchangeability of vector notations, from bold letters, to underlines and tildes, to the quoted arrows, makes it very applicable in my experience. 204.52.215.3 (talk) 16:57, 21 October 2009 (UTC)
## Sound dB
I propose adding to the article a mention of the ubiquitous misuse of "dB" in sound level measurements. A dB (decibel) is only a numerical ratio of two quantities. Common sound level measurements are in dB(a) where the suffix "a" defines a reference level. Cuddlyable3 (talk) 17:21, 3 January 2010 (UTC)
"Abuse of notation" is a specifically mathematical concept referring to a certain way of sloughing over certain technicalities of notation. I don't believe it's used outside math, as for physical concepts like dB. But maybe you want to start a fashion of abusing the language of math by applying it to physics? (You'd be in good company.) Zaslav (talk) 19:01, 3 January 2010 (UTC)
There you go abusing dB yourself. "dB" is not a physical concept. I don't wish to start any fashion. Cuddlyable3 (talk) 22:47, 3 January 2010 (UTC)
"The decibel (dB) is a logarithmic unit of measurement that expresses the magnitude of a physical quantity ..." (Wikipedia, Decibel). Zaslav (talk) 05:09, 4 January 2010 (UTC)
That's a fair quotation Zaslav but expression is not the same as identification. I may tell you that a signal magnitude increases by 3 dB and that we understand. If I tell you a signal magnitude is 3 dB I make no sense. It is routine to use, not abuse, dB expressions of physical ratios in radio communications. Cuddlyable3 (talk) 15:45, 23 May 2010 (UTC)
## Pedantry
I regret to say that I find some of the remarks in the Bourbaki section to be pedantic in the extreme and reflecting a lack of understanding of the normal flexibility of the English language, as well as being POV. Specifically, the claim that the term "partial function" is abusive because a "partial function" on A is not a "function" on A is just a mistake of English. In language, the addition of an adjective is not required or expected to be always a narrowing of the meaning. I could give many examples in addition to "generalized X" if I could only remember them. (I'm sure they'll come after I close this comment.)
It appears to me that the article is becoming a sounding board for opinions about good writing. Obviously, some editor and I differ, so I have my POV and s/he has his/hers. That's why I call it POV. It's no longer about "abuse of notation/language".
The term "abuse of notation/language" is really not so broad; it means writing something that is technically incorrect without giving it a special definition, in the belief that it will be easily understood by everyone. Thus, for instance, "partial function", which is precisely defined, is not abusive. Using the term without a definition might well be called abuse, but that is not the example provided.
Similarly, to call "law of composition not everywhere defined" an abuse of language is to confuse quality of writing with the concept of "abuse of language". There may or may not be something ungrammatical or confusing or distressing about this construction; but this style of qualification is not "abuse" in the mathematical sense. (Personally, I think it needs two commas and will then be perfectly correct. Others may disagree. This is a stylistic or grammatical question.)
I am not surprised to see examples from Bourbaki. Bourbaki can be very pedantic. And was writing in French, for which the rules might not be the same – my French is not strong enough to justify an opinion.
I propose to erase some of the more extreme complaints about "abuse". But I await further comments. I'm sure they'll be interesting! Zaslav (talk) 06:43, 4 February 2010 (UTC)
The more I read of this article, the more I think "abuse of notation" has been misunderstood by some contributors to mean anything someone who is extraordinarily pedantic could find objectionable. The term as used (or abused?) in this article is losing its meaning and its usefulness. I suggest that a sizable contraction of the article is in order. Zaslav (talk) 06:39, 23 February 2010 (UTC)
An example of pedantry is the section "Misc" [Miscellaneous], which I have deleted. It said,
The so-called reflection through the origin is an involution, but not a reflection.
This is mistaken. There are many kinds of reflection. Reflection through a point is not reflection through a line, but it is a kind of reflection. Perhaps contributors should "reflect" longer before listing examples of abuse (sorry, I just couldn't help it). Zaslav (talk) 00:18, 15 March 2010 (UTC)
Actually, in geometry the conventional meaning of "reflection" is orthogonal symmetry with respect to a hyperplane. So symmetry with respect to a point is not a reflection, except in dimension 1. The reflection article does not give a clear definition, but it does say that exactly one eigenvalue is -1. Marc van Leeuwen (talk) 09:24, 23 March 2010 (UTC)
I'm sorry, but I believe you are mistaken about the correct definition. Perhaps by "conventional meaning" you mean that people who are not extremely knowledgeable about classical geometry -- this includes most expert mathematicians -- think there is only one kind of reflection, namely, in a hyperplane. This limitation appears to be very common among those who study "groups generated by reflections". If you read thorough books on geometry I think you'll find that there are other kinds of reflection. They simply are not as widely known.
Wikipedia articles ("Reflection") cannot be considered authoritative in deciding a question like this. One must go to the source. I regret that I don't have any sources available to me at the present time. I suggest that any book by Branko Grünbaum is authoritative, though perhaps not definitive. Zaslav (talk) 08:30, 30 March 2010 (UTC)
### Three examples are wrong and should be removed.
I've made a subheading to avoid messing around trying to indent lists etc. correctly.
Well, Zaslav, you've been waiting long enough for further comments - over three years! And I heartily concur in your opinion: that none of the following:
1. "partial function"
2. "generalized function"
3. "law of composition not everywhere defined" (or its allegedly abusive precursor)
is in fact either:
• an abuse of notation (since they are all terms, not notation), or
• an abuse of (the English) language.
Another example of a non-restrictive adjective used often in maths: "approximate" or "approximately". Technically speaking, the approximate includes the exact; two is certainly "approximately two", even if it is also "exactly two".
The writing is, as you wrote, POV. It reminds me of one of those oh-so-amusing newspaper pieces that regularly appears exposing the supposed illogicality of English and, metaphorically shaking its head and clucking its tongue, suggests that perhaps we really should go back to learning Latin in schools in order to impart the virtues of clear thinking. These three examples will only confuse the reader seeking a clear understanding of how mathematicians systematically abuse notation and use language to avoid pedantically complete specification within certain contexts. The following comment is also POV, but I believe has support from decades of research in linguistics:
It is a correct and sophisticated use of any language to qualify its terms no more than is absolutely necessary to understand them unambiguously from their context.
We should therefore remove these incorrect examples forthwith.
yoyo (talk) 17:33, 12 April 2013 (UTC)
## dx/dy is not always equal to 1/(dy/dx)
(The article mentions this to support that certain manipulations of differentials are notational abuses.) Could someone add an example (a function y(x)) to demonstrate this? Cesiumfrog (talk) 00:42, 23 May 2010 (UTC)
Consider a circle with equation x^2 + y^2 = R^2. Then x dx + y dy = 0. Hence dy/dx equals -x/y if y is nonzero but is undefined for y = 0. Similarly, dx/dy equals -y/x but is undefined for x = 0. Hence dx/dy = 1/(dy/dx) holds if and only if both x and y are nonzero. This example depends on being strict about 1/0. If you want to be less strict and replace "undefined" by "infinite", together with some rules you decide to accept, you may still accept that dx/dy = 1/(dy/dx) holds everywhere and want a different example. Boute (talk) 09:51, 7 August 2010 (UTC)
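(A quick numeric sanity check of the computation above, on the unit circle R = 1; a minimal sketch in R comparing -x/y with a finite-difference quotient:)

x <- 0.6
y <- sqrt(1 - x^2)                 # upper half of the unit circle
-x / y                             # dy/dx from x dx + y dy = 0: -0.75
h <- 1e-7
(sqrt(1 - (x + h)^2) - y) / h      # finite difference: approximately -0.75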
Well, yeah, I would like a different example. But I'm starting to doubt whether a more persuasive example (e.g., finite dx/dy) exists? Anyway, it seems almost like a circular argument (or rather assuming the conclusion): the article faults the lack of strictness (in manipulating derivatives like fractions) purely on extremely strict grounds (distinguishing 1/0 from infinity). Would it not be better to simply point out the standard way that the derivative notation is defined (such that the notation represents in shorthand a single complex entity rather than a simple ratio of two independent entities), without making any stronger claim? (If anything, it would seem justifiable to me if the article separately added 1/(1/0)=0 to its list of examples of notational abuses, since it seems to be a separate case of something disallowed in strict contexts but nonetheless tending toward correct answers.) Cesiumfrog (talk) 04:16, 9 August 2010 (UTC)
I don't think that this is an abuse of notation at all. When I was at school I formed the impression that separating dy and dx as though dy/dx were a fraction was an abuse of notation, but then at university I found that there is a perfectly simple interpretation which fully justifies the notation: dy and dx are simply real numbers in the appropriate ratio. This interpretation is, in fact, mentioned in the article (though the issue is somewhat muddied by expressing it in terms of the geometry of a graph). The case dy/dx=0 does not justify the remark in the article that "the derivative does not always behave exactly like a fraction (e.g. dx/dy is not always equal to 1/(dy/dx))" at all, because it is in fact an example of how derivatives do behave exactly like fractions, not an example of how they don't, since if a/b=0 then b/a is no more and no less meaningful than dx/dy when dy/dx=0. I considered removing this section of the article, but on reflection it will probably be better to rewrite it. JamesBWatson (talk) 19:58, 19 August 2011 (UTC)
## Quotation
Is the quotation right? "We will occasionally use this arrow notation unless there is no danger of confusion." Shouldn't the "no" be removed, for example?--190.188.2.122 (talk) 15:33, 24 December 2010 (UTC)
## Some adjectives, such as "generalized", can only be used in this way
A generalized function can be a non-function.--刻意(Kèyì) 00:11, 31 December 2010 (UTC)
## Some thoughts about the trigonometric function thing
I always used ${\displaystyle \mathrm {sin} (x)^{n}}$ notation to raise trigonometric functions to powers. I'm a programmer and thought that () is the function invocation operator. So the sine comes first then the power. Now I think a mathematician is confused by this and thinks I meant ${\displaystyle \mathrm {sin} \,x^{n}}$. Now I have quite a few papers to fix... Calmarius (talk) 14:17, 28 June 2013 (UTC)
## Derivative section
Right now, the derivative section reads like an opinion piece or argument. It cites no sources, and makes unverifiable claims. E.g. "fully justified", "completely rigorous". If there is a source for these claims, then the content should be attributed to that source as an opinion. If it is absolutely not an 'abuse of notation', then it should say so in an objective tone, or be removed from the article. If there is some sort of debate on the matter, all sides should be represented.
I tagged it with original research. GregRos (talk) 23:16, 10 August 2013 (UTC)
Why do you object to this, and what on earth makes you think it is "original research"? It is perfectly standard that dx and dy can be taken as real non-zero numbers in the ratio ${\displaystyle {\frac {dy}{dx}}}$ : 1, and this fully justifies manipulating them as though they are numbers, because in that case they are numbers. (It is probably true that very few mathematicians habitually think of them as such, but that is not the point: the point is that they can be so regarded, and that doing so justifies the techniques. Far from being original research, it was presented to me as perfectly standard and accepted in my time at university, over 40 years ago, and it was there in the text books. No doubt it would be possible to find such a text book and cite it to satisfy your demands, but the trouble of doing so would be disproportionate. JamesBWatson (talk) 20:45, 12 August 2013 (UTC)
I agree with GregRos and disagree with JamesBWatson. I believe the statement "dx and dy can be taken as real non-zero numbers in the ratio ${\displaystyle {\frac {dy}{dx}}}$ : 1" is simply false, and as a mathematician I would never allow it. The thing about "separating dx and dy" is that in certain circumstances we have theorems that tell us that the result of such manipulation leads to the correct answer; it is certainly not because dx and dy are numbers. —Preceding unsigned comment added by McKay (talk · contribs) 09:09, 14 August 2013 (UTC)
Good heavens, I wasn't suggesting that "the result of such manipulation leads to the correct answer" is "because dx and dy are numbers", or that the notation is somehow a substitute for "theorems that tell us that the result of such manipulation leads to the correct answer". Obviously, the fact that the method works requires analytical proof. However, granted that it can be proved that it does work, there is the quite separate question of whether it possible to produce a sound, logical interpretation of such notation, rather than regarding them either as some sort of fictitious device or else as infinitesimals. Indeed, it turns out that there is such a sound interpretation. We can take, for example, ${\displaystyle f'(x){dx}={dy}}$ as referring only to real numbers, and this means that the manipulation of such notation is perfectly sound and logical, not some sort of "abuse of notation". In graphical terms, dx and dy refer to the lengths of lines parallel to the axes making a triangle with a segment of a tangent, just as δx and δy refer to the lengths of lines parallel to the axes making a triangle with a chord. If the expressions dx and dy are interpreted that way, then there is no "abuse of notation" involved, as it is literally true that the product of the real number ${\displaystyle f'(x)}$ and the real number ${\displaystyle dx}$ is equal to the real number ${\displaystyle dy}$. The point is not that this interpretation is "the reason" why the method works, or that it somehow avoids the need for proof that it does, but simply that this is a perfectly logical interpretation of the symbols, without any "it doesn't really mean anything, but I will do it because it works" nonsense. In my experience the place where this interpretation of the notation is most useful is in connection with partial differentiation, because uses with ordinary differentiation can always be rewritten without differential notation without significant increase in complexity, whereas such notation as ${\displaystyle {\frac {\partial f}{\partial x}}{dx}+{\frac {\partial f}{\partial y}}{dy}}$ cannot so easily be replaced. However, the point at issue as far as this article is concerned is not whether or under what circumstances the notation is useful, but simply whether or not a perfectly logical interpretation of the notation is possible. And the answer to that is that it certainly is possible, whether or not most mathematicians are unaware that it is, and think they are "abusing" the notation every time they use it. JamesBWatson (talk) 14:12, 14 August 2013 (UTC)
Just passing by and the tone in several sections reads too much like editors sending passive-aggressive digs at each other. It's important to summarize the opposing rationales for the conflicting attitudes but please keep the tone encyclopedic rather than evangelical. — Preceding unsigned comment added by 67.87.17.222 (talk) 07:41, 2 December 2013 (UTC)
Can you give specific examples of what you mean? Without them, it is difficult to take any steps to improve matters. Also, are you sure that an "Original Research" tag is appropriate? "Editors sending passive-aggressive digs at each other" sounds as though it means something very different from "Original Research", and I am at a loss to see how the tag applies in this case. JamesBWatson (talk) 15:56, 3 December 2013 (UTC)
## "f(x)=O(g(x))" is not considered abusive by all mathematicians
I put two templates on the "Big O notation" section: a POV template and an "unreferenced section" template. This section appears to me to be a personal essay, rather clumsily constructed, and to convey the particular point of view of its author(s). It states that the notation "f(x) = O(g(x))" is abusive, which is not at all universally admitted in the mathematical community, and particularly not amongst number theorists. The claim that it is abusive follows from the very rigid (and impossible to keep) position that a symbol should have one unique meaning in mathematics. But the use of an alternate meaning of a symbol or group of symbols is common in mathematics and does not necessarily indicate an abuse of notation. (In this particular instance one can perfectly well consider that "=O" is one single operator, in which the part "=" has no individual meaning, and in particular is not an equivalence relation): with this reading there is no abuse of notation in the formal definition. This other point of view is not even addressed in the section. Sapphorain (talk) 19:46, 10 July 2015 (UTC)
## Section Integers
The section "integers", which contains several times the phrase "abuse of notation" has been removed with the edit summary "this section has nothing to do with abuse of notation!". As this assertion is clearly wrong, I have restored the section. To McKay: If you have a better reason for removing this section, please discuss it here. D.Lazard (talk) 10:34, 30 November 2016 (UTC)
I think you don't know what "abuse of notation" means. Most objects in mathematics can be formally constructed in different ways, giving us multiple arrangements of symbols that can be used to refer to them. That's not what abuse of notation is. The last part, about types in a computer language, is entirely irrelevant to the subject. As for the "rationals" section, considering "3/1 = 3" to be an abuse of notation is simply ridiculous, imo. McKay (talk) 02:02, 1 December 2016 (UTC)
To McKay: Your opinion on my knowledge is irrelevant for this discussion, and also for Wikipedia in general.
If you do not agree with the definition given in the lead, you must provide another definition and a source for it (the definition of the lead is essentially that of Bourbaki, which is certainly a highly reliable source). The first sentence of the lead is: In mathematics, abuse of notation occurs when an author uses a mathematical notation in a way that is not formally correct but that seems likely to simplify the exposition or suggest the correct intuition (while being unlikely to introduce errors or cause confusion). This definition applies exactly to the equality 3 = 3/1. In fact, this is not formally correct, as the left-hand side is an integer, and the right-hand side is an equivalence class of pairs of integers, and certainly not an integer. But, clearly, writing this equality "simplifies the exposition and suggests the correct intuition, while being unlikely to introduce errors or cause confusion". The fact that most people do not know that this is an abuse of notation does not imply that it is not an abuse of notation. There is nothing ridiculous here. D.Lazard (talk) 16:59, 1 December 2016 (UTC)
This type of reasoning could be used to prove that a large fraction of all modern mathematical notation is an abuse, which renders the concept useless. McKay (talk) 00:56, 5 December 2016 (UTC)
The sections on the integers and rationals contain a lot of dubious assertions (and are pretty poorly written anyway). One problem is that, e.g., the statement "3/1 = 3 is an abuse" is only correct when we're working under the convention that 3 and 1 must refer to 3 and 1 as integers and not 3 and 1 as rational numbers to begin with. If we're using 3 and 1 to refer to rational numbers, then "3/1" means "the rational number 3 divided by the rational number 1", which really is equal to "the rational number 3". This is really only an abuse if we're describing the construction of the rational numbers (or something like that), and need to be very careful about what we're working with. This should either be made explicit in these sections, or the sections should just be removed. Deacon Vorbis (talk) 15:57, 1 February 2017 (UTC)
You are right. This section is, at best, subjective and totally useless, and, at worst, abusive itself and wrong. I think it should be suppressed altogether. Sapphorain (talk) 17:07, 1 February 2017 (UTC)
As an aside, a better example of an abuse under the equality-vs-isomorphism type would be something like writing ${\displaystyle \pi _{1}(S^{1})=\mathbb {Z} }$ for the fundamental group of a circle. Deacon Vorbis (talk) 18:54, 1 February 2017 (UTC) | 2017-03-30 18:43:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 43, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8181348443031311, "perplexity": 822.5568601697994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218199514.53/warc/CC-MAIN-20170322212959-00647-ip-10-233-31-227.ec2.internal.warc.gz"} |
https://phy.princeton.edu/events/marcelo-magnasco-rockefeller-university-learning-be-critical | # Marcelo Magnasco, Rockefeller University "Learning to be critical"
Date
Oct 14, 2010, 4:30 pm6:00 pm
Location
McDonnell A02
## Details
Event Description
We hypothesize that many large-scale biological networks organize themselves to operate in a dynamical regime where extensively many (in the thermodynamic sense) degrees of freedom poise themselves'' at the edge of a dynamical instability; we further hypothesize such dynamical criticality underlies the ability of the system to propagate information through the network, and to deploy different behaviors, system-wide, depending on context. We demonstrate dynamical laws that allow an artificial neural network to reach such state, i.e., to learn how to balance on a many-dimensional critical transition, and point out many consequences, such as on the size, spectrum, and spatial structure of fluctuations, that may help to identify such a state. We work out consequences for wave-like propagation of signals on abstract models of cortex. We apply these ideas to experimental data on ecocorticography array in humans and demonstrate signatures of both dynamical and statistical criticality in the recorded brain activity. | 2023-03-31 13:37:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3144609332084656, "perplexity": 1962.3333088456404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949642.35/warc/CC-MAIN-20230331113819-20230331143819-00490.warc.gz"} |
http://www.thermospokenhere.com/wp/03_tsh/C0700___cork_muzzle_velocity/cork_muzzle_velocity.html | THERMO Spoken Here! ~ J. Pohl © (C0700 ~ 1/15)
# Champagne Cork Muzzle Velocity
After the speeches and hand-shaking at the "Professors Emeritus Banquet," a student bartender opened the first bottle of champagne with a loud "POP." As everyone laughed, the cork flew upward and over a rafter in the gymnasium. The mass of the cork is one gram and the height of the rafter is about 10 meters above the champagne bottle.
Estimate the speed of the cork at the instant it left the neck of the bottle.
The event involves a change of energy of the cork. A good practice is to write the energy equation to begin, then modify it to suit the system and event being considered.
(1) $\Delta KE + \Delta PE = \sum W$. An increment form of the equation relates kinetic and potential energy changes with works that might occur.
Inspecting the equation, we realize the event, in general, will involve a change of velocity and a change of elevation, so the kinetic and potential energy changes might not be zero.
Next we inspect the "sum of works." We realize during flight the cork experienced friction (a drag force) as it moved through air. That friction force acted on the boundary of the cork. The friction force opposed the motion. But the cork displaced along the path of its flight: The effect constitutes friction work of the cork. This work is negative meaning a decrease of energy of the cork (as we expect). At this level of our studies we cannot calculate the effect, the frictional work of the event. Our only choice is to assume it is negligibly small ~ zero.
Also we might realize that a gravity force acts on the cork and it is displaced. Should this effect be included as work? The answer is no. The work associated with gravity was transformed into an energy term: potential energy. Work is associated with surface forces only (learn this now; it is explained later). Hence, ignoring the drag force of the cork moving through air, the sum of works for the event is approximately zero.
(2) $\Delta(KE + PE) = 0$. This implicit equation states that the change of the sum of energies equals zero.
Next write the energy equation for the cork explicitly:
(3) $\tfrac{m}{2}\left(V_2^2 - V_1^2\right) + mg\left(z_2 - z_1\right) = 0$. This equation identifies the energies in terms of system properties.
Usually the first (some say "initial") state of a system is known. For the cork, 1 is the immediate instant in time the cork goes "pop." Just exploded from the bottle, it has a speed (straight up, we assume). We seek an approximation of this speed. The elevation of the cork at that instant is level with the upheld bottle, which we don't know. It is some number, z*.
STATE (1): Elevation: z_cork,1 = z* (an unknown number) and Speed: V_cork,1 = ?
We choose our event to end at the precise (imagined) time that the cork passed over the beam. Some call this the final State, or we might just label it as 2. In the event-ending condition, the cork has an elevation greater by 10 meters than initially. But its speed, at the very top of its flight becomes zero. We don't know the final elevation of the cork, so we write it as an "increase from the initial elevation."
STATE (2): Elevation: z_cork,2 = z* + 10 m and Speed: V_cork,2 = 0
We enter these conditions into the equation to see what it will tell us.
(4) $\tfrac{m}{2}\left(0^2 - V_1^2\right) + mg\left[(z^* + 10\ \mathrm{m}) - z^*\right] = 0$. It is best to avoid needless algebra: work with the equation, as is, until only one unknown remains.
A calculation shows: V_1 = 14 m/s. But we have made many assumptions. We assumed no friction and a vertical flight. Our number is a minimum value. To be precise, we realize the actual, initial speed of the cork had to be greater.
Our answer: V_cork,1 > 14.0 m/s.
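A quick numeric check of equation (4), as a sketch in R (g taken as 9.8 m/s²; friction neglected, as above):

g <- 9.8           # m/s^2
h <- 10            # m, rise of the cork to the rafter
sqrt(2 * g * h)    # minimum muzzle speed: 14 m/s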
## Champagne Cork Muzzle Velocity
After the speeches and glad-handing at the Professors Emeritus Banquet," a student bartender, opened the first bottle of champagne with a loud "POP." Everyone laughed as the cork flew upward to pass over a gymnasium rafter. | 2017-08-19 03:37:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6660294532775879, "perplexity": 1597.9770854166263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105297.38/warc/CC-MAIN-20170819031734-20170819051734-00202.warc.gz"} |
https://physics.aps.org/articles/v15/s93 | Synopsis
# Squeezing a Wigner Solid
Physics 15, s93
Researchers have made electrons crystallize into an anisotropic structure, which could lead to new insights into quantum many-body systems.
In 1934, theoretical physicist Eugene Wigner predicted that a low-temperature, low-density gas of electrons on a background of evenly distributed positive charges will crystallize to form a 2D lattice—a structure now known as a Wigner crystal (WC). In the past two decades, the realization of such structures has given physicists a powerful platform for investigating quantum many-body interactions, but these experiments have always involved WCs that were isotropic, limiting the kinds of phenomena that can be studied. Now, Shafayat Hossain and colleagues at Princeton University have created a 2D WC that is anisotropic [1].
Hossain and his colleagues created their WC in a structure that combined the necessary high degree of order with an anisotropic electronic energy band: a quantum well formed from a single crystal of aluminum arsenide (AlAs). Electrons in the conduction band of AlAs exhibit two energy minima, or “valleys,” aligned with two of the crystal axes. By squeezing the sample along one of these axes, the team manipulated the energy of the conduction-band electrons, confining them—and therefore the WC—to a single valley.
The researchers measured a pronounced anisotropy in their WC's electrical properties, and also found that it was more "slippery" in one direction. WCs are typically pinned in place by rare defects in the host material but can be unpinned by a strong enough electric field. The WC created by Hossain and his colleagues was much easier to slide along the squeezed axis than along the other axis. Most surprising, however, was their WC's melting point of up to 0.9 K—far above the 100 mK predicted by theory. The team is now planning experiments to explore whether the anisotropy explains this high melting point.
–Allison Gasparini
Allison Gasparini is a freelance science writer based in Santa Cruz, CA.
## References
1. Md. S. Hossain et al., “Anisotropic two-dimensional disordered Wigner solid,” Phys. Rev. Lett. 129, 036601 (2022).
## Related Articles
Condensed Matter Physics
### Density-Functional Models Get Excited
A venerable strategy for approximating a system’s ground states has now been extended to accommodate its excited states. Read More » | 2023-03-28 07:56:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34124115109443665, "perplexity": 2784.4263709416286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948817.15/warc/CC-MAIN-20230328073515-20230328103515-00626.warc.gz"} |
https://worldbuilding.stackexchange.com/questions/12303/building-a-base-on-the-surface-of-a-planet-like-venus | # Building a Base on the surface of a planet like Venus
I know living on Venus would be virtually impossible for us now. But I'm trying to picture what life would be like on the surface of such a world, and how advanced in technology a species would have to be in order to achieve it.
Whatever species is on Venus obviously didn't originate there, so this Venus like planet would be a colony of sorts. I am placing no limits at this point on how advanced this species could be, but they are either humans or humanoids.
If I wanted to build a base on the surface of a planet like Venus, with its thick, crushingly heavy corrosive atmosphere, intense heat and rivers of lava, what kind of technologies would I need?
I'm thinking the Venusian Base would be something like a large pressurized container with its own internal environment, that was strong enough to withstand the atmosphere. But I wonder if this is enough? How would people get in and out of the base? Would mining activities be possible from within the base?
Just building a base might even be possible with current technology, just incredibly expensive and very challenging from an engineering perspective.
Dealing with the pressure is in principle no different than for a submarine. It just requires a strong pressure hull. The surface temperature can be dealt with; 450°C isn't hospitable, but at least it's not so hot that everything melts, so a base could be constructed using more or less common materials. Insulating the base is in principle no different from insulating a fridge to reduce heat transfer from the environment; just the scale is a little different. It's all technology we already possess; what's missing is just the engineering work to put it together. And a way to bring it to Venus XD.
Living on such a base would in many respects be similar to living in a submarine. Access to the outside is impossible, or at least, if a pressure suit can be made at all, it will be pretty cumbersome and heavy. More likely surface activities, if any, would be restricted to using some kind of vehicle, probably resembling something between a tank and a submarine. Deploying and retrieving the vehicle could work the same way as with an airlock, just with the high pressure on the outside.
If any mining is done, it would most likely be all robotic. But considering that just about anything you could possibly mine from Venus could be obtained much more easily elsewhere, mining seems mostly pointless. A base could more believably be scientific, and mining be replaced by drilling to learn about geology... erm, whatever the term for studying Venus rocks is.
• You forgot the sulfuric acid rain. The habitat must be entirely ceramic (or lithic if you prefer) on the outside since metals dissolve and plastics melt. Non-melting organics will degrade badly with the combination of acid rain and heat. – pojo-guy Feb 13 '18 at 13:31
• @pojo-guy You mean sulfuric acid virga. It never reaches the ground, 'cause it gets too hot and evaporates. – Logan R. Kearsley May 16 at 4:20
There used to be a lot more science fiction about colonizing venus. That was until we actually sent probes there, took one look and decided, "You know what? Mars is actually pretty nice."
The Surface
The surface is difficult. I imagine it would require a habitat similar to one that could survive under a kilometer of ocean, inside a volcano. Very high pressure and very high temperature. We don't currently have any habitats that can even survive the pressure, 9.2 MPa. That's about 90 times Earth's atmosphere. The temperature is hot enough to melt lead, 462 °C. We would have to expend a lot of energy just to keep cool.
We'd have to bring everything for life. There is no water or molecular oxygen (though we might be able to scrub oxygen from the atmospheric $CO_2$).
One relatively neat thing for a base there is the Sun will set in the East. Venus rotates in the opposite direction on its axis than Earth. It wouldn't be totally obvious, since we achieve the same effect on Earth by confusing north and south. However, as boring as that is, it might be the neatest thing about living on the surface.
Alternative
It's far more likely we'd have floating habitats. It's certainly much nicer up there, where the planet isn't trying to kill you.
The atmosphere is dense enough that our normal breathing air could be stored in a massive bag and used as a lifting gas. A blimp that stores our breathing air is pretty cool. Once the floating habitat was set up we might make excursions to the surface. This method makes a lot of sense, it's what we do with the ocean.
So, exploring and colonizing Venus will almost certainly be from 50km up. Miners, or more likely miner robots, will make dives to the surface. But unless we terraform, humans or other fragile human like creatures, won't be living on the surface.
• are you saying that living on the surface of Venus would be completely impossible - no matter how technologically advanced the Species is? – Jimmery Mar 20 '15 at 10:54
• @Jimmery Not at all, it's just difficult with little reward. Humans would be more likely to populate the floor of the ocean, it would certainly be much easier. – Samuel Mar 20 '15 at 14:53
• Why the down vote? – Samuel Mar 20 '15 at 16:23
Personally I'd start exporting bacteria or similar to Venus right now that thrive on heat and pressure, and that consume carbon dioxide and methane and excrete solid carbon in a useful form. The more carbon we can strip out of the Venusian atmosphere, and sequester in its surface, the faster it will cool down. Frankly we could use such bacteria here on Earth, so long as we were assured that they wouldn't mutate into something that wiped us or our life-supporting ecosystem out.
In that regard, Venus is a great lab for such experiments. It might take ages to cool it down to habitable levels - can someone run some numbers? - but it would be handy to have a backup planet in the same neighbourhood if it is at all possible. And even if we don't manage to get a habitable planet out of it, maybe we'll get some useful science from it.
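Some rough numbers, for what they're worth (a sketch in R; the atmospheric mass and CO2 fraction are rounded literature values, and the cooling timescale itself is left alone):

atm_mass <- 4.8e20                           # kg, approximate mass of Venus's atmosphere
co2_frac <- 0.965                            # CO2 fraction (~96.5% by volume; by mass a bit higher)
carbon <- atm_mass * co2_frac * (12 / 44)    # kg of carbon locked up in that CO2
carbon                                       # ~1.3e20 kg of solid carbon to sequester

That is very roughly a 100-meter-thick layer of graphite over the whole planet, so "ages" sounds about right.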
• It looks like this approach has already been considered, and the missing ingredient turns out to be Hydrogen. – Seth Mar 19 '15 at 23:00
• How damned inconvenient! I'm not sure that exporting Hindenburgs full of hydrogen to Venus is going to get us far... – omatai Mar 19 '15 at 23:14 | 2019-06-15 21:08:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46291372179985046, "perplexity": 1131.0657583032691}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627997335.70/warc/CC-MAIN-20190615202724-20190615224724-00248.warc.gz"} |
https://en.wikipedia.org/wiki/User:Kmhkmh | User:Kmhkmh
This user is a member ofWikiProject Mathematics.
This user likes geometry.
${\displaystyle e^{i\pi \ }}$ This user is a mathematician.
I'm a mathematician and I've been using Wikipedia since 2002/2003. Aside from occasional corrections or improvements I set up this account in 2007 to contribute missing articles as well. I can be found in the German wikipedia too:[1]
Templates
{{Google books|ID|displayed text|page=}} {{MathWorld|title=|urlname=}} {{springer|id=|title=|first=|last=}} | 2019-11-17 19:02:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5769613981246948, "perplexity": 4329.842173618579}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669225.56/warc/CC-MAIN-20191117165616-20191117193616-00375.warc.gz"} |
http://web.emn.fr/x-info/sdemasse/gccat/Kzebra_puzzle.html | ### 3.7.264. Zebra puzzle
A constraint that can be used for modelling the zebra puzzle problem. Here is the first known publication of that puzzle quoted in italic from Life International, December 17, 1962:
1. There are five houses.
2. The Englishman lives in the red house.
3. The Spaniard owns the dog.
4. Coffee is drunk in the green house.
5. The Ukrainian drinks tea.
6. The green house is immediately to the right of the ivory house.
7. The Old Gold smoker owns snails.
8. Kools are smoked in the yellow house.
9. Milk is drunk in the middle house.
10. The Norwegian lives in the first house.
11. The man who smokes Chesterfields lives in the house next to the man with the fox.
12. Kools are smoked in the house next to the house where the horse is kept.
13. The Lucky Strike smoker drinks orange juice.
14. The Japanese smokes Parliaments.
15. The Norwegian lives next to the blue house.
Now, who drinks water? Who owns the zebra?
In the interest of clarity, it must be added that each of the five houses is painted a different color, and their inhabitants are of different national extractions, own different pets, drink different beverages and smoke different brands of American cigarettes. In statement 6, right refers to the reader's right.
A first model involves $\mathrm{𝚎𝚕𝚎𝚖𝚎𝚗𝚝}$ constraints with variables in their tables (i.e., the table of an $\mathrm{𝚎𝚕𝚎𝚖𝚎𝚗𝚝}$ constraint corresponds to its second argument). It consists of creating for each house $i$ ($1\le i\le 5$) five variables ${C}_{i}$, ${N}_{i}$, ${A}_{i}$, ${D}_{i}$, ${B}_{i}$ respectively corresponding to the colour of house $i$, the nationality of the person living in house $i$, the preferred pet of the person living in house $i$, the preferred beverage of the person living in house $i$, the preferred brand of American cigarettes of the person living in house $i$. We first state the following five $\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}$ constraints on these variables for expressing the fact that colours, nationalities, pets, beverages, and brands of American cigarettes are distinct:
• $\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}$$\left(〈{C}_{1},{C}_{2},{C}_{3},{C}_{4},{C}_{5}〉\right)$
• $\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}$$\left(〈{N}_{1},{N}_{2},{N}_{3},{N}_{4},{N}_{5}〉\right)$
• $\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}$$\left(〈{A}_{1},{A}_{2},{A}_{3},{A}_{4},{A}_{5}〉\right)$
• $\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}$$\left(〈{D}_{1},{D}_{2},{D}_{3},{D}_{4},{D}_{5}〉\right)$
• $\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}$$\left(〈{B}_{1},{B}_{2},{B}_{3},{B}_{4},{B}_{5}〉\right)$
Now observe that most statements link two specific attributes (e.g., The Englishman lives in the red house). Consequently, in order to ease the encoding of such statements in terms of constraints, we will first create for each attribute a variable that indicates the house where that attribute occurs. For instance, for the statement The Englishman lives in the red house we will create two variables which respectively indicate in which house the Englishman lives and which house is red. We now create all the variables attached to each class of attributes.
For each possible colour $c\in \left\{\mathrm{𝑟𝑒𝑑},\mathrm{𝑔𝑟𝑒𝑒𝑛},\mathrm{𝑖𝑣𝑜𝑟𝑦},\mathrm{𝑦𝑒𝑙𝑙𝑜𝑤},\mathrm{𝑏𝑙𝑢𝑒}\right\}$ we create a variable ${I}_{c}$ that corresponds to the index of the house having this colour. For each variable ${I}_{c}$, an $\mathrm{𝚎𝚕𝚎𝚖𝚎𝚗𝚝}$ constraint links it to the variables ${C}_{1},{C}_{2},{C}_{3},{C}_{4},{C}_{5}$ giving the colour of each house:
• $\mathrm{𝚎𝚕𝚎𝚖𝚎𝚗𝚝}$$\left({I}_{\mathrm{𝑟𝑒𝑑}},〈{C}_{1},{C}_{2},{C}_{3},{C}_{4},{C}_{5}〉,\mathrm{𝑟𝑒𝑑}\right)$
• $\mathrm{𝚎𝚕𝚎𝚖𝚎𝚗𝚝}$$\left({I}_{\mathrm{𝑔𝑟𝑒𝑒𝑛}},〈{C}_{1},{C}_{2},{C}_{3},{C}_{4},{C}_{5}〉,\mathrm{𝑔𝑟𝑒𝑒𝑛}\right)$
• $\mathrm{𝚎𝚕𝚎𝚖𝚎𝚗𝚝}$$\left({I}_{\mathrm{𝑖𝑣𝑜𝑟𝑦}},〈{C}_{1},{C}_{2},{C}_{3},{C}_{4},{C}_{5}〉,\mathrm{𝑖𝑣𝑜𝑟𝑦}\right)$
• $\mathrm{𝚎𝚕𝚎𝚖𝚎𝚗𝚝}$$\left({I}_{\mathrm{𝑦𝑒𝑙𝑙𝑜𝑤}},〈{C}_{1},{C}_{2},{C}_{3},{C}_{4},{C}_{5}〉,\mathrm{𝑦𝑒𝑙𝑙𝑜𝑤}\right)$
• $\mathrm{𝚎𝚕𝚎𝚖𝚎𝚗𝚝}$$\left({I}_{\mathrm{𝑏𝑙𝑢𝑒}},〈{C}_{1},{C}_{2},{C}_{3},{C}_{4},{C}_{5}〉,\mathrm{𝑏𝑙𝑢𝑒}\right)$
Note that we can replace the five previous $\mathrm{𝚎𝚕𝚎𝚖𝚎𝚗𝚝}$ constraints by the following $\mathrm{𝚒𝚗𝚟𝚎𝚛𝚜𝚎}$ constraint:
• $\mathrm{𝚒𝚗𝚟𝚎𝚛𝚜𝚎}$$\left(〈\begin{array}{ccc}\mathrm{𝚒𝚗𝚍𝚎𝚡}-1\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{C}_{1}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑟𝑒𝑑}},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-2\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{C}_{2}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑔𝑟𝑒𝑒𝑛}},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-3\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{C}_{3}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑖𝑣𝑜𝑟𝑦}},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-4\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{C}_{4}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑦𝑒𝑙𝑙𝑜𝑤}},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-5\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{C}_{5}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑏𝑙𝑢𝑒}}\hfill \end{array}〉\right)$
For each possible nationality $n\in \left\{\mathrm{𝑒𝑛𝑔𝑙𝑖𝑠ℎ𝑚𝑎𝑛},\mathrm{𝑠𝑝𝑎𝑛𝑖𝑎𝑟𝑑},\mathrm{𝑢𝑘𝑟𝑎𝑖𝑛𝑖𝑎𝑛},\mathrm{𝑛𝑜𝑟𝑤𝑒𝑔𝑖𝑎𝑛},$ $\mathrm{𝑗𝑎𝑝𝑎𝑛𝑒𝑠𝑒}\right\}$ we create a variable ${I}_{n}$ that corresponds to the index of the house where the person with this nationality lives. For each variable ${I}_{n}$, an $\mathrm{𝚎𝚕𝚎𝚖𝚎𝚗𝚝}$ constraint links it to the variables ${N}_{1},{N}_{2},{N}_{3},{N}_{4},{N}_{5}$ giving the nationality associated with each house; these five $\mathrm{𝚎𝚕𝚎𝚖𝚎𝚗𝚝}$ constraints take the same form as the ones stated for the colours.
Again we can replace the five previous $\mathrm{𝚎𝚕𝚎𝚖𝚎𝚗𝚝}$ constraints by the following $\mathrm{𝚒𝚗𝚟𝚎𝚛𝚜𝚎}$ constraint:
• $\mathrm{𝚒𝚗𝚟𝚎𝚛𝚜𝚎}$$\left(〈\begin{array}{ccc}\mathrm{𝚒𝚗𝚍𝚎𝚡}-1\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{N}_{1}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑒𝑛𝑔𝑙𝑖𝑠ℎ𝑚𝑎𝑛}},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-2\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{N}_{2}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑠𝑝𝑎𝑛𝑖𝑎𝑟𝑑}},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-3\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{N}_{3}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑢𝑘𝑟𝑎𝑖𝑛𝑖𝑎𝑛}},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-4\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{N}_{4}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑛𝑜𝑟𝑤𝑒𝑔𝑖𝑎𝑛}},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-5\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{N}_{5}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑗𝑎𝑝𝑎𝑛𝑒𝑠𝑒}}\hfill \end{array}〉\right)$
For each possible preferred pet $a\in \left\{\mathrm{𝑑𝑜𝑔},\mathrm{𝑠𝑛𝑎𝑖𝑙},\mathrm{𝑓𝑜𝑥},\mathrm{ℎ𝑜𝑟𝑠𝑒},\mathrm{𝑧𝑒𝑏𝑟𝑎}\right\}$ we create a variable ${I}_{a}$ that corresponds to the index of the house where the person that prefers this pet lives. For each variable ${I}_{a}$, an $\mathrm{𝚎𝚕𝚎𝚖𝚎𝚗𝚝}$ constraint links it to the variables ${A}_{1},{A}_{2},{A}_{3},{A}_{4},{A}_{5}$ giving the preferred pet of each house; these five $\mathrm{𝚎𝚕𝚎𝚖𝚎𝚗𝚝}$ constraints take the same form as the ones stated for the colours.
Again we can replace the five previous $\mathrm{𝚎𝚕𝚎𝚖𝚎𝚗𝚝}$ constraints by the following $\mathrm{𝚒𝚗𝚟𝚎𝚛𝚜𝚎}$ constraint:
• $\mathrm{𝚒𝚗𝚟𝚎𝚛𝚜𝚎}$$\left(〈\begin{array}{ccc}\mathrm{𝚒𝚗𝚍𝚎𝚡}-1\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{A}_{1}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑑𝑜𝑔}},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-2\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{A}_{2}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑠𝑛𝑎𝑖𝑙}},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-3\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{A}_{3}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑓𝑜𝑥}},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-4\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{A}_{4}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{ℎ𝑜𝑟𝑠𝑒}},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-5\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{A}_{5}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑧𝑒𝑏𝑟𝑎}}\hfill \end{array}〉\right)$
For each possible preferred beverage $d\in \left\{\mathrm{𝑐𝑜𝑓𝑓𝑒𝑒},\mathrm{𝑡𝑒𝑎},\mathrm{𝑚𝑖𝑙𝑘},\mathrm{𝑜𝑟𝑎𝑛𝑔𝑒}_\mathrm{𝑗𝑢𝑖𝑐𝑒},\mathrm{𝑤𝑎𝑡𝑒𝑟}\right\}$ we create a variable ${I}_{d}$ that corresponds to the index of the house where the person that prefers this beverage lives. For each variable ${I}_{d}$, an $\mathrm{𝚎𝚕𝚎𝚖𝚎𝚗𝚝}$ constraint links it to the variables ${D}_{1},{D}_{2},{D}_{3},{D}_{4},{D}_{5}$ giving the preferred beverage of each house; these five $\mathrm{𝚎𝚕𝚎𝚖𝚎𝚗𝚝}$ constraints take the same form as the ones stated for the colours.
Again we can replace the five previous $\mathrm{𝚎𝚕𝚎𝚖𝚎𝚗𝚝}$ constraints by the following $\mathrm{𝚒𝚗𝚟𝚎𝚛𝚜𝚎}$ constraint:
• $\mathrm{𝚒𝚗𝚟𝚎𝚛𝚜𝚎}$$\left(〈\begin{array}{ccc}\mathrm{𝚒𝚗𝚍𝚎𝚡}-1\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{D}_{1}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑐𝑜𝑓𝑓𝑒𝑒}},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-2\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{D}_{2}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑡𝑒𝑎}},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-3\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{D}_{3}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑚𝑖𝑙𝑘}},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-4\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{D}_{4}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑜𝑟𝑎𝑛𝑔𝑒}_\mathrm{𝑗𝑢𝑖𝑐𝑒}},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-5\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{D}_{5}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑤𝑎𝑡𝑒𝑟}}\hfill \end{array}〉\right)$
For each possible preferred brand of American cigarettes $b\in \left\{\mathrm{𝑜𝑙𝑑}_\mathrm{𝑔𝑜𝑙𝑑},\mathrm{𝑘𝑜𝑜𝑙},$ $\mathrm{𝑐ℎ𝑒𝑠𝑡𝑒𝑟𝑓𝑖𝑒𝑙𝑑},\mathrm{𝑙𝑢𝑐𝑘𝑦}_\mathrm{𝑠𝑡𝑟𝑖𝑘𝑒},\mathrm{𝑝𝑎𝑟𝑙𝑖𝑎𝑚𝑒𝑛𝑡}\right\}$ we create a variable ${I}_{b}$ that corresponds to the index of the house where the person that prefers this brand lives. For each variable ${I}_{b}$, an $\mathrm{𝚎𝚕𝚎𝚖𝚎𝚗𝚝}$ constraint links it to the variables ${B}_{1},{B}_{2},{B}_{3},{B}_{4},{B}_{5}$ giving the preferred brand of American cigarettes of each house; these five $\mathrm{𝚎𝚕𝚎𝚖𝚎𝚗𝚝}$ constraints take the same form as the ones stated for the colours.
Again we can replace the five previous $\mathrm{𝚎𝚕𝚎𝚖𝚎𝚗𝚝}$ constraints by the following $\mathrm{𝚒𝚗𝚟𝚎𝚛𝚜𝚎}$ constraint:
• $\mathrm{𝚒𝚗𝚟𝚎𝚛𝚜𝚎}$$\left(〈\begin{array}{ccc}\mathrm{𝚒𝚗𝚍𝚎𝚡}-1\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{B}_{1}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑜𝑙𝑑}_\mathrm{𝑔𝑜𝑙𝑑}},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-2\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{B}_{2}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑘𝑜𝑜𝑙}},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-3\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{B}_{3}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑐ℎ𝑒𝑠𝑡𝑒𝑟𝑓𝑖𝑒𝑙𝑑}},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-4\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{B}_{4}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑙𝑢𝑐𝑘𝑦}_\mathrm{𝑠𝑡𝑟𝑖𝑘𝑒}},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-5\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{B}_{5}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{I}_{\mathrm{𝑝𝑎𝑟𝑙𝑖𝑎𝑚𝑒𝑛𝑡}}\hfill \end{array}〉\right)$
Finally we state one constraint for each statement from 2 to 15:
• ${I}_{\mathrm{𝑒𝑛𝑔𝑙𝑖𝑠ℎ𝑚𝑎𝑛}}={I}_{\mathrm{𝑟𝑒𝑑}}$ (the Englishman lives in the red house).
• ${I}_{\mathrm{𝑠𝑝𝑎𝑛𝑖𝑎𝑟𝑑}}={I}_{\mathrm{𝑑𝑜𝑔}}$ (the Spaniard owns the dog).
• ${I}_{\mathrm{𝑐𝑜𝑓𝑓𝑒𝑒}}={I}_{\mathrm{𝑔𝑟𝑒𝑒𝑛}}$ (coffee is drunk in the green house).
• ${I}_{\mathrm{𝑢𝑘𝑟𝑎𝑖𝑛𝑖𝑎𝑛}}={I}_{\mathrm{𝑡𝑒𝑎}}$ (the Ukrainian drinks tea).
• ${I}_{\mathrm{𝑔𝑟𝑒𝑒𝑛}}={I}_{\mathrm{𝑖𝑣𝑜𝑟𝑦}}+1$ (the green house is immediately to the right of the ivory house).
• ${I}_{\mathrm{𝑜𝑙𝑑}_\mathrm{𝑔𝑜𝑙𝑑}}={I}_{\mathrm{𝑠𝑛𝑎𝑖𝑙}}$ (the Old Gold smoker owns snails).
• ${I}_{\mathrm{𝑘𝑜𝑜𝑙}}={I}_{\mathrm{𝑦𝑒𝑙𝑙𝑜𝑤}}$ (kools are smoked in the yellow house).
• ${I}_{\mathrm{𝑚𝑖𝑙𝑘}}=3$ (milk is drunk in the middle house).
• ${I}_{\mathrm{𝑛𝑜𝑟𝑤𝑒𝑔𝑖𝑎𝑛}}=1$ (the Norwegian lives in the first house).
• $|{I}_{\mathrm{𝑐ℎ𝑒𝑠𝑡𝑒𝑟𝑓𝑖𝑒𝑙𝑑}}-{I}_{\mathrm{𝑓𝑜𝑥}}|=1$ (the man who smokes Chesterfields lives in the house next to the man with the fox).
• $|{I}_{\mathrm{𝑘𝑜𝑜𝑙}}-{I}_{\mathrm{ℎ𝑜𝑟𝑠𝑒}}|=1$ (kools are smoked in the house next to the house where the horse is kept).
• ${I}_{\mathrm{𝑙𝑢𝑐𝑘𝑦}_\mathrm{𝑠𝑡𝑟𝑖𝑘𝑒}}={I}_{\mathrm{𝑜𝑟𝑎𝑛𝑔𝑒}_\mathrm{𝑗𝑢𝑖𝑐𝑒}}$ (the Lucky Strike smoker drinks orange juice).
• ${I}_{\mathrm{𝑗𝑎𝑝𝑎𝑛𝑒𝑠𝑒}}={I}_{\mathrm{𝑝𝑎𝑟𝑙𝑖𝑎𝑚𝑒𝑛𝑡}}$ (the Japanese smokes Parliaments).
• $|{I}_{\mathrm{𝑛𝑜𝑟𝑤𝑒𝑔𝑖𝑎𝑛}}-{I}_{\mathrm{𝑏𝑙𝑢𝑒}}|=1$ (the Norwegian lives next to the blue house).
Now note that variables ${C}_{i}$, ${N}_{i}$, ${A}_{i}$, ${D}_{i}$, ${B}_{i}$ ($1\le i\le 5$) do not occur at all within the constraints encoding statements 2 to 15. Consequently they can be removed, as long as we replace the five $\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}$ constraints on these variables by the following $\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}$ constraints:
• $\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}$$\left(〈{I}_{\mathrm{𝑟𝑒𝑑}},{I}_{\mathrm{𝑔𝑟𝑒𝑒𝑛}},{I}_{\mathrm{𝑖𝑣𝑜𝑟𝑦}},{I}_{\mathrm{𝑦𝑒𝑙𝑙𝑜𝑤}},{I}_{\mathrm{𝑏𝑙𝑢𝑒}}〉\right)$
• $\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}$$\left(〈{I}_{\mathrm{𝑒𝑛𝑔𝑙𝑖𝑠ℎ𝑚𝑎𝑛}},{I}_{\mathrm{𝑠𝑝𝑎𝑛𝑖𝑎𝑟𝑑}},{I}_{\mathrm{𝑢𝑘𝑟𝑎𝑖𝑛𝑖𝑎𝑛}},{I}_{\mathrm{𝑛𝑜𝑟𝑤𝑒𝑔𝑖𝑎𝑛}},{I}_{\mathrm{𝑗𝑎𝑝𝑎𝑛𝑒𝑠𝑒}}〉\right)$
• $\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}$$\left(〈{I}_{\mathrm{𝑑𝑜𝑔}},{I}_{\mathrm{𝑠𝑛𝑎𝑖𝑙}},{I}_{\mathrm{𝑓𝑜𝑥}},{I}_{\mathrm{ℎ𝑜𝑟𝑠𝑒}},{I}_{\mathrm{𝑧𝑒𝑏𝑟𝑎}}〉\right)$
• $\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}$$\left(〈{I}_{\mathrm{𝑐𝑜𝑓𝑓𝑒𝑒}},{I}_{\mathrm{𝑡𝑒𝑎}},{I}_{\mathrm{𝑚𝑖𝑙𝑘}},{I}_{\mathrm{𝑜𝑟𝑎𝑛𝑔𝑒}_\mathrm{𝑗𝑢𝑖𝑐𝑒}},{I}_{\mathrm{𝑤𝑎𝑡𝑒𝑟}}〉\right)$
• $\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}$$\left(〈{I}_{\mathrm{𝑜𝑙𝑑}_\mathrm{𝑔𝑜𝑙𝑑}},{I}_{\mathrm{𝑘𝑜𝑜𝑙}},{I}_{\mathrm{𝑐ℎ𝑒𝑠𝑡𝑒𝑟𝑓𝑖𝑒𝑙𝑑}},{I}_{\mathrm{𝑙𝑢𝑐𝑘𝑦}_\mathrm{𝑠𝑡𝑟𝑖𝑘𝑒}},{I}_{\mathrm{𝑝𝑎𝑟𝑙𝑖𝑎𝑚𝑒𝑛𝑡}}〉\right)$
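To make the second model concrete, here is a minimal brute-force sketch in Python (our own illustration, not part of the catalogue; all names are ours). Each permutations(...) call assigns the five attribute values of one class to distinct houses - playing the role of one $\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}$ constraint over the index variables - and statements 2 to 15 appear as filters:

```python
from itertools import permutations

houses = range(1, 6)  # house indices, 1 = leftmost

def right_of(a, b):   # a is immediately to the right of b
    return a == b + 1

def next_to(a, b):
    return abs(a - b) == 1

for red, green, ivory, yellow, blue in permutations(houses):
    if not right_of(green, ivory):                                   # statement 6
        continue
    for english, spaniard, ukrainian, norwegian, japanese in permutations(houses):
        if english != red or norwegian != 1:                         # statements 2, 10
            continue
        if not next_to(norwegian, blue):                             # statement 15
            continue
        for coffee, tea, milk, juice, water in permutations(houses):
            if coffee != green or tea != ukrainian or milk != 3:     # statements 4, 5, 9
                continue
            for old_gold, kool, chesterfield, lucky, parliament in permutations(houses):
                if kool != yellow or lucky != juice or parliament != japanese:  # 8, 13, 14
                    continue
                for dog, snail, fox, horse, zebra in permutations(houses):
                    if dog != spaniard or snail != old_gold:         # statements 3, 7
                        continue
                    if not next_to(chesterfield, fox):               # statement 11
                        continue
                    if not next_to(kool, horse):                     # statement 12
                        continue
                    print("water is drunk in house", water)
                    print("the zebra is kept in house", zebra)
```

Running it prints the unique solution: water is drunk in house 1 (the Norwegian) and the zebra is kept in house 5 (the Japanese).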
In our experience, when confronted for the first time to this puzzle, a lot of people come up with the model that associates to each house $i$ ($1\le i\le 5$) five variables ${C}_{i}$, ${N}_{i}$, ${A}_{i}$, ${D}_{i}$, ${B}_{i}$ that describe the attributes of the person living in house $i$. However it is difficult to directly express the constraints according to these variables and the second model which associates to each attribute a variable that gives the corresponding house is more convenient for expressing the constraints. | 2017-09-20 07:29:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 197, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.966779351234436, "perplexity": 2822.436982165313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818686705.10/warc/CC-MAIN-20170920071017-20170920091017-00594.warc.gz"} |
http://tex.stackexchange.com/tags/unicode/new | # Tag Info
2
There are a few issues with your setup:

• You need the T1 encoding
• You need the textcomp package
• The palatino package is obsolete
• The utf8x option is not recommended.

\documentclass{memoir}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{textcomp}
\usepackage{mathpazo}
\begin{document}
ĵÊÄ¡£ÄãÇÕãl£¿¬FÚÔ 50.0441° N
\end{document}

...
1
The following seems to work:

\documentclass{memoir}
\usepackage[utf8x]{inputenc}
\usepackage[T1]{fontenc} % <-- new
\usepackage{palatino}
\begin{document}
ĵÊÄ¡£ÄãÇÕãl£¿¬FÚÔ 50.0441° N
\end{document}

Addendum: Instead of using the nearly obsolete palatino package, you may want to consider loading the more recent newpxtext and newpxmath packages. ...
7
The form of the error message you show suggests an older latex release, but for all releases the error comes from this or an older version with the same name:

\def\UTFviii@defined#1{%
  \ifx#1\relax
    \PackageError{inputenc}{Unicode\space char\space\expandafter
      \UTFviii@splitcsname\string#1\relax ...
7
cmss10 is not a Unicode font unfortunately - you need to use a font which implements Unicode maths - the TeX Gyre collection is probably a good starting point. Here is an example:

\documentclass{article}
\usepackage{unicode-math}
\setmainfont{TeX Gyre Schola}
\setmathfont{TeX Gyre Schola Math}
\begin{document}
Some Unicode maths: $x ∈ ℕ$
\end{document}

...
9
If the engine is Unicode aware and a font is used which contains the glyph for the private Unicode code point:

^^^^e25f

See: The ^^ notation in various engines. This is TeX's method to encode non-ASCII characters with ASCII and can also be used inside command tokens. There are also commands to select a character by slot in the current font: LaTeX ...
3
If you're using xelatex, you should load fontspec, which allows you to set the main font with \setmainfont{}, and you can choose any font you have in your own OS. This should be the font of the main part of your document. If you're writing primarily using the Latin alphabet, then you should choose an appropriate font. As far as the languages are concerned, I'd ...
1
There are several parts to your question. How to use XeLaTeX? In order to compile a unicode document using XeLaTeX, you first need to write a unicode document. So you will have to make it a UTF-8 document. Once this is done, it is very easy. You write your document as if you were using PDFLatex, but instead, you will compile it using xelatex.exe. As ...
9
The problem is that the section command sets a header, and the CJK environment ends before the header is typeset, so the Chinese chars are no longer set up. Using \clearpage makes the CJK-environment span two pages so that it is active when the header is typeset, but this is clearly only a work-around. The second problem is that the book ...
11
It happens when latex tries to typeset the page heading. This happens at page shipout time; if the CJK* environment has ended before the last page ships out, the necessary definitions for doing the typesetting are gone, and you get this error. You see this clearly if you add \errorcontextlines=99 to your document (outside the CJK* environment please). Also, ...
5
Load inputenc before titling, to make it aware of the unicode settings etc., otherwise it uses the wrong encoding for \author{äää} etc.

\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{titling}
\author{äää}
\date{\today}
\title{üüüü}
\RequirePackage[pdfencoding=unicode, psdextra]{hyperref}
\AtBeginDocument{
  \hypersetup{ ...
4
Since you are using XeLaTeX you should use the fontspec package instead of the fontenc package. In other words the following code should produce the desired result.

\documentclass{article}
\usepackage{fontspec}
\begin{document}
\thispagestyle{empty}
\pagestyle{empty}
ää--llll
\end{document}
2
If you wanted it to conform to the current font, both in size and style, you could build your own:

\documentclass[a2]{article}
\usepackage{stackengine,scalerel}
\newcommand\NUL{\scalerel*{$\Shortstack[l]{N \phantom{N}U \phantom{NU}L}$}{X}}
\begin{document}
\LARGE This is \NUL
\normalsize This is \NUL
\itshape This is \NUL
\upshape\ttfamily This is \NUL ...
3
If you are on pdflatex, you can use the ascii package:

\documentclass{article}
\usepackage{ascii}
\begin{document}
Is this it? \NUL
\end{document}

You can also input the symbol directly as Unicode, so the code can be portable to XeLaTeX or LuaLaTeX.

\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{ascii}
\usepackage{newunicodechar} ...
9
If a TeX engine is used with Unicode/OpenType font support, then it is just a matter of finding a font that contains the Unicode code point U+2400, e.g.:

% lualatex or xelatex
\documentclass{article}
\usepackage{fontspec}
\begin{document}
\def\test#1{#1:&\fontspec{#1}\symbol{"2400}\\}
\begin{tabular}{l@{ }l}
\test{FreeMono}
\test{FreeSans} ...
1
After lots of searching I managed to get full support with a twitter based emoji set from here: https://github.com/alecjacobson/coloremoji.sty
7
I can get most of them e.g. with DejaVu Sans (but some are missing):

%compiled with lualatex
\documentclass{report}
\usepackage{fontspec}
\setmainfont{DejaVu Sans}
\begin{document}
How would I go about adding the large range of emotions (😀 😁 😂 😃 😄 😅 😆 😇 😈 😉 😊 😋 😌 😍 😎 😏 😐 😑 😒 😓 😔 😕 😖 😗 😘 😙 😚 😛 😜 😝 😞 😟 😠 😡 😢 😣 😤 😥 😦 😧 ...
Top 50 recent answers are included | 2016-05-05 10:47:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9881442189216614, "perplexity": 9412.678719782447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860126502.50/warc/CC-MAIN-20160428161526-00095-ip-10-239-7-51.ec2.internal.warc.gz"} |
http://dqsd.net/ | ## Dave's Quick Search Taskbar Toolbar Deskbar
This new version is revamped to work correctly with Windows XP Service Pack 2, in order not to require the workarounds required by the 3.x version.
Check out the latest 4.1 BETA, now with 64-bit support, at SourceForge
Google does it right: they are fast. Their load time is quick. Their searches are instantaneous. Voila! What could be faster?
Good question. You know that the I'm Feeling Lucky button speeds things up. So does a shortcut to Google on your taskbar, and so does the official Google Toolbar. And maybe you're already using all that.
Still need to go faster? Install Dave's Quick Search Deskbar. It launches Google, Yahoo and other searches straight from your desktop taskbar.
Dave thinks it's indispensable; you can also read what other users around the Internet say.
### What is it?
Dave's Quick Search Deskbar is a tiny textbox that Dave Bau designed for search hounds with weary mouse-fingers. Unlike the Google Toolbar, this little deskbar lets you launch searches without starting a web browser first, directly from your Windows Explorer Taskbar.
You type your search and hit Enter for a regular Google search.
If you're feeling lucky, tack an exclamation point on to the end of your search - "pow! " - and go directly to the top ranked hit. It is Powered By Google, and Yes it Really Works.
Now you can do searches no matter what you are doing - email, word processing, programming, whatever.
There's more. You're not a captive to Google. Do Yahoo searches with a "yh question", get Merriam-Webster definitions with a "colon:", get Bloomberg stock quotes like this "msft intc csco", and find Switchboard phone numbers by saying "Lois Lane#". You can search real "news.", search "newsgroups,", check "weather*", or "comparison shop". There's a built-in calculator when you need to know "pow(1.0625, 30) " is 6.1640785. And so on.

Too much to remember? Click the button on the search bar or press F1 and a menu shows you all your choices. Not enough space on your deskbar? It includes a clock so that you can free up some space by turning off the system deskbar clock. Missing a feature you need? If you know HTML and want to add your own functionality, you can - it is distributed under GPL and is available at SourceForge.

It'll make you super-quick. You'll be ready to take on the world and surf like never before.

### More Questions?

For more information be sure to check the Frequently Asked Questions area.

### Installation Instructions

Want it? First off, you need to be running Windows 95 or better and using IE 5.5 or newer. It's been tested with IE 5.5 and 6.0, on Windows 95, 98, NT4, 2000, XP and 2003 Server. (Please let us know if it works on various other configurations.) Here's what you do:

1. Run the following setup program. ("Run this program from its current location" is fine.) Dave's Quick Search Deskbar Installer. It will install the deskbar in the "Quick Search Deskbar" folder in your "Program Files" directory, along with sources and an uninstallation program.
2. Clear out a nice empty gray space on your taskbar by closing some apps, then right-click on the empty gray part of the taskbar, and select the menu Toolbars > Add Quick Search... If "Add Quick Search" doesn't appear, wait ten seconds and try again - Windows takes a few seconds to discover the existence of a newly installed deskbar.
3. You're done! Dave's Quick Search Deskbar has appeared on your taskbar. Drag things around so that it's just the right size and in the right place.
4. Saving screen real estate. Since screen real estate on the taskbar is precious, you probably want to get rid of the little caption that says "Search" by right-clicking it and unchecking "Show Title". And since the search bar includes its own clock, you probably want to remove the system clock from the system tray by right-clicking in the empty space in the taskbar and selecting "Properties", then unchecking "Show Clock".

Type ? and hit Enter in the search box to get more information about other features. You can also type ? followed by a search string to find searches. For example, to display help about all the searches that have something to do with movies, type "? movies".

### Alternate Installation Instructions

Some users of Windows 98SE, Windows ME and Windows NT4 have reported that the search bar does not show up when they follow the steps above. Here is another way you can install the search bar that works if the regular way does not work:

1. From Start | Settings | Internet Options | Security | Trusted Sites add http://www.dqsd.net and hit Apply
2. Download and run dqsd.exe as usual.
3. Right-click on your Windows Explorer taskbar and select Toolbars > Add New Toolbar...
4. When it prompts you for a folder, type in the following URL: http://www.dqsd.net/install.htm (be sure not to copy in any spaces at the beginning or the end)
5. The Quick Search bar should appear.
6. Now, if you want to leave your machine as it was, you can safely remove http://www.dqsd.net from the list of trusted sites.

### Alternate Alternate Installation Instructions

The trick above only works if your program files directory is on your C: drive. If it's on your D: drive you'll want to use a different URL, and if it's on your E: drive, there's yet another one.

• D: drive users can try http://www.dqsd.net/installd.htm
• E: drive users can try http://www.dqsd.net/installe.htm

### Support

There are new discussion forums for support, ideas, and enhancements. If you've got any, please email them to the dqsd-users@lists.sourceforge.net group. An archive of the list is here. (Note, we're moving away from the old forum on Yahoo because the advertisements and downtime have become too annoying.)

There are three mailing lists for DQSD:

• Release announcements. If you want to be notified of beta and final releases of the Deskbar, this low traffic list is for you.
• Users. If you are a regular Deskbar user and would like to hear about new ways to use the Deskbar, or need a place to report bugs, this moderate traffic list is for you.
• Developers. If you would like to contribute to the development of the Deskbar, this moderate traffic list is for you.

If you want to contribute to the future development of the search bar, you may want to sign up on the developer's list.

### User Contributions

Searches/utilities created by DQSD users that aren't a part of the default installation package. If you have created searches that others would be interested in, drop a note to dqsd-users@lists.sourceforge.net.

### Uninstallation Instructions

If you are unhappy with your Dave's Quick Search Deskbar, email Dave to complain, or check the old or new discussion group archives to see if there is a workaround. Maybe somebody has already found a fix. In the meantime, it's easy to uninstall.

1. Right-click on the "Search" caption of the deskbar (or the little gripper to the left of the deskbar if you've hidden the caption) and select "Close".
2. Then, to uninstall the files and registry entries, go to the control panel and click on "Add/Remove Programs". You'll find an entry for "Dave's Quick Search Deskbar", which you can use to run the uninstaller. This erases it from your computer without a trace.

### Some Notes for Real Search Hounds

Search Menu. You can get a menu of searches by clicking on the » button that is available to the right of the textbox. So you don't need to remember the punctuation. The menu button will not appear if the deskbar is too narrow; it won't show itself if it would take up more than some percentage of the deskbar's screen real estate. Even if the menu button isn't being shown, you can display the menu of choices by typing the "F1" key while you're entering your search terms (you'll still need IE 5.5 or better for this feature). You can continue editing your search while the menu is shown.

People Lookups. Most names are too common to do ordinary lookups. If you say Jackie Jones# you will get hundreds of hits. So you can specify the state and, if you like, the city too. You put them in parentheses, like this:

Jackie Jones (ca)#
Jackie Jones (san fransisco, ca)#

If you specify the city, you need to say the state, but not the other way around.

Drag and Drop. Drag and drop is pretty neat. If you drop in some text, it opens a Google search directly so that you don't need to hit Enter. It trims punctuation for you so you don't search newsgroups just because you dragged in a comma. If an instant Google search is not what you want, then just use cut and paste. The regular cut and paste keys ctrl-C, ctrl-X, and ctrl-V do work. (It wasn't easy.)

Calculator. The deskbar has a built-in Javascript calculator that lets you evaluate ordinary Javascript expressions using an "=" sign, e.g., "9*(3+22.4)=" will give you "228.6". If you type in something that looks like a math expression without the equals sign, the calculator will also try to evaluate it. The code is executed within Javascript's Math package, which gives you access to various functions and mathematical constants. For example, "cos(pi)+sin(pi/2)" gives you "0". You have access to the entire execution environment of the page, so you can show the about box with "about()=" or do an Ask Jeeves search with "aj('why')=", etc. All the single-letter variable names are left for your use, so they are useful as memory for your calculator. If you say "x=4344", then "x*x-x" later on will yield "18865992".

Search History. The deskbar remembers your last 50 searches (by default). You can use the up and down arrows or ctrl-P (Previous) and ctrl-N (Next) to browse through your history of searches. If your deskbar is not docked in the taskbar, you can use the up and down arrow keys for the same thing.

FreeTranslation.com Language Translation. You can translate words or web pages through the FreeTranslation translator like this: "algunas palabras es-en" translates some words from Spanish to English, and "calvinone.net pt-en" translates a web page from Portuguese to English. Supported translation codes include en-zh, en-fr, en-de, en-it, en-ja, en-ko, en-pt, en-es, zh-en, fr-en, fr-de, de-en, de-fr, it-en, ja-en, ko-en, pt-en, ru-en, and es-en.

XE Currency Converter. You can convert between currencies using current rates by typing "100usd>gbp" to see how many British pounds you get for 100 U.S. Dollars. There are currency codes for every currency you can think of (eur, dem, jpy, frf, itl, aud, cad, hkd...). There is a complete list of three-letter currency codes here on xe.com.

Other Shortcuts. There are several more shortcuts that you can use. Click the button next to the toolbar or type "?" to display a list of all the searches grouped by category. To search the search descriptions themselves, type "? [search string]" and only the searches that have [search string] in their description will be displayed in a popup window. E.g., "? translate".

Dave's favorite searches. When a Google search fails, Dave usually goes to the FAST engine search next. Why? Dave finds that the main problem with Google is that its view of the Web is a month or more out-of-date. So Google is no good when Dave is searching for something new. The FAST engine reindexes the internet every 12 days, so it's a fresher view than Google.

Launching addresses. Some things that you type into the textbox will not launch a search. If you type in what looks like a URL (http://foo.com/bar), a DNS name (foo.co.jp), a local filename (c:\foo\bar), or a UNC name (\\foo\bar\file), it will open those directly. If you type one of the bits of special punctuation without a search term, it will open an appropriate page for that service. (Some are nice; for example, * is great because AccuWeather remembers your previously specified zip code.)

Launching addresses faster. A lot of websites have DNS names like "www.joelonsoftware.com". To zap to these sites even faster, you can just type in "joelonsoftware" and type control-Enter, and it works just like the IE address bar does with control-Enter. It'll add the www. and the .com parts for you.

Reverse Phone Number Lookup. If you type in what looks like a phone number (including area code), it will do an AnyWho reverse phone number lookup.

What about other search engines? As I mentioned before, you're free to modify the code and even redistribute it as allowed under GPL. See the Contributing section below for more information on modifying the source code. If you add a cool feature, please let me know.

What about Linux? For your GNOME desktop, you might want to check out WebSearcher on SourceForge, which seems to have similar UI. (I've never tried it.)

What about Windows NT? It works, but you've got to get Active Desktop enabled on NT, which can take some contortions. Jeff Winkler offers the following tip: "Took me a while to get it going under NT/SP6 because I hadn't enabled Active Desktop -- there were no toolbars in my taskbar. It's fairly tricky - instructions are at http://www.jsiinc.com/SUBG/TIP3200/rh3235.htm "

What about Netscape or other non-IE browsers? Update: As of version 2.5.0, thanks to Koen Mannaerts' code contribution, you should now be able to launch Netscape, Opera, or other non-IE browsers using the deskbar. This normally works out-of-the-box just fine: searches are launched using your default browser. However, sometimes the deskbar can be tricked into believing that IE is your default browser when it is not. If this is happening to you, you can edit your "preferences.js" file and change "launchmode" from 2 to 1.

Want to see the current list of available 'Searches'? Go HERE

### What's New

For versions 4 and later, see what's new at SourceForge.

Version 3.1.9.2 (April 5, 2006)
• 50 new searches, 374 in all
• Added 2006 holidays for Canada, Portugal and Sweden
• Fixed: "googletrans", "mnr" and "mwd"
• Fixed: Clock tooltip no longer off by three days
• Fixed: Most of the default aliases removed so as not to capture chars like ~, %, #, etc.
• Removed: "ch", no longer working after site changes
• Feature: Calendar modified to allow color definitions in the events xml files
• Feature: httpinst now supports install from CVS
• Feature: tinyurl now writes the generated short URL to the DQSD text window instead of opening a browser window
• Feature: Added IRC URL autodetection
• Complete details at SourceForge.

Version 3.1.9.1 (January 6, 2005)
• 2 new searches, 325 in all
• Removed: "encyfr"
• Fixed: "winres", "enc" and "wik"
• Fixed: Now installs and works on Windows 98 (sorry about the disruption!)
• Feature: Typing an IP address in the toolbar automatically performs an ARIN WhoIs lookup
• Complete details at SourceForge.

Version 3.1.9 (December 24, 2004)
• 33 new searches, 324 in all
• Removed: "nwtsr", "dicfr", "pcm"
• Replaced: "wiq" with "wordiq" and "httpd" with "whats"
• Feature: Support for Czech month- and day names
• Feature: Clock now supports week number - formattable using the W symbol
• Feature: Mouse-wheel history browsing
• Improvement: Proper theme support - autodetects Windows theme on installation, and allows theme selection from menu (Configure -> Load Color Scheme...)
• Improvement: Startup time can be as much as 25% shorter with MSXML4 installed
• Improvement: Better XP theming with vertical bar
• Bug fixes
• Complete details are at SourceForge.

Version 3.1.8 (January 18, 2004)
• Lots of new searches.
• Many fixed bugs.
• New themes for Windows XP
• On-demand installation of searches (httpinst ? for help)
• Added nice DQSD button
• Complete details are at SourceForge.

Version 3.1.6 (April 20, 2003)
• Lots of new searches.
• Many fixed bugs.
• Complete details are at SourceForge.

Version 3.1.5 (December 25, 2002)
• Added ability to specify the first day of the week for the pop-up calendar.
• Many fixed bugs.
• Lots of new searches.
• Complete details are at SourceForge.

Version 3.1.4 (October 3, 2002)
• Added IE support for maximized windows by using 'pagetemplate' preference.
• User-defined searches can be added to a 'localsearches' subdirectory.
• Add-on/search writers can now modify the popup menu using calls to registerMenuHook.
• Add-on/search writers can specify subcategories for the popup menu.
• Lots of new searches.
• Complete details are at SourceForge.

Version 3.1.3 (September 9, 2002)
• Fixes the default aliases.txt that comes with the installer to use the new shortcuts rather than the old ones.

Version 3.1.2 (September 8, 2002)
• Fixes two bugs (related to msxml versioning) that were causing scripting errors on specific operating system configurations.
• Also, renames many searches so that their default shortcuts are not English words.
• Complete details are at SourceForge.

Version 3.1 (August 27, 2002)
• Read about all the new features and searches at SourceForge.

Version 3.0 (June 29, 2002)

Version 2.5.7 (April 1, 2002) was coordinated by Glenn Carr (through six beta releases) and adds the following features contributed by the open source community:

1. New searches: reget, winres, day, rgb, hexconv, chart, phone, multi
2. Removed currency.js and phoneno.js, all the script needed for those 'searches' are in their respective XML files. Monty added an autodetect_* method for each search. This might also speed the load (but not sure)
3. Rework of switch parsing; addition of parseArgs() method
4. Neel Doshi's changes to tools.js to correct the browser launch mode
5. Added searches Downseek, TechWeb Encyclopedia, Borland Newsgroup Database by Tom Corcoran
6. Added telephone country code lookup by Stephen Montgomery
7. Shortcuts have been fixed so that entries in localaliases.txt that contain backslashes don't require escaping those backslashes. I.e., instead of...
   run explorer /e, "C:\\Program Files\\Quick Search Deskbar"
   ...the actual path should be used...
   run explorer /e, "C:\Program Files\Quick Search Deskbar"
8. Updated ZDNet search from John Bairen
9. The following variables can now be used to customize the help fonts and help window...
   helpstyle // contains standard CSS style settings
   helpoptions // contains options for the window.open() options
   Here are some examples that could be used in preferences.js to increase the font size and to modify the help window...
   helpstyle = "font-size:12pt";
   // the following are not CSS styles, but options
   // passed to window.open()
   helpoptions = "width=800, height=550, status=yes"
10. Modified the 'run' command to allow passing parameters to an executable. E.g., you can now enter:
   run explorer e, C:\Program Files\Quick Search Deskbar"
   Or, an alias can be defined in localaliases.txt like this:
   run explorer /e, "%s"
   ...so that you can enter:
   exp C:\Program Files\Quick Search Deskbar"
   This required a change to the DQSDTools.Launcher component (the OpenDocument method has a second optional argument which takes the parameters.)
11. Fix for the mysterious disappearing dollar signs in the help that JB found.
12. Added RobbyH's SurfWax search - "sw horses"
13. Monty added helpful comments to preferences.js
14. New searches/functions submitted by Tom and Monty:
   1. fd - Free On-Line dictionary of computing: "fd xhtml"
   2. fe - The File Extension Source fe {[a..z] | num | sym }: "fe d"
   3. js - European specialist job site
   4. tm - Teoma search using Subject Specific Popularity: "tm denali"
   5. uktv - UK Television guide
   6. base - Convert decimal values to hex, octal and binary equivalents
   7. ascii - Convert ASCII characters to decimal, hex, octal and binary equivalents
15. Glenn added expanding/collapsing help categories
16. support for comments (leading '//') for alias files
17. localaliases.txt isn't overwritten on install
18. sync names of search files with function names

Version 2.5.6 adds the following:
1. Ryan Edwards fixed drag-and-drop, which I inadvertently broke right before packaging up 2.5.5 for release.
2. Tom J. Corcoran added a whatis.com search.
3. Other miscellaneous fixes.

Version 2.5.5 includes the following improvements:
1. Glenn Carr added a mechanism so that searches can be loaded from a searches directory instead of a huge search.xml file.
2. Monty Scroggins and others ported the search.xml file into the new format.
3. Ryan Edwards added infrastructure so that all searches that use option switches do it the same way.
4. Several new searches were added, and several bugs were fixed.
This is the first release where Dave hasn't followed the development of the code in enough detail to provide a detailed log here. If anybody has a better changelog, please mail it to Dave!

Version 2.5.4 merges the following contributions from the discussion group:
1. Olney Lee (xtrecate) contributed an AstaLaVista computer security search ("crax")
2. Daniel Baek contributed Danish strings
3. Glenn Carr contributed a send-email feature
4. Jonathan Payne contributed a bugfix for launching unc names with numbers
5. Kjetil Limkjaer contributed norwegian and dutch strings
6. Michael Baas contributed german strings
7. Glenn Carr improved the reverse phone number regex
8. Glenn Carr added a feature so that you can control when DQSD spawns new windows
9. Glenn Carr added a feature so that you can control when DQSD uses a multiline textbox.
10. The CIA world fact book is added as a reference
11. Glenn Carr updated the mapquest search to match the changed website
12. Monty Scroggins improved the cpan search
13. John W. Bairen contributed a pc magazine and a zdnet search
14. John W. Bairen contributed a tom's hardware search
15. Monty Scroggins added alarm and timer functionality
16. Glenn Carr fixed the aim function so that it behaves better when AIM isn't installed.
17. Monty Scroggins and John W. Bairen improved navigation inside the ? help window.

Version 2.5.3 adds a bunch of new functionality and reorganizes the code significantly:
1. Thanks to help from Monty Scroggins, all the searches are moved into search.xml now. Search.htm is much smaller. The format of search.xml has changed. Shortcuts are no longer defined in this file; instead, they are defined in aliases.txt, and the menu layout is in menu.txt. Search.htm is exploded out to several different .js files.
2. Some of the default mnemonics for searches have been changed. Every search can now be done via a command like "gg" for google. The searches listed in the "?" help box are now organized into categories so it's slightly easier to find things.
3. Adam Stiles figured out why it wasn't correctly launching NetCaptor, and now it's fixed to work correctly!
4. Glenn Carr added a feature so that the history is persistent (it is saved across reboots). It's saved in history.txt.
5. The calculator memory (single-letter variables) is also persistent. It's saved in calcmem.txt.
6. The history now supports a "find previous command" feature: if you previously searched for "foo bart", you can now type "!fo" or "!bar" to recall the search; or you can use ctrl-P/ctrl-N to scroll through all the matching searches. "!!" redoes the most recent search. (If you want the "I'm feeling lucky" shortcut, the "!" needs to be at the end.)
7. The input box is multiline instead of single-line (this feature was suggested by Mark Zeren).
8. There is an option (buttonalign="left") to move the button to the left (as requested by Mitch G).
9. Erik Hartmann contributed an aol IM shortcut a long time ago that I've finally gotten around to pasting in. I haven't tested it because I haven't installed AIM yet - and this should be changed so that it behaves better when aim isn't installed.
10. There are two new UK-oriented searches (suggested by David Brake): cdo searches Cambridge Dictionary Online for a British view of the English language, and mm searches multimap.com for addresses in the UK.
11. Reginald Braithwaite-Lee contributed a bomis.com search
12. Stephen Granade contributed a better bartleby.com quotations search
13. Volker Wick contributed Fahrenheit to Celsius (and vice versa) conversion
14. Monty Scroggins contributed a hotscripts search for searching for resources for various scripting languages simultaneously.
15. Translations go to freetranslations.com where possible instead of going to AltaVista. en-no is added.
16. An assortment of bugs were fixed (in history, cut/paste, selection, layout, IE 5.0 problems, installation problems, etc, etc.).

Version 2.5.2 includes the following improvements:
1. Rick Olson contributed an improved pricewatch search - "pw ram:64mb". The old search is still there but will probably be removed soon.
2. Monty Scroggins added a perl cpan search - "cpan CGI" (case-sensitive)
3. Glenn Carr fixed a bug in the Vivisimo search identified by Paul Shotts.
4. Nik Devereaux added an option to force the menu button on or off (showbutton = 0:off, 1:on, 2:(default)auto)
5. Eduardo J. Fernandez Corrales contributed the rest of the needed Spanish localization strings.
6. There's a new "run" command that just does a ShellExecute, e.g., "run winword" launches Microsoft Word.
7. The popup calendar now updates "today" correctly when it's tomorrow.
8. DNS names followed by slashed paths are now treated as http URLs
9. There is an option to turn off the popup calendar (cal=false) as requested by Mitch G.
Version 2.5.1 is a bugfix release:
1. Launching Yahoo calendar correctly appears to require timezone adjustments. These have been added.
2. Some calendar rendering problems have been fixed (the current day is highlighted on Sundays; the calendar can navigate faster; and it puts itself away in the right situations).
Version 2.5.0 introduces some significant improvements:
1. Thanks to Koen Mannaerts, Glenn Carr, and Monty Scroggins, this version is capable of launching your default browser even if it is not IE. It works with Netscape, Opera, and others. (Koen graciously made the ActiveX control that makes this possible available under GPL; the original idea came from his LaunchInIE control, which was introduced to us by Monty Scroggins. And Glenn came up with a way to handle POSTed forms through temporary files, so that changes to search.xml are minimal.)
2. Sidney Chong added a terrific calendar (that I hacked up a bit). Get it by right-clicking in the deskbar. Preferences.js can be used to customize it to launch yahoo, msn, aol, netscape, mycalendar, or evite calendar services.
3. Volker Wick contributed a bugfix in the ctrl-enter code.
4. Jimmy Lin contributed a "start" search that uses a natural language question answering system at MIT.
5. Glenn Carr implemented Nik Devereaux's idea of a MovieFone search: "mf Ocean's"
6. Glenn Carr added a Bible search: "bible 1 cor 13:4-7"
7. Glenn Carr implemented Chris Weiss's idea of ups and fedex tracking number searches: "fedex [tracking#]" and "ups [tracking#]"
8. Glenn Carr added a Yahoo movie search: "ym Potter".
9. Glenn added a date tooltip.
10. Alain D contributed some French localization strings.
11. Some more support for localization was added, partially through my clumsy efforts to use online translation tools. Please send in fixes or additional translations if you see errors or omissions. German, Spanish, Italian, Dutch, and other languages all have big omissions that need to be filled.
12. Also, thanks to Justin Frankel for quickly merging a delta to NSIS 1.91 to help improve dqsd.exe setup.
Also new with this version: dqsd is on SourceForge. See this message for more information.
Version 2.4.5 is another bugfix release:
1. Tim Lara fixed a bug in 2.4.4 so that the calculator works again.
2. Glenn Carr added a yahoo movie search ("ym Ocean's Eleven").
Version 2.4.4 is a bugfix release:
1. Bjorn Jonson, Glenn Carr, Kjetil Limkjaer, and Greg Mitchell quickly found and fixed a problem with the Vivisimo part of search.xml that prevented it from working with older versions of Microsoft's xml parser.
2. Tim Lara contribted a formatting fix for the popup menu that improves its rendering on some configurations.
3. Edney Soares de Souza contributed Portugese localization strings.
4. Kjetil Limkjaer contributed Norwegian and Dutch localization strings.
5. Eduardo J. Fernandez Corrales contributed Spanish localization strings. We still need more languages!
1. Jerome DeCock added localization and French strings (thanks to Martin for the short month names). We still need other languages!
2. Nikolai Devereaux added an AcronymFinder search ("af nasa").
3. Sam Gera suggested a Vivisimo search, and Glenn Carr implemented one ("viv anything").
4. Glenn Carr added a number of features, including making more things customizable from preferences.js, looking for a second xml file containing searches, and making things work better when there are errors.
5. Angus Johnson fixed drag-drop behavior so it copies text instead of moving it.
6. Errors are handled in a more robust way; layout fits tighter with different font selections.
1. Dave Maymudes contributed a PAD file to help automate distribution of the search bar on download sites.
2. Glenn Carr made the popup menu formatting a bit more robust.
Version 2.4.1 includes the following:
1. Jerome DeCock contriubted a date formatting bugfix so unabbreviated monthnames work.
2. Glenn Carr added a preferences.js file so that you can preserve your preferences when you get new versions of the deskbar. I've tweaked setup so that it doesn't overwrite this file.
3. Damian Maclennan contributed a sqlteam.com search ("sql triggers").
4. Now if you type "!" it refreshes the code; and the about box has helpful links for developers.
Version 2.4.0 has the following new features:
1. Sidney Chong added a clock that shows in the deskbar after some idle time. This saves some screen real estate by allowing you to remove the system clock. I've modified the clock to add some formatting options using code from Matt Kruse. Glenn Carr helped fix my buggy modifications.
2. Bjorn Jonson merged Sidney's changes into the latest version of the deskbar. He also contributed a new dns dig search (try "unip yahoo.com").
3. Monty Scroggins added a weather underground search (by zip code - try "wug 75287").
Version 2.3.4 is another quick bugfix release:
1. Michael McWilliam noticed that 2.3.2 introduced some changes that break the deskbar on IE 5.0 (that is what comes with vanilla Windows 2000, so that is a lot of people). This release fixes it, so you can now use the deskbar again without having to upgrade IE beyond 5.0.
Version 2.3.3 has the following improvements:
1. Raul Costa noticed that a Windows 98 setup bug (previously found by Edney Soares de Souza) somehow crept back into the distribution. This is fixed again.
2. Dan Sanderson's fancy Amazon search is added, so you can search an individual Amazon store.
3. Andrew Gilmartin's Access Medicine search is added.
In Version 2.3.2 the code is a bit restructured to make it easier to add new searches and manage your shortcut bindings. I still haven't merged in all the cool searches that have been posted yet, but it should now be easier to do this. This version includes:
1. Glenn Carr's extensibility mechanism - now you can define new searches in a convenient search.xml file.
2. Stewart Rubenstein's Cambridgesoft chemfinder searches (in search.xml)
3. Alternate Merriam-Webster thesaurus
5. Glenn Carr's additions for image, half, amaz, xref, and isbn searches.
6. An autodocumentation feature so that the help in "?" is always accurate.
7. James Gleick's alternate thesaurus (not enabled by default)
8. Bugfixes from Jeff Winkler, Don Womick, Chris Farmer, and others.
Version 2.2.1 thanks to many contributors, this version merges a bunch of neat new functionality from the dqsdd discussion group. It includes:
1. Chris Sell's PhoneSpell support (use "523-3113 #*"),
2. Damian Maclennan's Samspade search (use "sams dnsname.com"),
3. Chris Farmer's Newsgroup menu item, decimal regex (say ".4+1"), and autocomplete (off) bugfixes,
4. Peter Risser's code to search various music databases (use "cdnow britney/t", "cddb britney/t", "alm britney/t"...),
5. Brian Ross's bugfix for the broken dogpile search,
6. Peter Risser's currency conversion (w/ Greg Mitchell's lowercase mods; use "100usd>gbp"),
7. Rick Olson's php and mysql searchs (use "php fopen" or "mysql alter table"),
8. Greg Mitchell's babelfish translation (use "some words en-es"),
9. Jelmer Cormont's request to be able to use long var names by saying "snoopy=34",
10. and Chris Weiss's request to round near-decimal numbers (700-639.84 is now 60.16 instead of a few trillionths less).
Version 2.1.9 incorporated Greg Mitchell's code to do an internet movie database search (**) and a pricewatch search ($).
Version 2.1.8 implemented Joel Spolsky's suggestion to mimic the IE address bar's ctrl-enter behavior, which zaps you to www.just-type-this .com.
Version 2.1.7 fixed a bug introduced in 2.1.6 where URLs ending in / resulted in a search instead of going straight to the page.
Version 2.1.6 incorporated Adam Kalsey's code to search the very cool Wayback Machine with &&, and Edney Soares de Souza's code to search CNET Download.com with >>.
Version 2.1.5 thanks to a bug find by Edney Soares de Souza from Brasil (aka InterNey ), setup should now work correctly with Windows 98.
Version 2.1.4 the popup menu adjusts itself for smaller screens. Thanks to bob.rtps for the bug report.
Version 2.1.1 fixed overflow bug in about box, improved math expression heuristic.
Version 2.1.0 added FAST, Altavista, Excite, and Dogpile searches. And you don't need to use a trailing "=" to calculate a math expression any more: we heuristically discover when you type one in.
Version 2.0.1 added a few fixes suggested by Mark Rafn including a better label for the popup menu.
Version 2.0.0 introduced the NSIS setup script suggested by Dave Maymudes, so we have a downloadable self-extracting setup program now.
### Contributors
Besides Dave Bau , who wrote the thing, some other people have started to contribute.
Gary Burd was the first user and suggested several improvements to take advantage of IE's features including the popup menu. Some improvements are not yet implemented.
Dave Maymudes fixed the popup UI by attaching it to F1. He also suggested a good approach to open-source setup (nullsoft NSIS, which was used to generate the setup exe).
John Rhodes (WebWord) used the search bar early and promoted it by posting an interview on the widely-read WebWord, which lead the involvement of several other contributors.
Adam Kalsey contributed code to search the wayback machine with &&. It takes a little longer, but it goes way farther back than Google's cache.
Edney Soares de Souza (aka InterNey ) found a key bug in the Windows 98 setup, and he contributed code to search CNET Download.com with >>.
Joel ("on software ") Spolsky suggested hooking up ctrl-Enter to fill in "www." and ".com" for you, just like IE does in the address bar. He also linked to it from his popular weblog, which lead to several other contributors.
Greg Mitchell contributed Internet Movie Database and PriceWatch searches, and also the cool Altavista babelfish translator.
Peter Risser contributed code to search several online music databases, and he also contributed the currency exchange utility.
Glenn Carr contributed a number of searches including xref, isbn, half, image, screenit; and he added the cool search.xml feature where you can add searches in an external xml file.
Many others on the dqsdd discussion group have also contributed code; I've tried to give credit properly in the "What's New" section above. (If I've missed you, please let me know!)
Finally, thanks to the many people in the Internet community who have posted links and reviews on the deskbar.
### Contributing
If you have any ideas on how to improve the Quick Search Deskbar, please email them to the dqsd-users@lists.sourceforge.net discussion group. (You can read an archive of the list at mail-archive.com.)
Or better yet, implement your ideas yourself and post them to the group. It's easy. The deskbar's logic is coded entirely in HTML, and its setup script is an NSIS setup script.
If you've installed the deskbar, the source code is in your Program Files/Quick Search Deskbar folder. You can edit search.htm in that folder to modify the deskbar. Any modifications will show up in your toolbar after you right-click on the deskbar's gripper and select the "Refresh" menu item. You can also open search.htm directly in a web browser to preview it.
If you want to add new search engines, the easiest way is to copy, rename and edit an xml file in the searches subdirectory. (Thanks to Glenn Carr for introducing this mechanism.) More details about this can be found in the Frequently Asked Questions.
If you'd like to repackage your modified deskbar as a redistributable executable, you need to install the open source nullsoft NSIS install system. Once you've installed NSIS, all you need to do is right-click on the search.nsi file and select "Compile NSI". This will run NSIS and create the setup executable dqsd.exe .
If you'd like to share your changes with Dave and others, again email or post them on the discussion group so that others can use it, and so that Dave or others can merge them in to the version on this site.
Or you can merge your contributions in directly by using the SVN repository on SourceForge. To get write access, you need a SourceForge account to which Dave or others can grant permissions. But of course read-only access is open to all. Here's the link:
### Licensing Information
Dave's Quick Search Deskbar | 2014-10-31 05:19:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20870500802993774, "perplexity": 8261.485138012575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637898978.59/warc/CC-MAIN-20141030025818-00197-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://stats.stackexchange.com/questions/343914/expected-value-of-maximum-of-samples-from-normal-distribution?noredirect=1 | Expected value of maximum of samples from normal distribution
Let's say I have a normal distribution $N(\mu, \sigma^2)$ from which I have drawn $n$ i.i.d. samples $x_1, \dots, x_n$.
Now, let's define a random variable $Y = \max(x_1, \dots, x_n)$.
When $n=1$, the expected value of $Y$ is $\mu$. I would expect that as $n$ increases, the expected value of $Y$ should increase as well. Is it possible to determine the expected value of $Y$ for any value of $n$, in terms of $\mu$ and $\sigma$?
If we combine two of the answers here (Approximate order statistics for normal random variables), we have for the $r$th $\it{smallest}$ order statistic
$$E[r,n] \approx \mu + \sigma \ \Phi^{-1} \left( \frac{r-\frac{\pi}{8}}{n-\frac{\pi}{4}+1}\right)$$
For the largest value we want $r=n,$ so we have
$$E[Y] \approx \mu + \sigma \ \Phi^{-1} \left( \frac{n-\frac{\pi}{8}}{n-\frac{\pi}{4}+1}\right)$$
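As a quick numerical sanity check (a sketch added here, not part of the original answers), the approximation can be compared against a simple Monte Carlo estimate:

```python
# Sanity check of the order-statistic approximation for E[Y] (illustrative).
import numpy as np
from scipy.stats import norm

mu, sigma, n = 0.0, 1.0, 100
approx = mu + sigma * norm.ppf((n - np.pi / 8) / (n - np.pi / 4 + 1))

rng = np.random.default_rng(0)
mc = rng.normal(mu, sigma, size=(200_000, n)).max(axis=1).mean()

print(f"approximation: {approx:.3f}, Monte Carlo: {mc:.3f}")
# Both come out near 2.5 for n = 100 standard normal draws.
```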
First note that\begin{align}Y_n=\max\{X_1,\ldots,X_n\}&=\max\{\sigma\epsilon_1+\mu,\ldots,\sigma\epsilon_n+\mu\}\\&=\sigma\max\{\epsilon_1,\ldots,\epsilon_n\}+\mu\\&=\sigma\xi_n+\mu\end{align} hence that $(\mu,\sigma)$ is also a location-scale parameter for the maximum. Asymptotically, the Normal distribution belongs to the domain of attraction of the Gumbel distribution, meaning that $$\sqrt{2\log(n)}(\xi_n-d_n)\stackrel{{\cal L}}{\longrightarrow} G_0$$ with $G_0(x)=\exp\{-\exp(-x)\}$ the Gumbel cdf and $$d_n = \sqrt{2\log(n)}-\dfrac{\log\log n + \log(4\pi)}{2\sqrt{2\log(n)}}$$
EDIT:
I found this paper referenced in a thread on the maths stack exchange (Approximate order statistics for normal random variables), so I had a look. For the maximum, $r=n$.
"In a sample of size n the expected value of the rth largest order statistic is given by
$$E(r,n)=\frac{n!}{(r-1)!(n-r)!}\int_{-\infty}^{\infty}x\{1-\Phi(x)\}^{r-1}\{\Phi(x)\}^{n-r}\phi(x)dx,$$
where $\phi(x)=\frac{1}{\sqrt{2\pi}}\exp(-\tfrac{1}{2}x^2)$ and $\Phi(x)=\int^x_{-\infty}\phi(z)\,dz$."
• Royston, J. P. (1982), 'Algorithm AS 177: Expected Normal Order Statistics (Exact and Approximate)', Journal of the Royal Statistical Society. Series C (Applied Statistics), 31(2):161-165.
So $Y$ is an order statistic. Let's label its density function $g_{(n)}(x)$, to indicate that it's the pdf of the variable in the nth position (i.e. it's the pdf of the maximum in the sample). Let's also label the normal $N(\mu, \sigma^2)$ density function as $f(x)$. It's a standard result that $$g_{(n)}(x)=n[F(x)]^{n-1}f(x),$$ where $F(x)$ is the cumulative distribution function of $N(\mu, \sigma^2)$ (as a reference, I suggest Mathematical Statistics (7th ed.) by Wackerly, Mendenhall, and Scheaffer, p.333).
It is at this point that I'm unable to proceed analytically - I don't know how to evaluate the expected value of $Y$ in closed form, given that it has such a strange pdf (numerically it is straightforward, as the sketch below shows). However, I'd advise you to search for "expected value of order statistic" - in particular, I found a thread on this topic on the maths stack exchange site.
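For completeness, here is a minimal numerical sketch (an addition, not from the original thread) that evaluates $E[Y]=\int x\,g_{(n)}(x)\,dx$ by quadrature:

```python
# Numerically evaluate E[max of n iid N(mu, sigma^2)] from its order-statistic pdf.
from scipy.stats import norm
from scipy.integrate import quad

def expected_max(n, mu=0.0, sigma=1.0):
    g = lambda x: x * n * norm.cdf(x, mu, sigma) ** (n - 1) * norm.pdf(x, mu, sigma)
    value, _ = quad(g, mu - 10 * sigma, mu + 10 * sigma)
    return value

print(expected_max(1))    # ~0.0: reduces to the mean, as expected
print(expected_max(100))  # ~2.51 for the standard normal
```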
EDIT: As pointed out by Khol, thread is for a uniform distribution, not a normal distribution. The uniform is apparently more straightforward to deal with. Apologies for the partial answer! | 2019-10-20 09:54:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8870292901992798, "perplexity": 178.8512130660288}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986705411.60/warc/CC-MAIN-20191020081806-20191020105306-00164.warc.gz"} |
https://edurev.in/studytube/Poisson-Distribution-Mathematical-Methods-of-Physi/bfa40953-5735-4441-97df-59d435bd12de_t | Courses
# Poisson Distribution - Mathematical Methods of Physics, UGC - NET Physics Physics Notes | EduRev
A Poisson distribution is the probability distribution that results from a Poisson experiment.
Attributes of a Poisson Experiment
A Poisson experiment is a statistical experiment that has the following properties:
• The experiment results in outcomes that can be classified as successes or failures.
• The average number of successes (μ) that occurs in a specified region is known.
• The probability that a success will occur is proportional to the size of the region.
• The probability that a success will occur in an extremely small region is virtually zero.
Note that the specified region could take many forms. For instance, it could be a length, an area, a volume, a period of time, etc.
Notation
The following notation is helpful, when we talk about the Poisson distribution.
• e: A constant equal to approximately 2.71828. (Actually, e is the base of the natural logarithm system.)
• μ: The mean number of successes that occur in a specified region.
• x: The actual number of successes that occur in a specified region.
• P(x; μ): The Poisson probability that exactly x successes occur in a Poisson experiment, when the mean number of successes is μ.
Poisson Distribution
A Poisson random variable is the number of successes that result from a Poisson experiment. The probability distribution of a Poisson random variable is called a Poisson distribution.
Given the mean number of successes (μ) that occur in a specified region, we can compute the Poisson probability based on the following formula:
Poisson Formula. Suppose we conduct a Poisson experiment, in which the average number of successes within a given region is μ. Then, the Poisson probability is:
P(x; μ) = (e^−μ) (μ^x) / x!
where x is the actual number of successes that result from the experiment, and e is approximately equal to 2.71828.
The Poisson distribution has the following properties:
• The mean of the distribution is equal to μ .
• The variance is also equal to μ .
Poisson Distribution Example
The average number of homes sold by the Acme Realty company is 2 homes per day. What is the probability that exactly 3 homes will be sold tomorrow?
Solution: This is a Poisson experiment in which we know the following:
• μ = 2; since 2 homes are sold per day, on average.
• x = 3; since we want to find the likelihood that 3 homes will be sold tomorrow.
• e = 2.71828; since e is a constant equal to approximately 2.71828.
We plug these values into the Poisson formula as follows:
P(x; μ) = (e^−μ) (μ^x) / x!
P(3; 2) = (2.71828^−2) (2^3) / 3!
P(3; 2) = (0.13534) (8) / 6
P(3; 2) = 0.180
Thus, the probability of selling 3 homes tomorrow is 0.180.
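The same arithmetic in a few lines of Python (an illustrative sketch, not part of the original lesson):

```python
# Poisson pmf: P(x; mu) = e^(-mu) * mu^x / x!
from math import exp, factorial

def poisson_pmf(x, mu):
    return exp(-mu) * mu ** x / factorial(x)

print(round(poisson_pmf(3, 2), 3))  # 0.18
```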
Cumulative Poisson Probability
A cumulative Poisson probability refers to the probability that the Poisson random variable is greater than some specified lower limit and less than some specified upper limit.
Cumulative Poisson Example
Suppose the average number of lions seen on a 1-day safari is 5. What is the probability that tourists will see fewer than four lions on the next 1-day safari?
Solution: This is a Poisson experiment in which we know the following:
• μ = 5; since 5 lions are seen per safari, on average.
• x = 0, 1, 2, or 3; since we want to find the likelihood that tourists will see fewer than 4 lions; that is, we want the probability that they will see 0, 1, 2, or 3 lions.
• e = 2.71828; since e is a constant equal to approximately 2.71828.
To solve this problem, we need to find the probability that tourists will see 0, 1, 2, or 3 lions. Thus, we need to calculate the sum of four probabilities: P(0; 5) + P(1; 5) + P(2; 5) + P(3; 5). To compute this sum, we use the Poisson formula:
P(x ≤ 3; 5) = P(0; 5) + P(1; 5) + P(2; 5) + P(3; 5)
P(x ≤ 3; 5) = [ (e^−5)(5^0) / 0! ] + [ (e^−5)(5^1) / 1! ] + [ (e^−5)(5^2) / 2! ] + [ (e^−5)(5^3) / 3! ]
P(x ≤ 3; 5) = [ (0.006738)(1) / 1 ] + [ (0.006738)(5) / 1 ] + [ (0.006738)(25) / 2 ] + [ (0.006738)(125) / 6 ]
P(x ≤ 3; 5) = [ 0.006738 ] + [ 0.03369 ] + [ 0.084224 ] + [ 0.140375 ]
P(x ≤ 3; 5) = 0.2650
Thus, the probability of seeing no more than 3 lions is 0.2650.
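The cumulative sum can be checked the same way (illustrative sketch; scipy.stats.poisson.cdf(3, 5) returns the same value):

```python
# Cumulative Poisson probability P(x <= 3; mu = 5), summing the pmf term by term.
from math import exp, factorial

mu = 5
p = sum(exp(-mu) * mu ** x / factorial(x) for x in range(4))
print(round(p, 4))  # 0.265
```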
; | 2020-07-02 19:59:54 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8695448637008667, "perplexity": 1176.646413236753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655879738.16/warc/CC-MAIN-20200702174127-20200702204127-00238.warc.gz"} |
https://math.stackexchange.com/questions/2732819/if-a-otimes-b-is-nuclear-then-is-b-nuclear | # If $A \otimes B$ is nuclear , then is $B$ nuclear?
Suppose that $A$ and $B$ are unital $C^*$-algebras, and that $A \otimes B$ is a nuclear $C^*$-algebra. I want to see if $B$ is nuclear or not.
Since $A \otimes B$ is nuclear, there exist c.c.p. maps $\phi_n: A\otimes B \to M_{k_n}(\mathbb{C})$ and $\psi_n: M_{k_n}(\mathbb{C}) \to A \otimes B$ such that $\psi_n \circ \phi_n \to \text{id}$ in the point norm topology.
So I have the following maps: $$B \xrightarrow[j]{b \to 1_A\otimes b}A \otimes B \xrightarrow{\phi_n}M_{k_n}(\mathbb{C})\xrightarrow{\psi_n}A\otimes B \xrightarrow[\pi]{a \otimes b \to b}B$$
Letting $\tilde{\phi_n}=\phi_n \circ j$ and $\tilde{\psi_n}=\pi \circ\psi_n$, we see that $\tilde{\psi_n}\circ \tilde{\phi_n}=\pi\circ(\psi_n \circ \phi_n)\circ j$ and this converges to $\text{id}$ in the point norm topology.
The same thing can be done for $A$ as well. The only question I have is whether $\pi$ is well defined or not? If not, then is there a way of showing this? Or, is there an example of two non-nuclear $C^*$-algebras whose tensor product is nuclear?
Thanks for the help!!
As stated, your $\pi$ cannot be linear (so, it is not well-defined). If $\pi$ were somehow linear you would have, for any $a\in A$, $$\pi(a\otimes b)=b=\pi(2a\otimes b).$$ So $b=\pi(a\otimes b)=\pi(2a\otimes b-a\otimes b)=b-b=0$. Not pretty.
What you need to do is take $\pi$ to be a slice map: you fix a state $\phi:A\to\mathbb C$, define $\pi(a\otimes b)=\phi(a)b$, and extend by linearity. The issue is to show that this actually defines a bounded operator. As far as I can tell, this is not obvious. What we can do is identify $B$ with $\mathbb C\otimes B$, so $\phi(a)b$ is identified with $\phi(a)\otimes b$ (this extends properly to an isomorphism of C$^*$-algebras, easy exercise).
So now $\pi$ looks like $\pi(a\otimes b)=\phi(a)\otimes b$, extended by linearity. Then for instance Theorem 3.5.3 in Brown-Ozawa implies that $\pi$ exists and is completely positive, and that $\|\pi\|=\|\phi\|=1$.
With this new $\pi$, your argument works and indeed $A$ and $B$ are nuclear.
• So $A$ and $B$ are both nuclear?? – tattwamasi amrutam Apr 12 '18 at 0:23
• Yes, that's what I wrote. – Martin Argerami Apr 12 '18 at 0:29
• Thank you. !!!! – tattwamasi amrutam Apr 12 '18 at 0:29
A comment on continuity of the slice map: For $\psi \in S(B)$ we define $\eta_\psi : A \odot B \to A : a \otimes b \mapsto a \psi(b)$. Let $\phi \in S(A)$. Then, for $x = \sum_i a_i \otimes b_i \in A \odot B$, we get $$\phi(\eta_\psi(x)) = \sum_i \phi(a_i)\psi(b_i) = (\phi \otimes \psi)(x).$$ If one knows that $\phi \otimes \psi$ defines a state on $A \otimes B$, it immediately follows that $\lVert \eta_\psi(x) \rVert \leq \lVert x \rVert$.
If not, it is not hard to show that $\phi \otimes \psi \in S(A \otimes B)$. Indeed, working with the universal representation of $A$ and $B$, one can easily show that $$\lVert x \rVert = \sup_{\phi \in S(A), \psi \in S(B)} \lVert (\pi_\phi \otimes \pi_\psi)(x) \rVert,$$ where $\pi_\phi$ resp. $\pi_\psi$ denote the associated GNS representation. If now $x_\phi, x_\psi$ denote the cyclic vectors for the representations $\pi_\phi$ and $\pi_\psi$, we see that $$\lvert (\phi \otimes \psi)(x) \rvert = \lvert \langle (\pi_\phi \otimes \pi_\psi)(x)(x_\phi \otimes x_\psi), x_\phi \otimes x_\psi \rangle \lvert \leq \lVert (\pi_\phi \otimes \pi_\psi)(x) \rVert \leq \lVert x \rVert.$$ | 2020-07-04 22:05:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.971235454082489, "perplexity": 118.3191711586142}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886706.29/warc/CC-MAIN-20200704201650-20200704231650-00311.warc.gz"} |
https://www.groundai.com/project/kaluza-klein-dark-matter-direct-detection-vis-a-vis-lhc/ | Kaluza-Klein Dark Matter: Direct Detection vis-a-vis LHC
Abstract
We explore the phenomenology of Kaluza-Klein (KK) dark matter in very general models with universal extra dimensions (UEDs), emphasizing the complementarity between high-energy colliders and dark matter direct detection experiments. In models with relatively small mass splittings between the dark matter candidate and the rest of the (colored) spectrum, the collider sensitivity is diminished, but direct detection rates are enhanced. UEDs provide a natural framework for such mass degeneracies. We consider both 5-dimensional and 6-dimensional non-minimal UED models, and discuss the detection prospects for various KK dark matter candidates: the KK photon $\gamma_1$, the KK $Z$-boson $Z_1$, the KK Higgs boson $H_1$ and the spinless KK photon $\gamma_H$. We combine collider limits such as electroweak precision data and expected LHC reach, with cosmological constraints from WMAP, and the sensitivity of current or planned direct detection experiments. Allowing for general mass splittings, we show that neither colliders, nor direct detection experiments by themselves can explore all of the relevant KK dark matter parameter space. Nevertheless, they probe different parameter space regions, and the combination of the two types of constraints can be quite powerful. For example, in the case of $Z_1$ LKP in 5D UEDs the relevant parameter space will be almost completely covered by the combined LHC and direct detection sensitivities expected in the near future.
pacs:
95.35.+d,11.10.Kk,12.60.-i,95.30.Cq,95.30.-k,14.80.Ly
I Introduction
The Standard Model (SM) has been extremely successful in explaining all available experimental data in particle physics. However, there are several unsettling features of the SM, which have motivated a substantial research effort on physics beyond the Standard Model (BSM). The two issues continuously attracting the most attention are the hierarchy problem and the dark matter problem. The anticipated discovery of the Higgs boson of the SM at the Large Hadron Collider (LHC) at CERN would pose a challenging theoretical question: what is the next fundamental energy scale? If it is as high as the Planck scale, then what stabilizes the hierarchy between the Planck and electroweak scales? Or, if it is much lower than the Planck scale, what is the physics associated with it? The second issue is related to the now established existence of a dark matter (DM) component of the universe. Since the SM does not accommodate a suitable DM particle candidate, the dark matter problem is the most pressing phenomenological evidence for physics BSM (1).
i.1 The Dark Matter Problem and Physics Beyond the Standard Model
There are different avenues one could follow in extending the SM and addressing the dark matter problem. The common theme among them is the introduction of new particles, one of which is neutral and serves as the dark matter candidate; and a new symmetry, a remnant of which survives in the low energy effective theory and ensures that the lifetime of the DM particle is sufficiently long (at the minimum, longer than the age of the universe). In principle, simply postulating a new stable and neutral particle would be rather ad hoc and unsatisfactory without further corroborating evidence. Fortunately, the DM candidates in most BSM models typically have some kind of non-gravitational interactions, which are sufficient to keep them in thermal equilibrium in the early universe. Thus, their relic abundance can in fact be straightforwardly calculated in any given model (for details, see Section II.2 below). The generic result of this computation is that a weakly interacting massive particle (WIMP) with a mass near or below the TeV scale has a relic density in the right ballpark, and is a suitable candidate for dark matter. By now there are many examples of WIMPs in BSMs, perhaps the most popular being the lightest superpartner (LSP) in supersymmetry (SUSY) with R-parity conservation (2), the lightest Kaluza-Klein partner (LKP) in Universal Extra Dimensions (3), the lightest T-parity odd particle in Little Higgs models (4); (5), the lightest U-parity odd particle in $U(1)'$-extended models (6); (7), etc.
The most exciting aspect of the WIMP DM hypothesis is that it is testable by experiment. Indeed, WIMPs near the TeV scale can be easily within reach of both high-energy colliders and dark matter detection experiments. Furthermore, the size of the corresponding DM signals can be readily calculated within any given BSM, providing some rough expectations for discovery in each case. In principle, the signals depend on a typically a large number of model parameters. However, speaking in a broader sense, the WIMP DM phenomenology mostly depends on the answers to the following two questions:
• Q1: What is the identity of the DM particle candidate?
• Q2: What is the size of the mass splitting between the DM particle and the rest of the (relevant) spectrum?
In the following two subsections we shall discuss each one of these questions and thus motivate our setup and methodology.
i.2 The Nature of the Dark Matter Particle
Within any given BSM, there are typically several potential dark matter candidates (i.e. neutral and stable particles) present in the spectrum. The answer to the first question (Q1) therefore selects one of them as the "true" dark matter. For example, in SUSY, the dark matter particle could be either a fermion (e.g. gravitino or the lightest neutralino) or a boson (the lightest sneutrino). In turn, the lightest neutralino could be the superpartner of a gauge boson (e.g. a Bino, a Wino, possibly a $Z'$-ino), the superpartner of a Higgs boson (e.g. a Higgsino or a singlino) or some admixture of these (2). Similarly, the lightest sneutrino could carry any one of the three lepton flavors, and in addition, could be left-handed (8), right-handed (9), or some mixture of both (10). Since all of these particles have rather different properties, it is clear that it is impossible to make any generic predictions about SUSY dark matter without specifying the exact nature of the LSP, i.e. providing the answer to Q1 above.
On the positive side, the answer to Q1 goes a long way towards the determination of the size of the expected dark matter signals. Once the identity of the dark matter particle is specified, its couplings are fixed and can be used in the calculation of both direct and indirect detection rates. What is even better, the answer to question Q1 can be provided in a rather model-independent way, without reference to the exact specifics of the model, such as the physics of the ultraviolet completion, Renormalization Group Equation (RGE) evolution down from high scales, etc.
In this paper, we shall explore the dark matter phenomenology of general models with flat universal extra dimensions (11), where the usual Standard Model structure is embedded in 5 or 6 space-time dimensions. We shall assume the same gauge symmetry and particle content as in the SM. Similar to the SUSY case just discussed, the models contain several possible dark matter candidates (electrically-neutral particles which are stable due to KK parity conservation). In five dimensional models with minimal particle content, they are: the KK graviton ($G^1$), the KK neutrino ($\nu_1$), the KK photon ($\gamma_1$), the KK $Z$-boson ($Z_1$) and the KK Higgs boson ($H_1$). Six dimensional UED models present additional possibilities: the spinless KK photon ($\gamma_H$) and the spinless KK $Z$-boson ($Z_H$), which are linear combinations of the gauge boson polarizations along the two extra dimensions. Just like the case of SUSY, which of these particles is the lightest and thus the dark matter candidate, depends on the model-building details. The issue is even more subtle than in SUSY, since all of these KK particles have tree-level masses of the same order, proportional to the inverse radius of the extra dimension. This mass degeneracy is lifted by two main sources: radiative corrections due to renormalization and nonuniversality in the boundary conditions at the cut-off scale. The former effect is in principle computable within any given model (12); (13); (14), while the latter is a priori unknown, as its origin lies in the ultraviolet physics above the cut-off scale (14); (15); (16); (17). A common assumption throughout the existing literature on UED is to ignore any boundary terms at the cut-off scale. The resulting model has been dubbed "Minimal UED" and is known to accommodate only $\gamma_1$ and $G^1$ LKP in five dimensions (14); (18) and $\gamma_H$ in six dimensions (19); (20). However, given our complete ignorance of the physics at and above the cut-off scale, the other possibilities for the nature of the KK dark matter particle should be given serious consideration as well.
One of the goals of this paper is to start filling these gaps in the literature, by exploring the phenomenology of the alternative dark matter candidates in UED. Of course, not all of them are on an equal footing. For example, the KK graviton interacts with the Standard Model particles too weakly to be relevant for direct detection searches. The KK neutrino is already ruled out due to its large elastic scattering cross section (21). We shall therefore concentrate on the remaining two possibilities in 5D UEDs: the KK $Z$-boson ($Z_1$) and the KK Higgs boson ($H_1$). We shall also review and update the previously published results on $\gamma_1$ and $\gamma_H$, so that our work would provide a concrete and complete reference on KK dark matter.
i.3 The Effect of a Mass Degeneracy on Dark Matter Signals
The second important issue for dark matter phenomenology is the answer to question Q2, namely, what is the mass splitting between the dark matter particle and the rest of the spectrum. Of course, it is in principle possible to have the dark matter particle as the only new particle in the model, in which case Q2 does not apply, and the predictions for the dark matter signals are quite robust, once Q1 is addressed. However, realistic models typically contain a multitude of new particles, in addition to the dark matter candidate. Their proximity (in mass) to the dark matter particle therefore becomes an important issue, at least in three, very different aspects.
The first is related to the predicted dark matter relic abundance. A close mass degeneracy can increase the importance of coannihilation processes at freeze-out (22), and the results for the relic density are now sensitive not only to the properties of the dark matter particle itself, but also to the properties of the coannihilating particles. The size of the coannihilation effect depends on the particular scenario, and there are several known cases in which it can be significant, e.g. Bino-like neutralinos in supersymmetry. The calculation of the relic density in the presence of coannihilations is a bit more involved (due to the larger number of processes which need to be considered), but nevertheless pretty straightforward. For UED models, where mass degeneracies are generically expected, the complete set of coannihilation processes which are relevant for the $\gamma_1$ and $Z_1$ LKP cases in 5D UED have been calculated (21); (23); (24). We shall make use of them in our analysis below in Section II.2. After reviewing the case of $\gamma_1$ LKP, which has been previously discussed in the context of minimal UED, we shall also consider $Z_1$ LKP and illustrate the effects of coannihilations with KK quarks on its relic abundance. Since a calculation of coannihilations in 6D UED models is still lacking, there we shall consider only one specific example in detail – the previously discussed case of $\gamma_H$ (25). The corresponding results for the direct detection rates of $Z_H$ can be obtained by a simple scaling of the gauge couplings.
A small mass splitting also has a large impact on the expected direct detection signals, whenever the particle degenerate with the LKP can be exchanged in an $s$-channel. This situation may in principle arise in supersymmetry, if the squarks are very light, but this would be viewed by most people as a fortuitous accident. On the other hand, such a degeneracy occurs much more naturally in UED, where the masses of the KK quarks and the LKP necessarily have a common origin (the scale of the extra dimension). The mass degeneracy may lead to a substantial enhancement of the LKP elastic scattering rate (26). In Section II.3 we first review the calculation of the spin-independent and the spin-dependent elastic scattering cross sections for the $\gamma_1$ LKP case (26). Then we also consider the cases of $Z_1$, $\gamma_H$ and $H_1$ LKP, paying special attention to the enhancement of the cross sections in the limit of small mass splittings.
Finally, the mass splitting between the dark matter candidate and the rest of the new physics spectrum is an important parameter for collider searches as well. The discovery reach for new physics at colliders is greatly diminished if the mass splittings are small. This is because the observable energy in the detector would then be rather small as well, in spite of the large amount of energy present in the events. Correspondingly, the measured missing energy (and any related variable, such as the effective mass $M_{\rm eff}$) is also rather small, which makes it more difficult to extract the new physics signal from the SM backgrounds. Fortunately, as mentioned above, this is precisely the case when direct detection is more promising. In Sec IV we shall explore this complementarity for various KK DM scenarios, focusing on KK gauge boson dark matter. From the previous discussion it should be clear that having specified the nature of the DM particle, the two most relevant parameters are the DM particle mass and the mass splitting with the nearest heavier colored particles. In Sec IV we shall utilize this two-dimensional parameter space, and contrast constraints from different sources: colliders, cosmological observations, and current or planned direct detection experiments (the latter are first extensively reviewed in Sec. III). As expected, we find that colliders and dark matter searches are highly complementary, while the WMAP constraint is orthogonal to them but is somewhat more model-dependent. Section V is reserved for a summary and conclusions. In the appendix we write out some technical details of our analysis.
II Universal Extra Dimensions and Kaluza-Klein Dark Matter
ii.1 Review on Universal Extra Dimensions
Models with universal extra dimensions place all Standard Model particles in the bulk of one or more compactified flat extra dimensions. In the simplest and most popular version, there is a single extra dimension compactified on an interval. In UED, each SM particle has a whole tower of KK modes. The individual modes are labelled by an integer $n$, called KK number, which is nothing but the number of quantum units of momentum which the SM particle carries along the extra dimension. A peculiar feature of UED is the conservation of Kaluza-Klein number at tree level, which is a simple consequence of momentum conservation along the extra dimension.
However, the fixed points in orbifold compactifications break translation invariance along the extra dimension. As a result, KK number is broken by bulk and brane radiative effects (12); (13); (14) down to a discrete conserved quantity, the so-called KK parity, $(-1)^n$. The geometrical origin of KK parity in the simplest case is the invariance under reflections with respect to the center of the interval. Since KK parity is conserved, the lightest KK-parity odd particle is a suitable WIMP candidate (27); (14); (21); (26). KK parity also ensures that the KK-parity odd KK partners (e.g. those at level one) are always pair-produced in collider experiments. This is reminiscent of the case of supersymmetry models with conserved $R$-parity. Therefore, the limits on UED KK modes from collider searches are relatively weak and are rather similar to the limits on superpartners. KK parity is also responsible for weakening the potential indirect limits on UED models from low-energy precision data. Just like SUSY models with $R$-parity, the virtual effects from new physics only appear at the loop level and are loop suppressed (28); (29); (30).
Since all KK modes carry momentum along the extra dimension, at tree level their masses receive a dominant contribution $n/R$, and a subdominant contribution from the corresponding SM particle mass. All KK modes at a given KK level are therefore quite degenerate. The KK modes of the lightest SM particles (photons, leptons, light quarks) even appear to be absolutely stable at tree level. However, this conclusion is invalidated after accounting for the radiative corrections to the KK masses. The latter grow logarithmically with the cut-off scale $\Lambda$ and are sufficient to lift the degeneracy between the lightest KK modes, leaving only one of them (the true LKP) as absolutely stable (14).
The nature of the LKP, on the other hand, is more model-dependent. In the minimal 5D UED model, where the boundary terms at the cut-off scale are ignored, the lightest KK particle is typically the $n=1$ mode of the hypercharge gauge boson (14). Since the Weinberg angle for the level one neutral gauge bosons is rather small, it is essentially also a mass eigenstate, the KK "photon", and we shall therefore denote it as $\gamma_1$. The KK photon is an attractive dark matter candidate (21); (26), whose relic abundance is consistent with the observed dark matter density for a mass range between 500 GeV and about 1.5 TeV, as shown by detailed computations including coannihilations (23); (24) and level-2 resonances (31); (32); (33). Direct detection of this KK dark matter may be within reach of the next generation experiments (26); (34); (35); (36). Indirect detection of KK dark matter also has better prospects than the case of neutralinos in SUSY (26); (37); (38); (39); (40); (41); (42); (44); (43).
In UED the bulk interactions of the KK modes readily follow from the Standard Model Lagrangian and contain no unknown parameters other than the mass of the Standard Model Higgs boson. In contrast, the boundary interactions, which are localized on the orbifold fixed points, are in principle arbitrary, and thus correspond to new free parameters in the theory. They are in fact renormalized by bulk interactions, and are scale dependent (12). Therefore, we need an ansatz for their values at a particular scale. Virtually all existing studies of UED have been done within the framework of minimal UED (MUED), in which the boundary terms are assumed to vanish at the cut-off scale $\Lambda$, and are subsequently generated through RGE evolution to lower scales (see (14); (45) for 5D and (19); (20) for 6D). In the minimal UED model therefore there are only two input parameters: the size of the extra dimension $R$, and the cut-off scale $\Lambda$. Of course, there are no compelling reasons for assuming vanishing boundary terms: the UED model should be treated only as an effective theory which is valid up to the high scale $\Lambda$, where it is matched to some more fundamental theory, which is generically expected to induce nonzero boundary terms at the matching scale. As already mentioned in the introduction, nonvanishing boundary terms may change both the nature of the LKP, as well as the size of the KK mass splittings. The resulting phenomenology may be very different from the minimal case. This is why in this paper we shall allow for more general scenarios with $Z_1$ and $H_1$ LKP. In each case, we shall take the LKP mass $m_{\rm LKP}$ and the LKP - KK quark mass splitting
$$\Delta_{q_1}=\frac{m_{q_1}-m_{\rm LKP}}{m_{\rm LKP}}\,,\qquad(1)$$
as free parameters. We remind the reader that after compactification, the low energy effective theory contains two massive (Dirac) KK fermions for each (Dirac) fermion in the Standard Model. The KK fermions are properly referred to as $SU(2)_W$-doublet KK fermions or $SU(2)_W$-singlet KK fermions. However, in the literature they are sometimes called "left handed" and "right handed", referring to the chirality of the corresponding Standard Model fermion at the zero level of the KK tower. This nomenclature may lead to some confusion, since all KK fermions are Dirac and have both chiralities. In our study, we shall treat the $SU(2)_W$-doublet KK quarks (often denoted by $Q_1$) and the $SU(2)_W$-singlet KK quarks (often denoted by $q_1$) equally, thus avoiding the need for two separate mass splitting parameters (for example, a separate $\Delta_{Q_1}$ and $\Delta_{q_1}$). The generalization to the case of different KK quark masses is rather straightforward.
We shall also explore cases with more than one universal extra dimension. Theories with two universal extra dimensions also contain a KK parity. Under the simplest compactification which leads to chiral zero-mode fermions (a "chiral" square with adjacent sides identified (52); (51)), the KK parity transformations are reflections with respect to the center of the square. Momentum along the two compact dimensions is quantized so that any 6-dimensional field propagating on the square appears as a set of 4-dimensional particles labelled by two integers, $(j,k)$. These particles are odd under KK parity when $j+k$ is odd and are even otherwise. In any process, odd particles may be produced or annihilated only in pairs. The lightest odd particle, which is one of the (1,0) states, is thus stable. Gauge bosons propagating in six dimensions may be polarized along the two extra dimensions. As a result, for each spin-1 KK particle associated with a gauge boson, there are two spin-0 KK fields transforming in the adjoint representation of the gauge group. One linear combination becomes the longitudinal degree of freedom of the spin-1 KK particle, while the other linear combination remains as a physical spin-0 particle, called the spinless adjoint. In the minimal model with vanishing boundary terms, the radiative corrections (19); (20) are such that the lightest (1,0) particle on the chiral square (52); (51) is always a linear combination of the electrically-neutral spinless adjoints of the electroweak gauge group. Due to the small mixing angle, this linear combination is essentially a photon polarized along the extra dimensions. Similar to its 5D cousin $\gamma_1$, the spinless photon $\gamma_H$ in 6D UED is also a viable dark matter candidate (25). See Refs. (53); (54); (55) for KK dark matter candidates in UED models with an extended gauge symmetry.
ii.2 Relic Density Calculation with Coannihilations
We briefly review the calculation of the relic density including coannihilation processes. When the relic particle is nearly degenerate with other particles in the spectrum, its relic abundance is determined not only by its own self-annihilation cross section, but also by annihilation processes involving the heavier particles. The generalization of the relic density calculation to this "coannihilation" case is straightforward (22); (21). Assume that the particles are labelled according to their masses, so that $m_i\le m_j$ when $i<j$. The number densities of the various species obey a set of Boltzmann equations. It can be shown that under reasonable assumptions (22), the ultimate relic density of the lightest species (after all heavier particles have decayed into it) obeys the following simple Boltzmann equation
$$\frac{dn_\chi}{dt}=-3Hn_\chi-\langle\sigma_{\rm eff}v\rangle\left(n_\chi^2-n_{\rm eq}^2\right)\,,\qquad(2)$$
where $H$ is the Hubble parameter, $v$ is the relative velocity between the two incoming particles, $n_{\rm eq}$ is the equilibrium number density and
$$\sigma_{\rm eff}(x)=\sum_{i,j}^{N}\sigma_{ij}\,\frac{g_ig_j}{g_{\rm eff}^2}\,(1+\Delta_i)^{3/2}(1+\Delta_j)^{3/2}\exp\!\left(-x(\Delta_i+\Delta_j)\right)\,,\qquad(3)$$
$$g_{\rm eff}(x)=\sum_{i=1}^{N}g_i\,(1+\Delta_i)^{3/2}\exp(-x\Delta_i)\,,\qquad(4)$$
$$\Delta_i=\frac{m_i-m_1}{m_1}\,,\qquad x=\frac{m_1}{T}\,.\qquad(5)$$
Here $\sigma_{ij}$ are the various pair annihilation cross sections into final states with SM particles, $g_i$ is the number of internal degrees of freedom of particle $i$ and $n_\chi$ is the density of the relic we want to calculate.
By solving the Boltzmann equation analytically with appropriate approximations (22); (21), the abundance of the lightest species is given by
$$\Omega_\chi h^2\approx\frac{1.04\times10^{9}\ {\rm GeV}^{-1}}{M_{Pl}}\,\frac{x_F}{\sqrt{g_*(x_F)}}\,\frac{1}{I_a+3I_b/x_F}\,,\qquad(6)$$
where the Planck mass scale is $M_{Pl}=1.22\times10^{19}$ GeV and $g_*$ is the total number of effectively massless degrees of freedom at temperature $T$:
$$g_*(T)=\sum_{i={\rm bosons}}g_i+\frac{7}{8}\sum_{i={\rm fermions}}g_i\ .\qquad(7)$$
The functions $I_a$ and $I_b$ are defined as
$$I_a=x_F\int_{x_F}^{\infty}a_{\rm eff}(x)\,x^{-2}\,dx\,,\qquad(8)$$
$$I_b=2x_F^2\int_{x_F}^{\infty}b_{\rm eff}(x)\,x^{-3}\,dx\,.\qquad(9)$$
The freeze-out point, $x_F=m_1/T_F$, is found iteratively from
$$x_F=\ln\left(c(c+2)\sqrt{\frac{45}{8}}\,\frac{g_{\rm eff}(x_F)}{2\pi^3}\,\frac{m_1M_{Pl}\left(a_{\rm eff}(x_F)+6\,b_{\rm eff}(x_F)/x_F\right)}{\sqrt{g_*(x_F)\,x_F}}\right)\,,\qquad(10)$$
where the constant $c$ is determined empirically by comparing to numerical solutions of the Boltzmann equation and here we take $c=\tfrac{1}{2}$ as usual. $a_{\rm eff}$ and $b_{\rm eff}$ are the first two terms in the velocity expansion of
$$\sigma_{\rm eff}(x)\,v=a_{\rm eff}(x)+b_{\rm eff}(x)\,v^2+{\cal O}(v^4)\ .\qquad(11)$$
Comparing Eqns. (3) and (11), one gets
$$a_{\rm eff}(x)=\sum_{i,j}^{N}a_{ij}\,\frac{g_ig_j}{g_{\rm eff}^2}\,(1+\Delta_i)^{3/2}(1+\Delta_j)^{3/2}\exp\!\left(-x(\Delta_i+\Delta_j)\right)\,,\qquad(12)$$
$$b_{\rm eff}(x)=\sum_{i,j}^{N}b_{ij}\,\frac{g_ig_j}{g_{\rm eff}^2}\,(1+\Delta_i)^{3/2}(1+\Delta_j)^{3/2}\exp\!\left(-x(\Delta_i+\Delta_j)\right)\,,\qquad(13)$$
where $a_{ij}$ and $b_{ij}$ are obtained from the velocity expansion $\sigma_{ij}v=a_{ij}+b_{ij}v^2+{\cal O}(v^4)$, and $v$ is the relative velocity between the two annihilating particles in the initial state. Considering relativistic corrections (56) to the above treatment results in an additional subleading term which can be accounted for by the simple replacement
$$b\to b-\frac{1}{4}a\,,\qquad(14)$$
in the above formulas. For our calculation of the relic density, we use the cross sections given in Refs. (21); (23); (24).
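To make the machinery of eqs. (6)-(14) concrete, here is a minimal numerical sketch (our own illustration; the inputs below are placeholders, not the actual UED cross sections of Refs. (21); (23); (24)) of the fixed-point solution of eq. (10) and the resulting abundance of eq. (6), for a single species with constant expansion coefficients:

```python
# Freeze-out sketch: solve eq. (10) iteratively, then evaluate eq. (6).
# Constant a_eff, b_eff make the integrals (8)-(9) trivial: I_a = a, I_b = b.
import math

MPL = 1.22e19          # Planck mass [GeV]
GSTAR = 92.0           # g_* at freeze-out, taken constant for simplicity
GEFF = 3.0             # internal degrees of freedom of the relic (placeholder)
C = 0.5                # empirical constant c of eq. (10)

def x_freezeout(m1, a, b, x=20.0):
    for _ in range(50):  # fixed-point iteration of eq. (10)
        arg = (C * (C + 2) * math.sqrt(45.0 / 8.0) * GEFF / (2 * math.pi ** 3)
               * m1 * MPL * (a + 6 * b / x) / math.sqrt(GSTAR * x))
        x = math.log(arg)
    return x

def omega_h2(m1, a, b):
    b = b - a / 4.0      # relativistic correction, eq. (14)
    xf = x_freezeout(m1, a, b)
    Ia, Ib = a, b        # eqs. (8)-(9) for x-independent a_eff, b_eff
    return 1.04e9 / MPL * xf / math.sqrt(GSTAR) / (Ia + 3 * Ib / xf)

# A weak-scale s-wave cross section a ~ 1e-9 GeV^-2 for a 500 GeV relic
# gives Omega h^2 of order 0.1-0.2, the right ballpark for a WIMP:
print(omega_h2(500.0, 1.0e-9, 0.0))
```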
As explained earlier, the assumptions behind MUED can be easily relaxed by allowing nonvanishing boundary terms at the scale $\Lambda$ (45); (15); (16); (17). This would modify the KK spectrum and correspondingly change the MUED predictions for the KK relic density. Within the modified KK spectrum, any neutral KK particle could be a dark matter candidate. As an illustration here we shall consider the case of $\gamma_1$ and $Z_1$ LKP, for which the results for the relevant coannihilation processes are available in the literature (23); (24). In Fig. 1, we show the relic densities of $\gamma_1$ and $Z_1$ as a function of the corresponding LKP mass ($m_{\gamma_1}$ or $m_{Z_1}$) in 5D UED.
We include coannihilation effects with all KK particles with properly defined masses. The (black) solid lines show the LKP relic density for several choices of the mass splitting (1) between the LKP and the KK quarks. We assume that singlet and doublet KK quarks are degenerate (i.e., $\Delta_{q_1}=\Delta_{Q_1}$). The green horizontal band denotes the preferred WMAP region for the relic density (58). The cyan vertical band delineates LKP masses disfavored by precision data (59); (60). In each case of Fig. 1a, we use the MUED spectrum to fix the masses of the remaining particles, and then vary the (common) KK-quark mass by hand. The solid lines correspond to different fixed values of $\Delta_{q_1}$. The (red) dotted line is the result from the full calculation in MUED, including all coannihilation processes, with the proper MUED choice for all masses. In Fig. 1b we assumed $Z_1$ and $W_1^\pm$ are degenerate, the KK gluon is heavier than $Z_1$ by 20%, while all other KK particles are heavier than $Z_1$ by 10%. Again the solid lines correspond to different fixed values of $\Delta_{q_1}$. Some individual quantities entering the relic density calculation for $\gamma_1$ ($Z_1$) LKP are shown in Fig. 2 (Fig. 3).
We see that coannihilations in the case of $\gamma_1$ LKP decrease the prediction for $\Omega h^2$ and therefore increase the range of preferred $m_{\gamma_1}$ values. For $\Delta_{q_1}$ on the order of a few percent, the desired range of $m_{\gamma_1}$ is pushed beyond 1 TeV. This poses a challenge for any collider searches for UED, since the KK production cross sections at the LHC become kinematically suppressed for heavier KK modes. What is even worse, the small mass splitting degrades the quality of the discovery signatures, e.g. the cascade decays of the KK quarks would yield only (rather soft) jets and no leptons.
On the other hand, Fig. 1b reveals that coannihilations with KK quarks have the opposite effect in the case of $Z_1$ LKP. This time the effect of coannihilations is to increase the prediction for $\Omega h^2$ and thus lower the preferred range of values for $m_{Z_1}$. The lesson from Figs. 1a and 1b is that while coannihilations can be quite important, the sign of the effect cannot be easily predicted, since, as will be illustrated in Figs. 2 and 3, it depends on the detailed balance of several numerical factors entering the computation. We shall discuss these in some detail in the remainder of this subsection. Readers who are not interested in these numerical details are invited to jump to Section II.3.
In Fig. 2a (Fig. 3a) we plot the relic density of the $\gamma_1$ ($Z_1$) LKP, as a function of the mass splitting between the KK quarks and the corresponding LKP. The rest of the spectrum is held fixed as explained in the figure captions. Figs. 2a and 3a demonstrate the importance of coannihilations at small mass splittings. For $\Delta_{q_1}$ larger than about 10%, coannihilations are turned off, but for KK quarks within 10% of the LKP mass, the coannihilation effect is significant. For $\gamma_1$ LKP, it lowers the prediction for the relic density $\Omega h^2$, while in the case of $Z_1$ LKP, $\Omega h^2$ is enhanced. In order to understand this different behavior, it is sufficient to investigate the coannihilation effect on the effective cross section, and in particular the dominant term $a_{\rm eff}$, which is plotted in Figs. 2b and 3b. As can be seen from eq. (12), every term contributing to $a_{\rm eff}$ is a ratio between two quantities, each of which has a nontrivial dependence on $\Delta_{q_1}$. The denominator is common to all terms and is nothing but the square of the effective number of heavy particle degrees of freedom $g_{\rm eff}$ defined in eq. (4). We show the dependence of $g_{\rm eff}$ on $\Delta_{q_1}$ in Figs. 2c and 3c. As expected, $g_{\rm eff}$ increases significantly after the turn-on of coannihilations (below $\Delta_{q_1}\sim10\%$), due to the large multiplicity of KK quark states. At the same time, the numerator of each term contributing to the sum (12) is simply the Boltzmann suppressed annihilation cross section, which also increases with the onset of coannihilations (at small mass splittings $\Delta_{q_1}$). The net effect on $a_{\rm eff}$ is determined by which of these two quantities increases faster at small $\Delta_{q_1}$, relative to the nominal case without coannihilations. In the case of $\gamma_1$ LKP, the self-annihilation cross sections are rather weak, due to the smallness of the hypercharge gauge coupling. Adding the contributions from the strongly interacting KK quark sector has therefore a much larger impact than the associated increase in the effective number of degrees of freedom $g_{\rm eff}$. As a result, $a_{\rm eff}$ increases and $\Omega h^2$ decreases, as shown in Figs. 2a and 2b. In contrast, in the case of $Z_1$ LKP, the self-annihilation cross sections by themselves are already larger, due to the larger value of the weak gauge coupling. The gain from the addition of the KK quark coannihilation processes is more than compensated by the associated increase in the effective number of degrees of freedom $g_{\rm eff}$. As a result, in this case $a_{\rm eff}$ decreases and $\Omega h^2$ increases, as shown in Figs. 3a and 3b.
In conclusion, we should mention that the KK Higgs boson in principle can also be a potential dark matter candidate. The calculation of its relic density is somewhat more model-dependent and we do not consider it here.
ii.3 Elastic Scattering Cross Sections
The elastic scattering of the LKP on a nucleon is described by the diagrams depicted in Fig. 4. For $\gamma_1$ LKP, the corresponding results can be found in (26); (34). We follow the computation done in (26). The spin-independent cross section is given by
$$\sigma_{\rm SI}=\frac{m_T^2}{4\pi\,(m_{\gamma_1}+m_T)^2}\left[Z\,f_p+(A-Z)\,f_n\right]^2\,,\qquad(15)$$
where $m_T$ is the mass of the target nucleus, $Z$ and $A$ are respectively the nuclear charge and atomic number, while
$$f_p=\sum_{u,d,s}(\beta_q+\gamma_q)\,\langle p|\bar{q}q|p\rangle=\sum_{u,d,s}\frac{\beta_q+\gamma_q}{m_q}\,m_p\,f^p_{Tq}\ ,\qquad(16)$$
and similarly for $f_n$. In eq. (16) $m_p$ ($m_n$) stands for the proton (neutron) mass.
For the nucleon matrix elements $f^{p,n}_{Tq}$ we take the standard values from (61). The numerical coefficients $\beta_q$ and $\gamma_q$ in eq. (16) are defined as
$$\beta_q=\big[\,\text{full expression, including the mixing between the two KK quarks; see Ref. (26)}\,\big]\qquad(17)$$
$$\beta_q\approx\frac{E_q\,e^2}{\cos^2\theta_W}\left[Y_{q_L}^2\,\frac{m_{\gamma_1}^2+m_{q_{1L}}^2}{\left(m_{q_{1L}}^2-m_{\gamma_1}^2\right)^2}+(L\to R)\right]\qquad\text{for }\alpha=0,\qquad(18)$$
$$\gamma_q=\frac{m_q\,e^2}{2\cos^2\theta_W}\,\frac{1}{m_h^2}\,,\qquad(19)$$
where $e$ is the electric charge, $\theta_W$ is the Weinberg angle, $m_{q_{1L}}$ ($m_{q_{1R}}$) is the mass of an $SU(2)_W$-doublet ($SU(2)_W$-singlet) KK quark, and $\alpha$ is the mixing angle in the KK quark mass matrix. Eq. (17) includes the mixing effect between the two KK quarks and eq. (18) is obtained in the limit $\alpha\to0$. This mixing effect gives a minor correction to the cross section (at a few percent level) and we do not include it in our figures for 5D. However it is important to keep it in the 6D case, as shown in Ref. (25). Our convention for the SM hypercharge is $Y=Q-T_3$, where $Q$ ($T_3$) is the electric charge (weak isospin) of the particle. $E_q$ in eq. (18) is the energy of a bound quark and is rather ill-defined. In evaluating eq. (16), we conservatively replace $E_q$ by the current quark mass $m_q$. As alluded to earlier, in eq. (18) we only sum over light quark flavors, thus neglecting couplings to gluons mediated by heavy quark loops. Note that the two contributions (18) and (19) to the scalar interactions interfere constructively: even with extremely heavy KK quark masses (large $\Delta_{q_1}$), there is an inescapable lower bound on the scalar cross section for a given Higgs mass, since the Higgs contribution from eq. (19) scales with the SM Higgs mass and not the KK quark masses.
The analogous results for the case of $Z_1$ LKP can now be obtained from the above formulas by simple replacements: $m_{\gamma_1}\to m_{Z_1}$, $Y_{q_L}\to T^3_{q_L}$ and $Y_{q_R}\to 0$, since $Z_1$ is mostly the neutral $SU(2)_W$ gauge boson $W^3_1$, which has no interactions with the $SU(2)_W$-singlet KK quarks (or equivalently, the right-handed SM quarks). In addition, one should replace $\cos\theta_W\to\sin\theta_W$ to account for the different gauge coupling constant.
Theoretical predictions for the spin-independent LKP-nucleon elastic scattering cross sections are shown in Fig. 5 for different fixed values of the KK quark - LKP mass splitting $\Delta_{q_1}$, and for two different LKPs: (a) $\gamma_1$ and (b) $Z_1$. In both cases the cross sections decrease as a function of LKP mass. This is due to the inverse scaling of the KK quark exchange contributions (18) with the KK mass scale. Comparing Fig. 5a to 5b, we notice that the scalar cross section for $Z_1$ is more than one order of magnitude larger than the scalar cross section for $\gamma_1$ of the same mass. This is mostly due to the larger $SU(2)_W$ gauge coupling. Notice that even when the KK quarks are very heavy, there is still a reasonable cross section, which is due to the Higgs mediated contribution (19). Perhaps the most noteworthy feature of Figs. 5a and 5b is the significant enhancement of the direct detection signals at small $\Delta_{q_1}$, often by several orders of magnitude. This greatly enhances the prospects for detecting KK dark matter, if the mass spectrum turns out to be rather degenerate.
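The origin of this enhancement is transparent from eq. (18): at small splittings $m_{q_1}=(1+\Delta_{q_1})\,m_{\gamma_1}$, the KK-quark propagator factor behaves as $1/(2m_{\gamma_1}^2\Delta_{q_1}^2)$, so this piece of the amplitude grows like $1/\Delta_{q_1}^2$. A short numerical sketch (ours, not from the paper):

```python
# Scaling of the KK-quark exchange factor (m^2 + mq1^2) / (mq1^2 - m^2)^2
# from eq. (18), relative to a "wide" splitting of 50%.
def kk_quark_factor(m_lkp, delta):
    mq1 = (1.0 + delta) * m_lkp
    return (m_lkp ** 2 + mq1 ** 2) / (mq1 ** 2 - m_lkp ** 2) ** 2

m = 500.0  # GeV (illustrative LKP mass)
ref = kk_quark_factor(m, 0.5)
for delta in (0.5, 0.1, 0.01):
    print(delta, kk_quark_factor(m, delta) / ref)
# Delta = 0.01 enhances this amplitude term by ~2.4e3 relative to Delta = 0.5,
# i.e. roughly (0.5/0.01)^2, before the constant Higgs term (19) is added.
```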
The spin-dependent cross section is given by
$$\sigma_{\rm spin}=\frac{1}{6\pi}\,\frac{m_T^2}{(m_{\gamma_1}+m_T)^2}\,J_N(J_N+1)\left[\sum_{u,d,s}\alpha_q\lambda_q\right]^2\,,\qquad(20)$$
where $\alpha_q$ and $\lambda_q$ are
$$\alpha_q=\frac{2e^2}{\cos^2\theta_W}\left[Y_{q_L}^2\,\frac{m_{\gamma_1}}{m_{q_{1L}}^2-m_{\gamma_1}^2}+(L\to R)\right]\,,\qquad(21)$$
$$\lambda_q=\Delta^p_q\,\langle S_p\rangle/J_N+\Delta^n_q\,\langle S_n\rangle/J_N\ .\qquad(22)$$
Here $\vec{S}_N$ is the nuclear spin operator and $J_N$ is the total nuclear spin. $\langle S_{p,n}\rangle$ is given by $\langle N|S_{p,n}|N\rangle$ and $\Delta^{p,n}_q$ is the fraction of the nucleon spin carried by the quark $q$. We use $\Delta^p_u=0.78$, $\Delta^p_d=-0.48$ and $\Delta^p_s=-0.15$ (62). $\langle S_{p,n}\rangle/J_N$ is the fraction of the total nuclear spin that is carried by the spin of protons or neutrons. For scattering off protons and neutrons, $\lambda_q$ reduces to $\Delta^p_q$ and $\Delta^n_q$, respectively.
Following (63), we can rewrite eq. (20) in the form
$$\sigma_{\rm spin}=\frac{32}{\pi}\,G_F^2\,\mu^2\,\frac{J_N+1}{J_N}\left(a_p\langle S_p\rangle+a_n\langle S_n\rangle\right)^2\,,\qquad(23)$$
where $G_F$ is the Fermi constant and
$$\mu=\frac{m_T\,m_{\gamma_1}}{m_T+m_{\gamma_1}}\qquad(24)$$
is the reduced mass, while the coefficients $a_p$ and $a_n$ are given by
$$a_{p,n}=\frac{1}{8\sqrt{3}\,G_F\,m_{\gamma_1}}\sum_{u,d,s}\alpha_q\,\Delta^{p,n}_q\qquad(25)$$
$$\phantom{a_{p,n}}=\frac{e^2}{4\sqrt{3}\,G_F\cos^2\theta_W}\sum_{u,d,s}\left[\frac{Y_{q_L}^2}{m_{q_{1L}}^2-m_{\gamma_1}^2}+(L\to R)\right]\Delta^{p,n}_q\ .$$
The main advantage of introducing the parameters $a_p$ and $a_n$ is that they encode all the theoretical model-dependence, thus allowing different experiments to compare their sensitivities in a rather model-independent way. From eqs. (23-24) it is clear that for any given target, the spin-dependent scattering rate depends on only three parameters: $m_{\gamma_1}$, $a_p$ and $a_n$. Notice that in our setup there are only two relevant model parameters: $m_{\rm LKP}$ and $\Delta_{q_1}$, therefore we will have a certain correlation between $a_p$ and $a_n$, depending on the nature of the LKP.
In Fig. 6 we show our result for the spin-dependent LKP elastic scattering cross sections off protons and neutrons for the case of (a) $\gamma_1$ and (b) $Z_1$, for different mass splittings $\Delta_{q_1}$. The red solid curves are the LKP-proton cross sections and the blue dotted curves are the LKP-neutron cross sections. All curves exhibit the same general trends as the corresponding spin-independent results from Fig. 5: the cross sections decrease with the KK mass scale, and are enhanced for small mass splittings $\Delta_{q_1}$. One peculiar feature is that the proton and neutron spin-dependent cross sections are equal in the case of $Z_1$, as seen in Fig. 6b. This is an exact statement, which is due to the fact that $Z_1$ does not particularly discriminate between the different quark flavors in the nucleon – it couples with equal strength to both up- and down-type (left-handed) quarks. On the other hand, $\gamma_1$ couples differently to $u_R$ and $d_R$, because of the different hypercharges of the right-handed quarks. As a result, the cross sections on protons and neutrons differ in the case of $\gamma_1$, as seen in Fig. 6a. Interestingly, for a given LKP mass $m_{\gamma_1}$ and mass splitting $\Delta_{q_1}$, the proton cross section in Fig. 6a is larger than the neutron cross section by about a factor of 4, which is due to a numerical coincidence involving the values of the quark hypercharges and the $\Delta^{p,n}_q$ parameters. Because of this simple scaling, for a given LKP mass $m_{\gamma_1}$, the proton cross section at a certain $\Delta_{q_1}$ coincides with the neutron cross section for half the mass splitting ($\Delta_{q_1}/2$), since to leading order both the proton and the neutron cross sections are proportional to $1/\Delta_{q_1}^2$.
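The factor of 4 is easy to verify numerically (a sketch of ours): in the limit of degenerate KK quark masses, eq. (25) gives $a_{p,n}\propto\sum_q(Y_{q_L}^2+Y_{q_R}^2)\,\Delta^{p,n}_q$ with $Y=Q-T_3$, and the quoted $\Delta^p_q$ values plus isospin symmetry fix everything:

```python
# Ratio a_p / a_n for gamma_1 with degenerate KK quarks, from eq. (25).
YL2 = {"u": (1 / 6) ** 2, "d": (1 / 6) ** 2, "s": (1 / 6) ** 2}   # doublet Y = 1/6
YR2 = {"u": (2 / 3) ** 2, "d": (1 / 3) ** 2, "s": (1 / 3) ** 2}   # singlet Y = Q
d_p = {"u": 0.78, "d": -0.48, "s": -0.15}   # Delta_q^p values quoted above
d_n = {"u": -0.48, "d": 0.78, "s": -0.15}   # isospin: Delta_u^n = Delta_d^p

a_p = sum((YL2[q] + YR2[q]) * d_p[q] for q in "uds")
a_n = sum((YL2[q] + YR2[q]) * d_n[q] for q in "uds")
print(a_p / a_n, (a_p / a_n) ** 2)  # ~ -2.0 and ~ 4.1: sigma_p ~ 4 sigma_n
```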
We shall now review the corresponding results for the case of two universal extra dimensions. The sum $\beta_q+\gamma_q$ for the spinless photon ($\gamma_H$) LKP was computed in (25) (note that here we are using a different convention for the hypercharges):
$$\beta_q+\gamma_q=\frac{e^2}{\cos^2\theta_W}\Bigg[m_q\,(Y_{q_L}+Y_{q_R})^2\left(\frac{1}{m_{q_1}^2-(m_q-m_{\gamma_H})^2}+\frac{1}{m_{q_1}^2-(m_q+m_{\gamma_H})^2}\right)\qquad(26)$$
$$\phantom{\beta_q+\gamma_q=}\;+\,m_{\gamma_H}\,(Y_{q_L}^2+Y_{q_R}^2)\left(\frac{1}{m_{q_1}^2-(m_q+m_{\gamma_H})^2}-\frac{1}{m_{q_1}^2-(m_q-m_{\gamma_H})^2}\right)+\frac{m_q}{2m_h^2}\Bigg]\,,$$
where $m_{\gamma_H}$ is the mass of the spinless photon, $m_{q_1}$ is the (common) mass of the $SU(2)_W$-doublet and $SU(2)_W$-singlet KK quarks, while $m_q$ is the corresponding SM quark mass.
Using Eqn. (15), we obtain the spin-independent elastic scattering cross section for $\gamma_H$ as shown in Fig. 7a. The different curves are labelled by the assumed fixed value of $\Delta_{q_1}$, and are plotted versus the LKP mass $m_{\gamma_H}$. We see that the size of the signal is about the same order as the cross sections from Fig. 5a. On the other hand, the relic density constraint would single out somewhat different regions for $m_{\gamma_1}$ and $m_{\gamma_H}$. The annihilation cross section for $\gamma_H$ is smaller than that of $\gamma_1$ (25), and correspondingly, lower masses would be preferred, with enhanced prospects for direct detection. Notice that there is no spin-dependent cross section for $\gamma_H$ since it is a scalar particle.
In conclusion of this section, we shall briefly discuss the scenario of KK Higgs ($H_1$) LKP. Just like $\gamma_H$, $H_1$ is a scalar and does not have spin-dependent interactions. Its spin-independent elastic scattering cross section can be readily computed following the procedure outlined earlier in this section and in the appendix. In this case, the KK quark exchange diagrams are also Yukawa suppressed, the dominant among them being the contribution of the KK quark with the largest Yukawa coupling. As in the $\gamma_1$ LKP case, the diagrams with KK quark exchange and SM Higgs exchange interfere constructively. Therefore, the SM Higgs exchange diagram by itself provides a conservative lower bound on the elastic scattering cross section, independent of the other details of the KK spectrum, and in particular, the KK quark masses. This absolute minimum of the cross section is plotted in Fig. 7b as a function of the LKP mass $m_{H_1}$. It is worth mentioning that this result is completely independent of the SM Higgs mass $m_h$. The contribution corresponding to (19) is given by
$$\gamma_q=\frac{3}{4}\,\frac{e^2}{\sin^2\theta_W}\,\frac{m_q}{m_W^2}\,,\qquad(27)$$
where $m_W$ is the mass of the $W$ boson. The coupling of the KK Higgs to the SM Higgs boson is the same as the triple Higgs coupling of the SM, which is proportional to $m_h^2$. This dependence is exactly cancelled by the $1/m_h^2$ dependence of the SM Higgs propagator in the non-relativistic limit (see Eqn. (19)). Therefore the final cross section is indeed independent of the SM Higgs mass, and this fact remains true regardless of the values of the KK quark masses.
III Direct WIMP Detection and Experiments
The detailed distribution of dark matter in our galaxy, and in particular in the local neighborhood, is not well constrained by current observations and high-resolution simulations. The standard assumption for its distribution is a cored, non-rotating isothermal spherical halo with a Maxwell-Boltzmann velocity distribution with a mean of 220 km/s, and escape velocity from the galactic halo of 544 km/s (64). For the local density of dark matter particles we assume $\rho_0=0.3$ GeV/cm$^3$ (65).
The WIMP interaction signature in ultra-low-background terrestrial detectors (66) consists of nuclear recoils. Direct detection experiments attempt to measure the small ($\lesssim$100 keV) energy deposited when a WIMP scatters from a nucleus in the target medium. The recoil energy of the scattered nucleus is transformed into a measurable signal, such as charge, scintillation light or lattice excitations, and at least one of the above quantities can be detected. Observing two signals simultaneously yields a powerful discrimination against background events, which are mostly interactions with electrons, as opposed to WIMPs and neutrons, which scatter from nuclei. The WIMP interaction takes place in the non-relativistic limit, therefore the total cross section can be expressed as the sum of a spin-independent (SI) part (see Eqn. (15)), a coherent scattering with the whole nucleus, and of a spin-dependent (SD) part (see Eqn. (20)), which describes the coupling to the total nuclear spin (67).
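For orientation, the quoted energy scale follows from simple kinematics (a back-of-the-envelope sketch of ours, not from the text): the maximum recoil energy in elastic scattering is $E_R^{\max}=2\mu^2v^2/m_T$, with $\mu$ the WIMP-nucleus reduced mass of eq. (24):

```python
# Maximum nuclear recoil energy for a WIMP of mass m_chi on a target of mass m_T.
def e_r_max_keV(m_chi_GeV, m_T_GeV, v_over_c):
    mu = m_chi_GeV * m_T_GeV / (m_chi_GeV + m_T_GeV)  # reduced mass, eq. (24)
    return 2.0 * mu ** 2 * v_over_c ** 2 / m_T_GeV * 1e6  # GeV -> keV

# A 500 GeV WIMP on xenon (m_T ~ 122 GeV) at the mean halo speed of 220 km/s:
print(e_r_max_keV(500.0, 122.0, 220e3 / 3.0e8))  # ~85 keV, below the ~100 keV scale
```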
Neutrons with energies in the MeV range can elastically scatter from nuclei and mimic a WIMP signal. Two methods are used to discriminate against the residual neutron background, which comes from ($\alpha$,n)- and fission reactions in materials and from interactions of cosmic muons with the rock and experimental shields. First, the SI WIMP-nucleus cross section is proportional to the atomic mass squared of the nucleus, making the expected total WIMP interaction rate material dependent. Second, the mean free paths of WIMPs and MeV neutrons are exceedingly different (astronomically long for a WIMP versus roughly 8 cm for a neutron in a typical WIMP target), allowing one to directly constrain the neutron background from the ratio of observed single to multiple interaction events.
The experimental upper bounds of the SI cross section from direct detection experiments are WIMP-type independent and thus will not change if we consider different WIMP candidates. Similarly, the SD cross section limits can also be reinterpreted for various DM candidates. The only exception is a spin zero WIMP, such as $\gamma_H$ in 6D UED, which does not have an axial-vector coupling with nuclei, hence no SD interaction is expected. We will extensively discuss the model dependence of the SD cross section in the next section.
In this study, we choose four direct detection experiments which demonstrated best experimental sensitivity to-date in various parts of the WIMP search parameter space. The CDMS experiment sets the best SI upper bound above a WIMP mass of 42 GeV (68), while XENON10 gives the most stringent upper bound on WIMP-neutron SD couplings (70) and SI couplings below 42 GeV (69). The KIMS (71) and COUPP (72) experiments show the best sensitivity for SD WIMP-proton couplings. As we shall see in the following section, the combined study of all four experiments strongly constrains the SD proton-neutron mixed coupling parameter space (the so-called a_p–a_n parameter space, where a_p and a_n are the dark matter particle’s couplings to protons and neutrons, respectively; see eq. (23)).
Table 1 summarizes the relevant characteristics of the four experiments such as target material, total mass, energy range considered for the WIMP search, and location. In this paper we either calculated the LKP limits based on published data (XENON10), or we obtained the data points for the cross section upper bounds from the collaboration (CDMS, KIMS and COUPP).
The CDMS experiment (68) is operated in the Soudan Underground Laboratory, USA. It uses advanced Z(depth)-sensitive Ionization and Phonon (ZIP) detectors, which simultaneously measure the ionization and athermal phonon signals after a particle interacts in the crystal. The ZIP detectors provide excellent event-by-event discrimination of nuclear recoils from the dominant background of electron recoils. The most stringent limits on spin-independent couplings with nucleons above a WIMP mass of 42 GeV come from the first two CDMS-II five-tower runs with a raw exposure of 397.8 kg-days in germanium. The null observation of a WIMP signal sets a WIMP-nucleon cross section upper bound of 6.6 pb (for a 60 GeV WIMP mass) and of 4.6 pb when the results are combined with previous CDMS results.
The SuperCDMS project (73); (74) is a three-phase proposal to utilize CDMS-style detectors with target masses growing from 25 kg to 150 kg and up to 1 ton, with the aim of reaching a final sensitivity of 310 pb by mid 2015. This goal will be realized by developing improved detectors and analysis techniques, and concomitantly reducing the intrinsic surface contamination of the crystals.
The XENON10 collaboration (69) operated a 15 kg active mass, dual-phase (liquid and gas) xenon time projection chamber in the Gran Sasso Underground Laboratory (LNGS), in WIMP search mode from August 2006 to February 2007. XENON10 uses two arrays of UV-sensitive photomultipliers (PMTs) to detect the prompt and proportional light signals induced by particles interacting in the sensitive liquid xenon (LXe) volume. The 3D position sensitivity, the self-shielding of LXe and the prompt versus proportional light ratio are the most important background rejection features. The first results, using 136 kg-days exposure after cuts, demonstrated that LXe can be used for stable, homogeneous, large scale dark matter detectors, providing excellent position resolution and discrimination against the electron recoil background. The derived upper bound on SI cross sections on nucleons is 4.5 pb for a WIMP mass of 30 GeV. Since natural Xe contains the 129Xe (26.4%) and 131Xe (21.2%) isotopes, each of these having an unpaired neutron, the XENON10 results substantially constrain the SD WIMP-nucleon cross section. We calculated the XENON10 SD LKP-neutron and LKP-proton upper bounds based on the observation of 10 events, without any background subtraction (70). The next phase, XENON100, will operate a total of 170 kg (70 kg fiducial) of xenon, viewed by 242 PMTs, in a dual-phase TPC in an improved XENON10 shield at the Gran Sasso Laboratory. While the fiducial mass is increased by more than a factor of 10, the background will be lower by about a factor of 100 (through careful selection of ultra-low background materials, the placing of cryogenic devices and high-voltage feed-throughs outside of the shield, and by using 100 kg of active LXe shield) compared to XENON10. XENON100 is currently being commissioned at LNGS; the aim is to start the first science run in fall 2008, probing WIMP-nucleon SI cross sections down to 10 pb.
The Korea Invisible Mass Search (KIMS) experiment (71) is located at the Yangyang Underground Laboratory, Korea. The collaboration has operated four low-background CsI(Tl) crystals, each viewed by two photomultipliers, for a total exposure of 3409 kg-days. Both Cs and I are sensitive to the spin-dependent interaction of WIMPs with nuclei. KIMS detects the scintillation light after a particle interacts in one of the crystals, kept stably at (0 ± 0.1) °C. The pulse shape discrimination technique, using the time distribution of the signal, allows nuclear recoils to be statistically separated from the electron recoil background. The KIMS results are consistent with a null observation of a WIMP signal, yielding the best limits on SD WIMP-proton couplings for a WIMP mass above 30 GeV. Specifically, the upper bound for a WIMP mass of 80 GeV is 1.710 pb.
The Chicagoland Observatory for Underground Particle Physics (COUPP) experiment (72) is operated at Fermilab, USA. The experiment has revived the bubble chamber technique for direct WIMP searches. The superheated liquid can be tuned such that the detector responds only to keV-scale nuclear recoils, being fully insensitive to minimum ionizing particles. A 1.5 kg chamber of superheated CF3I has been operated for a total exposure of 250 kg-days. The presence of fluorine and iodine in the target makes COUPP sensitive to both SD and SI WIMP-nucleon couplings. The production of bubbles is monitored optically and via sound emission, reaching a reconstructed 3D spatial resolution of 1 mm. This allows boundary events to be rejected and multiple neutron interactions to be identified. The most recent COUPP results set the most sensitive limit on SD WIMP-proton cross sections for a WIMP mass below 30 GeV. As an example, the upper bound on the SD coupling is 2.710 pb at a WIMP mass of 40 GeV.
In Fig. 8 we show the current CDMS and XENON10 upper bounds for the SI cross section together with projected sensitivities for SuperCDMS 25 kg, XENON100 and for a ton-scale detector. The LKP boundaries for , and as dark matter candidates are also shown, for a wide range of mass splittings () and a fixed Higgs mass of 120 GeV. The small mass splitting regions are excluded up to a mass of about 600 GeV, 900 GeV and 700 GeV for , and , respectively. For large mass splittings of , only masses below about 100 GeV can be probed. Future ton-scale direct detection experiments should cover most of the interesting LKP parameter space.
In Fig. 9, we show the SD cross section limits for both (a) pure neutron and (b) pure proton couplings for three experiments together with the theoretical predictions for and for a range of mass splittings (). The most stringent SD pure neutron upper bound is set by the XENON10 experiment, while the best SD cross section for pure proton couplings in the region of interesting LKP masses ( 500 GeV) comes from the KIMS experiment. As explained in the previous section, the theoretical and regions are overlapping for pure neutron couplings, while for pure proton coupling these can be distinguished for a given mass splitting .
In the following section we investigate the details of the LKP specific parameter spaces.
IV. Limits on Kaluza-Klein Dark Matter
In the previous sections we introduced the different dark matter candidates in UED models: KK gauge bosons ( and ) and KK scalars ( and ). On the theoretical side, we discussed the calculation of their relic densities and elastic scattering cross sections. On the experimental side, we described the different types of experiments which are sensitive to KK dark matter. We shall now combine our theoretical predictions with the current/future measurements discussed earlier. Where applicable, we shall also include constraints from high energy collider experiments. We shall be particularly interested in the region of small mass splittings , which is problematic for collider searches, but promising for direct detection. We will concentrate on KK gauge boson dark matter (both and ), whose relic density can be reliably calculated, including all relevant coannihilation processes (23); (24).15
In Fig. 10 we present a combination of results for the case of (a) and (b) LKP in 5D UED. As we emphasized earlier, the two most relevant parameters are the LKP mass ( or , correspondingly) and the mass splitting between the LKP and the KK quarks. We therefore take both of these parameters as free and do not assume the MUED relation among them. For simplicity, we assume that the -doublet KK quarks and the -singlet KK quarks are degenerate, so that there is a single mass splitting parameter which we have been calling . However, this assumption is only made for convenience, and does not represent a fundamental limitation – all of our results can be readily generalized for different KK quark mass splittings (i.e. several individual parameters). The masses of the remaining KK particles in the spectrum are fixed as in Fig. 1: in the case of LKP, we use the MUED spectrum, while in the case of LKP, we take the gluon and the remaining particles to be respectively and heavier than the . This choice is only made for definiteness, and does not carry a big impact on the validity of our results, as long as the remaining particles are sufficiently heavy so that they do not participate in coannihilation processes.
In the so defined parameter plane, in Fig. 10 we superimpose the limit on the spin-independent elastic scattering cross section, the limit on the relic abundance and the LHC reach in the four leptons plus missing energy channel which has been studied in (45). This signature results from the pair production (direct or indirect) of -doublet KK quarks, which subsequently decay to ’s and jets. The leptons (electrons or muons) arise from the decay, whose branching fraction is approximately (45). Requiring a 5σ excess at a luminosity of 100 fb⁻¹, the LHC reach extends up to TeV, which is shown as the right-most boundary of the (yellow) shaded region in Fig. 10a. The slope of that boundary is due to the fact that as increases, so do the KK quark masses, and their production cross sections are correspondingly getting suppressed, diminishing the reach. We account for the loss in cross section according to the results from Ref. (75), assuming also that, as expected, the level-2 KK particles are about two times heavier than those at level 1. Points which are well inside the (yellow) shaded region, of course, would be discovered much earlier at the LHC. Notice, however, that the LHC reach in this channel completely disappears for less than about 8%. This is where the KK quarks become lighter than the (recall that in Fig. 10a was fixed according to the MUED spectrum) and the decays are turned off. Instead, the KK quarks all decay directly to the LKP and (relatively soft) jets, presenting a monumental challenge for an LHC discovery. So far there have been no studies of the collider phenomenology of a LKP scenario, but it appears to be extremely challenging, especially if the KK quarks are light and decay directly to the LKP. This is why there is no LHC reach shown in Fig. 10b. In conclusion of our discussion of the collider reaches exhibited in Fig. 10, we draw attention once again to the lack of sensitivity at small : such small mass splittings are quite problematic for collider searches (see, for example, (76); (77) for an analogous situation in supersymmetry).
In Fig. 10 we contrast the LHC reach with the relic density constraints and with the sensitivity of direct detection experiments. To this end we convert our results from Figs. 1 and 8 into the - plane shown in Fig. 10. The green shaded region labelled by 100% represents 2 WMAP band, (58) and the black solid line inside this band is the central value . The region above and to the right of this band is ruled out since UED would then predict too much dark matter. The green-shaded region is where KK dark matter is sufficient to explain all of the dark matter in the universe, while in the remaining region to the left of the green band the LKP can make up only a fraction of the dark matter in the universe. We have indicated with the black dotted contours the parameter region where the LKP would contribute only 10% and 1% to the total dark matter budget. Finally, the solid (CDMS in blue and XENON10 in red) lines show the current direct detection limits, while the dotted and dashed lines show projected sensitivities for future experiments (for details, refer back to Sec. III)16.
Fig. 10 demonstrates the complementarity between the three different types of probes which we are considering. First, the parameter space region at very large is inconsistent with cosmology – if the dark matter WIMP is too heavy, its relic density is too large. The exact numerical bound on the LKP mass may vary, depending on the particle nature of the WIMP (compare Fig. 10a to Fig. 10b) and the presence or absence of coannihilations (compare the bound at small to the bound at large ). Nevertheless, we can see that, in general, cosmology does provide an upper limit on the WIMP mass. On the other hand, colliders are sensitive to the region of relatively large mass splittings , while direct detection experiments are at their best at small and small . The relevant parameter space is therefore getting squeezed from opposite directions and is bound to be covered eventually. This is already seen in the case of LKP from Fig. 10a: the future experiments push up the current limit almost to the WMAP band. Unfortunately in the case of LKP the available parameter space is larger and will not be closed with the currently envisioned experiments alone. However, one should keep in mind that detailed LHC studies for that scenario are still lacking.
While previously we already argued that and are the most relevant parameters for UED dark matter phenomenology, for completeness we also investigate the dependence on the SM Higgs mass , which is currently still unknown. In Fig. 11 we therefore translate the information from Fig. 8 into the - plane, for a given fixed KK mass splitting now taking the Higgs mass as a free parameter.
In each panel, the horizontal black solid lines mark the current Higgs mass bound of 114 GeV while the diagonal black solid lines show the indirect limit from the oblique corrections in this model (59).17 For low , the limit on the LKP mass (or equivalently, the compactification scale) is GeV (for GeV), but it gets weaker for larger , so that values as low as 300 GeV are still allowed if the SM Higgs boson is very heavy (60). In Fig. 11 we also show the current (solid lines) limits from CDMS (in blue) and XENON10 (in red), their projected near-future sensitivities, SuperCDMS 25 kg and XENON100 (dashed lines), and the projected sensitivity of a ton-scale detector (dotted line). The shape of these contours is easy to understand. At large , the Higgs exchange diagram in Fig. 4 decouples, the elastic scattering rate becomes independent of and the direct detection experimental sensitivity is only a function of (since is held fixed). In the other extreme, at small , the Higgs exchange diagram dominates, and the sensitivity now depends on both and . Unfortunately, for the current direct detection bounds do not extend into the interesting parameter space region, but future experiments will eventually start probing the large corner of the allowed parameter space. On the positive side, one important lesson from Fig. 11 is that the dependence starts showing up only at very low values of , which have already been ruled out by the Higgs searches at colliders. This observation confirms that when it comes to interpreting existing and future experimental limits on WIMPs in terms of model parameters, and are indeed the primary parameters, while plays a rather secondary role.
We remind the reader that the LHC will be able to probe all of the parameter space shown in Fig. 11a through the signature, while the discovery of UED in Fig. 11b appears quite problematic. Of course, the SM Higgs boson will be discovered in both cases, for the full range of masses shown.
We now turn to a discussion of the corresponding spin-dependent elastic scattering cross sections, which also exhibit an enhancement at small , as shown in Fig. 6. Similar to Fig. 10, in Fig. 12 we combine existing limits from three different experiments (XENON10, KIMS and COUPP) in the - plane. Panel (a) (panel (b)) shows the constraints from the WIMP-neutron (WIMP-proton) SD cross sections. The rest of the KK spectrum has been fixed as in Figs. 1 and 10, and GeV. The solid (dashed) curves are limits on () for each experiment. The constraints from LHC and WMAP on the - parameter space are the same as in Fig. 10.
By comparing Figs. 10 and 12 we see that, as expected, the parameter space constraints from SI interactions are stronger than those from SD interactions. For example, in perhaps the most interesting range of LKP masses from 300 GeV to 1 TeV, the SI limits on in Fig. 10 range from a few times down to a few times . On the other hand, the SD bounds on for the same range of are about an order of magnitude smaller (i.e. weaker). We also notice that the constraints for LKP are stronger than for LKP. This can be easily understood by comparing Fig. 6a and Fig. 6b: for the same LKP mass and KK mass splitting, the SD cross sections are typically larger.
Fig. 12 also reveals that the experiments rank differently with respect to their SD limits on protons and neutrons. For example, KIMS and COUPP are more sensitive to the proton cross section, while XENON10 is more sensitive to the neutron cross section. As a result, the current best SD limit on protons comes from KIMS, but the current best SD limit on neutrons comes from XENON10. Combining all experimental results can give a very good constraint on the a_p–a_n parameter space.
Fig. 13a (Fig. 13b) shows combined results for GeV ( GeV) in the (model-independent) - parameter space. The contours show limits from XENON10 (red solid line), KIMS (black dotted line) and COUPP (green dashed line). The blue near-horizontal bands show the evidence regions allowed by DAMA (78), while the green region shows the parameter space allowed by all current experiments. Note that these limits were computed in two different ways. The results from KIMS and COUPP are based on the method proposed in (63) whereas those from DAMA and XENON10 are calculated as advocated in (78). We believe that the latter is more accurate since limits are computed for all angles in the - plane separately whereas the former solely relies on the limits calculated considering pure coupling to neutrons and protons respectively. More details about these calculations can be found in the appendix. The two straight lines originating from are the theoretical predictions for and in the case of or LKP in 5D UED. These theory lines are parametrized by the value of as indicated by a few representative points. The feature which is readily apparent in Fig. 13 is the orthogonality between the regions allowed by the -sensitive experiments like KIMS and COUPP, on the one side, and the -sensitive experiments like XENON10, on the other. This indicates the complementarity of the two groups of experiments: the green-shaded region allowed by the combination of all experiments is substantially more narrow than the region allowed by each individual experiment.
In conclusion of this section, we shall also consider KK dark matter candidates in models with two universal extra dimensions (6D UED). As mentioned in Sec. II.1 the novel possibility here compared to 5D UED is the scalar photon () LKP. As a spin zero particle, it has no spin-dependent interactions and can only be detected through its spin-independent elastic scattering.
Fig. 14a (Fig. 14b) is the analogue of Fig. 10 (Fig. 11) for the case of LKP. In Fig. 14a we show lower bounds on versus the mass of the scalar photon, for a fixed Higgs mass ( GeV). The solid lines indicate the current experimental limits from CDMS (blue) and XENON10 (red). The dashed lines are the projected sensitivities of SuperCDMS 25 kg and XENON100 and the dotted line is the projected sensitivity of a ton-scale detector. Since the cosmologically preferred mass range for is much lower ( GeV before accounting for coannihilations) than for the LKP in 5D UED, the constraints are quite powerful – in particular, the future ton-scale experiments are expected to cover most of the interesting mass splitting () region.
In Fig. 14b we show lower bounds of the Higgs mass as a function of for a fixed . The WMAP preferred parameter space is marked as the green shaded region, while the black solid line is the LEP II lower limit on . The contours resemble in shape those seen earlier in Fig. 11. In particular, we notice that within the LEP II allowed range, the Higgs mass does not have a large impact on the direct detection bounds. However, if the LHC finds a SM Higgs boson with a mass smaller than 300 GeV, then the WMAP bound would constrain the mass of within a relatively narrow mass ranges at a given mass splitting (). For example in Fig. 14b, where the fixed mass splitting is , the corresponding constraint on the mass of would be . In fact, this conclusion is rather insensitive to the particular choice of . This is due to the fact that self-annihilation is helicity-suppressed and gauge boson final states are dominant in the WMAP allowed regions. Therefore, Fig. 14b would look qualitatively similar, if a different value of were used.
V. Conclusions
The dark matter puzzle is among the most intriguing questions in particle physics. Its origin resides in cosmological observations such as the rotation curves of galaxies, cosmic microwave background, gravitational lensing, large scale structure, the mass to luminosity ratio and so on. Interestingly, many scenarios of new physics beyond the Standard Model provide a stable neutral particle which, in principle, can be produced and observed at colliders. In fact, one of the primary motivations for SUSY has always been the fact that it naturally accommodates a WIMP candidate. More recently, we have learned that extra dimensional models provide a viable alternative to SUSY dark matter, namely KK dark matter. Both of these scenarios have been attracting a lot of attention in terms of collider and astrophysical aspects. In this paper we performed a comprehensive phenomenological analysis of KK dark matter in universal extra dimensions, extending previous studies by considering new LKP candidates ( and ). We also revisited the cases of and LKP, focusing on the possibility of a small mass splitting with the KK quarks. All of these features can be realized in non-minimal UED scenarios and therefore deserve attention.
In our analysis we included the relevant theoretical constraints from cosmology (the relic density of KK dark matter) and particle physics (low energy precision data). We accounted for all coannihilation processes in our relic density calculation, focusing on coannihilations with KK quarks since they play an important role for direct detection at small mass splittings.
We then contrasted the sensitivities of the LHC and the different types of direct detection experiments, and exhibited their complementarity. We demonstrated that the parameter space is both convenient and sufficient for a simultaneous discussion of collider and direct detection searches. Collider experiments like the LHC and possibly ILC are sensitive to the region of relatively low | 2019-06-19 23:25:11 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8076652884483337, "perplexity": 872.7568642437313}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999066.12/warc/CC-MAIN-20190619224436-20190620010436-00310.warc.gz"} |
https://www.studysmarter.us/explanations/math/calculus/rolles-theorem/ |
# Rolle's Theorem
There are some theorems or ideas in Calculus that may seem rather obvious. Rolle's Theorem is one such theorem. Let's say you leave your house to go for a walk. After your walk, you return home. Rolle's Theorem says that because you started and ended at the same place, you must have made a turn at some point during your walk. Though this fact seems evident, Rolle's Theorem is a significant discovery in Calculus.
## Three Hypotheses / Conditions of Rolle's Theorem
To be able to use Rolle's Theorem, a few conditions must be met. The function f should be:
1. continuous on the closed interval [a, b]
2. differentiable on the open interval (a, b)
3. such that f(a) = f(b)
## Rolle's Theorem Definition
Now that we've gone over the conditions for Rolle's Theorem, let's look at what this theorem says.
Rolle's Theorem states that if a function f is:
• continuous on the closed interval [a, b]
• differentiable on the open interval (a, b)
• and satisfies f(a) = f(b)
then there exists at least one number c in (a, b) such that f'(c) = 0.
Geometrically speaking, if a function meets the requirements listed above, then there is a point on the function where the slope of the tangent line is 0 (the tangent line is horizontal).
A continuous and differentiable function f that has points a and b such that f(a) = f(b) has at least one point c where the slope of the tangent line is 0 - StudySmarter Original
In our walking example, Rolle's Theorem says that since we started and ended at the same place, there must have been a moment where we made a turn (the derivative is 0).
### Rolle's Theorem vs. The Mean Value Theorem
Recall the Mean Value Theorem, which states that if a function f is:
• continuous on the closed interval [a, b]
• differentiable on the open interval (a, b)
then there is a number c such that a < c < b and f'(c) = (f(b) − f(a)) / (b − a).
Rolle's Theorem is a "special case" of the Mean Value Theorem. Rolle's Theorem says that if the requirements are met and there are points a and b such that f(a) = f(b), or f(b) − f(a) = 0, then there is a point c where f'(c) = 0. If we plug f(b) − f(a) = 0 in to the Mean Value Theorem equation for f'(c), we get f'(c) = 0. So, Rolle's Theorem is the case of the Mean Value Theorem where f(b) − f(a) = 0.
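Written out, with the Mean Value Theorem's formula restored:

$$f'(c) \;=\; \frac{f(b) - f(a)}{b - a} \;=\; \frac{0}{b - a} \;=\; 0.$$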
## Rolle's Theorem Proof
Let's assume that a function f is continuous on the interval [a, b], differentiable on the interval (a, b), and f(a) = f(b). Thus, the requirements of Rolle's Theorem are met. We must prove that the function has a point c where f'(c) = 0. In other words, the point where f'(c) = 0 occurs is either a maximum or minimum value (extrema) on the interval.
We know that our function will have extrema per the Extreme Value Theorem, which says that if a function is continuous on a closed interval, it is guaranteed to have a maximum value and a minimum value on that interval.
There are two cases:
1. The function is a constant value (a horizontal line segment).
2. The function is not a constant value.
### Case 1: The function is a constant value
This function, which meets the requirements of Rolle's Theorem, has a derivative of 0 everywhere - StudySmarter Original
Every point on the function meets the Rolle's Theorem requirement, as f'(x) = 0 everywhere.
### Case 2: The function is not a constant value
Because the function is not a constant value, it must change direction to start and end at the same function value. So, somewhere inside the graph, the function will either have a minimum, a maximum, or both.
This function, which meets the requirements of Rolle's Theorem, has both a minimum and maximum - StudySmarter Original
We must prove that the minimum or maximum (or both) occur when the derivative equals 0.
Extrema cannot occur when f'(x) > 0, because when f'(x) > 0 the function is increasing. At an extreme value, the function cannot be increasing. At a maximum point, the function cannot be increasing because we are already at the maximum value. At a minimum point, the function cannot be increasing, because then the function would have been a little smaller to the left of where we are now. Since we're at the minimum value, f(x) cannot be any smaller than it is now.
Extrema cannot occur when f'(x) < 0, because when f'(x) < 0 the function is decreasing. At an extreme value, the function cannot be decreasing. At a maximum point, the function cannot be decreasing, because f'(x) < 0 means f(x) was larger a little to the left of where we are now. Since we're at the maximum value, f(x) cannot be any larger than it is now. At a minimum point, the function cannot be decreasing because we are already at the minimum value.
Since f'(x) at such a point is neither less than 0 nor greater than 0, f'(x) must equal 0.
## Rolle's Theorem Step-by-Step Procedure
While no explicit formula is associated with Rolle's Theorem, there is a step-by-step process to find the point c (a code sketch follows this list).
1. Ensure that the function meets the hypotheses of Rolle's Theorem: continuous on the closed interval [a, b] and differentiable on the open interval (a, b).
2. Plug a and b into the function to guarantee that f(a) = f(b).
3. If the function meets all requirements of Rolle's Theorem, then we know that we are guaranteed at least one point c in (a, b) where f'(c) = 0.
4. To find c, we can set the first derivative equal to 0 and solve for x.
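The procedure is mechanical enough to sketch in code. Here is a minimal illustration in Python with SymPy (assuming SymPy is installed), applied to the function from Example 1 below:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.cos(x) + 2                  # the function from Example 1
a, b = 0, 2 * sp.pi                # the closed interval [a, b]

# Step 2: verify that f(a) = f(b)
assert sp.simplify(f.subs(x, a) - f.subs(x, b)) == 0

# Step 4: solve f'(x) = 0 on the open interval (a, b)
c_values = sp.solveset(sp.Eq(sp.diff(f, x), 0), x,
                       domain=sp.Interval.open(a, b))
print(c_values)                    # {pi}
```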
## Rolle's Theorem Examples
### Example 1
Show through Rolle's Theorem that f(x) = cos(x) + 2 over [0, 2π] has at least one value c such that f'(c) = 0. Then, find the maximum or minimum value of the function over the interval.
#### Step 1: Ensure that f(x) meets the Rolle's Theorem requirements
By nature, we know that the cosine function is continuous and differentiable everywhere.
#### Step 2: Check that f(a) = f(b)
Plugging in 0 and 2π into f(x): f(0) = cos(0) + 2 = 3 and f(2π) = cos(2π) + 2 = 3.
Since f(0) = f(2π), we can apply Rolle's Theorem.
#### Step 3: Set f'(x) = 0 to solve for x
By Rolle's Theorem, we are guaranteed at least one point c where f'(c) = 0. So we can find f'(x) = −sin(x) and set it equal to 0.
Using our knowledge of trigonometry and the unit circle, we know that the sine function equals 0 at 0, π, and every other integer multiple of π. However, the only multiple of π inside the open interval (0, 2π) is π itself. So, in our interval, f'(x) = 0 when x = π.
#### Step 4: Plug in c values to f(x) to find the maximum or minimum function values
f(π) = cos(π) + 2 = 1, so f(x) has a minimum value of 1 at x = π; its maximum value of 3 occurs at the endpoints x = 0 and x = 2π.
### Example 2
Let f(x) be the given polynomial. Does Rolle's Theorem guarantee a value c where f'(c) = 0 over the interval [−1, 1]? Explain why or why not.
To check if we can apply Rolle's Theorem, we must ensure that the requirements are met.
#### Step 1: Check if f(x) is continuous and differentiable
We know that f(x) is continuous over the given interval because it is a polynomial. We also know that f(x) is differentiable over the interval: its derivative is again a polynomial.
#### Step 2: Check if f(-1) = f(1)
When we plug in x = −1 and x = 1, we get the same value, so f(−1) = f(1).
#### Step 3: Apply Rolle's Theorem
Since f(x) is continuous over [−1, 1], differentiable over (−1, 1), and f(−1) = f(1), Rolle's Theorem tells us that there exists a number c in (−1, 1) such that f'(c) = 0.
## Rolle's Theorem - Key takeaways
- Rolle's Theorem is a special case of the Mean Value Theorem where f(b) − f(a) = 0
- Rolle's Theorem states that if a function f is:
  - continuous on the closed interval [a, b]
  - differentiable on the open interval (a, b)
  - and satisfies f(a) = f(b)
  - then there exists at least one number c in (a, b) such that f'(c) = 0
- To find c, apply Rolle's Theorem:
  - Ensure that the requirements are met
  - Check that the endpoints have the same function value
  - Set the first derivative of the function equal to 0 and solve for x
## Frequently Asked Questions about Rolle's Theorem
Rolle's Theorem is a special case of the Mean Value Theorem that states that if a function is continuous over the closed interval [a, b], differentiable over the open interval (a, b), and f(a) = f(b), then there exists at least one number in (a, b) such that f'(c) = 0.
Essentially, Rolle's Theorem is the same as MVT. It is a special case of the MVT where f(b) - f(a) = 0.
An example of Rolle's Theorem is the function f(x) = cos(x) + 2 over the interval [0, 2pi]. Rolle's Theorem states that because this function meets the theorem's requirements, there exists at least one value such that f'(c) = 0.
Assume that the requirements of Rolle's Theorem hold for a function f. We can prove Rolle's Theorem by considering the two cases: the function is a constant value and the function is not a constant value. If the function is a constant value, then f'(x) = 0 everywhere, so Rolle's Theorem holds over the entire interval (a, b). If it is not a constant value, then we know the function must change direction in order to start and end at the same function value. So, somewhere inside the graph, the function will either have a minimum, a maximum, or both. Minimum and maximum values occur when f'(x) = 0.
Rolle's Theorem states that the value c where f'(c) = 0 is in the open interval (a, b). Thus, endpoints are not included.
## Final Rolle's Theorem Quiz
Question: State Rolle's Theorem.
Answer: Rolle's Theorem says that if a function f is continuous on the closed interval [a, b], differentiable on the open interval (a, b), and f(a) = f(b), then there is at least one value c in (a, b) where f'(c) = 0.
Question: Rolle's Theorem is a special case of the Mean Value Theorem where...
Answer: f(b) - f(a) = 0, or f(b) = f(a).
Question: How can we interpret Rolle's Theorem geometrically?
Answer: If a function meets the requirements of Rolle's Theorem, then there is a point on the function between the endpoints where the tangent line is horizontal, or the slope of the tangent line is 0.
| 2022-11-26 09:38:47 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8226984739303589, "perplexity": 387.665702633621}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446706285.92/warc/CC-MAIN-20221126080725-20221126110725-00057.warc.gz"} |
https://solvedlib.com/n/asmt-11-other-exponential-models-and-sinusoidal-functions,8793266 | # Asmt 11 Other Exponential Models and Sinusoidal Functions: Problem 4
###### Question:
An object in a very viscous fluid is attached to a spring. Its vertical displacement (in cm) above the equilibrium position at t seconds is given by … . Initially, it is 6.01 cm above the equilibrium position and descending at 4.47 cm/s. Find A and B. Round your answers to at least … significant figures. At what time does the object change direction of motion from descending to ascending? Round your answer to at least … significant figures.
#### Similar Solved Questions
##### Find the function F(x) = ∫ (8t − 5) dt of x and evaluate it at x = 2, x = 5 and x = 7: find F(x), F(2), F(5), F(7).
##### What is the value of x? :) Assume that segments that appear to be tangent are tangent. (Round to the nearest tenth, one decimal place.) Thank you for your help.
##### Organic Chemistry Help Please A student performs the following steps to isolate biphenyl from a solid 1:1 mixture of biphenyl and salicylic acid. Dissolve 2.0 grams of the mixture in 20 mL of ether. Extract the ether layer 2-3 times with an aqueous solution of 5% sodium bicarbonate. Add 2-3 spatula…
##### 11. (a) [2 marks] This 6th period element has the following ionization energies (kJ mol⁻¹): 1st 503, 2nd 965, 3rd 3,600, 4th 4,530, 5th 5,600. Identify the element and explain your choice. (b) [3 marks] Arrange the following in order of increasing lattice energy. Explain your ordering: SrO, BaS, KBr.
##### Analysis of variance compares the means of a response variable for several groups. ANOVA compares the variation within each group to the variation of the mean of each group. The ratio of these two is the F statistic from an F distribution with (number of groups − 1) as the numerator degrees of freed…
##### Consider the graph of f(x) = log(…). How do the graphs of f(x) + 6, f(x − 6), and f(−x) compare to the graph of f(x)? Drag tiles to the empty boxes to correctly complete each sentence: The graph of f(x) + 6 is ___ the graph of f(x). The graph of f(x − 6) is ___ the graph of f(x). The graph of f(−x) is ___ the graph of f(x). Tiles: reflection across the x-axis of; reflection across the y-axis of; shifted 6 units up from; shifted 6 units down from; shifted 6 units right from; shifted 6 units left from.
##### Describe the functional anatomy of the duct system that conveys bile from the liver and digestive juice from the pancreas to the lumen of the duodenum.
##### A 15-cm-long microscope has an eyepiece with a focal length of 2.9 cm and an objective with a focal length of 0.35 cm. What is the approximate magnification? M = ?
##### What do you anticipate will be the greatest challenge of operationalizing strategy across various business functions? Do you think the challenges change depending on the circumstances? Provide an example within nursing.
##### Under the Affordable Care Act, all managed care organizations must: Group of answer choices a. provide the 10 essential benefits categories. b. contract with states for Medicaid enrollees. c. reduce patient costs. d. increase patient enrollments.
##### QUESTION 2 Ahmad opened a Bike Repair Shop. He shared his net cash flow (NCF) figures for the 3 years, including the $17,000 amount that it took to get started in business. (a) Determine the rate of return value. Year / Net Cash Flow, $: −17,000; 20,000; −5,000; 8,000 …
##### If you use tape to hinge together two pocket mirrors as shown and place the mirrors at a $120^{\circ}$ angle, then a coin placed between the mirrors will be reflected, giving a pattern with $120^{\circ}$ and $240^{\circ}$ rotational symmetry. a. What kinds of symmetries occur when the mirrors are at a right angle? b. Experiment by forming various angles with two mirrors. Be sure to try $60^{\circ}, 45^{\circ},$ and $30^{\circ}$ angles. Record the number of coins you see, including the actual c…
##### Calculating the pH of a strong base solution: A chemist dissolves 891 mg of pure barium hydroxide in enough water to make 90 mL of solution. Calculate the pH of this solution at 25 °C. Be sure your answer has the correct number of significant digits.
EHHo 60 Venrus H: 450. rndom samph ol gizain = 24 i5 oblainad from populabion " Complete Patd (0) trouph (d) bebu EE Click here lo viat the !-Distribution Aree Ripht Tail. knoen obe Fnennl dElntbuude (0) f* =,40.3 and $ 14.4 . compule the lest stalislia: t0 = Drround three decimal places a5 ... 5 answers ##### 0,60.40.250.10,05Ae dccrt5s J6 thetelabonstip ngative Inversc) coreltanYriabic Incter01 0 Lhc <(eluonsnnpositlve conrelauonOernitve leutiorTncicoitcnanThe relabaushln betreen the number 0' Ecmal 4Bolibar rorclxtanclutceprobabyilitv 0t surxw lis ar'Soaich tdc Uci 0,6 0.4 0.25 0.1 0,05 Ae dccrt5s J6 the telabonstip ngative Inversc) coreltan Yriabic Incter01 0 Lhc < (eluonsnn positlve conrelauon Oernitve leutior Tnci coitcnan The relabaushln betreen the number 0' Ecmal 4Bolibar rorclxtan clutce probabyilitv 0t surxw lis ar 'Soaich tdc Uci... 1 answer ##### Statement of Cost of Goods Manufactured for a Manufacturing Company Cost data for Sandusky Manufacturing Company... Statement of Cost of Goods Manufactured for a Manufacturing Company Cost data for Sandusky Manufacturing Company for the month ended January 31 are as follows: Inventories January 1 January 31 Materials$225,250 $200,470 Work in process 148,670 132,310 Finished goods 117,130 136,320 ... 1 answer ##### 1 points Save Answer Your employer contributes$100 at the end of each week to your...
1 points Save Answer Your employer contributes $100 at the end of each week to your retirement account. The account will earn a weekly interest rate of .17 percent. How much will the account be worth when you retire in 35 years?$1,153,340.58 O $182.000.00$1,235,722.05 $1,081.819.46$1,181.995.01... | 2022-08-12 03:06:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48609790205955505, "perplexity": 5432.660945345029}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571538.36/warc/CC-MAIN-20220812014923-20220812044923-00488.warc.gz"} |
https://www.yaclass.in/p/mathematics-state-board/class-8/algebra-3091/graph-17064/re-1a1f573d-6f1f-4dbd-a446-b060f82a78f8 | In the previous topics, we have seen how to plot the points $$(1,5)$$, $$(-9, 10)$$.
Is it possible to plot the points with much bigger values $$(56, 78)$$, $$(89, 45)$$?
Yes, it is possible. While plotting points on a graph, there are situations where the value of $$x$$ is much bigger than the value of $$y$$, or the value of $$y$$ is much bigger than the value of $$x$$. In these cases, we use the concept of a scale on the coordinate axes as per the requirement, and we represent the measurement at the right side corner of the graph.
Scale is the measurement that has been taken for $$1$$ unit in the graph.
Example:
1. Look at the graph and find the scale.
Solution:
In this graph, we can see that in the $$x$$ - axis, the value increases by $$1$$ per unit.
Hence, the scale of $$x$$ - axis is $$1 \ cm =$$ $$1 \ unit$$.
In the $$y$$ - axis, the value increases by $$10$$ per unit.
The scale of $$y$$ - axis is $$1 \ cm =$$ $$10 \ units$$.
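The same idea can be mimicked in plotting software, where the tick spacing plays the role of the scale. A small illustrative sketch (not part of the lesson) in Python with Matplotlib:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(0, 6)            # x values: 0..5
y = 10 * x                     # y grows 10 units for every 1 unit of x

fig, ax = plt.subplots()
ax.plot(x, y, marker="o")
ax.set_xticks(np.arange(0, 6, 1))     # x-axis scale: 1 cm = 1 unit
ax.set_yticks(np.arange(0, 60, 10))   # y-axis scale: 1 cm = 10 units
ax.set_xlabel("x")
ax.set_ylabel("y")
plt.show()
```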
2. Find the scale of the given graph.
Solution:
In the $$x$$ - axis, the value of $$x$$ increases by $$50$$ per unit.
Similarly, in the $$y$$ - axis, the value of $$y$$ increases by $$50$$ per unit.
Therefore, the scale is given by:
$$x$$ - axis $$1 \ cm =$$ $$50 \ units$$
$$y$$ - axis $$1 \ cm =$$ $$50 \ units$$ | 2022-11-26 13:48:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6159111857414246, "perplexity": 229.1066242231616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446706291.88/warc/CC-MAIN-20221126112341-20221126142341-00199.warc.gz"} |
http://mathhelpforum.com/calculus/4306-needs-help-tough-question.html | # Math Help - Needs help on a tough question
1. ## Needs help on a tough question
Question: Use the fact that 7 cos x - 4 sin x = (3/2)(cos x + sin x) + (11/2)(cos x - sin x) to find the exact value of (upper limit: 0.5π, lower limit: 0) ∫((7 cos x - 4 sin x)/(cos x + sin x)) dx.
I need help with the steps, thanks in advance!
2. Hello, margaritas!
Edit: I got your correction . . . I'll edit my solution.
Use the fact that: $7\cos x - 4\sin x \:= \:\frac{3}{2}(\cos x + \sin x) + \frac{11}{2}(\cos x - \sin x)$
to find the exact value of: . $\int^{\frac{\pi}{2}}_0\frac{7\cos x - 4\sin x}{\cos x + \sin x}\,dx$
Answer: $\frac{3}{4}\pi$ . . . Right!
The function is: . $\frac{7\cos x - 4\sin x}{\cos x + \sin x}\;= \;\frac{\frac{3}{2}(\cos x + \sin x) + \frac{11}{2}(\cos x - \sin x)}{\cos x + \sin x}$
. . $= \;\frac{\frac{3}{2}(\cos x + \sin x)}{\cos x + \sin x} + \frac{\frac{11}{2}(\cos x - \sin x)}{\cos x + \sin x}$ $=\;\frac{3}{2} + \frac{11}{2}\cdot\frac{\cos x - \sin x}{\cos x + \sin x}$
The integration is: . $\frac{3}{2}\!\int\!\! dx + \frac{11}{2}\!\!\int\frac{\cos x - \sin x}{\cos x + \sin x}\,dx$
. . For the second integral, let $u = \cos x + \sin x\quad\Rightarrow\quad du = (\cos x - \sin x)\,dx$
. . then we have: . $\int\frac{du}{u} \:=\:\ln|u| \:=\:\ln|\cos x + \sin x|$
Hence, we have: . $\frac{3}{2}x + \frac{11}{2}\ln|\cos x + \sin x|\:\bigg]^{\frac{\pi}{2}}_0$
Evaluate: . $\left[\frac{3}{2}\left(\frac{\pi}{2}\right) + \frac{11}{2}\ln\left|\cos \frac{\pi}{2} + \sin \frac{\pi}{2}\right|\right] -$ $\left[\frac{3}{2}\cdot0 + \frac{11}{2}\ln|\cos0 + \sin0|\right]$
. . $= \;\left[\frac{3}{4}\pi + \frac{11}{2}\ln(0 + 1)\right] - \left[0 + \frac{11}{2}\ln(1 + 0)\right] \;= \;\boxed{\frac{3}{4}\pi}$
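The closed form is easy to sanity-check numerically; a minimal sketch in Python (assuming NumPy and SciPy are available):

```python
import numpy as np
from scipy.integrate import quad

f = lambda t: (7*np.cos(t) - 4*np.sin(t)) / (np.cos(t) + np.sin(t))
value, abserr = quad(f, 0, np.pi/2)   # numerical integral over [0, pi/2]
print(value, 3*np.pi/4)               # both ~ 2.3561944901923448
```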
3. I recommend using a Weierstrass substitution in this type of problem, because the integrand is a rational function of sine and cosine.
4. Oops, all the '?'s should be pi's instead.
But I think the solution by soroban will be helpful so thanks muchly! | 2015-05-24 10:19:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9530540704727173, "perplexity": 1658.2773039944168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927863.72/warc/CC-MAIN-20150521113207-00014-ip-10-180-206-219.ec2.internal.warc.gz"} |
http://www.gwern.net/Nootropics | # Nootropics
Notes on nootropics I tried, and my experiments (nootropics, psychology, experiments, predictions, statistics, DNB, shell, Haskell, R, power analysis, survey, Bayes, reviews)
created: 02 Jan 2010; modified: 20 Dec 2018; status: in progress; confidence: likely;
A record of nootropics I have tried, with thoughts about which ones worked and did not work for me. These anecdotes should be considered only as anecdotes, and one’s efforts with nootropics a hobby to put only limited amounts of time into due to the inherent limits of drugs as a force-multiplier compared to other things like programming1; for an ironic counterpoint, I suggest the reader listen to a video of Jonathan Coulton’s I Feel Fantastic while reading.
# Background
Your mileage will vary. There are so many parameters and interactions in the brain that any of them could be the bottleneck or responsible pathway, and one could fall prey to the common U-shaped dose-response curve (eg. Yerkes-Dodson law; see also Chemistry of the adaptive mind & de Jongh et al 2007) which may imply that the smartest are those who benefit least23 but ultimately they all cash out in a very few subjective assessments like energetic or motivated, with even apparently precise descriptions like working memory or verbal fluency not telling you much about what the nootropic actually did. It’s tempting to list the nootropics that worked for you and tell everyone to go use them, but that is merely generalizing from one example (and the more nootropics - or meditation styles, or self-help books, or getting things done systems - you try, the stronger the temptation is to evangelize). The best you can do is read all the testimonials and studies and use that to prioritize your list of nootropics to try. You don’t know in advance which ones will pay off and which will be wasted. You can’t know in advance. And wasted some must be; to coin a Umeshism: if all your experiments work, you’re just fooling yourself. (And the corollary - if someone else’s experiments always work, they’re not telling you everything.)
The above are all reasons to expect that even if I do excellent single-subject design self-experiments, there will still be the old problem of internal validity versus external validity: an experiment may be wrong or erroneous or unlucky in some way (lack of internal validity) or be right but not matter to anyone else (lack of external validity). For example, alcohol makes me sad & depressed; I could run the perfect blind randomized experiment for hundreds of trials and be extremely sure that alcohol makes me less happy, but would that prove that alcohol makes everyone sad or unhappy? Of course not, and as far as I know, for a lot of people alcohol has the opposite effect. So my hypothetical alcohol experiment might have tremendous internal validity (it does prove that I am sadder after inebriating), and zero external validity (someone who has never tried alcohol learns nothing about whether they will be depressed after imbibing). Keep this in mind if you are minded to take the experiments too seriously.
Somewhat ironically given the stereotypes, while I was in college I dabbled very little in nootropics, sticking to melatonin and tea. Since then I have come to find nootropics useful, and intellectually interesting: they shed light on issues in philosophy of biology & evolution, argue against naive psychological dualism and for materialism, offer cases in point on the history of technology & civilization or recent psychology theories about addiction & willpower, challenge our understanding of the validity of statistics and psychology - where they don’t offer nifty little problems in statistics and economics themselves, and are excellent fodder for the young Quantified Self movement4; modafinil itself demonstrates the little-known fact that sleep has no accepted evolutionary explanation. (The hard drugs also have more ramifications than one might expect: how can one understand the history of Southeast Asia and the Vietnamese War without reference to heroin, or more contemporaneously, how can one understand the lasting appeal of the Taliban in Afghanistan and the unpopularity & corruption of the central government without reference to the Taliban’s frequent anti-drug campaigns or the drug-funded warlords of the Northern Alliance?)
## Golden age
Nootropics have been around a long time, but they’ve never been so prominent, easily accessed, cheap, or available in such a variety. I think there is no single factor responsible but rather existing trends progressing to the point where it’s possible to obtain much more obscurer things than before.
(In particular, I don’t think it’s because there’s a sudden new surge of drugs. FDA drug approval has been decreasing over the past few decades, so this is unlikely a priori. More specifically, many of the major or hot drugs go back a long time. Bacopa goes back millennia, melatonin I don’t even know, piracetam was the ’60s, modafinil was ’70s or ’80s, ALCAR was ’80s AFAIK, Noopept & coluracetam were ’90s, and so on.)
What I see as being the relevant trends are a combination of these trends:
1. the rise of IP scofflaw countries which enable the manufacture of known drugs: India does not respect the modafinil patents, enabling the cheap generics we all use, and Chinese piracetam manufacturers don't give a damn about the FDA's chilling-effect moves in the US. If there were no Indian or Chinese manufacturers, where would we get our modafinil? Buy them from pharmacies at $10 a pill or worse? It might be worthwhile, but think of the chilling effect on new users.
2. along with the previous bit of globalization is an important factor: shipping is ridiculously cheap. The most expensive S&H in my modafinil price table is ~$15 (and most are international). To put this in perspective, I remember in the 90s you could easily pay $15 for domestic S&H when you ordered online - but it's 2013, and the dollar has lost at least half its value, so in real terms, ordering from abroad may be like a quarter of what it used to cost, which makes a big difference to people dipping their toes in and contemplating a small order to try out this 'nootropics' thing they've heard about.
3. as scientific papers become much more accessible online due to Open Access, digitization by publishers, and cheap hosting for pirates, the available knowledge about nootropics increases drastically. This reduces the perceived risk by users, and enables them to educate themselves and make much more sophisticated estimates of risk and side-effects and benefits. (Take my modafinil page: in 1997, how could an average person get their hands on any of the papers available up to that point? Or get detailed info like the FDA's prescribing guide? Even assuming they had a computer & Internet?)
4. the larger size of the community enables economies of scale and increases the peak sophistication possible. In a small nootropics community, there is likely to be no one knowledgeable about statistics/experimentation/biochemistry/neuroscience/whatever-you-need-for-a-particular-discussion, and the available funds increase: consider /r/Nootropics's testing program, which is doable only because it's a large lucrative community to sell to so the sellers are willing to donate funds for independent lab tests/Certificates of Analysis (COAs) to be done. If there were 1000 readers rather than 23,295, how could this ever happen short of one of those 1000 readers being very altruistic?
5. Nootropics users tend to stick. If modafinil works well for you, you're probably going to keep using it on and off. So simply as time passes, one would expect the userbase to grow. Similarly for press coverage and forum comments and blog posts: as time passes, the total mass increases and the more likely a random person is to learn of this stuff.

## Defaults

I do recommend a few things, like modafinil or melatonin, to many adults, albeit with misgivings about any attempt to generalize like that. (It's also often a good idea to get powders, see the appendix.) Some of those people are helped; some have told me that they tried and the suggestion did little or nothing. I view nootropics as akin to a biological lottery; one good discovery pays for all. I forge on in the hopes of further striking gold in my particular biology. Your mileage will vary. All you have to do, all you can do is to just try it. Most of my experiences were in my 20s as a right-handed 5'11 white male weighing 190-220lbs, fitness varying over time from not-so-fit to fairly fit. In rough order of personal effectiveness weighted by costs+side-effects, I rank them as follows:
1. Modafinil/armodafinil (less than weekly for overnight; skipping days for day use)
2. Melatonin (daily)
3. Caffeine+theanine (daily)
4. Nicotine (weekly)
5. Piracetam+choline (daily)
6. Vitamin D (daily)
7. Sulbutiamine (daily)

(People aged <=18 shouldn’t be using any of this except harmless stuff - where one may have nutritional deficits - like fish oil & vitamin D; melatonin may be especially useful, thanks to the effects of screwed-up school schedules & electronics use on teenagers’ sleep. Changes in effects with age are real - amphetamines’ stimulant effects and modafinil’s histamine-like side-effects come to mind as examples.)

# Acetyl-l-carnitine (ALCAR)

No effects, alone or mixed with choline+piracetam. This is pretty much as expected from reports about ALCAR (Examine.com), but I had still been hoping for energy boosts or something. (Bought from Smart Powders.)

# Adderall

Adderall is a mix of 4 amphetamine salts (FDA adverse events), and not much better than the others (but perhaps less addictive); as such, like caffeine or methamphetamine, it is not strictly a nootropic but a cognitive enhancer, and can be tricky to use right (for how one should use stimulants, see How To Take Ritalin Correctly). I ordered 10x10mg Adderall IR off Silk Road (Wikipedia). On the 4th day after confirmation from the seller, the package arrived. It was a harmless-looking little padded mailer. Adderall as promised: 10 blue pills with markings, in a double ziplock baggy (reasonable; it’s not cocaine or anything). They matched pretty much exactly the descriptions of the generic I had found online. (Surprisingly, apparently both the brand name and the generic are manufactured by the same pharmacorp.)

I took the first pill at 12:48 PM. 1:18: still nothing really - head is a little foggy if anything. Later I noticed a steady sort of mental energy lasting for hours (got a good deal of reading and programming done) until my midnight walk, when I still felt alert, and had trouble sleeping. (Zeo reported a ZQ of 100, but a full 18 minutes awake, 2 or 3 times the usual amount.)

At this point, I began thinking about what I was doing. Black-market Adderall is fairly expensive: $4-10 a pill vs prescription prices which run more like $60 for 120 20mg pills. It would be a bad idea to become a fan without being quite sure that it is delivering bang for the buck. Now, why the piracetam mix as the placebo, as opposed to my other available powder, creatine powder, which has much smaller mental effects? Because the question for me is not whether the Adderall works (I am quite sure that the amphetamines have effects!) but whether it works better for me than my cheap legal standbys (piracetam & caffeine) - does Adderall have a marginal advantage for me? Hence, I want to know whether Adderall is better than my piracetam mix. People frequently underestimate the power of placebo effects, so it’s worth testing. (Unfortunately, it seems that there is experimental evidence that people on Adderall know they are on Adderall and also believe they have improved performance, when they do not5. So the blind testing does not buy me as much as it could.)

## Adderall blind testing

### Blinding yourself

But how to blind myself? I used my pill maker to make 9 OO pills of piracetam mix, and then 9 OO pills of piracetam mix+the Adderall, then I put them in a baggy.
The idea is that I can blind myself as to what pill I am taking that day, since at the end of the day I can just look in the baggy and see whether a placebo or Adderall pill is missing: the big capsules are transparent, so I can see whether there is a crushed-up blue Adderall in the end or not. If there are fewer Adderall than placebo, I took an Adderall, and vice-versa. Now, since I am checking at the end of each day, I also need to remove or add the opposite pill to maintain the ratio and make it easy to check the next day; more importantly, I need to replace or remove a pill because otherwise the odds will become skewed and I will know how they are skewed. (Imagine I started with 4 Adderalls and 4 placebos, and then 3 days in a row I draw placebos but I don’t add or remove any pills; the next day, because most of the placebos have been used up, there’s only a small chance I will get a placebo…) This is only one of many ways to blind myself; for example, instead of using one bag, one could use two bags and blindly pick a bag to take a pill out of, balancing contents as before. There are many other procedures one could use as well: have an accomplice mix up a sequence of pills and record what the sequence was; don’t count & see, but blindly take a photograph of the pill each day; etc. (See also my Vitamin D and day modafinil trials.)
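The replace-the-opposite rule is the crux; as a minimal sketch (my own illustration, not part of the protocol itself), a quick R simulation shows what goes wrong without replacement - the daily probability of drawing an Adderall drifts away from 50% as the counts diverge, until the final draws are fully predictable:

set.seed(2013)
adderall <- 9; placebo <- 9
p <- numeric(18)
for (i in 1:18) {
 p[i] <- adderall / (adderall + placebo) # probability today's pill is Adderall
 if (runif(1) < p[i]) adderall <- adderall - 1 else placebo <- placebo - 1
}
round(p, 2) # drifts towards 0 or 1; replacing each day's pill keeps it at 0.50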
### Results

1. Began double-blind trial. Today I took one pill blindly at 1:53 PM. At the end of the day, when I have written down my impressions and guessed whether it was one of the Adderall pills, I can look in the baggy and count and see whether it was. Around 3, I begin to wonder whether it was Adderall because I am arguing more than usual on IRC and my heart rate seems a bit high just sitting down. 6 PM: I’ve started to think it was a placebo. My heart rate is back to normal, I am having difficulty concentrating on long text, and my appetite has shown up for dinner (although I didn’t have lunch, I don’t think I had lunch yesterday, and yesterday the hunger didn’t show up until past 7). Productivity-wise, it has been a normal day. All in all, I’m not too sure, but I think I’d guess it was Adderall with 40% confidence (another way of saying placebo with 60% confidence). When I go to examine the baggie at 8:20 PM, I find out… it was an Adderall pill after all. Oh dear. One little strike against Adderall that I guessed wrong. It may be that the problem is that I am intrinsically a little worse today (normal variation? come-down from Adderall?). So, a change to the protocol: I will take a pill every other day - a day to wash out and reacclimate to baseline, and then an experimental day. In subsequent entries, assume there was at least one intervening break or placebo day.
2. Took random pill at 2:02 PM. Went to lunch half an hour afterwards, talked until 4 - more outgoing than my usual self. I continued to be pretty energetic despite not taking my caffeine+piracetam pills, and though it’s now 12:30 AM and I listened to TAM YouTube videos all day while reading, I feel pretty energetic and am reviewing Mnemosyne cards. I am pretty confident the pill today was Adderall. Hard to believe placebo effect could do this much for this long, or that normal variation would account for this. I’d say 90% confidence it was Adderall. I do some more Mnemosyne, typing practice, and reading in a Montaigne book, and finally get tired and go to bed around 1:30 AM or so. I check the baggie when I wake up the next morning, and sure enough, it had been an Adderall pill. That makes me 1 for 2.
3. Took pill 1:27 PM. At 2 my hunger gets the best of me (despite my usual tea drinking and caffeine+piracetam pills) and I eat a large lunch. This makes me suspicious it was placebo - on the previous days I had noted a considerable appetite-suppressant effect. 5:25 PM: I don’t feel unusually tired, but nothing special about my productivity. 8 PM: no longer so sure. Read and excerpted a fair bit of research I had been putting off since the morning. By 9 or 10 I had begun to wonder whether it was really Adderall, but I didn’t feel confident saying it was; my feeling could be fairly described as 50%. After putting away all the laundry at 10, still feeling active, I check. It was Adderall. I can’t claim this one either way.
4. Break; this day/night was for trying armodafinil, pill #1.
5. Took pill around 6 PM; I had a very long drive to and from an airport ahead of me, ideal for Adderall. In case it was Adderall, I chewed up the pill - by making it absorb faster, more of the effect would be there when I needed it, during driving, and not lingering in my system past midnight. Was it? I didn’t notice any change in my pulse, I yawned several times on the way back, and my conversation was not more voluminous than usual. I did stay up later than usual, but that’s fully explained by walking to get ice cream. All in all, my best guess was that the pill was placebo, and I feel fairly confident but not hugely confident that it was placebo. I’d give it ~70%. And checking the next morning… I was right! Finally.
6. Took pill 12:11 PM. I am not certain. While I do get some things accomplished (a fair amount of work on the Silk Road article and its submission to places), I also have some difficulty reading through a fiction book (Sum) and I seem kind of twitchy, constantly shifting windows. I am weakly inclined to think this is Adderall (say, 60%). It’s not my normal feeling. Next morning - it was Adderall.
7. Week-long break - armodafinil #2 experiment, volunteer work.
8. Took pill #6 at 12:35 PM. Hard to be sure. I ultimately decided that it was Adderall because I didn’t have as much trouble as I normally would in focusing on reading and then finishing my novel (Surface Detail) despite my family watching a movie, though I didn’t notice any lack of appetite. Call this one 60-70% Adderall. I check the next evening and it was Adderall.
9. Took pill at 10:50 AM. At 12:30 I watch the new Captain America6, and come out as energetic as I went in, not hungry for snacks at all during it; at this point, I’m pretty confident (70%) that it was Adderall. At 5 I check, and it was. Overall, a pretty normal day, save for leading up to the third armodafinil trial.
10. Just 3 Adderall left; took random pill at 12:30. Hopefully I can get a lot of formatting done on hafu. I do manage to do a lot of work on it, and my appetite seems minor up until 8 PM; nothing else stands out besides those two observations, so perhaps 60% that it was Adderall. I check the next morning, and it was not.
11. Skipping the break day since it was placebo yesterday and I’d like to wind up the Adderall trials. Pill at 12:24 PM. I get very hungry around 3 PM, and it’s an unproductive day even considering how much stress and aggravation and the 3 hours a failed Debian unstable upgrade cost me. I feel quite sure (75%) it was placebo. It was.
12. Took pill at 11:27 AM. Moderately productive. Not entirely sure. 50% either way. (It’s placebo.)
13. Pill at 12:40 PM. I spend entirely too much time arguing matters related to a LW post and on IRC, but I manage to channel it into writing a new mini-essay on my past intellectual sins. This sort of thing seems like Adderall behavior, and I don’t get hungry until much later. All in all, I feel easily 75% sure it’s Adderall; and it was.
14. 12:18 PM. (There are/were just 2 Adderall left now.) I manage to spend almost the entire afternoon single-mindedly concentrating on transcribing two parts of a 1996 Toshio Okada interview (it was very long, and the formatting more challenging than expected), which is strong evidence for Adderall, although I did feel fairly hungry while doing it. I don’t go to bed until midnight and sleep very poorly - despite taking triple my usual melatonin! Inasmuch as I’m already fairly sure that Adderall damages my sleep, this makes me even more confident (>80%). When I grumpily crawl out of bed and check: it’s Adderall. (One Adderall left.)
15. 10:50 AM. Normal appetite; I try to read through Edward Luttwak’s The Grand Strategy of the Byzantine Empire, slow going. Overall, I guess it was placebo with 70% - I notice nothing I associate with Adderall. I check it at midnight, and it was placebo.
16. 11:30 AM. By 2:30 PM, my hunger is quite strong and I don’t feel especially focused - it’s difficult to get through the tab-explosion of the morning, although one particularly stupid poster on the DNB ML makes me feel irritated like I might on Adderall. I initially figure the probability at perhaps 60% for Adderall, but when I wake up at 2 AM and am completely unable to get back to sleep, eventually racking up a Zeo score of 73 (compared to the usual 100s), there’s no doubt in my mind (95%) that the pill was Adderall. And it was the last Adderall pill indeed.

My predictions were substantially better than random chance7, so my default belief - that Adderall does affect me and (mostly) for the better - is borne out. I usually sleep very well, and 3 separate incidents of horrible sleep in a few weeks seems rather unlikely (though I didn’t keep track of dates carefully enough to link the Zeo data with the Adderall data). Between the price and the sleep disturbances, I don’t think Adderall is personally worthwhile.
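As a rough sanity-check on that claim (my own tally of the results log above, excluding the two 50-50 entries, rather than the formal analysis in the footnote): 10 of the 12 directional guesses were correct, which a simple binomial test says is unlikely if the pills were indistinguishable:

binom.test(10, 12)
# p-value ~0.039: improbable if each guess were a 50-50 coin-flip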
### Value of Information (VoI)

See also the discussion as applied to ordering modafinil & evaluating sleep experiments.

The amphetamine mix branded Adderall is terribly expensive to obtain even compared to modafinil, due to its tight regulation (a lower schedule than modafinil), its popularity in college as a study drug, and reportedly moves by its manufacturer to exploit its privileged position as a licensed amphetamine maker to extract more consumer surplus. I paid roughly $4 a pill but could have paid up to $10. Good stimulant hygiene involves recovery periods to avoid one’s body adapting to eliminate the stimulating effects, so even if Adderall were the answer to all my woes, I would not be using it more than 2 or 3 times a week. Assuming 50 uses a year (for specific projects, let’s say, and not ordinary aimless usage), that’s a cool $200 a year.

My general belief was that Adderall would be too much of a stimulant for me, as I am amphetamine-naive and Adderall has a bad reputation for letting one waste time on unimportant things. We could say my prediction was 50% that Adderall would be useful and worth investigating further. The experiment was pretty simple: blind randomized pills, 10 placebo & 10 active. I took notes on how productive I was, and the next day guessed whether it was placebo or Adderall before breaking the seal and finding out. I didn’t do any formal statistics for it, much less a power calculation, so let’s try to be conservative by penalizing the information quality heavily and assume the experiment yields only 25%-reliable information. So $\frac{200 - 0}{\ln 1.05} \times 0.50 \times 0.25 = 512$! The experiment probably used up no more than an hour or two total.
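The same calculation in R, for comparison with the later VoI sections: the $200/year at a 5% discount rate is worth $200/\ln(1.05)$ ≈ $4099, cut down by the 50% prior and the 25% information quality:

(200 / log(1.05)) * 0.50 * 0.25
# ~512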
Vaniver argues that since I start off not intending to continue Adderall, the analysis actually needs to be different:
In 3, you’re considering adding a new supplement, not stopping a supplement you already use. The I don’t try Adderall case has value $0, the Adderall fails case is worth -$40 (assuming you only bought 10 pills, and this number should be increased by your analysis time and a weighted cost for potential permanent side effects), and the Adderall succeeds case is worth $X-40-4099, where $X is the discounted lifetime value of the increased productivity due to Adderall, minus any discounted long-term side effect costs. If you estimate Adderall will work with p=.5, then you should try out Adderall if you estimate that $0.5 \times (X-4179) > 0$, ie. $X>4179$. (Adderall working or not isn’t binary, and so you might be more comfortable breaking down the various how effective Adderall is cases when eliciting X, by coming up with different levels it could work at, their values, and then using a weighted sum to get X. This can also give you a better target with your experiment - it needs to show a benefit of at least Y from Adderall for it to be worth the cost, and I’ve designed it so it has a reasonable chance of showing that.)
One thing to notice is that the default case matters a lot. This asymmetry is because you switch decisions in different possible worlds - when you would take Adderall but stop, you’re in the world where Adderall doesn’t work, and when you wouldn’t take Adderall but do, you’re in the world where Adderall does work (in the perfect-information case, at least). One of the ways you can visualize this is that you don’t penalize tests for giving you true negative information, and you reward them for giving you true positive information. (This might be worth a post by itself, and is very Litany of Gendlin.)
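Vaniver’s weighted-sum elicitation of X might look like the following sketch - the levels, probabilities, and dollar values here are made-up placeholders for illustration, not estimates from this page:

probs  <- c(mild=0.5, moderate=0.3, large=0.2)     # P(level | Adderall works at all); hypothetical
values <- c(mild=2000, moderate=6000, large=15000) # hypothetical discounted lifetime values
X <- sum(probs * values)                           # weighted sum: 5800
X > 4179                                           # TRUE: under these made-up numbers, worth trying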
Either way, this example demonstrates that anything you are doing expensively is worth testing extensively.
# Adrafinil

The adrafinil/Olmifon (bought simultaneously with the hydergine from Anti-Aging Systems, now Antiaging Central) was a disappointment: almost as expensive as actual modafinil, carrying a risk of liver problems, yet it did nothing whatsoever that I noticed. It is supposed to be subtler than modafinil, but that’s a little ridiculous.
The advantage of adrafinil is that it is legal & over-the-counter in the USA, so one removes the small legal risk of ordering & possessing modafinil without a prescription, and the retailers may be more reliable because they are not operating in a niche of dubious legality. Based on comments from others, the liver problem may have been overblown, and modafinil vendors post-2012 seem to have become more unstable, so I may give adrafinil (from another source than Antiaging Central) a shot when my modafinil/armodafinil run out.
# Aniracetam
Very expensive; I noticed minimal improvements when combined with sulbutiamine & piracetam+choline. Definitely not worthwhile for me.
# Bacopa monnieri
Bacopa is a supplement herb often used for memory or stress adaptation. Its chronic effects reportedly take many weeks to manifest, with no important acute effects. Out of curiosity, I bought 2 bottles of Bacognize Bacopa pills and ran a non-randomized non-blinded ABABA quasi-self-experiment from June 2014 to September 2015, measuring effects on my memory performance, sleep, and daily self-ratings of mood/productivity. Because of the very slow onset, small effective sample size, definite temporal trends probably unrelated to Bacopa, and noise in the variables, the results were as expected, ambiguous, and do not strongly support any correlation between Bacopa and memory/sleep/self-rating (+/-/- respectively).
Main article: Bacopa.
# Beta-phenylethylamine (PEA)
Based on this H+ article/advertisement, I gave a PEA supplement a try. Noticed nothing. Critical commentators pointed out that PEA was notoriously degraded by the digestive system and has essentially no effect on its own8, though Neurvana’s pro supplement claimed to avoid that. I guess it doesn’t.
Discussions of PEA mention that it’s almost useless without a MAOI to pave the way; hence, when I decided to get deprenyl and noticed that deprenyl is a MAOI, I decided to also give PEA a second chance in conjunction with deprenyl. Unfortunately, in part due to my own shenanigans, Nubrain canceled the deprenyl order and so I have 20g of PEA sitting around. Well, it’ll keep until such time as I do get a MAOI.
# Caffeine
Caffeine (Examine.com; FDA adverse events) is of course the most famous stimulant around. But consuming 200mg or more a day, I have discovered the downside: it is addictive and has a nasty withdrawal - headaches, decreased motivation, apathy, and general unhappiness. (It’s a little amusing to read academic descriptions of caffeine addiction9; if caffeine were a new drug, I wonder what Schedule it would be in and if people might be even more leery of it than modafinil.) Further, in some ways, aside from the ubiquitous placebo effect, caffeine combines a mix of weak performance benefits (Lorist & Snel 2008, Nehlig 2010) with some possible decrements, anecdotally and scientifically:
1. slows memory retrieval for unprimed memories (although it speeds retrieval for related/primed memories)
2. the usual U-curve applies to caffeine doses: eg while a small dose of caffeine in energy drinks substantially improves reaction-time in the cued go/no-go task, higher doses improve reaction-time less and are much closer to baseline (their optimal tested dose is, for my weight of 93kg, ~100mg)
3. caffeine damages sleep (necessary for memory and alertness), even 6 hours before sleep
4. very low doses (9mg) of caffeine can still have negative effects
5. did I mention that it correlates with changed estrogen levels in women?
6. in rats, it inhibits memory formation in the hippocampus, and likewise in some mice - although other mice saw mental benefits, with improvement to long-term memory when tested with object recognition
Finally, it’s not clear that caffeine results in performance gains after long-term use; homeostasis/tolerance is a concern for all stimulants, but especially for caffeine. It is plausible that all caffeine consumption does for the long-term chronic user is restore performance to baseline. (Imagine someone waking up and drinking coffee, and their performance improves - well, so would the performance of a non-addict who is also slowly waking up!) See for example, James & Rogers 2005, Sigmon et al 2009, and Rogers et al 2010. A cross-section of thousands of participants in the Cambridge brain-training study found caffeine intake showed negligible effect sizes for mean and component scores (participants were not told to use caffeine, but the training was recreational & difficult, so one expects some difference).
This research is in contrast to the other substances I like, such as piracetam or fish oil. I knew about withdrawal of course, but it was not so bad when I was drinking only tea. And the side-effects like jitteriness are worse on caffeine without tea; I chalk this up to the lack of theanine. (My later experiences with theanine seem to confirm this.) These negative effects mean that caffeine doesn’t satisfy the strictest definition of nootropic (having no negative effects), but is merely a cognitive enhancer (with both benefits & costs). One might wonder why I use caffeine anyway if I am so concerned with mental ability.
My answer is that this is not a lot of research, or very good research (not nearly as good as the research on nicotine, eg.), and assuming it’s true, I don’t value long-term memory that much because LTM is something that is easily assisted or replaced (personal archives, and spaced repetition). For me, my problems tend to be more about akrasia and energy and not getting things done, so even if a stimulant comes with a little cost to long-term memory, it’s still useful for me. I’m going to continue to use the caffeine. It’s not so bad in conjunction with tea, is very cheap, and I’m already addicted, so why not? Caffeine is extremely cheap, addictive, has minimal effects on health (and may be beneficial, from the various epidemiological associations with tea/coffee/chocolate & longevity), and costs extra to remove from drinks popular regardless of their caffeine content (coffee and tea again). What would be the point of carefully investigating it? Suppose there were conclusive evidence on the topic; the value of this evidence to me would be roughly $0 - or, since ignorance is bliss, negative - because unless the negative effects were drastic (which current studies rule out, although tea has other issues like fluoride or metal contents), I would not change anything about my life. Why? I enjoy my tea too much. My usual tea seller doesn’t even have decaffeinated oolong in general, much less the various varieties I might want to drink, apparently because de-caffeinating is so expensive it’s not worthwhile. What am I supposed to do, give up my tea and caffeine just to save on the cost of caffeine? Buy de-caffeinating machines (which I couldn’t even find any prices for, googling)? This also holds true for people who drink coffee or caffeinated soda. (As opposed to a drug like modafinil, which is expensive, and so the value of a definitive answer is substantial and would justify some more extensive calculating of cost-benefit.)

I ordered 400g of anhydrous caffeine from Smart Powders. Apparently my oolong tea doesn’t contain very much caffeine, so adding a fraction of a gram wakes me up a bit. Surprisingly for something with anhydrous in its name, it doesn’t seem to dissolve very well. I ultimately mixed it in with the 3kg of piracetam and included it in that batch of pills. I mixed it very thoroughly, one ingredient at a time, so I’m not very worried about hot spots. But if you are, one clever way to get accurate caffeine measurements is to measure out a large quantity & dissolve it, since it’s easier to measure water than powder, and dissolving guarantees even distribution. This can be important because caffeine is, like nicotine, an alkaloid poison which - the dose makes the poison - can kill in high doses, and concentrated powder makes it easy to take too much, as one inept Englishman discovered the hard way. (This dissolving trick is applicable to anything else that dissolves nicely.)
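For concreteness, a hypothetical worked example of the dissolving trick (my numbers here are purely illustrative): dissolving 10g of caffeine in 1L of water yields a 10mg/ml solution, so a dose can be measured out volumetrically far more accurately than weighing ~100mg of loose powder on a cheap scale:

caffeine_mg <- 10 * 1000; water_ml <- 1000
conc <- caffeine_mg / water_ml # 10 mg/ml
100 / conc                     # ml per 100mg dose: 10ml, easily measured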
# Choline/DMAE

Does little alone, but absolutely necessary in conjunction with piracetam. (Bought from Smart Powders.) When turning my 3kg of piracetam into pills, I decided to avoid the fishy-smelling choline and go with 500g of DMAE (Examine.com); it seemed to work well when I used it before with oxiracetam & piracetam, since I had no piracetam headaches, and it is considerably less bulky. In the future, I might try Alpha-GPC instead of the regular cholines; that supposedly has better bio-availability.

# Cocoa

Chocolate or cocoa powder (Examine.com) contains the stimulants caffeine and the caffeine metabolite theobromine, so it’s not necessarily surprising if cocoa powder acts as a weak stimulant. It’s also a witch’s brew of chemicals such as polyphenols and flavonoids, some of which have been fingered as helpful10, which all adds up to an unclear impact on health (once you control for eating a lot of sugar). Googling, you sometimes see correlational studies like Intake of Flavonoid-Rich Wine, Tea, and Chocolate by Elderly Men and Women Is Associated with Better Cognitive Test Performance; in this one, the correlated performance increase from eating chocolate was generally fairly modest (say, <10%), and the maximum effects were at 10g/day of what was probably milk chocolate, which generally has 10-40% chocolate liquor in it, suggesting any experiment use 1-4g. More interesting is the blind RCT experiment Consumption of cocoa flavanols results in acute improvements in mood and cognitive performance during sustained mental effort11, which found improvements at ~1g; the most dramatic improvement of the 4 tasks (on the Threes correct) saw a difference of 2 to 6 at the end of the hour of testing, while several of the other tests converged by the end or saw the controls winning (Sevens correct). Crews et al 2008 found no cognitive benefit, and an fMRI experiment found the change in brain oxygen levels it wanted but no improvement to reaction times. It’s not clear that there is much of an effect at all. This makes it hard to design a self-experiment - how big an effect on, say, dual n-back should I be expecting? Do I need an arduous long trial or an easy short one? This would principally determine the value of information, too; chocolate seems like a net benefit even if it does not affect the mind, but it’s also fairly costly, especially if one likes (as I do) dark chocolate. Given the mixed research, I don’t think cocoa powder is worth investigating further as a nootropic.
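To make the design difficulty concrete, a quick power-analysis sketch (the effect sizes here are my guesses for illustration, not estimates from the studies above):

library(pwr)
pwr.t.test(d=0.2, power=0.75, type="paired", alternative="greater")
# n ~ 136 pairs - prohibitive for a casual self-experiment -
# versus only ~12 pairs if the true effect were as large as d=0.7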
# Coconut oil

Coconut oil was recommended by Pontus Granström on the Dual N-Back mailing list for boosting energy & mental clarity. It is fairly cheap (~$13 for 30 ounces) and tastes surprisingly good; it has a very bad reputation in some parts, but seems to be in the middle of a rehabilitation. Seth Roberts’s Buttermind experiment found no mental benefits to coconut oil (and benefits to eating butter), but I wonder.

The first night I was eating some coconut oil, I did my n-backing past 11 PM; normally that damages my scores, but instead I got 66/66/75/88/77% (▁▁▂▇▃) on D4B and did not feel mentally exhausted by the end. The next day, I performed well on the Cambridge mental rotations test. An anecdote, of course, and it may be due to the vitamin D I simultaneously started. Or another day, I was slumped under apathy after a promising start to the day; a dose of fish & coconut oil, and 1 last vitamin D, and I was back to feeling chipper and optimistic. Unfortunately I haven’t been testing out coconut oil & vitamin D separately, so who knows which is to thank. But still interesting.
After several weeks of regularly consuming coconut oil and using up the first jar of 15oz, I’m no longer particularly convinced it was doing anything. (I’ve found it’s good for frying eggs, though.) Several days after using up the second jar, I notice no real difference in mood or energy or DNB scores.
# Coluracetam

One of the most obscure -racetams around, coluracetam (Smarter Nootropics, Ceretropic, Isochroma) acts in a different way from piracetam - piracetam apparently attacks the breakdown of acetylcholine, while coluracetam instead increases how much choline can be turned into useful acetylcholine. This apparently is a unique mechanism. A crazy Longecity user, ScienceGuy, ponied up $16,000 (!) for a custom synthesis of 500g; he was experimenting with 10-80mg sublingual doses (the ranges in the original anti-depressive trials) and reported a laundry list of effects (as does Isochroma): primarily that it was anxiolytic and increased work stamina. Unfortunately for my stack, he claims it combines poorly with piracetam. He offered free 2g samples for regulars to test his claims. I asked & received some. Experiment design is complicated by his lack of use of any kind of objective tests, but 3 metrics seem worthwhile:

1. dual n-back: testing his claims about concentration, increased energy & stamina, and increased alertness & lucidity.
2. daily Mnemosyne flashcard scores: testing his claim about short & medium-term memory, viz. I have personally found that with respect to the NOOTROPIC effect(s) of all the RACETAMS, whilst I have experienced improvements in concentration and working capacity / productivity, I have never experienced a noticeable ongoing improvement in memory. COLURACETAM is the only RACETAM that I have taken wherein I noticed an improvement in MEMORY, both with regards to SHORT-TERM and MEDIUM-TERM MEMORY. To put matters into perspective, the memory improvement has been mild, yet still significant; whereas I have experienced no such improvement at all with the other RACETAMS.
3. daily mood/productivity log (1-5): for the anxiolytic and working claims.

(In all 3, higher = better, so a multivariate result is easily interpreted.)

He recommends a 10mg dose, but sublingually. He mentions COLURACETAM’s taste is more akin to that of PRAMIRACETAM than OXIRACETAM, in that it tastes absolutely vile (not a surprise), so it is impossible to double-blind a sublingual administration - even if I knew of an inactive equally-vile-tasting substitute, I’m not sure I would subject myself to it. To compensate for ingesting the coluracetam, it would make sense to double the dose to 20mg (turning the 2g into <100 doses). Whether the effects persist over multiple days is not clear; I’ll assume they do not until someone says they do, since this makes things much easier.

# Creatine

Creatine (Examine.com) monohydrate was another early essay of mine - cheap (because it’s so popular with the bodybuilder types), and with a very good safety record. I bought some from Bulk Powders and combined it with my then-current regimen (piracetam+choline). I’m not a bodybuilder, but my interest was sparked by several studies, some showing benefits and others not - usually in subpopulations like vegetarians or old people. As I am in neither subpopulation, I didn’t really expect a mental benefit. As it happens, I observed nothing. What surprised me was something I had forgotten about: its physical benefits. My performance in Taekwondo classes suddenly improved - specifically, my endurance increased substantially. Before, classes had left me nearly prostrate at the end, but after, I was weary yet fairly alert and happy. (I have done Taekwondo since I was 7, and I have a pretty good sense of what is and is not normal performance for my body. This was not anything as simple as failing to notice increasing fitness or something.)
This was driven home to me one day when, in a flurry before class, I prepared my customary tea with piracetam, choline & creatine; by the middle of the class, I was feeling faint & tired, had to take a break, and suddenly, thunderstruck, realized that I had absentmindedly forgotten to actually drink it! This made me a believer.

After I ran out of creatine, I noticed the increased difficulty, and resolved to buy it again at some point; many months later, there was a Smart Powders sale, so I bought it in my batch order, $12 for 1000g. As before, it made Taekwondo classes a bit easier. I paid closer attention this second time around and noticed that, as one would expect, it only helped with muscular fatigue and did nothing for my aerobic issues. (I hate aerobic exercise, so it’s always been a weak point.) I eventually capped it as part of a sulbutiamine-DMAE-creatine-theanine mix. This ran out 1 May 2013.

In March 2014, I spent $19 for 1kg of micronized creatine monohydrate, to resume creatine use and also to use it as a placebo in a honey-sleep experiment testing Seth Roberts’s claim that a few grams of honey before bedtime would improve sleep quality: my usual flour placebo was unusable because the mechanism might be through simple sugars, which flour would digest into. (I did not do the experiment: it was going to be a fair amount of messy work capping the honey and creatine, and I didn’t believe Roberts’s claims for a second - my only reason to do it would have been to prove the claim wrong, but he’d just ignore me and no one else cares.) I didn’t try measuring out exact doses but just put a spoonful in my tea each morning (creatine is tasteless). The 1kg lasted from 25 March to 18 September, or 178 days, so ~5.6g & $0.11 per day.
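(Checking the per-day arithmetic:)

days <- as.numeric(as.Date("2014-09-18") - as.Date("2014-03-25")) + 1 # 178, counting inclusively
c(g.per.day = 1000/days, dollars.per.day = 19/days)
# ~5.6g & ~$0.107/day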
Ryan Carey tracked creatine consumption vs some tests with ambiguous results.
# Cytisine
Cytisine is an obscure drug known, if at all, for use in anti-smoking treatment.
Cytisine is not known as a stimulant and I’m not addicted to nicotine, so why give it a try? Nicotine is one of the more effective stimulants available, and it’s odd how few nicotine analogues or nicotinic agonists there are available; nicotine has a few flaws like short half-life and increasing blood pressure, so I would be interested in a replacement. The nicotine metabolite cotinine, in the human studies available, looks intriguing and potentially better, but I have been unable to find a source for it. One of the few relevant drugs which I can obtain is cytisine, from Ceretropic, at 2x1.5mg doses. There are not many anecdotal reports on cytisine, but at least a few suggest somewhat comparable effects with nicotine, so I gave it a try.
My first dose on 1 March 2017, at the recommended 0.5ml/1.5mg, was miserable: I felt like I had the flu and had to nap for several hours before I felt well again, requiring ~6h to return to normal. After waiting a month, I tried again, but after a week of daily dosing in May, I noticed no benefits; I tried increasing to 3x1.5mg, but this immediately caused another afternoon crash/nap on 18 May. So I scrapped my cytisine. Oh well.
# Fish oil
Fish oil (Examine.com, buyer’s guide) provides benefits relating to general mood (eg. inflammation & anxiety; see later on anxiety) and anti-schizophrenia; it is one of the better supplements one can take. (The known risks are a higher rate of prostate cancer and internal bleeding, but are outweighed by the cardiac benefits - assuming those benefits exist, anyway, which may not be true.) The benefits of omega acids are well-researched.
It is at the top of the supplement snake oil list thanks to tons of correlations; for a review, see Luchtman & Song 2013, but some specifics include Teenage Boys Who Eat Fish At Least Once A Week Achieve Higher Intelligence Scores, anti-inflammatory properties (see Fish Oil: What the Prescriber Needs to Know on arthritis), and others - Fish oil can head off first psychotic episodes (study; Seth Roberts commentary), Fish Oil May Fight Breast Cancer, Fatty Fish May Cut Prostate Cancer Risk & Walnuts slow prostate cancer, Benefits of omega-3 fatty acids tally up, Serum Phospholipid Docosahexaenonic Acid Is Associated with Cognitive Functioning during Middle Adulthood, and endless anecdotes.
But like any other supplement, there are safety concerns and negative studies, like Fish oil fails to hold off heart arrhythmia, or other reports casting doubt on a protective effect against dementia, or Fish Oil Use in Pregnancy Didn’t Make Babies Smart (WSJ) (an early promise but one that faded a bit later), or …Supplementation with DHA compared with placebo did not slow the rate of cognitive and functional decline in patients with mild to moderate Alzheimer disease.
As far as anxiety goes, psychiatrist Emily Deans has an overview of why the Kiecolt-Glaser et al 2011 study is nice; she also discusses why fish oil seems like a good idea from an evolutionary perspective. There was also a weaker earlier 2005 study also using healthy young people, which showed reduced anger/anxiety/depression plus slightly faster reactions. The anti-stress/anxiolytic may be related to the possible cardiovascular benefits (Carter et al 2013).
## Experiment?
I can test fish oil for mood, since the other claimed benefits like anti-schizophrenia are too hard to test. The medical student trial (Kiecolt-Glaser et al 2011) did not see changes until visit 3, after 3 weeks of supplementation. (Visit 1; 3 weeks; visit 2, supplementation starts; 3 weeks; visit 3, supplementation continues; 3 weeks; visit 4; etc.) There were no tests in between the test starting week 1 and starting week 3, so I can’t pin it down any further. This suggests randomizing in 2 or 3 week blocks. (For an explanation of blocking, see the footnote in the Zeo page.)
The placebos can be the usual pills filled with olive oil. The Nature’s Answer fish oil is lemon-flavored; it may be worth mixing in some lemon juice. In Kiecolt-Glaser et al 2011, anxiety was measured via the Beck Anxiety scale; the placebo mean was 1.2 on a standard deviation of 0.075, and the experimental mean was 0.93 on a standard deviation of 0.076. (These are all log-transformed covariates or something; I don’t know what that means, but if I naively plug those numbers into Cohen’s d, I get a very large effect: $\frac{1.2 - 0.93}{0.076}$=3.55.)
### Quasi-experiment
I noticed what may have been an effect on my dual n-back scores; the difference is not large (▃▆▃▃▂▂▂▂▄▅▂▄▂▃▅▃▄ vs ▃▄▂▂▃▅▂▂▄▁▄▃▅▂▃▂▄▂▁▇▃▂▂▄▄▃▃▂▃▂▂▂▃▄▄▃▆▄▄▂▃▄▃▁▂▂▂▃▂▄▂▁▁▂▄▁▃▂▄) and appears mostly in the averages - Toomim’s quick two-sample t-test gave p=0.23, although another analysis gives p=0.138112. One issue with this before-after quasi-experiment is that one would expect my scores to slowly rise over time, and hence a fish oil after would yield a score increase - the 3.2 point difference could be attributable to that, the placebo effect, or random variation, etc. But an accidentally noticed effect (d=0.28) is a promising start. An experiment may be worth doing given that fish oil does cost a fair bit each year: randomized blocks permitting a fish-oil-then-placebo comparison would take care of the first issue, and then blinding (olive oil capsules versus fish oil capsules?) would take care of the placebo worry.
### Power calculation
We have clear hypotheses here, so we can be a little optimistic: the fish oil will either improve mood or scores or it will do nothing; it will not worsen either. First, the large anxiety effect:
pwr.t.test(d=3.55,type="paired",power=0.75,alternative="greater",sig.level=0.05)
# Paired t test power calculation
#
# n = 2.269155
#
# NOTE: n is number of *pairs*
Suspiciously easy: ~2.3 pairs, ie. 3 pairs or 6 blocks. Let’s be pessimistic and use the smaller effect size estimate from my quasi-trial:
pwr.t.test(d=0.28,type="paired",power=0.75,alternative="greater",sig.level=0.05)
#
# Paired t test power calculation
#
# n = 69.98612
70 pairs is 140 blocks; we can drop to 36 pairs or 72 blocks if we accept a power of 0.5 (a 50% chance of reaching significance). (Or we could economize by hoping that the effect size is not 3.5 but maybe twice the pessimistic guess; a d=0.5 at 50% power requires only 12 pairs or 24 blocks.) 70 pairs of blocks of 2 weeks, with 2 pills a day, requires $(70 \times 2) \times (2 \times 7) \times 2 = 3920$ pills. I don’t even have that many empty pills! I have <500; 500 would supply 250 days, which would yield 18 2-week blocks, which could give 9 pairs. 9 pairs would give me a power of:
pwr.t.test(d=0.28,type="paired",alternative="greater",sig.level=0.05,n=9)
# ... power = 0.1908962
pwr.t.test(d=0.5,type="paired",alternative="greater",sig.level=0.05,n=9)
# ... power = 0.3927739
A 20-40% chance of detecting the effect.
### VoI
For background on value of information calculations, see the Adderall calculation.
1. Cost of fish oil:
The price is not as good as multivitamins or melatonin. The studies showing effects generally use pretty high dosages, 1-4g daily. I took 4 capsules a day for roughly 4g of omega acids. The jar of 400 is 100 days’ worth, and costs ~$17, or around 17¢ a day. The general health benefits push me over the edge of favoring its indefinite use, but looking to economize. Usually, small amounts of packaged substances are more expensive than bulk unprocessed, so I looked at fish oil fluid products; and unsurprisingly, liquid is more cost-effective than pills (but like with the powders, straight fish oil isn’t very appetizing) in lieu of membership somewhere or some other price-break. I bought 4 bottles (16 fluid ounces each) for$53.31 total (thanks to coupons & sales), and each bottle lasts around a month and a half for perhaps half a year, or ~$100 for a year’s supply. (As it turned out, the 4 bottles lasted from 4 December 2010 to 17 June 2011, or 195 days.) My next batch lasted 19 August 2011-20 February 2012, and cost$58.27. Since I needed to buy empty 00 capsules (for my lithium experiment) and a book (Stanovich 2010, for SIAI work) from Amazon, I bought 4 more bottles of 16fl oz Nature’s Answer (lemon-lime) at $48.44, which I began using 27 February 2012. So call it ~$70 a year.
Most of the most solid fish oil results seem to meliorate the effects of age; in my 20s, I’m not sure they are worth the cost. But I would probably resume fish oil in my 30s or 40s when aging really becomes a concern. So the experiment at most will result in discontinuing for a decade. At ~$70 a year, that’s a net present value of `sum $ map (\n -> 70 / (1 + 0.05)^n) [1..10]` = $540.5.

2. Cost of experimentation:

The fish oil can be considered a free sunk cost: I would take it in the absence of an experiment. The empty pill capsules could be used for something else, so we’ll put the 500 at $5. Filling 500 capsules with fish and olive oil will be messy and take an hour. Taking them regularly can be added to my habitual morning routine for vitamin D and the lithium experiment, so that is close to free, but we’ll call it an hour over the 250 days. Recording mood/productivity is also a free sunk cost, as it’s necessary for the other experiments; but recording dual n-back scores is more expensive: each round is ~2 minutes and one wants >=5, so each block will cost >10 minutes, so 18 tests will be >180 minutes or >3 hours. So >5 hours total. Total: $5 + (>5 \times 7.25) = >41$.
3. Priors:
The power calculation indicates a 20% chance of getting useful information. My quasi-experiment has <70% chance of being right, and I preserve a general skepticism about any experiment, even one as well done as the medical student one seems to be, and give that one a <80% chance of being right; so let’s call it 70% the effect exists, or 30% it doesn’t exist (which is the case in which I save money by dropping fish oil for 10 years).
4. Value of Information
Power times prior times benefit minus cost of experimentation: $(0.20 \times 0.30 \times 540) - 41 = -9$. So the VoI is negative: because my default is that fish oil works and I am taking it, weak information that it doesn’t work isn’t enough. If the power calculation were giving us 40% reliable information, then the chance of learning I should drop fish oil is improved enough to make the experiment worthwhile (going from 20% to 40% switches the value from -$9 to +$23.8).
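The same arithmetic as a small R function, to make the sensitivity to power explicit:

voi <- function(power, p.no.effect=0.30, benefit=540, cost=41) power * p.no.effect * benefit - cost
voi(0.20) # -8.6: not worth running at ~20% power
voi(0.40) # +23.8: worth running if the power could be doubled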
## Flaxseed
The general cost of fish oil made me interested in possible substitutes. Seth Roberts uses exclusively flaxseed oil or flaxseed meal, and this seems to work well for him with subjective effects (eg. noticing his Chinese brands seemed to not work, possibly because they were unrefrigerated and slightly rancid). It’s been studied much less than fish oil, but omega acids are confusing enough in general (is there a right ratio? McCluskey’s roundup gives the impression claims about ratios may have been overstated) that I’m not convinced ALA is a much inferior replacement for fish oil’s mixes of EPA & DHA.
Flaxseed oil is, ounce for ounce, about as expensive as fish oil, and it also must be refrigerated and goes bad within months anyway. Flax seeds, on the other hand, do not go bad within months, and cost dollars per pound. Various resources I found online estimated the ALA component of human-edible flaxseed at around 20%. So Amazon’s 6lbs for $14 is ~1.2lbs of ALA, compared to 16fl-oz of fish oil weighing ~1lb and costing ~$17, while also keeping better and being a calorically useful part of my diet. The flaxseeds can be ground in an ordinary food processor or coffee grinder. It’s not a hugely impressive cost-savings, but I think it’s worth trying when I run out of fish oil.
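The implied cost comparison, granting the crude simplification of comparing ALA content directly against whole fish oil:

flax <- 14 / (6 * 0.20) # $14 for 6lbs at ~20% ALA: ~$11.7 per lb of ALA
fish <- 17 / 1          # ~$17 for ~1lb (16fl-oz) of fish oil
c(flaxseed=flax, fish.oil=fish)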
After trying out 2 6lb packs between 12 September & 25 November 2012, and 20 March & 20 August 2013, I have given up on flaxseed meal. They did not seem to go bad in the refrigerator or freezer, and tasted OK, but I had difficulty working them into my usual recipes: it doesn’t combine well with hot or cold oatmeal, and when I tried using flaxseed meal in soups I learned flaxseed is a thickener which can give soup the consistency of snot. It’s easier to use fish oil on a daily basis.
# Huperzine-A
The chemical Huperzine-A (Examine.com) is extracted from a moss. It is an acetylcholinesterase inhibitor (instead of forcing out more acetylcholine like the -racetams, it prevents acetylcholine from breaking down). My experience report: One for the null hypothesis files - Huperzine-A did nothing for me. Unlike piracetam or fish oil, after a full bottle (Source Naturals, 120 pills at 200μg each), I noticed no side-effects, no mental improvements of any kind, and no changes in DNB scores from straight Huperzine-A.
Possible confounding factors:
• youth: I am considerably younger than the other poster who uses HA
• I only tested a few days with choline+H-A (but I didn’t notice anything beyond the choline there).
• counterfeiting? Source Naturals is supposed to be trustworthy, but rare herbal products are most susceptible to fake goods.
It’s really too bad. H-A is cheap, compact, has no taste at all, and in general is much easier to take than fish oil (and much easier to swallow than piracetam or choline!). But if it doesn’t deliver, it doesn’t deliver.
# Hydergine
Hydergine (FDA adverse events) was another disappointment (like the adrafinil, purchased from Anti-Aging Systems/Antiaging Central). I noticed little to nothing that couldn’t be normal daily variation.
# Iodine
As discussed in my iodine essay (FDA adverse events), iodine is a powerful health intervention as it eliminates cretinism and improves average IQ by a shocking magnitude. If this effect were possible for non-fetuses in general, it would be the best nootropic ever discovered, and so I looked at it very closely. Unfortunately, after going through ~20 experiments looking for ones which intervened with iodine post-birth and took measures of cognitive function, my meta-analysis concludes that: the effect is small and driven mostly by one outlier study. Once you are born, it’s too late. But the results could be wrong, and iodine might be cheap enough to take anyway, or take for non-IQ reasons. (This possibility was further weakened for me by an August 2013 blood test of TSH which put me at 3.71 uIU/ml, comfortably within the reference range of 0.27-4.20.)
## Power analysis
Starting from the studies in my meta-analysis, we can try to estimate an upper bound on how big any effect would be, if it actually existed. One of the most promising null results, Southon et al 1994, turns out to be not very informative: if we punch in the number of kids, we find that they needed a large effect size (d=0.81) before they could see anything:
library(pwr)
pwr.t.test(power=0.75, sig.level=0.05, n=22)
# Two-sample t test power calculation
#
# n = 22
# d = 0.8130347
Fitzgerald 2012 is better, and gives a number of useful details on her adult experiment:
Participants (n=205) [young adults aged 18-30 years] were recruited between July 2010 and January 2011, and were randomized to receive either a daily 150 µg (0.15mg) iodine supplement or daily placebo supplement for 32 weeks…After adjusting for baseline cognitive test score, examiner, age, sex, income, and ethnicity, iodine supplementation did not significantly predict 32 week cognitive test scores for Block Design (p=0.385), Digit Span Backward (p=0.474), Matrix Reasoning (p=0.885), Symbol Search (p=0.844), Visual Puzzles (p=0.675), Coding (p=0.858), and Letter-Number Sequencing (p=0.408).
Full text isn’t available although some of the p-values suggest that there might be differences which didn’t reach significance, so to estimate an upper bound on what sort of effect-size we’re dealing with:
pwr.t.test(type="two.sample",power=0.75,alternative="greater",n=102)
# Two-sample t test power calculation
#
# n = 102
# d = 0.325867
This is a much tighter upper bound than Southon et al 1994 gave us, and also kind of discouraging: remember, the smaller the effect size, the more data you will need to see it, and data is always expensive. If I were to try to do any experiment, how many pairs would I need if we optimistically assume that d=0.32?
pwr.t.test(type="paired",d=0.325867,power=0.75,alternative="greater")
# Paired t test power calculation
#
# n = 52.03677
We’d want 53 pairs, but Fitzgerald 2012’s experimental design called for 32 weeks of supplementation for a single pair of before-after tests - so that’d be ~1700 weeks, or ~32 years! We can try to adjust it downwards with shorter blocks allowing more frequent testing; but problematically, iodine is stored in the thyroid and can apparently linger elsewhere - many of the cited studies used intramuscular injections of iodized oil (as opposed to iodized salt or kelp supplements) because this ensured an adequate supply for months or years with no further compliance by the subjects. If the effects are that long-lasting, it may be worthless to try shorter blocks than ~32 weeks.
We’ve looked at estimating based on individual studies. But we aggregated them into a meta-analysis more powerful than any of them, and it gave us a final estimate of d=~0.1. What does that imply?
pwr.t.test(type="paired",d=0.1,power=0.75,alternative="greater")
# Paired t test power calculation
#
# n = 539.2906
540 pairs of tests or 1080 blocks… This game is not worth the candle!
## VoI
For background on value of information calculations, see the Adderall calculation.
1. Cost:
This would be a very time-consuming experiment. Any attempt to combine this with other experiments by ANOVA would probably push the end-date out by months, and one would start to be seriously concerned that changes caused by aging or environmental factors would contaminate the results. A 5-year experiment with 7-month intervals will probably eat up 5+ hours to prepare <12,000 pills (active & placebo); each switch and test of mental functioning will probably eat up another hour apiece, for 32 hours. (And what test maintains validity with no practice effects over 5 years? Dual n-back would be unusable because of improvements to WM over that period.) Add in an hour for analysis & writeup, and that suggests >38 hours of work, and $38 \times 7.25 = 275.5$. 12,000 pills is roughly $12.80 per thousand or $154; 120 potassium iodide pills is ~$9, so $\frac{365.25}{120} \times 9 \times 5 = 137$. The time plus the gel capsules plus the potassium iodide is $567.
2. Benefit:
Some work has been done on estimating the value of IQ, both as net benefits to the possessor (including all zero-sum or negative-sum aspects) and as net positive externalities to the rest of society. The estimates are substantial: in the thousands of dollars per IQ point. But since increasing IQ post-childhood is almost impossible barring disease or similar deficits, and even increasing childhood IQs is very challenging, many of these estimates are merely correlations or regressions, and the experimental childhood estimates must be weakened considerably for any adult - since so much time and so many opportunities have been lost. A wild guess: $1000 net present value per IQ point. The range for severely deficient children was 10-15 points, so any normal (somewhat deficient) adult gain must be much smaller and consistent with Fitzgerald 2012’s ceiling on possible effect sizes (small). Let’s make another wild guess at 2 IQ points, for $2000.
3. Expectation:
What is my prior expectation that iodine will do anything? A good way to break this question down is the following series of necessary steps:
• how much do I believe I am iodine deficient?
(If I am not deficient, then supplementation ought to have no effect.) The previous material on modern trends suggests a prior >25%, and higher than that if I were female. However, I was raised on a low-salt diet because my father has high blood pressure, and while I like seafood, I doubt I eat it more often than weekly. I suspect I am somewhat iodine-deficient, although I don’t believe as confidently as I did that I had a vitamin D deficiency. Let’s call this one 75%.
• If deficient, how likely would it help at my age?
(The effect may exist only at limited age ranges - like height, once you’re done growing, few interventions short of bone surgery will make one taller or shorter.) So this is one of the key assumptions: can we extend the benefits in deficient children to somewhat deficient adults?
Fitzgerald 2012 and the general absence of successful experiments suggests not, as does the general historic failure of scores of IQ-related interventions in healthy young adults. Of the 10 studies listed in the original section dealing with iodine in children or adults, only 2 show any benefit; in lieu of a meta-analysis, a rule of thumb would be 20%, but both those studies used a package of dozens of nutrients - and not just iodine - so if the responsible substance were randomly picked, that suggests we ought to give it a chance of $20\% \times \frac{1}{\text{dozens}}$ of being iodine! I may be unduly optimistic if I give this as much as 10%.
• If it would help at my age, how likely do I think my supplementation would hit the sweet spot and not under or overshoot?
(We already saw that too much iodine could poison both adults and children, and of course too little does not help much - iodine would seem to follow a U-curve like most supplements.) The listed doses at iherb.com often are ridiculously large: 10-50mg! These are doses that seem to actually be dangerous for long-term consumption, and I believe these are doses designed to completely suffocate the thyroid gland and prevent it from absorbing any more iodine - which is useful as a short-term radioactive-fallout prophylactic, but quite useless from a supplementation standpoint. Fortunately, there are available doses at Fitzgerald 2012’s exact dose, which is roughly the daily RDA: 0.15mg. Even the contrarian materials seem to focus on a modest doubling or tripling of the existing RDA, so the range seems relatively narrow. I’m fairly confident I won’t overshoot if I go with 0.15-1mg, so let’s call this 90%.
Conclusion: 75% times 10% times 90% is 6.75%.
4. EV of taking iodine:
Now, what is the expected value (EV) of simply taking iodine, without the additional work of the experiment? 4 cans of 0.15mg x 200 is $20 for 2.1 years’ worth, or ~$10 a year, or an NPV cost of $205 ($\frac{10}{\ln 1.05}$), versus a 20% chance of $2000, or $400. So the expected value is greater than the NPV cost of taking it, so I should start taking iodine.

5. Value of Information:

Finally, what is the value of information of conducting the experiment? With an estimated power of 75%, the skeptical ~6.75% prior derived above that there’s any effect worth caring about, and a potential benefit of $2000, that’s $0.75 \times 0.0675 \times 2000 = 101$. We must weigh $101 against the estimated experimentation cost of $567. Since the information is worth less than the experiment costs, I should not do it.
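(Pulling the iodine numbers into one R snippet:)

npv.cost <- 10 / log(1.05)      # NPV of supplementing indefinitely at ~$10/year: ~$205
ev.take <- 0.20 * 2000          # EV of just taking it, at the optimistic 20% prior: $400
voi.exp <- 0.75 * 0.0675 * 2000 # VoI of the experiment: ~$101, vs its ~$567 cost
c(npv.cost, ev.take, voi.exp)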
But notice that most of the cost imbalance is coming from the estimate of the benefit of IQ - if it quadrupled to a defensible $8000, that would be close to the experiment cost! So in a way, what this VoI calculation tells us is that what is most valuable right now is not that iodine might possibly increase IQ, but getting a better grip on how much any IQ intervention is worth. So the overall picture is that I should:

1. start taking a moderate dose of iodine at some point
2. look into cheap tests for iodine deficiency
   - One self-test suggested online involves dripping iodine onto one's skin and seeing how long it takes to be absorbed. This doesn't seem terrible, but according to Derry and Abraham, it is unreliable.
   - Home urine test kits of unknown accuracy are available online (Google iodine urine test kit) but run $70-$100+, eg. Hakala Research.
3. try to think of cheaper experiments I could run for benefits from iodine

## Iodine eye color changes?

A poster or two on Longecity claimed that iodine supplementation had changed their eye color, suggesting a connection to the yellow-reddish element bromine - bromides being displaced by their chemical cousin, iodine. I was skeptical this was a real effect, since I don't know why visible amounts of either iodine or bromine would be in the eye, and the photographs produced were less than convincing. But it's an easy thing to test, so why not?

For 2 weeks, upon awakening I took close-up photographs of my right eye. Then I ordered two jars of Life-Extension Sea-Iodine (60x1mg) (1mg being an apparently safe dose), and when it arrived on 10 September 2012, I stopped the photography and began taking 1 iodine pill every other day. I noticed no ill effects (or benefits) after a few weeks and upped the dose to 1 pill daily. After the first jar of 60 pills was used up, I switched to the second jar, and began photography as before for 2 weeks. The photographs were uploaded, cropped by hand in Gimp, and shrunk to more reasonable dimensions; both sets are available in a Zip file.

Upon examining the photographs, I noticed no difference in eye color, but it seems that my move had changed the ambient lighting in the morning, and so there was a clear difference between the two sets of photographs! The before photographs had brighter lighting than the after photographs. Regardless, I decided to run a small survey on QuickSurveys/Toluna to confirm my diagnosis of no-change; the survey was 11 forced-choice pairs of photographs (before-after), with the instructions as follows:

Estimated time: <1 min. Below are 11 pairs of close-up eye photographs. In half the photos, the eye color of the iris may or may not have been artificially lightened; as a challenge, the photos are taken under varying light conditions! In each pair, try to pick the photo with a lightened iris eye color, if any. (Do not judge simply on overall lighting.)

(I reasoned that this description is not actually deceptive: taking pills is indeed artificial, as I would not naturally consume so much iodine or seaweed extract, and I didn't know for sure that my eyes hadn't changed color, so the correct description is indeed may or may not have.)

I posted a link to the survey on my Google+ account, and inserted the link at the top of all gwern.net pages; 51 people completed all 11 binary choices (most of them coming from North America & Europe), which seems adequate since the 11 questions are all asking the same question, and 561 responses to one question is quite a few.
A few different statistical tests seem applicable: a chi-squared test of whether there's a difference between all the answers, a two-sample test on the averages, and most meaningfully, summing up the responses as a single pair of numbers and doing a binomial test:

before <- c(27,31,18,26,22,29,20,13,18,31,27) # per-question counts of respondents picking the 'before' photo
after  <- c(24,20,33,25,29,22,31,38,33,20,24) # ...and the 'after' photo
summary(before); summary(after)
#  Min. 1st Qu. Median  Mean 3rd Qu.  Max.
#  13.0    19.0   26.0  23.8    28.0  31.0
#  Min. 1st Qu. Median  Mean 3rd Qu.  Max.
#  20.0    23.0   25.0  27.2    32.0  38.0
chisq.test(before, after, simulate.p.value=TRUE)
#   Pearson's Chi-squared test with simulated p-value
#
# data:  before and after
# X-squared = 77, df = NA, p-value = 0.000135
wilcox.test(before, after)
#   Wilcoxon rank sum test with continuity correction
#
# data:  before and after
# W = 43, p-value = 0.2624
# alternative hypothesis: true location shift is not equal to 0
binom.test(c(sum(before), sum(after)))
#   Exact binomial test
#
# data:  c(sum(before), sum(after))
# number of successes = 262, number of trials = 561, p-value = 0.1285
# alternative hypothesis: true probability of success is not equal to 0.5
# 95% confidence interval:
#  0.4251 0.5093
# sample estimates:
# probability of success
#                  0.467

So the chi-squared believes there is a statistically-significant difference, the two-sample test disagrees, and the binomial also disagrees. Since I regarded it as a dubious theory, cannot myself see a difference in the photos, and the binomial seems like the most appropriate test, I conclude that several months of 1mg iodine did not change my eye color. (As a final test, when I posted the results on the Longecity forum where people were claiming the eye color change, I swapped the labels on the photos to see if anyone would claim something along the lines of when I look at the photos, I can see a difference! I thought someone might do that, which would be a damning demonstration of their biases & wishful thinking, but no one did.)

# Kratom

Kratom (Erowid, Reddit) is a tree leaf from Southeast Asia; it's addictive to some degree (like caffeine and nicotine), and so it is regulated/banned in Thailand, Malaysia, Myanmar, and Bhutan among others - but not the USA. (One might think that kratom's common use there indicates how very addictive it must be, except it literally grows on trees, so it can't be too hard to get.)

Kratom is not particularly well-studied (and what has been studied is not necessarily relevant - I'm not addicted to any opiates!), and it suffers the usual herbal problem of being an endlessly variable food product and not a specific chemical, with the fun risks of perhaps being poisonous, but in my reading it doesn't seem to be particularly dangerous or have serious side-effects. A LessWronger found that it worked well for him as far as motivation and getting things done went, as did another LessWronger who sells it online (terming it a reasonable productivity enhancer), as did one of his customers, a pickup artist oddly enough. The former was curious whether it would work for me too and sent me Speciosa Pro's Starter Pack: Test Drive (a sampler of 14 packets of powder and a cute little wooden spoon). In SE Asia, kratom's apparently chewed, but the powders are brewed as a tea.

1. I started with the 10g of Vitality Enhanced Blend, a sort of tan dust. Used 2 little spoonfuls (the dust tastes a fair bit like green/oolong tea dust) into the tea mug and then some boiling water. A minute of steeping and… bleh.
   Tastes sort of musty and sour. (I see why people recommended sweetening it with honey.) The effects? While I might've been more motivated - I hadn't had caffeine that day and was a tad under the weather, a feeling which seemed to go away perhaps half an hour after starting - I can't say I experienced any nausea or very noticeable effects. (At least the flavor is no longer quite so offensive.)
2. 3 days later, I'm fairly miserable (slept poorly, had a hair-raising incident, and a big project was not received as well as I had hoped), so well before dinner (and after a nap) I brew up 2 wooden-spoons of Malaysia Green (olive-color dust). I drank it down; tasted slightly better than the first. I was feeling better after the nap, and the kratom didn't seem to change that.
3. The next day was somewhat similar, so at 2:40 I tried out 3 spoonfuls of sm00th (?), a straight tan powder. Like the Malaysia Green, not so bad tasting. By the second cup, my stomach is growling a little. No particular motivation.
4. A week later: Golden Sumatran, 3 spoonfuls, a more yellowish powder. (I combined it with some tea dregs to hopefully cut the flavor a bit.) Had a paper to review that night. No (subjectively noticeable) effect on energy or productivity. I tried 4 spoonfuls at noon the next day; nothing except a little mental tension, for lack of a better word. I think that was just the harbinger of what my runny nose that day and the day before was: a head cold that laid me low during the evening.
5. 4 spoons of Thai Red Vein at 1:30 PM; the cold hasn't gone away, but the acetaminophen was making it bearable.
6. 4 spoons of Enriched Thai (brown) at 8PM. Steeped 15 minutes, drank; no effect - I have to take a break to watch 3 Mobile Suit Gundam episodes before I even feel like working.
7. 5 spoons of Enriched Sumatran (tannish-brown) at 3:10 PM; especially sludgy this time - the Sumatran powder must be finer than the others.
8. 4 spoons Synergy (a Premium Whole Leaf Blend) at 11:20 AM; by 12:30 PM I feel quite tired and like I need to take a nap (previous night's sleep was slightly above average, 96 ZQ).
9. 5 spoons Essential Indo (olive green) at 1:50 PM; no apparent effect except perhaps some energy for writing (but then a vague headache).

At dose #9, I've decided to give up on kratom. It is possible that it is helping me in some way that careful testing (eg. dual n-back over weeks) would reveal, but I don't have a strong belief that kratom would help me (I seem to benefit more from stimulants, and I'm not clear on how an opiate-bearer like kratom could stimulate me). So I have no reason to do careful testing. Oh well.

# Lion's Mane mushroom

Hericium erinaceus (Examine.com) was recommended strongly by several on the ImmInst.org forums for its long-term benefits to learning, apparently linked to Nerve Growth Factor. Highly speculative stuff, and it's unclear whether the mushroom powder I bought was the right form to take (ImmInst.org discussions seem to universally assume one is taking an alcohol or hot-water extract). It tasted nice, though, and I mixed it into my sleeping pills (which contain melatonin & tryptophan). I'll probably never know whether the $30 for 0.5lb was well-spent or not.
(I was more than a little nonplussed when the mushroom seller included a little pamphlet educating one about how papaya leaves can cure cancer, and how I’m shortening my life by decades by not eating many raw fruits & vegetables. There were some studies cited, but usually for points disconnected from any actual curing or longevity-inducing results.)
# Lithium
Lithium is a well-known mood stabilizer & suicide preventative; some research suggests lithium may be a cognitively-protective nutrient, and on population levels, chronic low-dose lithium consumption through drinking water predicts lower levels of mental illness, violence, & suicide. Main article: Lithium.
Lithium orotate is sold commercially in low doses; I purchased 200 pills with 5mg of lithium each. (To put this dosage in perspective: therapeutic psychiatric doses of lithium are around 500mg of lithium carbonate - roughly 100x larger by raw weight, or ~20x by elemental lithium.) The pills are small and tasteless, and not at all hard to take.
## Lithium experiment
I experiment with a blinded, randomized trial of 5mg lithium orotate, looking for effects on mood and various measures of productivity. There is no detectable effect, good or bad.
Some suggested that the lithium would turn me into a zombie, recalling the complaints of psychiatric patients. But at 5mg elemental lithium x 200 pills, I’d have to eat 20 to get up to a single clinical dose (a psychiatric dose might be 500mg of lithium carbonate, which translates to ~100mg elemental), so I’m not worried about overdosing. To test this, I took on day 1 & 2 no less than 4 pills/20mg as an attack dose; I didn’t notice any large change in emotional affect or energy levels. And it may’ve helped my motivation (though I am also trying out the tyrosine).
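(A quick sanity-check of that carbonate-to-elemental conversion, using standard atomic weights; the calculation is mine, not from the original:)

li <- 6.94; carbon <- 12.011; oxygen <- 15.999
liFraction <- (2*li) / (2*li + carbon + 3*oxygen) # Li2CO3 is ~18.8% lithium by weight
500 * liFraction        # ~94mg elemental lithium in a 500mg carbonate dose
(500 * liFraction) / 5  # ~19 of my 5mg pills to match one clinical dose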
The effect? 3 or 4 weeks later, I’m not sure. When I began putting all of my nootropic powders into pill-form, I put half a lithium pill in each, and nevertheless ran out of lithium fairly quickly (3kg of piracetam makes for >4000 OO-size pills); those capsules were buried at the bottom of the bucket under lithium-less pills. So I suddenly went cold-turkey on lithium. Reflecting on the past 2 weeks, I seem to have been less optimistic and productive, with items now lingering on my To-Do list which I didn’t expect to. An effect? Possibly.
A real experiment is called for.
### Design
Most of the reported benefits of lithium are impossible for me to test: rates of suicide and Parkinson's are right out, but so are crime and neurogenesis (the former too rare & unusual, the latter too subtle & hard to measure), and likewise potential negatives. So we could measure:
1. mood, via daily self-report; should increase
The principal metric would be mood, however defined. Zeo's web interface & data export includes a field for Day Feel, which is a rating 1-5 of general mood & quality of day. I can record a similar metric at the end of each day. 1-5 might be a little crude even with a year of data, so a more sophisticated measure might be in order. The first mood study is paywalled, so I'm not sure what they used, but Shiotsuki 2008 used the State-Trait Anxiety Inventory (STAI) and the Profile of Mood States (POMS). The full POMS sounds too long to use daily, but the Brief POMS might work. In the original 1987 paper A brief POMS measure of distress for cancer patients, patients answering this questionnaire had a mean total score of 10.43 (standard deviation 8.87). Is this the best way to measure mood? I've asked Seth Roberts; he suggested using a 0-100 scale, but personally, there's no way I can assess my mood on 0-100. My mood is sufficiently stable (to me) that 0-5 is asking a bit much, even.
I ultimately decided to just go with the simple 0-5 scale, although it seems to have turned out to be more of a 2-4 scale! Apparently I’m not very good at introspection.
2. long-term memory (Mnemosyne 2.0’s statistics); could increase (neurogenesis), do nothing (null result), or decrease (metal poisoning)
3. working memory (dual n-back scores via Brain Workshop); like long-term memory
4. sleep (Zeo); should increase (via mood improvement)
5. time procrastinating on computer (arbtt daemon every 10-40 seconds records open & active windows; these statistics can be parsed into categories like work or play. Total time on latter categories could be a useful metric. A second metric would be number of commits to the gwern.net source repository.)
Lithium is somewhat persistent in the body, and its effects are not acute, especially in low doses; this calls for long blocked trials.
The blood half-life is 12-36 hours; hence two or three days ought to be enough to build up and wash out. A week-long block is reasonable since that gives 5 days for effects to manifest, although month-long blocks would not be a bad choice either. (I prefer blocks which fit in round periods because it makes self-experiments easier to run if the blocks fit in normal time-cycles like day/week/month. The most useless self-experiment is the one abandoned halfway.)
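(To spell out the washout arithmetic - a sketch of my own, assuming simple exponential elimination:)

## fraction of a dose still circulating d days after stopping, given a blood half-life in hours:
remaining <- function(days, halflife.hours) { 0.5 ^ (days*24 / halflife.hours) }
remaining(3, 36) # worst case (36h half-life): ~25% left after a 3-day washout
remaining(7, 36) # ~4% left by the end of a week-long block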
With subtle effects, we need a lot of data, so we want at least half a year (6 blocks) or better yet, a year (12 blocks); this requires 180 actives and 180 placebos. This is easily covered by $11 for Doctor's Best Best Lithium Orotate (5mg), 200-Count (more precisely, Lithium 5mg (from 125mg of lithium orotate)) and $14 for 1000x1g empty capsules (purchased February 2012). For convenience I settled on 168 lithium & 168 placebos (7 pill-machine batches apiece, 14 batches total); I can use them in 24 paired blocks of 7-days/1-week each (48 total blocks/48 weeks). The lithium expiration date is October 2014, so that is not a problem.
The methodology would be essentially the same as the vitamin D in the morning experiment: put a multiple of 7 placebos in one container, the same number of actives in another identical container, hide & randomly pick one of them, use container for 7 days then the other for 7 days, look inside them for the label to determine which period was active and which was placebo, refill them, and start again.
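(The randomization bookkeeping is trivial to sketch in R - hypothetical code, since the actual randomization was done physically by shuffling the two identical containers:)

set.seed(2012) # hypothetical seed
## for each of the 24 paired blocks, randomize which week is active and which is placebo:
assignments <- t(replicate(24, sample(c("lithium", "placebo"))))
colnames(assignments) <- c("week1", "week2")
head(assignments)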
### VoI
For background on value of information calculations, see the Adderall calculation.
Low-dose lithium orotate is extremely cheap, ~$10 a year. There is some research literature on it improving mood and impulse control in regular people, but some of it is epidemiological (which implies considerable unreliability); my current belief is that there is probably some effect size, but at just 5mg, it may be too tiny to matter. I have ~40% belief that there will be a large effect size, but I'm doing a long experiment and I should be able to detect a large effect size with >75% chance. So, the formula is NPV of the difference between taking and not taking, times quality of information, times expectation: $\frac{10 - 0}{\ln 1.05} \times 0.75 \times 0.40 \approx 61.5$, which justifies a time investment of less than 9 hours. As it happens, it took less than an hour to make the pills & placebos, and taking them is a matter of seconds per week, so the analysis will be the time-consuming part. This one may actually turn a profit.

### Data

1. first pair:
   1. first block started and pill taken: 11 May 2012 - 19 May: 1
   2. second block: 20 May - 27 May: 0
2. second pair:
   1. first block started and pill taken: 29 May - 4 June: 1
   2. second block: 5 June - 11 June: 0
3. third pair:
   1. first block: 12 June - 18 June: 1
   2. second block: 19 June - 25 June: 0
4. fourth pair:
   1. first block: 26 June - 2 July: 1
   2. second block: 3 July - 8 July: 0
5. fifth pair:
   1. first block: 13 July - 20 July: 1
   2. second block: 21 July - 27 July: 0
6. sixth pair:
   1. first block: 28 July - 3 August: 0
   2. second block: 4 August - 10 August: 1
7. seventh pair:
   1. first block: 11 August - 17 August: 1
   2. second block: 18 August - 24 August: 0
8. eighth pair:
   1. first block: 25 August - 31 August: 1
   2. second block: 1 September - 4 September, stopped until 24 September, finished 25 September: 0
9. I interrupted the lithium self-experiment until March 2013 in order to run the LSD microdosing self-experiment without a potential confound; ninth block pair:
   1. 12 March 2013 - 18 March: 1
   2. 19 March - 25 March: 0
10. tenth pair:
    1. 26 March - 1 April: 0
    2. 2 April - 8 April: 1
11. eleventh pair:
    1. 9 April - 15 April: 0
    2. 16 April - 21 April: 1
12. twelfth pair:
    1. 22 April - 28 April: 1
    2. 29 April - 5 May: 0
13. thirteenth pair:
    1. 6 May - 12 May: 0
    2. 13 May - 19 May: 1
14. fourteenth pair:
    1. 20 May - 26 May: 1
    2. 27 May - 2 June: 0
15. fifteenth:
    1. 5 June - 11 June: 0
    2. 12 June - 18 June: 1
16. sixteenth:
    1. 19 June - 25 June: 0
    2. 26 June - 2 July: 1
17. seventeenth:
    1. 3 July - 9 July: 0
    2. 10 July - 16 July: 1
18. eighteenth:
    1. 17 July - 23 July: 0
    2. 24 July - 28 July, 8 August - 9 August: 1
19. nineteenth:
    1. 10 August - 16 August: 0
    2. 17 August - 23 August: 1
20. twentieth:
    1. 24 August - 30 August: 0
    2. 3 September - 6 September: 1
21. twenty-first:
    1. 7 September - 13 September: 1
    2. 14 September - 20 September: 0
22. twenty-second:
    1. 21 September - 27 September: 0
    2. 28 September - 4 October: 1
23. twenty-third:
    1. 5 October - 11 October: 0
    2. 12 October - 18 October: 1
24. twenty-fourth:
    1. 20 - 26 October: 0
    2. 27 October - 2 November: 1

### Analysis

#### Preprocessing

1. lithium: hand-generated
2. MP: hand-edited into mp.csv
3. Mnemosyne daily recall scores: extracted from the database:

sqlite3 -batch ~/.local/share/mnemosyne/default.db \
    "SELECT timestamp,easiness,grade FROM log WHERE event_type==9;" | \
    tr "|" "," \
    > gwern-mnemosyne.csv

4. DNB scores: omitted because I wound up getting tired of DNB around Nov 2012 and so have no scores for most of the experiment
5. Zeo sleep: loaded from existing export; I don't expect any changes, so I will test just the ZQ
6. arbtt: supports the necessary scripting:

arbtt-stats --logfile=/home/gwern/doc/arbtt/2012-2013.log \
    --output-format="csv" --for-each="day" --min-percentage=0 > 2012-2013-arbtt.csv
arbtt-stats --logfile=/home/gwern/doc/arbtt/2013-2014.log \
    --output-format="csv" --for-each="day" --min-percentage=0 > 2013-2014-arbtt.csv

   arbtt generates cumulative time-usage for roughly a dozen overlapping tags/categories of activity of varying value. For the specific analysis, I plan to run factor analysis to extract one or two factors which seem to correlate with useful activity/work, and regress on those, instead of trying to regress on a dozen different time variables.
7. number of commits to the gwern.net source repository:

cd ~/wiki/
echo "Gwern.net.patches,Date" > ~/patchlog.txt
git log --after=2012-05-11 --before=2013-11-02 --format="%ad" --date=short master | \
    sort | uniq --count | tr --squeeze-repeats ' ' ',' | cut -d ',' -f 2,3 >> ~/patchlog.txt

Prep work (read in, extract relevant date range, combine into a single dataset, run factor analysis to extract some potentially useful variables):

lithium <- read.csv("lithium.csv")
lithium$Date <- as.Date(lithium$Date)
lithium$X <- NULL # drop the junk index column (rm() cannot delete data-frame columns)
mp <- read.csv("mp.csv") # (reconstructed read; mp.csv from step 2 above)
mp$Date <- as.Date(mp$Date)
mnemosyne <- read.csv("gwern-mnemosyne.csv", header=FALSE, # (reconstructed read; file from the sqlite3 export above)
                      col.names=c("Timestamp", "Easiness", "Grade"),
                      colClasses=c("integer", "numeric", "integer"))
mnemosyne$Date <- as.Date(as.POSIXct(mnemosyne$Timestamp, origin = "1970-01-01", tz = "EST"))
mnemosyne <- mnemosyne[mnemosyne$Date>as.Date("2012-05-11") & mnemosyne$Date<as.Date("2013-11-02"),]
mnemosyne <- aggregate(mnemosyne$Grade, by=list(mnemosyne$Date), FUN=function (x) { mean(as.vector(x));})
colnames(mnemosyne) <- c("Date", "Mnemosyne.grade") # restore the names lost by aggregate(); the merge below needs them
zeo <- read.csv("zeo.csv") # (reconstructed read of the existing Zeo export, step 5 above; filename assumed)
zeo$Sleep.Date <- as.Date(zeo$Sleep.Date, format="%m/%d/%Y")
colnames(zeo)[1] <- "Date"
zeo <- zeo[zeo$Date>as.Date("2012-05-11") & zeo$Date<as.Date("2013-11-02"),]
zeo <- zeo[,c(1:10, 23)]
zeo$Start.of.Night <- sapply(strsplit(as.character(zeo$Start.of.Night), " "), function(x) { x[[2]] })
## convert "06:45" to minutes-since-midnight, ie. 405
interval <- function(x) { if (!is.na(x)) { if (grepl(" s",x)) as.integer(sub(" s","",x))
else { y <- unlist(strsplit(x, ":"));
as.integer(y[[1]])*60 + as.integer(y[[2]]); }
}
else NA
}
zeo$Start.of.Night <- sapply(zeo$Start.of.Night, interval)
## the night 'wraps around' at ~800, so let's take 0-400 and add +800 to reconstruct 'late at night'
zeo[zeo$Start.of.Night<400,]$Start.of.Night <- (zeo[zeo$Start.of.Night<400,]$Start.of.Night + 800)
arbtt1 <- read.csv("2012-2013-arbtt.csv") # (reconstructed reads of the arbtt-stats exports above)
arbtt2 <- read.csv("2013-2014-arbtt.csv")
arbtt <- rbind(arbtt1, arbtt2)
arbtt <- arbtt[as.Date(arbtt$Day)>=as.Date("2012-05-11") & as.Date(arbtt$Day)<=as.Date("2013-11-02"),]
## rename Day -> Date, delete Percentage
arbtt <- with(arbtt, data.frame(Date=Day, Tag=Tag, Time=Time))
## Convert time-lengths to second-counts: "0:16:40" to 1000 (seconds); "7:57:30" to 28650 (seconds) etc.
## We prefer units of seconds since arbtt has sub-minute resolution and not all categories
## will have a lot of time each day.
interval <- function(x) { if (!is.na(x)) { if (grepl(" s",x)) as.integer(sub(" s","",x))
else { y <- unlist(strsplit(x, ":"));
as.integer(y[[1]])*3600 +
as.integer(y[[2]])*60 +
as.integer(y[[3]]);
}
}
else NA
}
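## spot-checks of interval() (examples added here, not in the original script):
## interval("0:16:40") == 1000 ; interval("45 s") == 45 ; interval(NA) is NA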
arbtt$Time <- sapply(as.character(arbtt$Time), interval)
library(reshape)
arbtt <- reshape(arbtt, v.names="Time", timevar="Tag", idvar="Date", direction="wide")
arbtt[is.na(arbtt)] <- 0
arbtt$Date <- as.Date(arbtt$Date)
patches <- read.csv("~/patchlog.txt") # (reconstructed read of the git-log export above)
patches$Date <- as.Date(patches$Date)
## merge all the previous data into a single data-frame:
lithiumExperiment <- merge(merge(merge(merge(merge(lithium, mp), mnemosyne, all=TRUE),
patches, all=TRUE), arbtt, all=TRUE), zeo, all=TRUE)
## no patches recorded for a day == 0 patches that day
lithiumExperiment[is.na(lithiumExperiment$Gwern.net.patches),]$Gwern.net.patches <- 0
## NA=I didn't do SRS that day; but that is bad and should be penalized!
lithiumExperiment[is.na(lithiumExperiment$Mnemosyne.grade),]$Mnemosyne.grade <- 0
productivity <- lithiumExperiment[,c(3,5:22)]
library(psych) ## for factor analysis
nfactors(productivity)
# VSS complexity 1 achieves a maximum of 0.58 with 14 factors
# VSS complexity 2 achieves a maximum of 0.67 with 14 factors
# The Velicer MAP achieves a minimum of 0.02 with 1 factors
# Empirical BIC achieves a minimum of -304.3 with 4 factors
# Sample Size adjusted BIC achieves a minimum of -97.84 with 7 factors
#
# Statistics by number of factors
# vss1 vss2 map dof chisq prob sqresid fit RMSEA BIC SABIC complex eChisq eRMS
# 1 0.16 0.00 0.016 152 1.3e+03 2.6e-190 20.4 0.16 0.122 389.4 871.9 1.0 2.1e+03 1.1e-01
# 2 0.27 0.31 0.022 134 7.8e+02 1.9e-91 16.7 0.31 0.095 -65.2 360.1 1.3 1.1e+03 7.9e-02
# 3 0.30 0.40 0.021 117 4.9e+02 5.2e-47 14.3 0.41 0.078 -247.2 124.2 1.6 7.0e+02 6.2e-02
# 4 0.39 0.47 0.024 101 2.5e+02 4.1e-14 12.1 0.50 0.052 -389.8 -69.2 1.7 3.4e+02 4.3e-02
# 5 0.39 0.51 0.028 86 1.9e+02 2.5e-10 11.2 0.54 0.049 -347.4 -74.4 1.7 2.4e+02 3.6e-02
# 6 0.41 0.53 0.034 72 1.4e+02 7.9e-06 10.3 0.57 0.041 -317.3 -88.8 1.6 1.7e+02 3.1e-02
# 7 0.44 0.54 0.041 59 8.6e+01 1.2e-02 9.6 0.60 0.030 -285.1 -97.8 1.8 1.1e+02 2.5e-02
# 8 0.40 0.52 0.050 47 1.1e+02 1.4e-07 9.9 0.59 0.053 -181.2 -32.0 2.0 2.0e+02 3.3e-02
# 9 0.48 0.57 0.063 36 4.6e+01 1.1e-01 8.3 0.66 0.024 -180.2 -65.9 1.7 6.0e+01 1.8e-02
# 10 0.51 0.62 0.079 26 1.9e+01 8.3e-01 7.2 0.70 0.000 -144.6 -62.1 1.6 1.9e+01 1.0e-02
# 11 0.52 0.62 0.098 17 1.4e+01 6.8e-01 6.7 0.72 0.000 -93.2 -39.3 1.7 1.5e+01 9.0e-03
# 12 0.52 0.61 0.124 9 1.1e+01 3.1e-01 6.7 0.72 0.020 -46.1 -17.5 1.6 1.3e+01 8.3e-03
# 13 0.48 0.61 0.163 2 4.9e+00 8.6e-02 6.3 0.74 0.053 -7.7 -1.3 1.8 6.2e+00 5.8e-03
# 14 0.58 0.67 0.210 -4 7.5e-03 NA 4.9 0.80 NA NA NA 1.8 9.0e-03 2.2e-04
# 15 0.56 0.64 0.293 -9 4.6e-06 NA 5.3 0.78 NA NA NA 2.0 6.1e-06 5.7e-06
# 16 0.53 0.62 0.465 -13 8.7e-07 NA 5.5 0.77 NA NA NA 2.1 8.6e-07 2.2e-06
# 17 0.51 0.61 0.540 -16 9.3e-12 NA 5.6 0.77 NA NA NA 2.1 1.1e-11 7.8e-09
# 18 0.51 0.61 1.000 -18 7.0e-10 NA 5.6 0.77 NA NA NA 2.1 7.8e-10 6.5e-08
# 19 0.51 0.61 NA -19 0.0e+00 NA 5.6 0.77 NA NA NA 2.1 6.2e-25 1.8e-15
# eCRMS eBIC
# 1 0.112 1107.9
# 2 0.089 303.3
# 3 0.075 -31.6
# 4 0.055 -300.5
# 5 0.050 -304.3
# 6 0.047 -280.3
# 7 0.042 -257.4
# 8 0.062 -97.2
# 9 0.039 -167.1
# 10 0.026 -144.7
# 11 0.028 -92.1
# 12 0.036 -44.0
# 13 0.054 -6.4
# 14 NA NA
# 15 NA NA
# 16 NA NA
# 17 NA NA
# 18 NA NA
# 19 NA NA
factorization <- fa(productivity, nfactors=4); factorization
# MR3 MR1 MR2 MR4 h2 u2 com
# MP 0.05 0.01 -0.02 0.34 0.1241 0.876 1.1
# Gwern.net.patches -0.04 0.01 0.01 0.48 0.2241 0.776 1.0
# Time.WWW 0.98 -0.04 -0.10 0.02 0.9778 0.022 1.0
# Time.X 0.49 0.29 0.47 -0.03 0.5801 0.420 2.6
# Time.IRC 0.35 -0.06 -0.14 0.16 0.1918 0.808 1.8
# Time.Writing 0.04 -0.01 0.04 0.69 0.4752 0.525 1.0
# Time.Stats 0.42 -0.10 0.30 0.01 0.2504 0.750 1.9
# Time.PDF -0.09 -0.05 0.98 0.00 0.9791 0.021 1.0
# Time.Music 0.10 -0.10 0.02 0.03 0.0196 0.980 2.2
# Time.Rec 0.03 0.99 -0.03 -0.02 0.9950 0.005 1.0
# Time.SRS 0.06 -0.06 0.07 0.10 0.0209 0.979 3.4
# Time.Sysadmin 0.22 0.13 -0.04 0.13 0.0953 0.905 2.4
# Time.DNB -0.04 -0.05 -0.06 0.07 0.0149 0.985 3.3
# Time.Bitcoin 0.15 -0.07 -0.07 -0.04 0.0306 0.969 2.1
# Time.Blackmarkets 0.18 -0.09 -0.08 0.02 0.0470 0.953 1.9
# Time.Programming -0.04 0.05 -0.04 0.43 0.1850 0.815 1.1
# Time.Backups -0.09 0.06 -0.01 0.04 0.0114 0.989 2.4
# Time.Umineko -0.16 0.71 -0.03 0.06 0.5000 0.500 1.1
# Time.Typing -0.03 -0.04 0.02 -0.01 0.0034 0.997 2.4
#
# MR3 MR1 MR2 MR4
# Proportion Var 0.09 0.09 0.07 0.06
# Cumulative Var 0.09 0.17 0.24 0.30
# Proportion Explained 0.29 0.29 0.23 0.19
# Cumulative Proportion 0.29 0.58 0.81 1.00
#
# With factor correlations of
# MR3 MR1 MR2 MR4
# MR3 1.00 0.12 -0.05 0.10
# MR1 0.12 1.00 0.07 -0.08
# MR2 -0.05 0.07 1.00 -0.08
# MR4 0.10 -0.08 -0.08 1.00
#
# Mean item complexity = 1.8
# Test of the hypothesis that 4 factors are sufficient.
#
# The degrees of freedom for the null model are 171
# and the objective function was 3.08 with Chi Square of 1645
# The degrees of freedom for the model are 101 and the objective function was 0.46
#
# The root mean square of the residuals (RMSR) is 0.04
# The df corrected root mean square of the residuals is 0.06
#
# The harmonic number of observations is 538 with the empirical chi square 332.7 with prob < 1.6e-26
# The total number of observations was 542 with MLE Chi Square = 246 with prob < 4.1e-14
#
# Tucker Lewis Index of factoring reliability = 0.832
# RMSEA index = 0.052 and the 90 % confidence intervals are 0.043 0.06
# BIC = -389.8
# Fit based upon off diagonal values = 0.88
# Measures of factor score adequacy
# MR3 MR1 MR2 MR4
# Correlation of scores with factors 0.99 1.00 0.99 0.79
# Multiple R square of scores with factors 0.98 0.99 0.98 0.63
# Minimum correlation of possible factor scores 0.95 0.99 0.96 0.25
## I interpret MR3=Internet+Stats usage; MR1=goofing off; MR2=reading/stats; MR4=writing
## I don't care about MR1, so we'll look for effects on 3/2/4:
lithiumExperiment$MR3 <- predict(factorization, data=productivity)[,1]
lithiumExperiment$MR2 <- predict(factorization, data=productivity)[,3]
lithiumExperiment$MR4 <- predict(factorization, data=productivity)[,4] # MR4 = writing, the third factor of interest per the note above
# LLLT

Since LLLT was so cheap, seemed safe, was interesting, just trying it would involve minimal effort, and it would be a favor to lostfalco, I decided to try it. I purchased off eBay a $13 48 LED illuminator light IR Infrared Night Vision+Power Supply For CCTV. Auto Power-On Sensor, only turn-on when the surrounding is dark. IR LED wavelength: 850nm. Powered by DC 12V 500mA adaptor. It arrived in 4 days, on 7 September 2013. It fits handily in my palm. My cellphone camera verified it worked and emitted infrared - important because there's no visible light at all (except in complete darkness I can make out a faint red light), no noise, no apparent heat (it took about 30 minutes before the lens or body warmed up noticeably when I left it on a table). This was good, since I worried that there would be heat or noise which made blinding impossible; all I had to do was figure out how to randomly turn the power on, and I could run blinded self-experiments with it.

My first time was relatively short: 10 minutes around the F3/F4 points, with another 5 minutes to the forehead. Awkward holding it up against one's head, and I see why people talk of LED helmets; it's boring waiting. No initial impressions except maybe feeling a bit mentally cloudy, but that goes away within 20 minutes of finishing, when I took a nap outside in the sunlight. Lostfalco says Expectations: You will be tired after the first time for 2 to 24 hours. It's perfectly normal., but I'm not sure - my dog woke me up very early and disturbed my sleep, so maybe that's why I felt suddenly tired. On the second day, I escalated to 30 minutes on the forehead, and tried an hour on my finger joints. No particular observations except less tiredness than before and perhaps less joint ache. Third day: skipped forehead stimulation, exclusively knee & ankle. Fourth day: forehead at various spots for 30 minutes; tiredness. 5/6/7/8th day (11/12/13/14th): skipped. Ninth: forehead, 20 minutes. No noticeable effects.

## Pilot

At this point I began to get bored with it and the lack of apparent effects, so I began a pilot trial: I'd use the LED set for 10 minutes every few days before 2PM, record it, and in a few months look for a correlation with my daily self-ratings of mood/productivity (for 2.5 years I've asked myself at the end of each day whether I did more, the usual, or less work done that day than average, so 2=below-average, 3=average, 4=above-average; it's ad hoc, but in some factor analyses I've been playing with, it seems to load on a lot of other variables I've measured, so I think it's meaningful).

On 15 March 2014, I disabled the light sensor: the complete absence of subjective effects since the first sessions made me wonder if the LED device was even turning on - a little bit of ambient light seems to disable it thanks to the light sensor. So I stuffed the sensor full of putty, verified it was now always-on with the cellphone camera, and began again; this time it seemed to warm up much faster, making me wonder if all the previous sessions' sense of warmth was simply heat from my hand holding the LEDs.

In late July 2014, I was cleaning up my rooms and was tired of LLLT, so I decided to chuck the LED device. But before I did that, I might as well analyze the data. That left me with 329 days of data. The results are that (correcting for the magnesium citrate self-experiment I was running during the time period, which did not turn out too great) days on which I happened to use my LED device for LLLT were much better than regular days.
Below is a graph showing the entire MP data-series, with LOESS-smoothed lines showing LLLT vs non-LLLT days:

### LLLT pilot analysis

The correlation of LLLT usage with higher MP self-rating is fairly large (r=0.19 / d=0.455) and statistically-significant (p=0.0006). I have no particularly compelling story for why this might be a correlation and not causation. It could be placebo, but I wasn't expecting that. It could be a selection effect (days on which I bothered to use the annoying LED set are better days), but then I'd expect the off-days to be below-average, and compared to the 2 years of trendline before, there doesn't seem to be much of a fall.

The R code:

lllt <- read.csv("https://www.gwern.net/docs/nootropics/2014-08-03-lllt-correlation.csv")
l <- lm(MP ~ LLLT + as.logical(Magnesium.citrate) + as.integer(Date) +
            as.logical(Magnesium.citrate):as.integer(Date),
        data=lllt); summary(l)
# ...Coefficients:
#                                                        Estimate   Std. Error  t value   Pr(>|t|)
# (Intercept)                                         4.037702597  0.616058589  6.55409 5.0282e-10
# LLLTTRUE                                            0.330923350  0.095939634  3.44929 0.00069087
# as.logical(Magnesium.citrate)TRUE                   0.963379487  0.842463568  1.14353 0.25424378
# as.integer(Date)                                   -0.001269089  0.000880949 -1.44059 0.15132856
# as.logical(Magnesium.citrate)TRUE:as.integer(Date) -0.001765953  0.001213804 -1.45489 0.14733212
0.330923350 / sd(lllt$MP, na.rm=TRUE)
# [1] 0.455278787
cor.test(lllt$MP, as.integer(lllt$LLLT))
#
# Pearson's product-moment correlation
#
# data: lllt$MP and as.integer(lllt$LLLT)
# t = 3.4043, df = 327, p-value = 0.0007458
# alternative hypothesis: true correlation is not equal to 0
# 95% confidence interval:
# 0.0784517682 0.2873891665
# sample estimates:
# cor
# 0.185010342
## check whether there's anything odd about non-LLLT days by expanding to include baseline
llltImputed <- lllt
llltImputed[is.na(llltImputed)] <- 0
llltImputed[llltImputed$MP == 0,]$MP <- 3 # clean up an outlier using median
summary(lm(MP ~ LLLT + as.logical(Magnesium.citrate) + as.integer(Date) +
as.logical(Magnesium.citrate):as.integer(Date),
data=llltImputed))
# ...Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 2.959172295 0.049016571 60.37085 < 2.22e-16
# LLLT 0.336886970 0.083731179 4.02344 6.2212e-05
# as.logical(Magnesium.citrate)TRUE 2.155586397 0.619675529 3.47857 0.00052845
# as.integer(Date) 0.000181441 0.000103582 1.75166 0.08017565
# as.logical(Magnesium.citrate)TRUE:as.integer(Date) -0.003373682 0.000904342 -3.73054 0.00020314
power.t.test(power=0.8,
             delta=(0.336886970 / sd(lllt$MP, na.rm=TRUE)),
             type="paired", alternative="one.sided")
#      Paired t test power calculation
#
#               n = 30.1804294
#           delta = 0.463483435
#              sd = 1
#       sig.level = 0.05
#           power = 0.8
#     alternative = one.sided
#
# NOTE: n is number of *pairs*, sd is std.dev. of *differences* within pairs
library(ggplot2)
llltImputed$Date <- as.Date(llltImputed$Date)
ggplot(data = llltImputed, aes(x=Date, y=MP, col=as.logical(llltImputed$LLLT))) +
geom_point(size=I(3)) +
stat_smooth() +
scale_colour_manual(values=c("gray49", "green"),
name = "LLLT")
So, I have started a randomized experiment; it should take 2 months, given the size of the correlation. If that turns out to be successful too, I'll have to look into methods of blinding - for example, some sort of electronic doohickey which turns on randomly half the time and which records whether it's on somewhere one can't see. (Then for the experiment, one hooks up the LED, turns the doohickey on, and applies it directly to the forehead, checking the next morning to see whether it was really on or off.)
#### Sleep
One reader notes that for her, the first weeks of LLLT usage seemed to be accompanied by sleeping longer than usual. Did I experience anything similar? There doesn’t appear to be any particular effect on total sleep or other sleep variables:
lllt <- read.csv("https://www.gwern.net/docs/nootropics/2014-08-03-lllt-correlation.csv")
lllt$Date <- as.Date(lllt$Date)
zeo$Date <- as.Date(zeo$Sleep.Date, format="%m/%d/%Y")
sleepLLLT <- merge(lllt, zeo, all=TRUE)
l <- lm(cbind(Start.of.Night, Time.to.Z, Time.in.Wake, Awakenings, Time.in.REM, Time.in.Light, Time.in.Deep, Total.Z, ZQ, Morning.Feel) ~ LLLT, data=sleepLLLT)
summary(manova(l))
## Df Pillai approx F num Df den Df Pr(>F)
## LLLT 1 0.04853568 1.617066 10 317 0.10051
## Residuals 326
library(ggplot2)
qplot(sleepLLLT$Date, sleepLLLT$Total.Z, color=sleepLLLT$LLLT)

#### LLLT pilot factor analysis

Factor-analyzing several other personal datasets into 8 factors while omitting the previous MP variable, I find LLLT correlates with personal-productivity-related factors, but less convincingly than MP, suggesting the previous result is not quite as good as it seems.

My worry about the MP variable is that, plausible or not, it does seem relatively weak against manipulation; other variables I could look at, like arbtt window-tracking of how I spend my computer time, or size of edits to my files, or spaced repetition performance, would be harder to manipulate. If it's all due to MP, then if I remove the MP and LLLT variables, and summarize all the other variables with factor analysis into 2 or 3 variables, then I should see no increases in them when I put LLLT back in and look for a correlation between the factors & LLLT with a multivariate regression.

Preparation of data:

lllt <- read.csv("~/wiki/docs/nootropics/2014-08-03-lllt-correlation.csv",
                 colClasses=c("Date", rep("integer", 4), "logical"))
lllt <- data.frame(Date=lllt$Date, LLLT=lllt$LLLT)
mp <- read.csv("~/selfexperiment/mp.csv", colClasses=c("Date", "integer"))
creativity <- read.csv("~/selfexperiment/dailytodo-marchjunecreativity.csv",
                       colClasses=c("Date", "integer"))
mnemosyne <- read.csv("~/selfexperiment/mnemosyne.csv", header=FALSE,
                      col.names=c("Timestamp", "Easiness", "Grade"),
                      colClasses=c("integer", "numeric", "integer"))
mnemosyne$Timestamp <- as.POSIXct(mnemosyne$Timestamp, origin = "1970-01-01", tz = "EST")
mnemosyne$Date <- as.Date(mnemosyne$Timestamp)
mnemosyne <- aggregate(Grade ~ Date, mnemosyne, mean)
mnemosyne$Average.Spaced.repetition.score <- mnemosyne$Grade
mnemosyne$Grade <- NULL
dnb <- read.csv("~/selfexperiment/dnb.csv", header=FALSE) # (reconstructed read; exact filename assumed)
dnb$V1 <- as.POSIXct(dnb$V1, format="%Y-%m-%d %R:%S")
dnb <- dnb[!is.na(dnb$V1),]
dnb <- with(dnb, data.frame(Timestamp=V1, Nback.type=V2, Percentage=V3))
dnb$Date <- as.Date(dnb$Timestamp)
dnbDaily <- aggregate(Percentage ~ Date + Nback.type, dnb, mean)
arbtt1 <- read.csv("~/selfexperiment/2012-2013-arbtt.txt")
arbtt2 <- read.csv("~/selfexperiment/2013-2014-arbtt.txt")
arbtt <- rbind(arbtt1, arbtt2)
arbtt$Percentage <- NULL # drop the column (rm() cannot delete data-frame columns)
interval <- function(x) { if (!is.na(x)) { if (grepl(" s",x)) as.integer(sub(" s","",x))
else { y <- unlist(strsplit(x, ":"));
as.integer(y[[1]])*3600 +
as.integer(y[[2]])*60 +
as.integer(y[[3]]);
}
}
else NA
}
arbtt$Time <- sapply(as.character(arbtt$Time), interval)
library(reshape)
arbtt <- reshape(arbtt, v.names="Time", timevar="Tag", idvar="Day", direction="wide")
arbtt$Date <- as.Date(arbtt$Day)
arbtt$Day <- NULL
arbtt[is.na(arbtt)] <- 0
patches <- read.csv("~/selfexperiment/patchlog-gwern.net.txt",
                    colClasses=c("integer", "Date"))
patches$Gwern.net.patches.log <- log1p(patches$Gwern.net.patches)
# modified lines per day is much harder: state machine to sum lines until it hits the next date
patchCount <- scan(file="~/selfexperiment/patchlog-linecount-gwern.net.txt", character(), sep = "\n")
patchLines <- new.env()
for (i in 1:length(patchCount)) {
    if (grepl("\t", patchCount[i])) {
       patchLines[[date]] <- patchLines[[date]] + sum(sapply(strsplit(patchCount[i], "\t"), as.integer))
    }
    else { date <- patchCount[i]
           patchLines[[date]] <- 0
    }
}
patchLines <- as.list(patchLines)
patchLines <- data.frame(Date = rep(names(patchLines), sapply(patchLines, length)),
                         Gwern.net.linecount = unlist(patchLines))
row.names(patchLines) <- NULL
patchLines$Date <- as.Date(patchLines$Date)
patchLines$Gwern.net.linecount.log <- log1p(patchLines$Gwern.net.linecount)
firstDay <- patches$Date[1]; lastDay <- patches$Date[nrow(patches)]
patches <- merge(merge(patchLines, patches, all=TRUE),
                 data.frame(Date=seq(firstDay, lastDay, by="day")),
                 all=TRUE)
# if entries are missing, they == 0
patches[is.na(patches)] <- 0
# combine all the data:
llltData <- merge(merge(merge(merge(merge(lllt, mp, all=TRUE), creativity, all=TRUE),
                              dnbDaily, all=TRUE), arbtt, all=TRUE), patches, all=TRUE)
write.csv(llltData, file="2014-08-08-lllt-correlation-factoranalysis.csv", row.names=FALSE)

Factor analysis. The strategy: read in the data, drop unnecessary data, impute missing variables (the data is too heterogeneous and collected starting at varying intervals to be clean), estimate how many factors would fit best, factor-analyze, pick the ones which look like they best match my ideas of what productive is, extract per-day estimates, and finally regress LLLT usage on the selected factors to look for increases.
lllt <- read.csv("https://www.gwern.net/docs/nootropics/2014-08-08-lllt-correlation-factoranalysis.csv")
## the log transforms are more useful:
lllt$Date <- NULL; lllt$Nback.type <- NULL
lllt$Gwern.net.linecount <- NULL; lllt$Gwern.net.patches <- NULL
## https://stats.stackexchange.com/questions/28576/filling-nas-in-a-dataset-with-column-medians-in-r
imputeColumnAsMedian <- function(x){
   x[is.na(x)] <- median(x, na.rm=TRUE) # convert the items with NA to the median value of the column
   x # return the column
}
llltI <- data.frame(apply(lllt, 2, imputeColumnAsMedian))
library(psych)
nfactors(llltI[-c(1,2)])
# VSS complexity 1 achieves a maximum of 0.56 with 16 factors
# VSS complexity 2 achieves a maximum of 0.66 with 16 factors
# The Velicer MAP achieves a minimum of 0.01 with 1 factors
# Empirical BIC achieves a minimum of -280.23 with 8 factors
# Sample Size adjusted BIC achieves a minimum of -135.77 with 9 factors
fa.parallel(llltI[-c(1,2)], n.iter=2000)
# Parallel analysis suggests that the number of factors = 7 and the number of components = 7
## split the difference between sample-size adjusted BIC and parallel analysis with 8:
factorization <- fa(llltI[-c(1,2)], nfactors=8); factorization
# Standardized loadings (pattern matrix) based upon correlation matrix
#                           MR6   MR1   MR2   MR4   MR3   MR5   MR7   MR8     h2    u2 com
# Creativity.self.rating   0.22  0.06 -0.04  0.08 -0.04  0.02 -0.05 -0.14 0.0658 0.934 2.5
# Percentage              -0.05 -0.02  0.01  0.01  0.00 -0.42  0.02  0.02 0.1684 0.832 1.0
# Time.X                  -0.04  0.11  0.04  0.88 -0.02  0.01  0.01  0.02 0.8282 0.172 1.0
# Time.PDF                 0.02  0.99 -0.02  0.04  0.02  0.00 -0.01 -0.01 0.9950 0.005 1.0
# Time.Stats              -0.10  0.21  0.12  0.16 -0.04  0.04  0.12  0.25 0.2310 0.769 4.3
# Time.IRC                 0.01 -0.02  0.99  0.02  0.02  0.01  0.00 -0.01 0.9950 0.005 1.0
# Time.Writing             0.01 -0.02  0.01  0.04 -0.01 -0.03  0.68  0.04 0.4720 0.528 1.0
# Time.Rec                 0.20 -0.12 -0.06  0.42  0.62 -0.02 -0.07 -0.01 0.8501 0.150 2.2
# Time.Music              -0.05  0.05  0.02  0.02 -0.04  0.22  0.02  0.13 0.0909 0.909 2.0
# Time.SRS                -0.07  0.09  0.08  0.00  0.00  0.08  0.06  0.16 0.0702 0.930 3.6
# Time.Sysadmin            0.05 -0.09 -0.04  0.15  0.07  0.01  0.14  0.42 0.2542 0.746 1.7
# Time.Bitcoin             0.45  0.02  0.25 -0.07 -0.03 -0.09 -0.04  0.11 0.3581 0.642 1.9
# Time.Backups             0.22  0.10 -0.08 -0.19  0.12  0.13  0.02  0.27 0.1809 0.819 4.3
# Time.Blackmarkets        0.62 -0.01  0.06 -0.02 -0.09 -0.01 -0.04  0.15 0.4442 0.556 1.2
# Time.Programming         0.06 -0.01 -0.01 -0.04  0.07  0.08  0.41 -0.07 0.1790 0.821 1.3
# Time.DNB                -0.01 -0.01  0.02  0.01 -0.01  0.76 -0.01  0.00 0.5800 0.420 1.0
# Time.Typing             -0.04  0.05  0.02  0.01 -0.02 -0.01  0.00 -0.01 0.0054 0.995 2.9
# Time.Umineko            -0.10  0.08  0.06 -0.15  0.77 -0.01  0.03  0.02 0.5082 0.492 1.2
# Gwern.net.linecount.log  0.65  0.03  0.00 -0.03  0.04  0.00  0.10 -0.13 0.4223 0.578 1.1
# Gwern.net.patches.log    0.11  0.02 -0.01  0.02  0.00  0.06  0.29 -0.06 0.1001 0.900 1.5
#
#                        MR6  MR1  MR2  MR4  MR3  MR5  MR7  MR8
# SS loadings           1.24 1.12 1.13 1.13 1.05 0.86 0.80 0.48
# Proportion Var        0.06 0.06 0.06 0.06 0.05 0.04 0.04 0.02
# Cumulative Var        0.06 0.12 0.17 0.23 0.28 0.33 0.37 0.39
# Proportion Explained  0.16 0.14 0.14 0.14 0.13 0.11 0.10 0.06
# Cumulative Proportion 0.16 0.30 0.45 0.59 0.73 0.84 0.94 1.00
#
# With factor correlations of
#       MR6   MR1   MR2   MR4   MR3   MR5   MR7  MR8
# MR6  1.00 -0.13  0.26  0.15  0.22 -0.09  0.03 0.16
# MR1 -0.13  1.00 -0.12  0.25 -0.05  0.06  0.12 0.11
# MR2  0.26 -0.12  1.00  0.04 -0.04  0.10  0.20 0.19
# MR4  0.15  0.25  0.04  1.00  0.32  0.01 -0.05 0.10
# MR3  0.22 -0.05 -0.04  0.32  1.00 -0.04 -0.07 0.00
# MR5 -0.09  0.06  0.10  0.01 -0.04  1.00  0.11 0.11
# MR7  0.03  0.12  0.20 -0.05 -0.07  0.11  1.00 0.20
# MR8  0.16  0.11  0.19  0.10  0.00  0.11  0.20 1.00
#
# Mean item complexity = 1.9
# Test of the hypothesis that 8 factors are sufficient.
#
# The degrees of freedom for the null model are 190
# and the objective function was 2.46 with Chi Square of 5344.68
# The degrees of freedom for the model are 58 and the objective function was 0.07
#
# The root mean square of the residuals (RMSR) is 0.02
# The df corrected root mean square of the residuals is 0.03
#
# The harmonic number of observations is 2178 with the empirical chi square 190.08 with prob < 5.9e-16
# The total number of observations was 2178 with MLE Chi Square = 149.65 with prob < 4.9e-10
#
# Tucker Lewis Index of factoring reliability = 0.942
# RMSEA index = 0.027 and the 90 % confidence intervals are 0.022 0.032
# BIC = -296.15
# Fit based upon off diagonal values = 0.98
# Measures of factor score adequacy
#                                                MR6  MR1  MR2  MR4  MR3  MR5  MR7   MR8
# Correlation of scores with factors            0.84 1.00 1.00 0.93 0.89 0.80 0.76  0.64
# Multiple R square of scores with factors      0.70 0.99 0.99 0.86 0.79 0.63 0.58  0.41
# Minimum correlation of possible factor scores 0.40 0.99 0.99 0.71 0.58 0.27 0.16 -0.19

The important factors seem to be: #1/MR6 (Creativity.self.rating, Time.Bitcoin, Time.Backups, Time.Blackmarkets, Gwern.net.linecount.log), #2/MR1 (Time.PDF, Time.Stats), #7/MR7 (Time.Writing, Time.Sysadmin, Time.Programming, Gwern.net.patches.log), and #8/MR8 (Time.Stats, Time.SRS, Time.Sysadmin, Time.Backups, Time.Blackmarkets). The rest seem to be time-wasting or reflect dual n-back/DNB usage (which is not relevant in the LLLT time period). So we want to extract and look at factors #1/2/7/8 (MR6/1/7/8):

lllt$MR6 <- predict(factorization, data=llltI[-c(1,2)])[,1]
lllt$MR1 <- predict(factorization, data=llltI[-c(1,2)])[,2]
lllt$MR7 <- predict(factorization, data=llltI[-c(1,2)])[,7]
lllt$MR8 <- predict(factorization, data=llltI[-c(1,2)])[,8]
l <- lm(cbind(MR6, MR1, MR7, MR8) ~ LLLT, data=lllt); summary(l)
# Response MR6 :
# Coefficients:
#               Estimate Std. Error  t value Pr(>|t|)
# (Intercept)  1.5307773  0.0736275 20.79085  < 2e-16
# LLLTTRUE     0.1319675  0.1349040  0.97823  0.32868
#
# Response MR1 :
# Coefficients:
#                Estimate Std. Error  t value  Pr(>|t|)
# (Intercept) -0.1675241  0.0609841 -2.74701 0.0063473
# LLLTTRUE     0.0317851  0.1117381  0.28446 0.7762378
#
# Response MR7 :
# Coefficients:
#                Estimate Std. Error  t value Pr(>|t|)
# (Intercept) -0.0924052  0.0709438 -1.30251 0.193658
# LLLTTRUE     0.2556655  0.1299869  1.96686 0.050045
#
# Response MR8 :
# Coefficients:
#               Estimate Std. Error t value Pr(>|t|)
# (Intercept) 0.0741850  0.0687618 1.07887  0.28144
# LLLTTRUE    0.1380131  0.1259889 1.09544  0.27413
0.2556655 / sd(lllt$MR7)
# [1] 0.335510445
summary(manova(l))
# Df Pillai approx F num Df den Df Pr(>F)
# LLLT 1 0.01372527 1.127218 4 324 0.34355
# Residuals 327
All of the coefficients are positive, as one would hope, and one specific factor (MR7) squeaks in at d=0.34 (p=0.05). The graph is much less impressive than the graph for just MP, suggesting that the correlation may be spread out over a lot of factors, the current dataset isn’t doing a good job of capturing the effect compared to the MP self-rating, or it really was a placebo effect:
library(ggplot2)
llltRecent <- lllt[!is.na(lllt$LLLT),] # assumption: subset to days with recorded LLLT usage (llltRecent is otherwise undefined)
llltRecent$index <- 1:nrow(llltRecent)
qplot(index, MR7, color=LLLT, data=llltRecent) +
    geom_point(size=I(3)) +
    stat_smooth() +
    scale_colour_manual(values=c("gray49", "green"), name = "LLLT")

The concentration in one factor leaves me a bit dubious. We'll see what the experiment turns up.

## Experiment

A randomized non-blind self-experiment of LLLT 2014-2015 yields a causal effect which is several times smaller than the correlative analysis's, and non-statistically-significant/very weak Bayesian evidence for a positive effect. This suggests that the earlier result had been driven primarily by reverse causation, and that my LLLT usage has little or no benefits.

Following up on the promising but unrandomized pilot, I began randomizing my LLLT usage, since I worried that more productive days were causing use rather than vice-versa. I began on 2 August 2014, and the last day was 3 March 2015 (n=167); this was twice the sample size I thought I needed, and I stopped, as before, as part of cleaning up (I wanted to know whether to get rid of it or not). The procedure was simple: by noon, I flipped a bit and either did or did not use my LED device; if I was distracted or didn't get around to randomization by noon, I skipped the day. This was an unblinded experiment because finding a randomized on/off switch is tricky/expensive, and it was easier to just start the experiment already.

The question is simple too: controlling for the simultaneous blind magnesium experiment & my rare nicotine use (I did not use modafinil during this period or anything else I expect to have major influence), is the pilot correlation of d=0.455 on my daily self-ratings borne out by the experiment?

llltRandom <- read.csv("https://www.gwern.net/docs/nootropics/2015-lllt-random.csv",
                       colClasses=c("Date", "logical", "integer", "logical", "logical"))
# impute magnesium data: that randomized experiment started a month later
llltRandom[is.na(llltRandom$Magnesium.random),]$Magnesium.random <- 0
l <- lm(MP ~ LLLT.random + Nicotine + Magnesium.random, data=llltRandom); summary(l); confint(l)
# ...Coefficients:
#                    Estimate Std. Error  t value Pr(>|t|)
# (Intercept)      3.28148626 0.06856553 47.85912  < 2e-16
# LLLT.randomTRUE  0.04099628 0.09108322  0.45010  0.65324
# NicotineTRUE     0.21152245 0.26673557  0.79300  0.42893
# Magnesium.random 0.10299190 0.09312616  1.10594  0.27038
#
# Residual standard error: 0.5809214 on 163 degrees of freedom
#   (47 observations deleted due to missingness)
# Multiple R-squared: 0.01519483,  Adjusted R-squared: -0.002930415
# F-statistic: 0.8383241 on 3 and 163 DF,  p-value: 0.474678
#
#                          2.5 %       97.5 %
# (Intercept)       3.14609507948 3.4168774481
# LLLT.randomTRUE  -0.13885889747 0.2208514560
# NicotineTRUE     -0.31518017129 0.7382250752
# Magnesium.random -0.08089731164 0.2868811034
0.04099628 / sd(llltRandom$MP)
# [1] 0.0701653002
library(ggplot2)
ggplot(data = llltRandom, aes(x=Date, y=MP, col=as.logical(llltRandom$LLLT.random))) +
    geom_point(size=I(3)) +
    stat_smooth() +
    scale_colour_manual(values=c("gray49", "blue"), name = "LLLT")

The estimate of the causal effect of LLLT+placebo is not statistically-significant, and the effect size of +0.04 / d=0.07 is much smaller than the pilot's d=0.455 (~15% of it); the original pilot's point estimate of +0.33 is excluded by the new confidence interval (95% CI: -0.13 to +0.22).

I have strong priors about the possible effects of LLLT, nicotine & magnesium (specifically, I know from experience that they tend to be small), so a Bayesian linear model using JAGS is useful for letting me take that into account and also producing more meaningful results (probabilities, rather than p-values):

## JAGS won't automatically drop rows with missing variables like lm does by default
llltClean <- llltRandom[!is.na(llltRandom$LLLT.random),]
library(rjags)
library(R2jags)
model1<-"
model {
for (i in 1:n) {
MP[i] ~ dnorm(MP.hat[i], tau)
MP.hat[i] <- a + b1*LLLT.random[i] + b2*Nicotine[i] + b3*Magnesium.random[i]
}
# intercept
a ~ dnorm(3, 4) # precision 4 ~= 0.5^-2 ~= SD 0.5, the historical SD of my MPs
# coefficients
## informative prior: effects should be <0.5 usually, and >0.3 is unusual
b1 ~ dnorm(0, 13) # precision 13 ~= SD 0.3
b2 ~ dnorm(0, 13)
b3 ~ dnorm(0, 13)
# informative prior: MP only ranges 2-5, which doesn't allow for much variance
sigma ~ dunif(0, 1)
# convert SD to 'precision' unit that JAGS's distributions use instead
tau <- pow(sigma, -2)
}
"
j1 <- with(llltClean, jags(data=list(n=nrow(llltClean), MP=MP, LLLT.random=LLLT.random,
Nicotine=Nicotine, Magnesium.random=Magnesium.random),
parameters.to.save=c("b1", "b2", "b3"),
model.file=textConnection(model1),
n.chains=getOption("mc.cores"), n.iter=1000000))
print(j1, intervals=c(0.0001, 0.5, 0.9999))
# Inference for Bugs model at "4", fit using jags,
# 4 chains, each with 1e+06 iterations (first 5e+05 discarded), n.thin = 500
# n.sims = 4000 iterations saved
# mu.vect sd.vect 0.01% 50% 99.99% Rhat n.eff
# b1 0.042 0.087 -0.276 0.041 0.326 1.002 2100
# b2 0.114 0.194 -0.533 0.114 0.745 1.001 4000
# b3 0.100 0.088 -0.266 0.100 0.412 1.001 4000
# deviance 293.023 2.864 288.567 292.420 314.947 1.001 4000
#
# For each parameter, n.eff is a crude measure of effective sample size,
# and Rhat is the potential scale reduction factor (at convergence, Rhat=1).
#
# DIC info (using the rule, pD = var(deviance)/2)
# pD = 4.1 and DIC = 297.1
# DIC is an estimate of expected predictive error (lower deviance is better).
This analysis suggests that there's a 95% probability the effect is somewhere between -0.129 & 0.208 (d=-0.22 to d=0.35), similar to the original linear model's CI. More relevantly: there is only a 70% probability that the effect is >0 (albeit probably tiny), and a >99.99% probability it's not as big as the pilot data had claimed.
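(Those posterior probabilities are summaries of the saved MCMC samples, presumably computed along these lines from the R2jags fit:)

b1 <- j1$BUGSoutput$sims.list$b1 # posterior samples of the LLLT coefficient
mean(b1 > 0)                     # ~0.70: probability of any positive effect
mean(b1 < 0.33)                  # >0.9999: probability it is smaller than the pilot's +0.33
quantile(b1, c(0.025, 0.975))    # 95% credible interval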
At small effects like d=0.07, a nontrivial chance of negative effects, and an unknown level of placebo effects (this was non-blinded, which could account for any residual effects), this strongly implies that LLLT is not doing anything for me worth bothering with. I was pretty skeptical of LLLT in the first place, and if 167 days can't turn up anything noticeable, I don't think I'll be continuing with LLLT usage and will be giving away my LED set. (Should any experimental studies of LLLT for cognitive enhancement in healthy people surface with large quantitative effects - as opposed to a handful of qualitative case studies about brain-damaged people - and I decide to give LLLT another try, I can always just buy another set of LEDs: it's only ~$15, after all.)

# LSD microdosing

For the full writeup of background, methodology, data, and statistical analysis, please see the LSD microdosing page.

Intrigued by old scientific results & many positive anecdotes since, I experimented with microdosing LSD - taking doses ~10μg, far below the level at which it causes its famous effects. At this level, the anecdotes claim the usual broad spectrum of positive effects on mood, depression, ability to do work, etc. After researching the matter a bit, I discovered that as far as I could tell, since the original experiment in the 1960s, no one had ever done a blind or even a randomized self-experiment on it.

The self-experiment was simple: I ordered two tabs off Silk Road, dissolved one in distilled water, put the solution in one jar & tap water in the other, and took them in pairs of 3-day blocks. The results of my pre-specified analysis on a well-powered randomized blind self-experiment:

1. Sleep:
   - latency: none (p=0.42)
   - total sleep: none (p=0.14)
   - number of awakenings: none (p=0.36)
   - morning feel: increased (p=0.02)

   There is an increase in Morning Feel from 2.6 to 2.9, d=0.42 & p=0.023; correcting for performing 7 different tests, this result is not statistically-significant (it does not survive a Bonferroni correction (since $0.0231 > \frac{0.05}{7}$) nor the q-value approach to family-wise correction).
2. Mnemosyne flashcard scores: none (p=0.52)
3. Mood/productivity: none (d=-0.18; p=0.86)
4. Creativity: none (d=-0.19; p=0.87)

I concluded that if anything, the LSD microdosing may have done the opposite of what I wanted.

# Magnesium

Main article: Magnesium Self-Experiments

TODO

# Melatonin

See Melatonin for information on effects & cost; I regularly use melatonin to sleep (more to induce sleep than prolong or deepen it), and investigating with my Zeo, it does seem to improve & shorten my sleep. Some research suggests that higher doses are not necessarily better and may be overkill, so each time I've run out, I've been steadily decreasing the dose from 3mg to 1.5mg to 1mg, without apparently compromising the usefulness.

# Modafinil

See Modafinil for background on performance improvements and side-effects; the following sections are about my usage.

## SpierX

Here are the notes I jotted down while trying out modafinil back in November 2009. I didn't make any effort to write sensibly, so this makes my lucidity seem much worse than it actually was:

Thursday: 3g piracetam/4g choline bitartrate at 1; 1 200mg modafinil at 2:20; noticed a leveling of fatigue by 3:30; dry eyes? no bad aftertaste or anything. a little light-headed by 4:30, but mentally clear and focused. wonder if light-headedness is due simply to missing lunch and not modafinil.
5:43: noticed my foot jiggling - doesn’t usually jiggle while on piracetam/choline. 7:30: starting to feel a bit jittery & manic - not much or to a problematic level but definitely noticeable; but then, that often happens when I miss lunch & dinner. 12:30: bedtime. Can’t sleep even with 3mg of melatonin! Subjectively, I toss & turn (in part thanks to my cat) until 4:30, when I really wake up. I hang around bed for another hour & then give up & get up. After a shower, I feel fairly normal, strangely, though not as good as if I had truly slept 8 hours. The lesson here is to pay attention to Wikipedia when it says the half-life is 12-15 hours!

About 6AM I take 200mg; all the way up to 2pm I feel increasingly less energetic and unfocused, though when I do apply myself I think as well as ever. Not fixed by food or tea or piracetam/choline. I want to be up until midnight, so I take half a pill of 100mg and chew it (since I’m not planning on staying up all night and I want it to work relatively soon). From 4-12PM, I notice that today as well my heart rate is elevated; I measure it a few times and it seems to average to ~70BPM, which is higher than normal, but not high enough to concern me. I stay up to midnight fine, take 3mg of melatonin at 12:30, and have no trouble sleeping; I think I fall asleep around 1.

Alarm goes off at 6, I get up at 7:15 and take the other 100mg. Only 100mg/half-a-pill because I don’t want to leave the half laying around in the open, and I’m curious whether 100mg + ~5 hours of sleep will be enough after the last 2 days. Maybe next weekend I’ll just go without sleep entirely to see what my limits are. In general, I feel a little bit less alert, but still close to normal. By 6PM, I have a mild headache, but I try out 30 rounds of gbrainy (haven’t played it in months) and am surprised to find that I reach an all-time high; no idea whether this is due to DNB or not, since gbrainy is very heavily crystallized (half the challenge disappears as you learn how the problems work), but it does indicate I’m not deluding myself about mental ability. (To give a figure: my last score well before I did any DNB was 64, and I was doing well that day; on modafinil, I had a 77.) I figure the headache might be food-related, eat, and by 7:30 the headache is pretty much gone and I’m fine up to midnight.

I took 1.5mg of melatonin, and went to bed at ~1:30AM; I woke up around 6:30, took a modafinil pill/200mg, and felt pretty reasonable. By noon my mind started to feel a bit fuzzy, and lunch didn’t make much of it go away. I’ve been looking at studies, and users seem to degrade after 30 hours; I started on mid-Thursday, so call that 10 hours, then 24 (Friday), 24 (Saturday), and 14 (Sunday), totaling 72hrs with <20hrs sleep; this might be equivalent to 52hrs with no sleep, and Wikipedia writes:

> One study of helicopter pilots suggested that 600 mg of modafinil given in three doses can be used to keep pilots alert and maintain their accuracy at pre-deprivation levels for 40 hours without sleep.[60] However, significant levels of nausea and vertigo were observed.
> Another study of fighter pilots showed that modafinil given in three divided 100 mg doses sustained the flight control accuracy of sleep-deprived F-117 pilots to within about 27% of baseline levels for 37 hours, without any considerable side effects.[61] In an 88-hour sleep loss study of simulated military ground operations, 400 mg/day doses were mildly helpful at maintaining alertness and performance of subjects compared to placebo, but the researchers concluded that this dose was not high enough to compensate for most of the effects of complete sleep loss.

If I stop tonight and do nothing Monday (and I sleep the normal eight hours and do not pay any penalty), then that’ll be 4 out of 5 days on modafinil, each saving 3 or 4 hours. Each day took one pill, which cost me $1.20, but each pill saved let’s call it 3.5 hours; if I value my time at minimum wage, or $7.25/hr (federal minimum wage), then that 3.5 hours is worth $25.37, which is much more than $1.20, ~21x more.
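To make that arithmetic explicit (a quick sketch; the minimum-wage valuation is the assumption made above):

```r
pill_cost   <- 1.20 # $ per 200mg pill
hours_saved <- 3.5  # hours of sleep skipped per pill
wage        <- 7.25 # $/hour, federal minimum wage
hours_saved * wage               # 25.375: ~$25.37 of time bought per pill
(hours_saved * wage) / pill_cost # ~21x the cost of the pill
```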
My mental performance continues as before; curiously, I get an even higher score on Gbrainy, despite being sure I was less sharp than yesterday. Either I’m wrong about that, or Gbrainy is even more trainable than I thought. I go to bed Sunday around 1AM, and get up around 8AM (so call it 6 or 7 hours).
Monday: It’s a long day ahead of me, so I take 200mg. Reasonable performance.
Tuesday: I went to bed at 1am, and first woke up at 6am, and I wrote down a dream; the lucid dreaming book I was reading advised that waking up in the morning and then going back for a short nap often causes lucid dreams, so I tried that - and wound up waking up at 10am with no dreams at all. Oops. I take a pill, but the whole day I don’t feel so hot, although my conversation and arguments seem as cogent as ever. I’m also having a terrible time focusing on any actual work. At 8 I take another; I’m behind on too many things, and it looks like I need an all-nighter to catch up. The dose is no good; at 11, I still feel like at 8, possibly worse, and I take another along with the choline+piracetam (which makes a total of 600mg for the day). Come 12:30, and I disconsolately note that I don’t seem any better, although I still seem to understand the IQ essays I am reading. I wonder if this is tolerance to modafinil, or perhaps sleep catching up to me? Possibly it’s just that I don’t remember what the quasi-light-headedness of modafinil felt like. I feel this sort of zombie-like state without change to 4am, so it must be doing something, when I give up and go to bed, getting up at 7:30 without too much trouble. Some N-backing at 9am gives me some low scores but also some pretty high scores (38/43/66/40/24/67/60/71/54 or ▂▂▆▂▁▆▅▇▄), which suggests I can perform normally if I concentrate. I take another pill and am fine the rest of the day, going to bed at 1am as usual.
Thursday: this is an important day where I really need to be awake. I’m up around 8, take a pill, and save one for later; I’ll take half a pill at noon and the other half at 2. This works very well, and I don’t feel tired well up to midnight, even though I spent an hour walking.
Friday: alarm clock woke me at 7:40, but I somehow managed to go back to sleep until 9:40. Perhaps sleep inertia is building up despite the modafinil. Another pill. I am in general noticing less effect, but I’ll not take any this weekend to see whether I have simply gotten used to it.
Sat/Sun: bed at 1/2AM, awake at 10/11 respectively. Generally unmotivated.
Mon: went to bed at 11:30 Sun, woke at 7:30 and dozed to 8. 200mg at 8:30. No particular effect. Past this, I stop keeping notes. The main thing I notice is that my throat seems to be a little rough and my voice hoarser than usual.
(On a side note, I think I understand now why modafinil doesn’t lead to a Beggars in Spain scenario; BiS includes massive IQ and motivation boosts as part of the Sleepless modification. Just adding 8 hours a day doesn’t do the world-changing trick, no more than some researchers living to 90 and others to 60 has led to the former taking over. If everyone were suddenly granted the ability to never need sleep, many of them would have no idea what to do with the extra 8 or 9 hours and might well be destroyed by the gift; it takes a lot of motivation to make good use of the time, and if one cannot, then it is a curse akin to the stories of immortals who yearn for death - they yearn because life is not a blessing to them, though that is a fact more about them than life.)
In 2011, as part of the Silk Road research, I ordered 10x100mg Modalert (5btc) from a seller. I also asked him about his sourcing, since if it was bad, it’d be valuable to me to know whether it was sourced from one of the vendors listed in my table. He replied, more or less: "I get them from a large Far Eastern pharmaceuticals wholesaler; I think they’re probably the supplier for a number of the online pharmacies." 100mg seems likely to be too low, so I treated this shipment as 5 doses:
1. I split the 2 pills into 4 doses for each hour from midnight to 4 AM. 3D driver issues in Debian unstable prevented me from using Brain Workshop, so I don’t have any DNB scores to compare with the armodafinil DNB scores. I had the subjective impression that I was worse off with the Modalert, although I still managed to get a fair bit done so the deficits couldn’t’ve been too bad. The apathy during the morning felt worse than armodafinil, but that could have been caused by or exacerbated by an unexpected and very stressful 2-hour drive through rush hour and multiple accidents; the quick hour-long nap at 10 AM was half-waking half-light-sleep according to the Zeo, but seemed to help a bit. As before, I began to feel better in the afternoon and by evening felt normal, doing my usual reading. That night, the Zeo recorded my sleep as lasting ~9:40, when it was usually more like 8:40-9:00 (although I am not sure that this was due to the modafinil inasmuch as once a week or so I tend to sleep in that long, as I did a few days later without any influence from the modafinil); assuming the worst, the nap and extra sleep cost me 2 hours for a net profit of ~7 hours. While it’s not clear how modafinil affects recovery sleep (see the footnote in the essay), it’s still interesting to ponder the benefits of merely being able to delay sleep[^18].
2. I tried taking whole pills at 1 and 3 AM. I felt kind of bushed at 9 AM after all the reading, and the 50 minute nap didn’t help much - I was asleep only around 10 minutes and spent most of it thinking or meditating. Just as well the 3D driver is still broken; I doubt the scores would be reasonable. Began to perk up again past 10 AM, then felt more bushed at 1 PM, and so on throughout the day; kind of gave up and began watching & finishing anime (Amagami and Voices of a Distant Star) for the rest of the day with occasional reading breaks (eg. to start James C. Scott's Seeing Like a State, which is as described so far). As expected from the low quality of the day, the recovery sleep was bigger than before: a full 10 hours rather than 9:40; the next day, I slept a normal 8:50, and the following day ~8:20 (woken up early); 10:20 (slept in); 8:44; 8:18 (▁▇▁▁). It will be interesting to see whether my excess sleep remains in the hour range for 'good' modafinil nights and two hours for 'bad' modafinil nights.
3. I decided to try out day-time usage on 2 consecutive days, taking the 100mg at noon or 1 PM. On both days, I thought I did feel more energetic but nothing extraordinary (maybe not even as strong as the nicotine), and I had trouble falling asleep on Halloween, thinking about the meta-ethics essay I had been writing diligently on both days. Not a good use compared to staying up a night.
Most people I talk to about modafinil seem to use it for daytime usage; for me that has not ever worked out well, but I had nothing in particular to show against it. So, as I was capping the last of my piracetam-caffeine mix and clearing off my desk, I put the 4 remaining Modalert pills into capsules with the last of my creatine powder and then mixed them with 4 of the theanine-creatine pills. Like the previous Adderall trial, I will pick one pill blindly each day and guess at the end which it was. If it was active (modafinil-creatine), take a break the next day; if placebo (theanine-creatine), replace the placebo and try again the next day. We’ll see if I notice anything on DNB or possibly gwern.net edits.
1. Take at 10 AM; seem a bit more active but that could just be the pressure of the holiday season combined with my nice clean desk. I do the chores without too much issue and make progress on other things, but nothing major; I survive going to The Sitter without too much tiredness, so ultimately I decide to give the palm to it being active, but only with 60% confidence. I check the next day, and it was placebo. Oops.
2. Take at 11 AM; distractions ensue and the Christmas tree-cutting also takes up much of the day. By 7 PM, I am exhausted and in a bad mood. While I don’t expect day-time modafinil to buoy me up, I do expect it to at least buffer me against being tired, and so I conclude placebo this time, and with more confidence than yesterday (65%). I check before bed, and it was placebo.
3. 10:30 AM; no major effect that I notice throughout the day - it’s neither good nor bad. This smells like placebo (and part of my mind is going how unlikely is it to get placebo 3 times in a row!, which is just the Gambler’s fallacy talking inasmuch as this is sampling with replacement). I give it 60% placebo; I check the next day right before taking, and it is. Man!
4. 1 PM; overall this was a pretty productive day, but I can’t say it was very productive. I would almost say even odds, but for some reason I feel a little more inclined towards modafinil. Say 55%. That night’s sleep was vile: the Zeo says it took me 40 minutes to fall asleep, I only slept 7:37 total, and I woke up 7 times. I’m comfortable taking this as evidence of modafinil (half-life 10 hours, 1 PM to midnight is only 1 full halving), bumping my prediction to 75%. I check, and sure enough - modafinil.
5. 10:40 AM; again no major effects, although I got two jQuery extensions working and some additional writing so one could argue the day went well. I don’t know; 50%. Placebo.
6. 11 AM; a rather productive day. I give it 65%. To my surprise, it was placebo.
7. 10 AM; this was an especially productive day, but this was also the day my nicotine gum finally arrived and I just had to try it (I had been waiting so long); it’s definitely a stimulant, alright. But this trashes my own subjective estimates; I hoped it was just placebo, but no, it was modafinil.
8. 9:50 AM; nothing noticed by noon. Managed to finish Reasons of State: Why Didn’t Denmark Sell Greenland? which was a surprising amount of work, especially after I managed to delete a third of the first draft - but nothing I would chalk up to modafinil. I decide to give it 60% placebo, and I turn out to be wrong: it was my last modafinil.
So with these 8 results in hand, what do I think? Roughly, I was right 5 of the days and wrong 3 of them. If not for the sleep effect on #4, which is - in a way - cheating (one hopes to detect modafinil due to good effects), the ratio would be 5:4, which is awfully close to a coin-flip. Indeed, a scoring rule ranks my performance as almost identical to a coin flip: -5.49 vs -5.54[^19]. (The bright side is that I didn’t do worse than a coin flip: I was at least calibrated.)
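The -5.54 baseline looks like the natural-log score of a pure coin-flipper over 8 guesses ($8 \times \ln 0.5 \approx -5.545$); a sketch of that scoring rule in R (the guess data here are illustrative placeholders, not my trial records):

```r
# Logarithmic scoring: credit log(p) for a correct guess, log(1-p) for a
# wrong one; higher (closer to 0) is better.
logScore <- function(correct, p) sum(ifelse(correct, log(p), log(1 - p)))
logScore(rep(TRUE, 8), rep(0.5, 8)) # -5.545: the coin-flip baseline quoted above
```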
I can’t call this much of a success; there may be an effect on my productivity but it’s certainly not very clear subjectively. I’ll chalk this up as a failure for modafinil and evidence for what I believed - day-time modafinil use does not work for me (even if it works for others).
#### VoI
For background on value of information calculations, see the Adderall calculation.
I had tried 8 randomized days like the Adderall experiment to see whether I was one of the people whom modafinil energizes during the day. (The other way to use it is to skip sleep, which is my preferred use.) I rarely use it during the day since my initial uses did not impress me subjectively. The experiment was not my best - while it was double-blind randomized, the measurements were subjective, and not a good measure of mental functioning like dual n-back (DNB) scores which I could statistically compare from day to day or against my many previous days of dual n-back scores. Between my high expectation of finding the null result, the poor experiment quality, and the minimal effect it had (eliminating an already rare use), the value of this information was very small.
I mostly did it so I could tell people that no, day usage isn’t particularly great for me; why don’t you run an experiment on yourself and see whether it was just a placebo effect (or whether you genuinely are sleep-deprived and it is indeed compensating)?
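The skeleton of such a value-of-information estimate, with purely illustrative numbers (see the Adderall calculation for real inputs): the information is worth roughly the probability it changes a decision, times the value of the changed decision, minus the cost of the experiment.

```r
p_flip <- 0.05 # chance the experiment overturns my confident prior (illustrative)
value  <- 50   # $ value of acting on the overturned belief (illustrative)
cost   <- 10   # $ equivalent of the time spent running it (illustrative)
p_flip * value - cost # -7.5: negative, so not worth running for the information alone
```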
## Armodafinil
Armodafinil is sort of a purified modafinil which Cephalon sells under the brand-name Nuvigil (and Sun under Waklert[^20]). Armodafinil acts much the same way (see the ADS Drug Profile), but the modafinil variants filtered out are the faster-acting molecules[^21]; hence, it is supposed to last longer, as studies like Pharmacodynamic effects on alertness of single doses of armodafinil in healthy subjects during a nocturnal period of acute sleep loss seem to bear out. Anecdotally, it’s also more powerful, with Cephalon offering pills with doses as low as 50mg. (To be technical, modafinil is racemic: it comes in two forms which are rotations, mirror-images of each other. The rotation usually doesn’t matter, but sometimes it matters tremendously - for example, one form of thalidomide stops morning sickness, and the other causes hideous birth defects.)
Besides Adderall, I also purchased on Silk Road 5x250mg pills of armodafinil. The price was extremely reasonable, 1.5btc or roughly $23 at that day’s exchange rate; I attribute the low price to the seller being new and needing feedback, and offering a discount to induce buyers to take a risk on him. (Buyers bear a large risk on Silk Road since sellers can easily physically anonymize themselves from their shipment, but a buyer can be found just by following the package.) Because of the longer active-time, I resolved to test the armodafinil not during the day, but with an all-nighter.

### Nuvigil

1. First use: took a full pill at 10:21 PM when I started feeling a bit tired. Around 11:30, I noticed my head feeling fuzzy but my reading seemed to still be up to snuff. I would eventually finish the science book around 9 AM the next day, taking some very long breaks to walk the dog, write some poems, write a program, do Mnemosyne review (memory performance: subjectively below average, but not as bad as I would have expected from staying up all night), and some other things. Around 4 AM, I reflected that I felt much as I had during my nightwatch job at the same hour of the day - except I had switched sleep schedules for the job. The tiredness continued to build and my willpower weakened so the morning wasn’t as productive as it could have been - but my actual performance when I could be bothered was still pretty normal. That struck me as kind of interesting: I can feel very tired and not act tired, in line with the anecdotes. Past noon, I began to feel better, but since I would be driving to errands around 4 PM, I decided to not risk it and take an hour-long nap, which went well, as did the driving. The evening was normal enough that I forgot I had stayed up the previous night, and indeed, I didn’t much feel like going to bed until past midnight. I then slept well, the Zeo giving me a 108 ZQ (not an all-time record, but still unusual).

2. I had intended to run another Adderall trial this day but then I learned we would be going to the midnight showing of the last Harry Potter movie. A perfect opportunity: going to bed at 3 AM after a stimulating battle movie would mean crappy sleep, so why not just do another armodafinil trial and kill 2 birds with one stone?

   I took the pill at 11 PM the evening of (technically, the day before); that day was a little lower on sleep than usual, since I had woken up an hour or half-hour early. I didn’t yawn at all during the movie (merely mediocre to my eyes, with some questionable parts)[^22]. It worked much the same as it did the previous time - as I walked around at 5 AM or so, I felt perfectly alert. I made good use of the hours and wrote up my memories of ICON 2011.

   (As I was doing this, I reflected how modafinil is such a pure example of the money-time tradeoff. It’s not that you pay someone else to do something for you, which necessarily they will do in a way different from you; nor is it that you have exchanged money to free yourself of a burden of some future time-investment; nor have you paid money for a speculative return of time later in life like with many medical expenses or supplements. Rather, you have paid for 8 hours today of your own time.)

   And as before, around 9 AM I began to feel the peculiar feeling that I was mentally able and apathetic (in a sort of aboulia way); so I decided to try what helped last time, a short nap. But this time, though I took a full hour, I slept not a wink and my Zeo recorded only 2 transient episodes of light sleep!
   A back-handed sort of proof of alertness, I suppose. I didn’t bother trying again. The rest of the day was mediocre, and I wound up spending much of it on chores and whatnot out of my control. Mentally, I felt better past 3 PM. This continued up to 1 AM, at which point I decided not to take a second armodafinil (why spend a second pill to gain what would likely be an unproductive set of 8 hours?) and finish up the experiment with some n-backing. My 5 rounds: 60/38/62/44/50[^23]. This was surprising. Compare those scores with scores from several previous days: 39/42/44/40/20/28/36. I had estimated before the n-backing that my scores would be in the low end of my usual performance (20-30%) since I had not slept for the past 41 hours, and instead, the lowest score was 38%. If one did not know the context, one might think I had discovered a good nootropic! Interesting evidence that armodafinil preserves at least one kind of mental performance.

3. I stayed up late writing some poems and about how [email protected] kills, and decided to make a night of it. I took the armodafinil at 1 AM; the interesting bit is that this was the morning/evening after what turned out to be an Adderall (as opposed to placebo) trial, so perhaps I will see how well or ill they go together. A set of normal scores from a previous day was 32%/43%/51%/48%. At 11 PM, I scored 39% on DNB; at 1 AM, I scored 50%/43%; 5:15 AM, 39%/37%; 4:10 PM, 42%/40%; 11 PM, 55%/21%/38%. (▂▄▆▅ vs ▃▅▄▃▃▄▃▇▁▃) The peculiar tired-sharp feeling was there as usual, and the DNB scores continue to suggest this is not an illusion, as they remain in the same 30-50% band as my normal performance. I did not notice the previous aboulia feeling; instead, around noon, I was filled with a nervous energy and a disturbingly rapid pulse which meditation & deep breathing did little to help with, and which didn’t go away for an hour or so. Fortunately, this was primarily at church, so while I felt irritable, I didn’t actually interact with anyone or snap at them, and was able to keep a lid on it. I have no idea what that was about. I wondered if it might’ve been a serotonin storm, since amphetamines are some of the drugs that can trigger storms - but the Adderall had been at 10:50 AM the previous day, or >25 hours earlier (the half-lives of the ingredients being around 13 hours). An hour or two previously I had taken my usual caffeine-piracetam pill with my morning tea - could that have interacted with the armodafinil and the residual Adderall? Or was it caffeine+modafinil? Speculation, perhaps. A house-mate was ill for a few hours the previous day, so maybe the truth is as prosaic as me catching whatever he had.

4. Stayed up with the purpose of finishing my work for a contest. This time, instead of taking the pill as a single large dose (I feel that after 3 times, I understand what it’s like), I will take 4 doses over the new day. I took the first quarter at 1 AM, when I was starting to feel a little foggy but not majorly impaired. Second dose, 5:30 AM; feeling a little impaired. 8:20 AM, third dose; as usual, I feel physically a bit off and mentally tired - but still mentally sharp when I actually do something. Early on, my heart rate seemed a bit high and my limbs trembling, but it’s pretty clear now that that was the caffeine or piracetam. It may be that the other day, it was the caffeine’s fault as I suspected. The final dose was around noon. The afternoon crash wasn’t so pronounced this time, although motivation remains a problem.
   I put everything into finishing up the spaced repetition literature review, and didn’t do any n-backing until 11:30 PM: 32/34/31/54/40%.

5. With the last pill, I wound up trying split-doses on non-full nights; that is, if one full pill keeps me awake one full night, what does 1/4th the pill do?

   1. Between midnight and 1:36 AM, I do four rounds of n-back: 50/39/30/55%. I then take 1/4th of the pill and have some tea. At roughly 1:30 AM, AngryParsley linked a SF anthology/novel, Fine Structure, which sucked me in for the next 3-4 hours until I finally finished the whole thing. At 5:20 AM, circumstances forced me to go to bed, still having only taken 1/4th of the pill, and that determines what this particular experiment measures: sleep. I quickly do some n-back: 29/20/20/54/42. I fall asleep in 13 minutes and sleep for 2:48, for a ZQ of 28 (a full night being ~100). I did not notice anything from that possible modafinil+caffeine interaction. Subjectively upon awakening: I don’t feel great, but I don’t feel like 2-3 hours of sleep either. N-back at 10 AM after breakfast: 25/54/44/38/33. These are not very impressive, but seem normal despite taking the last armodafinil ~9 hours ago; perhaps the 3 hours were enough. Later that day, at 11:30 PM (just before bed): 26/56/47.

   2. 2 break days later, I took the quarter-pill at 11:22 PM. I had discovered I had for years physically possessed a very long interview not available online, and transcribing that seemed like a good way to use up a few hours. I did some reading, some Mnemosyne, and started it around midnight, finishing around 2:30 AM. There seemed a mental dip around 30 minutes after the armodafinil, but then things really picked up and I made very good progress transcribing the final draft of 9000 words in that period. (In comparison, The Conscience of the Otaking parts 2 & 4 were much easier to read than the tiny font of the RahXephon booklet, took perhaps 3 hours, and totaled only 6500 words. The nicotine is probably also to thank.) By 3:40 AM, my writing seems to be clumsier and my mind fogged. Began DNB at 3:50: 61/53/44. Went to bed at 4:05, fell asleep in 16 minutes, slept for 3:56. Waking up was easier and I felt better, so the extra hour seemed to help.

   3. With this experiment, I broke from the previous methodology, taking the remaining and final half Nuvigil at midnight. I am behind on work and could use a full night to catch up. By 8 AM, I am as usual impressed by the Nuvigil - with Modalert or something, I generally start to feel down by mid-morning, but with Nuvigil, I feel pretty much as I did at 1 AM. Sleep: 9:51/9:15/8:27.

### Waklert

I noticed on SR something I had never seen before: an offer of 150mgx10 of Waklert for ฿13.47 (then, ฿1 = $3.14). I searched and it seemed Sun was somehow manufacturing armodafinil! Interesting. Maybe not cost-effective, but I tried out of curiosity. They look and are packaged the same as the Modalert, but at a higher price-point: 150 rather than 81 rupees. Not entirely sure how to use them: assuming quality is the same, 150mg Waklert is still 100mg less armodafinil than the 250mg Nuvigil pills.
1. Take quarter at midnight, another quarter at 2 AM. Night runs reasonably well once I remember to eat a lot of food (I finish a big editing task I had put off for weeks), but the apathy kicks in early around 4 AM so I gave up and watched Scott Pilgrim vs. the World, finishing around 6 AM. I then read until it’s time to go to a big shotgun club function, which occupies the rest of the morning and afternoon; I had nothing to do much of the time and napped very poorly on occasion. By the time we got back at 4 PM, the apathy was completely gone and I started some modafinil research with gusto (interrupted by going to see Puss in Boots). That night: Zeo recorded 8:30 of sleep, gap of about 1:50 in the recording, figure 10:10 total sleep; following night, 8:33; third night, 8:47; fourth, 8:20 (▇▁▁▁).
2. First quarter at 1:20 AM. Second quarter at 4 AM. 20-minute nap at 7:30 AM; took a shower and the last 2 doses at 11 AM. (If I feel bad past 3 PM, I’ll try one of the Modalerts or maybe another quarter of a Waklert - 150mg may just be too little.) Overall, a pretty good day. Nights: 9:43; 9:51; 7:57; 8:25; 8:08; 9:02; 8:07 (▇█▁▂▁▄▁).
3. First half at 6 AM; second half at noon. Wrote a short essay I’d been putting off and napped for 1:40 from 9 AM to 10:40. This approach seems to work a little better as far as the aboulia goes. (I also bother to smell my urine this time around - there’s a definite off smell to it.) Nights: 10:02; 8:50; 10:40; 7:38 (2 bad nights of nasal infections); 8:28; 8:20; 8:43 (▆▃█▁▂▂▃).
4. Whole pill at 5:42 AM. (Somewhat productive night/morning beforehand.) DNB at 2 PM: 52/36/54 (▇▁█); slept for 49 minutes; DNB at 8 PM: 50/44/38/40 (▆▄▁▂). Nights: 10:02; 8:02; no data; 9:21; 8:20 (█▁ ▅▂).
5. Whole pill at 3 AM. I spend the entire morning and afternoon typing up a transcript of Earth in My Window. I tried taking a nap around 10 AM, but during the hour I was down, I had <5m of light sleep, the Zeo said. After I finished the transcript (~16,600 words with formatting), I was completely pooped and watched a bunch of Mobile Suit Gundam episodes, then I did Mnemosyne. The rest of the night was nothing to write home about either - some reading, movie watching, etc. Next time I will go back to split-doses and avoid typing up 110kB of text. On the positive side, this is the first trial I had available the average daily grade Mnemosyne 2.0 plugin. The daily averages all are 3-point-something (peaking at 3.89 and flooring at 3.59), so just graphing the past 2 weeks, the modafinil day, and recovery days: ▅█▅▆▄▆▄▃▅▄▁▄▄ ▁ ▂▄▄█. Not an impressive performance but there was a previous non-modafinil day just as bad, and I’m not too sure how important a metric this is; I must see whether future trials show similar underperformance. Nights: 11:29; 9:22; 8:25; 8:41.
6. Spaced repetition at midnight: 3.68. (Graphing preceding and following days: ▅▄▆▆▁▅▆▃▆▄█ ▄ ▂▄▄▅) DNB starting 12:55 AM: 30/34/41. Transcribed Sawaragi 2005, then took a walk. DNB starting 6:45 AM: 45/44/33. Decided to take a nap and then take half the armodafinil on awakening, before breakfast. I wound up oversleeping until noon (4:28); since it was so late, I took only half the armodafinil sublingually. I spent the afternoon learning how to do value of information calculations, and then carefully working through 8 or 9 examples for my various pages, which I published on Lesswrong. That was a useful little project. DNB starting 12:09 AM: 30/38/48. (To graph the preceding day and this night: ▇▂█▆▅▃▃▇▇▇▁▂▄ ▅▅▁▁▃▆) Nights: 9:13; 7:24; 9:13; 8:20; 8:31.
7. Feeling behind, I resolved to take some armodafinil the next morning, which I did - but in my hurry I failed to recall that 200mg armodafinil was probably too much to take during the day, with its long half-life. As a result, I felt irritated and not that great during the day (possibly aggravated by some caffeine - I wish some studies would be done on the possible interaction of modafinil and caffeine so I knew if I was imagining it or not). Certainly not what I had been hoping for. I went to bed after midnight (half an hour later than usual), and suffered severe insomnia. The time wasn’t entirely wasted, as I wrote a short story and figured out how to make nicotine gum placebos during the hours in the dark, but I could have done without the experience. All metrics omitted because it was a day usage.
# NGF
Nerve growth factor is a protein involved in exactly what its name suggests. Administration may have effects on neurodegeneration, plasticity, and learning. Its co-discoverer, Nobelist Rita Levi-Montalcini, reportedly took NGF eyedrops daily.
NGF may sound intriguing, but the price is a dealbreaker: suggested doses run 1-100μg (NGF dosing in humans for benefits is, shall we say, not an exact science), and sketchy suppliers charge $1210/100μg, $470/500μg, $750/1000μg, $1000/1000μg, $1030/1000μg, or $235/20μg. (Levi-Montalcini was presumably able to divert some of her lab’s production.) A year’s supply then would be comically expensive: even at the lowest doses of 1-10μg using the cheapest sellers (for something one is dumping into one’s eyes?), it could cost anywhere up to $10,000.

As well, the possible effects seem like they would be long-term and difficult to measure or experiment on; so if one could somehow afford NGF eyedrops, one wouldn’t be able to know they were working. So unless the price of NGF comes down by at least two orders of magnitude, it’s not a viable nootropic.

# Nicotine

One of the most popular legal stimulants in the world, nicotine is often conflated with the harmful effects of tobacco; considered on its own, it has performance & possibly health benefits. Nicotine is widely available at moderate prices as long-acting nicotine patches, gums, lozenges, and suspended in water for vaping. While intended for smoking cessation, there is no reason one cannot use a nicotine patch or nicotine gum for its stimulant effects.

Nicotine’s stimulant effects are general and do not come with the same tweakiness and aggression associated with the amphetamines, and subjectively are much cleaner, with less of a crash. I would say that its stimulant effects are fairly strong, around that of modafinil. Another advantage is that nicotine operates through nicotinic receptors and so doesn’t cross-tolerate with dopaminergic stimulants (hence one could hypothetically cycle through nicotine, modafinil, amphetamines, and caffeine, hitting different receptors each time). Like caffeine, tolerance to nicotine develops rapidly and addiction can develop, after which the apparent performance boosts may only represent a return to baseline after withdrawal; so nicotine as a stimulant should be used judiciously, perhaps roughly as frequently as modafinil. Another problem is that nicotine has a half-life of merely 1-2 hours, making regular dosing a requirement. There is also some elevated heart-rate/blood-pressure often associated with nicotine, which may be a concern. (Possible alternatives to nicotine include cytisine, 2’-methylnicotine, GTS-21, galantamine, Varenicline, WAY-317,538, EVP-6124, and Wellbutrin, but none have emerged as clearly superior.)

I decided to try it out myself since it would be both boring and hypocritical not to. The stimulant properties are well-established, and after reading up, I didn’t think there was a >3% chance it might lead me to any short or long-term future cigarette use. So I ordered the most cost-effective batch of chewing gum I could find on Amazon (100 Nicorette 4mg) - and the seller canceled on me! Poor show, Direct Super center, very poor show.

In August 2011, after winning the spaced repetition contest and finishing up the Adderall double-blind testing, I decided the time was right to try nicotine again. I had since learned that e-cigarettes use nicotine dissolved in water, and that nicotine-water was a vastly cheaper source of nicotine than either gum or patches. So I ordered 250ml of water at 12mg/ml (total cost: $18.20). A cigarette apparently delivers around 1mg of nicotine, so half a ml would be a solid dose of nicotine, making that ~500 doses. Plenty to experiment with.
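The dose arithmetic, spelled out (all figures from the paragraph above):

```r
volume_ml <- 250 # bottle volume
conc      <- 12  # mg nicotine per ml
dose_ml   <- 0.5 # half a ml per dose
dose_ml * conc                # 6mg nicotine per dose
volume_ml / dose_ml           # 500 doses per bottle
18.20 / (volume_ml / dose_ml) # ~$0.036 per dose
```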
The question is: besides the stimulant effect, nicotine also causes habit formation, so what habits should I reinforce with nicotine? Exercise and spaced repetition seem like 2 good targets.
## Nicotine water
It arrived as described, a little bottle around the volume of a soda can. I had handy a plastic syringe with milliliter units which I used to measure out the nicotine-water into my tea. I began with half a ml the first day, 1ml the second day, and 2ml the third day. (My Zeo sleep scores were 85/103/86 (▁▇▁), and the latter had a feline explanation; these values are within normal variation for me, so if nicotine affects my sleep, it does so to a lesser extent than Adderall.) Subjectively, it’s hard to describe. At half a ml, I didn’t really notice anything; at 1 and 2ml, I thought I began to notice it - sort of a cleaner caffeine. It’s nice so far. It’s not as strong as I expected. I looked into whether the boiling water might be breaking it down, but the answer seems to be no - boiling tobacco is a standard way to extract nicotine, actually, and nicotine’s own boiling point is much higher than water; nor do I notice a drastic difference when I take it in ordinary water. And according to various e-cigarette sources, the liquid should be good for at least a year.
2ml is supposed to translate to 24mg, which is a big dose. I do not believe any of the commercial patches go much past that. I asked Wedrifid, whose notes inspired my initial interest, and he was taking perhaps 2-4mg, and expressed astonishment that I might be taking 24mg. (2mg is in line with what I am told by another person - that 2mg was so much that they actually felt a little sick. On the other hand, in one study, the subjects could not reliably distinguish between 1mg and placebo[^24].) 24mg is particularly troubling in that I weigh ~68kg, and nicotine poisoning and the nicotine LD50 start, for me, at around 68mg of nicotine. (I reflected that the entire jar could be a useful murder weapon, although nicotine presumably would be caught in an autopsy’s toxicology screen; I later learned nicotine was an infamous weapon in the 1800s before any test was developed. It doesn’t seem used anymore, but there are still fatal accidents due to dissolved nicotine.) The upper end of the range, 10mg/kg or 680mg for me, is calculated based on experienced smokers.

Something is wrong here - I can’t see why I would have nicotine tolerance comparable to a hardened smoker’s, inasmuch as my maximum prior exposure was second-hand smoke once in a blue moon. More likely is that either the syringe is misleading me or the seller NicVape sold me something more dilute than 12mg/ml. (I am sure that it’s not simply plain water; when I mix the drops with regular water, I can feel the propylene glycol burning as it goes down.) I would rather not accuse an established and apparently well-liked supplier of fraud, nor would I like to simply shrug and say I have a mysterious tolerance and must experiment with doses closer to the LD50, so the most likely problem is a problem with the syringe.

The next day I altered the procedure to sucking up 8ml, squirting out enough fluid to move the meniscus down to 7ml, and then ejecting the rest back into the container. The result was another mild clean stimulation comparable to the previous 1ml days. The next step is to try a completely different measuring device, which doesn’t change anything either.
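For what it’s worth, the safety margin implied by those figures (a sketch; the ~1mg/kg and 10mg/kg thresholds are the estimates discussed above):

```r
weight_kg <- 68
dose_mg   <- 2 * 12 # largest dose tried: 2ml at a nominal 12mg/ml = 24mg
dose_mg / weight_kg # ~0.35mg/kg: ~35% of the ~1mg/kg poisoning threshold,
                    # and ~3.5% of the 10mg/kg upper-bound estimate
```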
One item always of interest to me is sleep; a stimulant is no good if it damages my sleep (unless that’s what it is supposed to do, like modafinil) - anecdotes and research suggest that it does. Over the past few days, my Zeo sleep scores continued to look normal. But that was while not taking nicotine much later than 5 PM. In lieu of a different ml measurer to test my theory that my syringe is misleading me, I decide to more directly test nicotine’s effect on sleep by taking 2ml at 10:30 PM, and go to bed at 12:20; I get a decent ZQ of 94 and I fall asleep in 16 minutes, a bit below my weekly average of 19 minutes. The next day, I take 1ml directly before going to sleep at 12:20; the ZQ is 95 and time to sleep is 14 minutes.
The next cheap proposition to test is that the 2ml dose is so large that the sedation/depressive effect of nicotine has begun to kick in. This is easy to test: take much less, like half a ml. I do so two or three times over the next day, and subjectively the feeling seems to be the same - which seems to support that proposition (although perhaps I’ve been placebo effecting myself this whole time, in which case the exact amount doesn’t matter). If this theory is true, my previous sleep results don’t show anything; one would expect nicotine-as-sedative to not hurt sleep or improve it. I skip the day (no cravings or addiction noticed), and take half a ml right before bed at 11:30; I fall asleep in 12 minutes and have a ZQ of ~105. The next few days I try putting one or two drops into the tea kettle, which seems to work as well (or poorly) as before. At that point, I was warned that there were some results that nicotine withdrawal can kick in with delays as long as a week, so I shouldn’t be confident that a few days off proved an absence of addiction; I immediately quit to see what the week would bring. 4 or 7 days in, I didn’t notice anything. I’m still using it, but I’m definitely a little nonplussed and disgruntled - I need some independent source of nicotine to compare with!
After trying the nicotine gum (see below) and experiencing effects, I decided the liquid was busted somehow and to request a refund. To its credit, NicVape immediately agreed to a refund.
### Poor absorption?
2 commenters point out that my possible lack of result is due to my mistaken assumption that if nicotine is absorbable through skin, mouth, and lungs it ought to be perfectly fine to absorb it through my stomach by drinking it (rather than vaporizing it and breathing it with an e-cigarette machine) - it’s apparently known that absorption differs in the stomach.
• the online book The Cigarette Papers describes early animal experiments (without specific bioavailability percentages):
The Fate of Nicotine in the Body also describes Battelle’s animal work on nicotine absorption. Using C14-labeled nicotine in rabbits, the Battelle scientists compared gastric absorption with pulmonary absorption. Gastric absorption was slow, and first pass removal of nicotine by the liver (which transforms nicotine into inactive metabolites) was demonstrated following gastric administration, with consequently low systemic nicotine levels. In contrast, absorption from the lungs was rapid and led to widespread distribution. These results show that nicotine absorbed from the stomach is largely metabolized by the liver before it has a chance to get to the brain. That is why tobacco products have to be puffed, smoked or sucked on, or absorbed directly into the bloodstream (i.e., via a nicotine patch). A nicotine pill would not work because the nicotine would be inactivated before it reached the brain.
• Absorption of nicotine across biological membranes depends on pH. Nicotine is a weak base with a pKa of 8.0 (Fowler, 1954). In its ionized state, such as in acidic environments, nicotine does not rapidly cross membranes…About 80 to 90% of inhaled nicotine is absorbed during smoking as assessed using C14-nicotine (Armitage et al., 1975). The efficacy of absorption of nicotine from environmental smoke in nonsmoking women has been measured to be 60 to 80% (Iwase et al., 1991)…The various formulations of nicotine replacement therapy (NRT), such as nicotine gum, transdermal patch, nasal spray, inhaler, sublingual tablets, and lozenges, are buffered to alkaline pH to facilitate the absorption of nicotine through cell membranes. Absorption of nicotine from all NRTs is slower and the increase in nicotine blood levels more gradual than from smoking (Table 1). This slow increase in blood and especially brain levels results in low abuse liability of NRTs (Henningfield and Keenan, 1993; West et al., 2000). Only nasal spray provides a rapid delivery of nicotine that is closer to the rate of nicotine delivery achieved with smoking (Sutherland et al., 1992; Gourlay and Benowitz, 1997; Guthrie et al., 1999). The absolute dose of nicotine absorbed systemically from nicotine gum is much less than the nicotine content of the gum, in part, because considerable nicotine is swallowed with subsequent first-pass metabolism (Benowitz et al., 1987). Some nicotine is also retained in chewed gum. A portion of the nicotine dose is swallowed and subjected to first-pass metabolism when using other NRTs, inhaler, sublingual tablets, nasal spray, and lozenges (Johansson et al., 1991; Bergstrom et al., 1995; Lunell et al., 1996; Molander and Lunell, 2001; Choi et al., 2003). Bioavailability for these products with absorption mainly through the mucosa of the oral cavity and a considerable swallowed portion is about 50 to 80% (Table 1)…Nicotine is poorly absorbed from the stomach because it is protonated (ionized) in the acidic gastric fluid, but is well absorbed in the small intestine, which has a more alkaline pH and a large surface area. Following the administration of nicotine capsules or nicotine in solution, peak concentrations are reached in about 1 h (Benowitz et al., 1991; Zins et al., 1997; Dempsey et al., 2004). The oral bioavailability of nicotine is about 20 to 45% (Benowitz et al., 1991; Compton et al., 1997; Zins et al., 1997). Oral bioavailability is incomplete because of the hepatic first-pass metabolism. Also the bioavailability after colonic (enema) administration of nicotine (examined as a potential therapy for ulcerative colitis) is low, around 15 to 25%, presumably due to hepatic first-pass metabolism (Zins et al., 1997). Cotinine is much more polar than nicotine, is metabolized more slowly, and undergoes little, if any, first-pass metabolism after oral dosing (Benowitz et al., 1983b; De Schepper et al., 1987; Zevin et al., 1997).
Particularly germane is the table of absorption by administration methods, which gives bioavailability for oral capsules (44%) and oral solution (20%).
• An oral formulation of nicotine for release and absorption in the colon: its development and pharmacokinetics does not break out bioavailability for their enema, but they seem to have measured levels consistent with 10-20%.
• The abstract of Absorption of nicotine by the human stomach and its effect on gastric ion fluxes and potential difference confirms the variation with acidity:
Nicotine was well absorbed, mean 18.6±3.4% in 15 min, on intragastric instillation at pH 9.8. Absorption was accompanied by side effects of nausea and vomiting, and delay in gastric emptying. Gastric absorption of nicotine at pH 7.4 was less marked (mean 8.2±2.9%), but was negligible at pH 1 (mean 3.3±1.4%).
• Nicotine is poorly absorbed from the stomach due to the acidity of the gastric fluid, but is well absorbed in the small intestine, which has a more alkaline pH and a large surface area [Nicotine, its metabolism and an overview of its biological effects].
• Tobacco and Shamanism in South America (Wilbert 1993), pg 139:
Nicotine absorption through the stomach is variable and relatively reduced in comparison with absorption via the buccal cavity and the small intestine. Drinking, eating, and swallowing of tobacco smoke by South American Indians have frequently been reported. Tenetehara shamans reach a state of tobacco narcosis through large swallows of smoke, and Tapirape shamans are said to eat smoke by forcing down large gulps of smoke only to expel it again in a rapid sequence of belches. In general, swallowing of tobacco smoke is quite frequently likened to drinking. However, although the amounts of nicotine swallowed in this way - or in the form of saturated saliva or pipe juice - may be large enough to be behaviorally significant at normal levels of gastric pH, nicotine, like other weak bases, is not significantly absorbed.
From the standpoint of absorption, the drinking of tobacco juice and the interaction of the infusion or concoction with the small intestine is a highly effective method of gastrointestinal nicotine administration. The epithelial area of the intestines is incomparably larger than the mucosa of the upper tract including the stomach, and the small intestine represents the area with the greatest capacity for absorption (Levine 1983:81-83). As practiced by most of the sixty-four tribes documented here, intoxicated states are achieved by drinking tobacco juice through the mouth and/or nose…The large intestine, although functionally little equipped for absorption, nevertheless absorbs nicotine that may have passed through the small intestine.
• The abstract of Stomach absorption of intubated insecticides in fasted mice reports 10% stomach bioavailability in rats.
It looks like the overall picture is that nicotine is absorbed well in the intestines and the colon, but not so well in the stomach; this might be the explanation for the lack of effect, except on the other hand, the specific estimates I see are that 10-20% of the nicotine will be bioavailable in the stomach (as compared to 50%+ for mouth or lungs)… so any of my doses of >5ml should have overcome the poorer bioavailability! But on the gripping hand, these papers are mentioning something about the liver metabolizing nicotine when absorbed through the stomach, so…
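Putting numbers on that puzzle (a sketch using the bioavailability estimates quoted above):

```r
conc          <- 12            # mg/ml, nominal concentration
oral_bioavail <- c(0.10, 0.20) # stomach absorption estimates from the papers above
2 * conc * oral_bioavail       # a 2ml dose: ~2.4-4.8mg reaching circulation
4 * c(0.50, 0.80)              # vs. a 4mg gum at 50-80%: ~2-3.2mg
# so even at stomach bioavailability, a 2ml dose should have matched a gum dose
```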
## Nicotine gum
So I eventually got around to ordering another thing of nicotine gum, Habitrol Nicotine Gum, 4mg MINT flavor COATED gum. 96 pieces per box. Gum should be easier to double-blind myself with than nicotine patches - just buy some mint gum. If 4mg is too much, cut the gum in half or whatever. When it arrived, my hopes were borne out: the gum was rectangular and soft, which made it easy to cut into fourths.
Remembering what Wedrifid told me, I decided to start with a quarter of a piece (~1mg). The gum was pretty tasteless, which ought to make blinding easier. The effects were noticeable around 10 minutes in - greater energy verging on jitteriness, much faster typing, and an apparent general quickening of thought. Like a more pleasant caffeine. While testing my typing speed in Amphetype, my speed seemed to go up >=5 WPM, even after the time penalties for correcting the increased mistakes; I also did twice the usual number without feeling especially tired. A second dose was similar, and the third dose, at 10 PM before playing Ninja Gaiden II, seemed to stop the usual exhaustion I feel after playing through a level or so. (It’s a tough game, which I have yet to master like Ninja Gaiden Black.) Returning to the previous concern about sleep problems: though I went to bed at 11:45 PM, it still took 28 minutes to fall asleep (compared to my more usual 10-20 minute range); the next day I use 2mg from 7-8PM while driving, going to bed at midnight, where my sleep latency is a more reasonable 14 minutes. I then skipped for 3 days to see whether any cravings would pop up (they didn’t). I subsequently used 1mg every few days for driving or Ninja Gaiden II, and while there were no cravings or other side-effects, the stimulation definitely seemed to get weaker - benefits seemed to still exist, but I could no longer describe any considerable energy or jitteriness.
The easiest way to use 2mg was to use half a gum; I tried not chewing it but just holding it in my cheek. The first night I tried, this seemed to work well for motivation; I knocked off a few long-standing to-do items. Subsequently, I began using it for writing, where it has been similarly useful. One difficult night, I wound up using the other half (for a total of 4mg over ~5 hours), and it worked but gave me a fairly mild headache and a faint sensation of nausea; these may have been due to forgetting to eat dinner, but this still indicates 3mg should probably be my personal ceiling until and unless tolerance to lower doses sets in.
### Experiment
#### Design
Blinding stymied me for a few months since the nasty taste was unmistakable and I couldn’t think of any gums with a similar flavor to serve as placebo. (The nasty taste does not seem to be due to the nicotine despite what one might expect; Vaniver plausibly suggested the bad taste might be intended to prevent over-consumption, but nothing in the Habitrol ingredient list seemed to be noted for its bad taste, and a number of ingredients were sweetening sugars of various sorts. So I couldn’t simply flavor some gum.)
I almost resigned myself to buying patches to cut (and let the nicotine evaporate) and hope they would still stick on well enough afterwards to be indistinguishable from a fresh patch, when late one sleepless night I realized that a piece of nicotine gum hanging around on my desktop for a week proved useless when I tried it, and that was the answer: if nicotine evaporates from patches, then it must evaporate from gum as well, and if gum does evaporate, then to make a perfect placebo all I had to do was cut some gum into proper sizes and let the pieces sit out for a while. (A while later, I lost a piece of gum overnight and consumed the full 4mg to no subjective effect.) Google searches led to nothing indicating I might be fooling myself, and suggested that evaporation started within minutes in patches and a patch was useless within a day. Just a day is pushing it (who knows how much is left in a useless patch?), so I decided to build in a very large safety factor and let the gum sit for around a month rather than a single day.
The experiment then is straightforward: cut up a fresh piece of gum, randomly select from it and an equivalent dry piece of gum, and do 5 rounds of dual n-back to test attention/energy & WM. (If it turns out to be placebo, I’ll immediately use the remaining active dose: no sense in wasting gum, and this will test whether nigh-daily use renders nicotine gum useless, similar to how caffeine may be useless if taken daily. If there are 3 pieces of active gum left, then I wrap them very tightly in Saran wrap, which is sticky and air-tight.) The dose will be 1mg, or 1/4 of a gum. I cut up a dozen pieces into 4 pieces each for 48 doses and set them out to dry. Per the previous power analyses, 48 groups of DNB rounds likely will be enough for detecting small-medium effects (partly since we will be looking at only one metric - average % right per 5 rounds - with no need for multiple correction). Analysis will be one-tailed, since we’re looking for whether there is a clear performance improvement and hence a reason to keep using nicotine gum (rather than whether nicotine gum might be harmful).
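As a crude cross-check on that power claim (assuming the 48 days split roughly evenly and treating the daily averages as approximately normal; the page’s own power analyses were presumably more careful):

```r
# Two-sample one-sided t-test power at a medium standardized effect (d = 0.5):
power.t.test(n = 24, delta = 0.5, sd = 1, sig.level = 0.05,
             alternative = "one.sided")
# power ~ 0.52: a real shot at medium effects, though small ones (d ~ 0.2)
# would likely slip through
```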
Cost-wise, the gum itself (~$5) is an irrelevant sunk cost and the DNB something I ought to be doing anyway. If the results are negative (which I’ll define as d<0.2), I may well drop nicotine entirely since I have no reason to expect other forms (patches) or higher doses (2mg+) to create new benefits. This would save me an annual expense of ~$40 with a net present value of <$820 ($\frac{40}{\ln 1.05} \approx 820$); even if we count the time-value of the 20 minutes for the 5 DNB rounds over 48 days ($0.2 \times 48 \times 7.25 \approx 70$), it’s still a clear profit to run a convincing experiment.

#### Data

#### Analysis

First, we’ll check the prediction score (versus a random guesser scoring 0; higher is better):

```haskell
logBinaryScore = sum . map (\(result,p) -> if result then 1 + logBase 2 p else 1 + logBase 2 (1-p))
logBinaryScore [(True,0.35),(False,0.40),(False,0.40),(True,0.60),(True,0.35),(False,0.45),(False,0.50),
                (True,0.60),(False,0.30),(True,0.50),(False,0.40),(False,0.30),(False,0.25),(False,0.75),
                (False,0.40),(False,0.40),(False,0.65),(False,0.45),(True,0.50),(False,0.65),(True,0.40),
                (True,0.55),(True,0.40),(False,0.50),(False,0.60),(True,0.40),(False,0.50),(False,0.50),
                (False,0.55),(True,0.55),(False,0.50),(False,0.55),(False,0.45),(True,0.55),(True,0.50),
                (True,0.50),(False,0.55),(True,0.50)]
-- -0.58
```

Ouch, so my guesses were actually worse than random; this isn’t encouraging (if nicotine was helpful, why didn’t I notice? Has tolerance to 1mg already developed?) but it does indicate the blinding was successful.

Now we will examine the actual performance. Extracting the individual rounds’ scores from my Brain Workshop log file, we can average them in groups of 5 to get a daily average, then feed them into BEST (the Bayesian equivalent of a t-test; see Kruschke 2012):

```r
## individual rounds; the imbalance is unfortunate but the experiment design means nothing can be done
on <- c(36,36,25,27,38,50,34,62,33,22,40,28,37,50,25,42,44,58,47,55,38,35,43,60,47,44,40,33,44,
        19,58,38,41,52,41,33,47,45,45,55,45,27,35,45,30,30,52,36,28,43,50,27,29,55,45,31,15,47,
        64,35,33,60,38,28,60,45,64,50,44,38,35,61,56,30,44,41,37,41,43,38)
off <- c(25,34,30,40,57,34,41,51,36,26,37,42,40,45,31,24,38,40,47,35,31,27,66,25,17,43,46,50,36,
         38,58,50,23,50,31,38,33,66,30,68,42,40,29,69,45,60,37,22,28,40,41,45,37,18,50,20,41,42,
         47,44,60,31,46,46,55,47,42,35,40,29,47,56,37,50,20,31,42,53,27,45,50,65,33,33,33,40,47,
         41,25,55,40,31,30,45,50,20,25,30,70,47,47,42,40,35,45,60,37,22,38,36,54,64,25,28,50,42,
         31,50,30,30)

on2  <- rowMeans(as.data.frame(matrix(on,  ncol=5, byrow=TRUE)))
off2 <- rowMeans(as.data.frame(matrix(off, ncol=5, byrow=TRUE)))
on2
#  [1] 32.4 40.2 36.0 49.2 44.6 36.0 46.0 45.0 36.4 37.8 41.2 38.4 43.8 48.2 45.2
# [16] 40.0
off2
#  [1] 37.2 37.6 39.0 36.8 33.2 42.6 42.4 47.0 45.0 37.4 38.2 38.8 47.6 38.6 42.0
# [16] 39.6 42.8 41.6 39.2 38.4 41.8 38.6 44.2 36.6

source("BEST.R")
mcmc = BESTmcmc(on2, off2); postInfo
# SUMMARY.INFO
# PARAMETER       mean     median       mode     HDIlow   HDIhigh pcgtZero
# mu1       41.2808129 41.2819208 41.2272636 38.5078129 44.032699       NA
# mu2       40.1981087 40.1955543 40.1777039 38.6810806 41.706469       NA
# muDiff     1.0827042  1.0837831  1.1279921 -2.0292432  4.244909 75.87121
# sigma1     5.2563674  5.0898354  4.7768681  3.3307493  7.511054       NA
# sigma2     3.5513796  3.4850902  3.3453379  2.4655024  4.782887       NA
# sigmaDiff  1.7049879  1.5917839  1.3816030 -0.6523817  4.300692 93.36015
# nu        37.7948193 29.3217989 13.0664336  2.2755711 98.116623       NA
# nuLog10    1.4472479  1.4671906  1.5204474  0.7604963  2.101837       NA
# effSz      0.2460061  0.2450074  0.2361248 -0.4399959  0.936570 75.87121
```

The results graphed:
read off the results from the table or graph: the nicotine days average 1.1% higher, for an effect size of 0.24; however, the 95% credible interval (equivalent of confidence interval) goes all the way from 0.93 to -0.44, so we cannot exclude 0 effect and certainly not claim confidence the effect size must be >0.1. Specifically, the analysis gives a 66% chance that the effect size is >0.1. (One might wonder if any increase is due purely to a training effect - getting better at DNB. Probably not25.) This is disappointing. One curious thing that leaps out looking at the graphs is that the estimated underlying standard deviations differ: the nicotine days have a strikingly large standard deviation, indicating greater variability in scores - both higher and lower, since the means weren’t very different. The difference in standard deviations is just 6.6% below 0, so the difference almost reaches our usual frequentist levels of confidence too, which we can verify by testing: var.test(on2, off2, alternative="greater") # F test to compare two variances # # data: on2 and off2 # F = 1.9823, num df = 15, denom df = 23, p-value = 0.06775 # alternative hypothesis: true ratio of variances is greater than 1 # 95% confidence interval: # 0.9314525 Inf # sample estimates: # ratio of variances # 1.982333 We can double-check this by seeing what the variance is for the unaveraged scores: we know the means are only 1.1% different, so the additional standard deviation must be coming from how individual days are good or bad, and if that is so, then unaveraging them out to eliminate most of the observed difference. We re-run BEST: mcmc = BESTmcmc(on,off); postInfo # SUMMARY.INFO # PARAMETER mean median mode HDIlow HDIhigh pcgtZero # mu1 41.22703657 41.22582276 41.11576792 38.7591670 43.7209215 NA # mu2 40.12386083 40.12235449 40.04585340 37.9655703 42.3037602 NA # muDiff 1.10317574 1.10302023 1.13446641 -2.1520680 4.4246013 74.52276 # sigma1 10.91966242 10.86603052 10.74158135 9.1335897 12.7962565 NA # sigma2 11.69484205 11.66111990 11.57560017 10.1050885 13.3605913 NA # sigmaDiff -0.77517964 -0.79214849 -0.85774274 -3.1789680 1.6252535 25.70744 # nu 46.86258782 38.65278685 22.91066668 5.8159908 109.9850644 NA # nuLog10 1.57972151 1.58718081 1.60810992 1.0214182 2.1234248 NA # effSz 0.09778545 0.09763823 0.09931263 -0.1895882 0.3907156 74.52276 We see the standard deviation difference go away - now the difference estimate is almost centered on zero with a just 75% estimate the standard deviation differs in the observed direction. And to repeat the frequentist test: var.test(on, off, alternative="greater") # F test to compare two variances # # data: on and off # F = 0.8564, num df = 79, denom df = 119, p-value = 0.7689 # alternative hypothesis: true ratio of variances is greater than 1 # 95% confidence interval: # 0.6140736 Inf # sample estimates: # ratio of variances # 0.856387 (So our p-value there went from 0.06 to 0.769 when we disaggregated the days, consistent with the Bayesian results.) ##### Good days and bad days? The greatly increased variance, but only somewhat increased mean, is consistent with nicotine operating on me with an inverted U-curve for dosage/performance (or the Yerkes-Dodson law): on good days, 1mg nicotine is too much and degrades performance (perhaps I am overstimulated and find it hard to focus on something as boring as n-back) while on bad days, nicotine is just right and improves n-back performance. 
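A toy simulation makes the aggregation point vivid: if nicotine changed day-to-day variability while round-level noise stayed large, the difference would show up in daily averages yet wash out in raw rounds. This sketch is purely illustrative - the group sizes and standard deviations are made up, not estimated from the data:

## illustrative only: 'volatile days' vs 'stable days' with identical round-level noise
simGroup <- function(daySD, days = 200, rounds = 5, roundSD = 10) {
    dayMeans <- rnorm(days, mean = 41, sd = daySD)
    rnorm(days * rounds, mean = rep(dayMeans, each = rounds), sd = roundSD)
}
set.seed(2012)
a <- simGroup(6)   # 'nicotine-like': volatile day quality
b <- simGroup(1)   # 'placebo-like': stable day quality
sd(rowMeans(matrix(a, ncol = 5, byrow = TRUE))) /
    sd(rowMeans(matrix(b, ncol = 5, byrow = TRUE)))   # ~1.6: clearly visible in daily averages
sd(a) / sd(b)                                         # ~1.15: nearly washed out in raw rounds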
##### Good days and bad days?

This would be easy to test if I had done something before taking the nicotine gum; then I would simply see if pre-gum scores were higher than post-gum scores on nicotine days, but equal on placebo days. Unfortunately, I didn’t. The closest data I have is my daily log of productivity/mood (1-5). If nicotine scores are higher than placebo scores on bad days (1-2) and lower on good days (3-4), then I think that would be consistent with an inverted U-curve.

nicotine <- read.table(stdin(),header=TRUE)
day active mp score
20120824 1 3 35.2
20120827 0 5 37.2
20120828 0 3 37.6
20120830 1 3 37.75
20120831 1 2 37.75
20120902 0 2 36.0
20120905 0 5 36.0
20120906 1 5 37.25
20120910 0 5 49.2
20120911 1 3 36.8
20120912 0 3 44.6
20120913 0 5 38.4
20120915 0 5 43.8
20120916 0 2 39.6
20120918 0 3 49.6
20120919 0 4 38.4
20120923 0 5 36.2
20120924 0 5 45.4
20120925 1 3 43.8
20120926 0 4 36.4
20120929 1 3 43.8
20120930 1 3 36.0
20121001 1 3 46.0
20121002 0 4 45.0
20121008 0 2 34.6
20121009 1 3 45.2
20121012 0 5 37.8
20121013 0 4 37.2
20121016 0 4 40.2
20121020 1 3 39.0
20121021 0 3 41.2
20121022 0 3 42.2
20121024 0 5 40.4
20121029 1 2 41.4
20121031 1 3 38.4
20121101 1 5 43.8
20121102 0 3 48.2
20121103 1 5 40.6

summary(nicotine)
# day active mp score
# Min. :20120824 Min. :0.0000 Min. :2.000 Min. :34.60
# 1st Qu.:20120911 1st Qu.:0.0000 1st Qu.:3.000 1st Qu.:37.21
# Median :20120926 Median :0.0000 Median :3.000 Median :39.30
# Mean :20120954 Mean :0.3947 Mean :3.632 Mean :40.47
# 3rd Qu.:20121015 3rd Qu.:1.0000 3rd Qu.:5.000 3rd Qu.:43.80
# Max. :20121103 Max. :1.0000 Max. :5.000 Max. :49.60

cor(nicotine)
#              active          mp       score
# day      0.05331968  0.07437166  0.32021554
# active              -0.27754064 -0.05727501
# mp                               0.05238032

Interesting. On days ranked 2 (below-average mood/productivity), nicotine seems to have boosted scores; on days ranked 3, nicotine hurts scores; there aren’t enough 4s to tell, but even ‘5’ days seem to see a boost from nicotine, which is not predicted by the theory. But I don’t think much of a conclusion can be drawn: not enough data to make out any simple relationship. Some modeling suggests no relationship in this data either (although also no difference in standard deviations, leading me to wonder if I screwed up the data recording - not all of the DNB scores seem to match the input data in the previous analysis). So although the ‘2’ days in the graph are striking, the theory may not be right.

#### Conclusion

What should I make of all these results?

• The poor prediction performance, while confirming my belief that my novel strategy for blinding nicotine gum worked well, undermines confidence in the value of nicotine.
• I specified at the beginning that I wanted an effect size of >0.2; I got it, but with it came a very wide credible interval, undermining confidence in the effect size.
• The difference in standard deviations is not, from a theoretical perspective, all that strange a phenomenon: at the very beginning of this page, I covered some basic principles of nootropics and mentioned how many stimulants or supplements follow an inverted U-curve where too much or too little leads to poorer performance (ironically, one of the examples in Kruschke 2012 was a smart drug which did not affect means but increased standard deviations). If this is the case, this suggests some thoughtfulness about my use of nicotine: there are times when use of nicotine will not be helpful, but times where it will be helpful.
I don’t know what makes the difference, but I can guess it relates to over-stimulation: on some nights during the experiment, I had difficulty concentrating on n-backing because it was boring and I was thinking about the other things I was interested in or working on - in retrospect, I wonder if those instances were nicotine nights.

In retrospect, there were 2 parts of the experiment design I probably should have changed:

1. I used 1mg gum, rather than 2mg

   1mg may have effects too small to easily detect, and I may have developed tolerance to 1mg even though I’ve been careful to space out all my gum use. 2mg would have reduced this concern.

2. I used 1mg each day regardless of the randomization

   This was to make each day more consistent and avoid wasting a sliced piece of gum (due to evaporation, it’s use-it-or-lose-it). But this plausibly is a source of tolerance, and even if #1 was not an issue when the self-experiment began, this could have become an issue.

All things considered, I will probably continue using nicotine gum sparingly.

## Nicotine patches

Running low on gum (even using it weekly or less, it still runs out), I decided to try patches. Reading through various discussions, I couldn’t find any clear verdict on what patch brands might be safer (in terms of nicotine evaporation through a cut or edge) than others, so I went with the cheapest Habitrol I could find as a first try of patches (Nicotine Transdermal System Patch, Stop Smoking Aid, 21 mg, Step 1, 14 patches) in May 2013. I am curious to what extent nicotine might improve performance over a long time period like several hours or a whole day, compared to the shorter-acting nicotine gum, which feels like it helps for an hour at most and then tapers off (which is very useful in its own right for kicking me into starting something I have been procrastinating on). I have not decided whether to try another self-experiment.

Using the 21mg patches, I cut them into quarters. What I would do is cut out 1 quarter, seal the two cut edges with scotch tape, and put the Pac-Man-shaped remainder back into its sleeve. Then the next time I would cut another quarter, seal the new edge, and so on. I thought that 5.25mg might be too much since I initially found 4mg gum to be too much, but it’s delivered over a long time and it wound up feeling much more like 1mg gum used regularly. I don’t know if the tape worked, but I did not notice any loss of potency.

I didn’t like them as much as the gum because I would sometimes forget to take off a patch at the end of the day and it would interfere with sleep, and because the onset is much slower and I find I need stimulants more for getting started than for ongoing stimulation, so it is better to have gum which can be taken precisely when needed and starts acting quickly. (One case where the patches were definitely better than the gum was long car trips, where slow onset is fine, since you’re most alert at the start.) When I finally ran out of patches in June 2016 (using them sparingly), I ordered gum instead.

# Noopept

Related to the famous -racetams but reportedly better (and much less bulky), Noopept is one of the many obscure Russian nootropics. (Further reading: Google Scholar, Examine.com, Reddit, Longecity, Bluelight.ru.)
Its advantages seem to be that it’s far more compact than piracetam and doesn’t taste awful, so it’s easier to store and consume; it doesn’t have the cloud hanging over it that piracetam does due to the FDA letters, so it’s easy to purchase through normal channels; it is cheap on a per-dose basis; and it has fans claiming it is better than piracetam.

A Redditor ordered some Russian-brand Noopept, but finding it unpleasant & not working for him, gave the left-over half to me:

It appeared in reasonably good shape, and closely matched the photographs in the Wikipedia article. I took 2 of the 25 10mg pills on successive days on top of my usual caffeine+piracetam stack, and didn’t notice anything; in particular, I didn’t find it unpleasant like he did.

## Pilot experiment

So, I thought I might as well experiment since I have it. I put the 23 remaining pills into gel capsules with brown rice as filling, made ~30 placebo capsules, and will use the one-bag blinding/randomization method. I don’t want to spend the time it would take to n-back every day, so I will simply look for an effect on my daily mood/productivity self-rating; hopefully Noopept will add a little on average above and beyond my existing practices like caffeine+piracetam (yes, Noopept may be as good as piracetam, but since I still have a ton of piracetam from my 3kg order, I am primarily interested in whether Noopept adds onto piracetam rather than replaces it). 10mg doses seem to be on the low side for Noopept users, weakening the effect, but on the other hand, if I were to take 2 capsules at a time, then I’d halve the sample size; it’s not clear what the optimal tradeoff between dose and n is for statistical power. Nor am I sure how important the results are - partway through, I haven’t noticed anything bad, at least, from taking Noopept. And any effect is going to be subtle: people seem to think that 10mg is too small for an ingested rather than sublingual dose and that I should be taking twice as much, and Noopept is claimed to be a chronic gradual sort of thing, with less of an acute effect. If the effect size is positive, regardless of statistical significance, I’ll probably think about doing a bigger real self-experiment (more days blocked into weeks or months & a 20mg dose).

### Power

I don’t expect to find an effect, though; a quick t-test power analysis of a one-sided paired design with 23 pairs suggests that a reasonable power of 80% would still only be able to detect an increase of d>=0.5:

pwr.t.test(n=23, type="paired", alternative="greater", sig.level=0.05, power=0.8)
# Paired t test power calculation
#
# n = 23
# d = 0.5352

Or in other words, since the standard deviation of my previous self-ratings is 0.75 (see the Weather and my productivity data), a mean rating increase of >0.39 on the self-rating. This is, unfortunately, implying an extreme shift in my self-assessments (for example, 3s are ~50% of the self-ratings and 4s ~25%; to cause an increase of 0.25 while leaving 2s alone in a sample of 23 days, one would have to push 3s down to ~25% and 4s up to ~47%). So in advance, we can see that the weak plausible effects for Noopept are not going to be detected here at our usual statistical levels with just the sample I have (a more plausible experiment might use 178 pairs over a year, detecting down to d>=0.18). But if the sign is right, it might make Noopept worthwhile to investigate further. And the hardest part of this was just making the pills, so it’s not a waste of effort.
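As a sanity check (my verification, not part of the original analysis), the same pwr call with n=178 recovers the quoted minimum detectable effect:

library(pwr)
pwr.t.test(n=178, type="paired", alternative="greater", sig.level=0.05, power=0.8)
# returns d ≈ 0.187, matching the d>=0.18 figure quoted above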
### Data

Available as a CSV spanning 15 May - 9 July 2013, with magnesium l-threonate consumption as a covariate (see the magnesium page).

### Analysis

Some quick tests turn in similar conclusions: both the Noopept and the Magtein increased the self-rating, but not statistically-significantly (as expected from the beginning due to the lack of power).

npt <- read.csv("https://www.gwern.net/docs/nootropics/2013-gwern-noopept.csv")
wilcox.test(MP ~ Noopept, alternative="less", data = npt)
#
# Wilcoxon rank sum test with continuity correction
#
# data: MP by Noopept
# W = 343, p-value = 0.2607

summary(lm(MP ~ Noopept + Magtein, data = npt))
# ...Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 2.8038 0.1556 18.02 <2e-16
# Noopept 0.0886 0.2098 0.42 0.67
# Magtein 0.2673 0.2070 1.29 0.20
#
# Residual standard error: 0.761 on 53 degrees of freedom
# Multiple R-squared: 0.0379, Adjusted R-squared: 0.00164
# F-statistic: 1.05 on 2 and 53 DF, p-value: 0.359

More specifically, the ordinal logistic regression estimates effect sizes of odds-ratio 1.3 for the Noopept and 1.9 for the magnesium:

library(rms)
npt$MP <- as.ordered(npt$MP)
lmodel <- lrm(MP ~ Noopept + Magtein, data = npt); lmodel
# ...
# Coef S.E. Wald Z Pr(>|Z|)
# y>=3 0.4330 0.4049 1.07 0.2849
# y>=4 -1.4625 0.4524 -3.23 0.0012
# Noopept 0.2336 0.5114 0.46 0.6479
# Magtein 0.6748 0.5098 1.32 0.1856

The magnesium was neither randomized nor blinded and included mostly as a covariate to avoid confounding (the Noopept coefficient & t-value increase somewhat without the Magtein variable), so an OR of 1.9 is likely too high; in any case, this experiment was too small to reliably detect any effect (~26% power, see the bootstrap power simulation in the magnesium section) so we can’t say too much.

set.seed(3333)
library(boot)
noopeptPower <- function(dt, indices) {
    d <- dt[indices,] # bootstrap's _n_ = original _n_
    lmodel <- lrm(MP ~ Noopept + Magtein, data = d)
    return(anova(lmodel)[7]) # _p_-value for the Noopept coefficient
}
bs <- boot(data=npt, statistic=noopeptPower, R=100000, parallel="multicore", ncpus=4)
alpha <- 0.05
print(sum(bs$t<=alpha) / length(bs$t))
# [1] 0.073

So for the observed effect size, the small Noopept sample had only 7% power to turn in a statistically-significant result. Given the plausible effect size, and the weakness of the experiment, I find these results encouraging.

## Noopept followup experiment

Noopept is a Russian stimulant sometimes suggested for nootropics use as it may be more effective than piracetam or other -racetams, and its smaller doses make it more convenient & possibly safer. Following up on a pilot study, I ran a well-powered blind randomized self-experiment between September 2013 and August 2014 using doses of 12-60mg Noopept & pairs of 3-day blocks to investigate the impact of Noopept on self-ratings of daily functioning in addition to my existing supplementation regimen involving small-to-moderate doses of piracetam. A linear regression, which included other concurrent experiments as covariates & used multiple imputation for missing data, indicates a small benefit to the lower dose levels and harm from the highest 60mg dose level, but no dose nor Noopept as a whole was statistically-significant. It seems Noopept’s effects are too subtle to easily notice if they exist, but if one uses it, one should probably avoid 60mg+.
### Design

In avoiding experimenting with more Russian Noopept pills and using instead the easily-purchased powder form of Noopept, there are two opposing considerations: Russian Noopept is reportedly the best, so we might expect anything I buy online to be weaker or impure or inferior somehow, making the effect size smaller than in the pilot experiment; but by buying my own supply & using powder, I can double or triple the dose to 20mg or 30mg (to compensate for the original under-dosing of 10mg), making the effect size larger than in the pilot experiment.

As it happened, Health Supplement Wholesalers (since renamed Powder City) offered me a sample of their products, including their 5g Noopept powder ($13). I’d never used HSW before & they had some issues in the past, but I haven’t seen any recent complaints, so I was willing to try them. My 5g from batch #130830 arrived quickly (photos: packaging, powder contents). I tried some (it tastes just slightly unpleasant, like an ultra-weak piracetam), and I set about capping the fluffy white flour-like powder with the hilariously tiny scoop they provide.
It took 4 hours to cap 432 Noopept pills and another 432 flour pills. I tried to allocate the Noopept as evenly as possible (3 little scoops per pill) which the HSW packaging suggested would be 10-30mg; running out after 432 implies I managed to get ~12mg into each ($\frac{5000}{432}=11.6$). At 2 pills a day, the experiment will run under a year.
I don’t want to synchronize with the magnesium or lithium experiments, so I’ll use paired blocks of 3 days randomized 50:50, which will help with the reported tolerance to Noopept, which sets in after a few days and requires cycling.
To make things more interesting, I think I would like to try randomizing different dosages as well: 12mg, 24mg, and 36mg (1-3 pills); on 5 May 2014, because I wanted to finish up the experiment earlier, I decided to add 2 larger doses of 48 & 60mg (4-5 pills) as options. Then I can include the previous pilot study as 10mg doses, and regress over dose amount.
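The mechanics of the randomization aren’t spelled out here; purely as an illustration (my sketch, not the procedure actually used), paired 3-day blocks with a randomly chosen dose level per pair could be generated like this:

## hypothetical schedule generator - illustrative only, not gwern's actual method
set.seed(2013)                       # arbitrary seed for reproducibility
doses <- c(12, 24, 36)               # 1-3 pills at ~12mg each
makePair <- function(i) {
    dose  <- sample(doses, 1)        # dose level shared by both blocks of this pair
    order <- sample(c(0, 1))         # 50:50 which 3-day block is active
    data.frame(pair   = i,
               day    = 1:6,
               active = rep(order, each = 3),
               mg     = rep(order, each = 3) * dose)
}
schedule <- do.call(rbind, lapply(1:5, makePair))
head(schedule, 12)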
During this time period, I generally refrained from using any nicotine (I wound up using it only 3x in the experimental period) or modafinil (0x) to avoid adding variation to results. I did use magnesium citrate & LLLT (discussed later). Finally, I was taking a stack like this:
1. 1mg melatonin at bedtime
2. 5000IU vitamin D & multivitamin at morning; an iron supplement every 3 days
3. from 25 March to 18 September 2014, ~5g of creatine monohydrate per day
4. a few times a day, taking a custom gel pill which in total supplies ~1g piracetam & 200mg caffeine
#### Power
I’ll first assume the effect size is the same. Using the usual alpha, we can find the necessary sample size by a slight variation on the magnesium bootstrap power calculation. Since the 56 days gave a power of 7% while we want closer to 80%, we probably want to start our power estimation much higher, with n in the 300s:
library(boot)
library(rms)
newNoopeptPower <- function(dt, indices) {
d <- dt[sample(nrow(dt), n, replace=TRUE), ] # new dataset, possibly larger than the original
lmodel <- lrm(MP ~ Noopept + Magtein, data = d)
return(anova(lmodel)[7])
}
alpha <- 0.05
for (n in seq(from = 300, to = 600, by = 30)) {
bs <- boot(data=npt, statistic=newNoopeptPower, R=10000, parallel="multicore", ncpus=4)
print(c(n, sum(bs$t<=alpha)/length(bs$t)))
}
# 0.18/0.19/0.21/0.21/0.23/0.25/0.26/0.28/0.29/0.32/0.32
Even at n=600 (nearly 2 years), the estimated power is only 32%. This is absurdly small and such an experiment would be a waste of time.
Suppose we were optimistic and we doubled the effect from 0.23 to 0.47 (this can be done by editing the first two Noopept rows and incrementing the MP variable by 1), and then looked again at power? At n=300, power has reached 60%, and by n=530, we have hit the desired 80%.
npt[1,2] <- npt[1,2] + 1
npt[2,2] <- npt[2,2] + 1
n <- 530
bs <- boot(data=npt, statistic=newNoopeptPower, R=100000, parallel="multicore", ncpus=4)
print(c(n, sum(bs$t<=alpha)/length(bs$t)))
# [1] 530.0000 0.8241
530 is more acceptable, albeit I am worried about doubling the effect.
### Data
1. 20mg: 15 September - 17 September: 0
18 - 20 September: 1
2. 30mg: 21 September - 23? September: 1
24 - 26: 0
3. 20mg: 27 - 29 September: 0
30 - 2 October: 1
4. 10mg: 3 - 5 October: 1
6 - 8 October: 0
5. 30mg: 9 - 11 October: 1
12 - 14 October: 0
6. 10mg: 15 - 17 October: 1
18 - 20 October: 0
7. 30mg: 22 - 24 October: 1
25 - 27 October: 0
8. 10mg: 28 - 30 October: 1
31 - 2 November: 0
9. 30mg: 4 - 6 November: 1
7 - 9 November: 0
10. 20mg: 11 - 13 Nov: 0
14 - 16 Nov: 1
11. 30mg: 20 - 22 November: 1
23 - 25 November: 0
12. 20mg: 26 - 28 November: 1
29 - 1 December: 0
13. 30mg: 2 - 4 December: 1
5 - 7 December: 0
14. 10mg: 8 - 10 December: 1
11 - 13 December: 0
15. 30mg: 14 - 16 December: 0
17 - 19 December: 1
16. 20mg: 20 - 22 December: 0
27 - 29 December: 1
17. 10mg: 1 - 3 January 2014: 1
4 - 6 January 2014: 0
18. 30mg: 7 - 9 January: 1
10 - 12 January: 0
19. 10mg: 13 - 15 January: 1
16 - 17 January: 0
20. 20mg: 18 - 20 January: 0
21 - 23 January: 1
21. 30mg: 25 - 27 January: 0
28 - 30 January: 1
22. 10mg: 31 January - 2 February: 1
3 - 5 February: 0
23. 30mg: 8 - 10 February: 1
11 - 13 February: 1
24. 10mg: 14 - 16 February: 0
17 - 19 February: 1
25. 30mg: 20 - 22 February: 1
22 - 25 February: 0
26. 20mg: 26 - 28 February: 0
1 March - 3 March: 1
27. 10mg: 4 March - 6 March: 1
7 March - 9 March: 0
28. 30mg: 10 - 11 March: 0; accidentally unblinded & restarted on the 12th (a spare rice placebo, which is visibly different from the flour/Noopept capsules, was mixed in)
29. 30mg: 12 - 14 March : 1
15 - 17 March: 0
30. 20mg: 18 - 20 March: 0
21 - 23 March: 1
31. 10mg: 24 - 26 March: 1
27 - 29 March: 0
32. 20mg: 30 - 1 April: 0
2 - 4 April: 1
33. 10mg: 5 - 7 April: 1
8 - 10 April: 0
34. 30mg: 11 - 13 April: 1
14 - 16 April: 0
35. 20mg: 17 - 19 April: 0
20 - 22 April: 1
36. 10mg: 23 - 25 April: 1
26 - 28 April: 0
37. 30mg: 29 - 1 May: 1
2 - 4 May: 0
38. 48mg: 5 - 7 May: 0
8 - 10 May: 1
39. 60mg: 11 - 13 May: 0
14 - 17 May: 1
40. 20mg: 18 - 20 May: 0
21 - 23 May: 1
41. 48mg: 24 - 26 May: 1
27 - 29 May: 0
42. 60mg: 30 - 1 June: 0
2 - 4 June: 1
43. 30mg: 5 - 7 June: 1
8 - 10 June: 0
44. 5x: 11 - 13 June: 1
14 - 16 June: 0
45. 3x: 17 - 19 June: 0
20 - 22 June: 1
46. 5x: 23 - 25 June: 0
26 - 28 June: 1
47. 4x: 29 June - 1 July: 1
2 - 4 July: 0
48. 3x: 5 - 7 July: 1
8 - 9 July: 0
49. 5x: 10 - 12 July: 1
13 - 15 July: 0
50. 3x: 16 - 18 July: 0
19 - 21 July: 1
51. 4x: 23 - 25 July: 0
26 - 28 July: 1
52. 5x: 29 - 31 July: 0
1 - 3 August: 1
53. 3x: 4 - 6 August: 1
7 - 9 August: 0
54. 3x: 10 - 12 August: 0
13 - 15 August: 1
55. 3x: 16 - 18 August: 0
19 - 21 August: 1
56. 2x: 23 - 25 August: 0
26 - 28 August: 1
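For analysis, each block-pair entry in the list above has to be expanded into per-day rows (date, dose, active flag). A hypothetical helper showing the shape of that munging - the actual CSV used below was prepared separately:

## illustrative munging sketch: expand one block-pair record into per-day rows
expandPair <- function(start, mg, firstActive) {
    days   <- seq(as.Date(start), by = "day", length.out = 6)
    active <- rep(c(firstActive, 1 - firstActive), each = 3)
    data.frame(Date = days, Noopept = active * mg)
}
expandPair("2013-09-15", 20, 0)   # block-pair #1: placebo 15-17 Sep, 20mg 18-20 Sep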
### Analysis
Analyzing the results is a little tricky because I was simultaneously running the first magnesium citrate self-experiment, which turned out to cause a quite complex result which looks like a gradually-accumulating overdose negating an initial benefit for net harm, and also toying with LLLT, which turned out to have a strong correlation with benefits. So for the potential small Noopept effect to not be swamped, I need to include those in the analysis. I designed the experiment to try to find the best dose level, so I want to look at an average Noopept effect but also the estimated effect at each dose size in case some are negative (especially in the case of 5-pills/60mg); I included the pilot experiment data as 10mg doses since they were also blind & randomized. Finally, missingness affects analysis: because not every variable is recorded for each date (what was the value of the variable for the blind randomized magnesium citrate before and after I finished that experiment? what value do you assign the Magtein variable before I bought it and after I used it all up?), just running a linear regression may not work exactly as one expects as various days get omitted because part of the data was missing.
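To see concretely why the missingness bites, recall that lm() listwise-deletes: any row with an NA in any model term is silently dropped (a toy illustration of standard R behavior, not part of the original analysis):

df <- data.frame(y = c(1, 2, 3), x = c(1, NA, 3))
nobs(lm(y ~ x, data = df))   # 2, not 3: the NA row was dropped, not imputed
## hence the 'observations deleted due to missingness' notes in the summaries below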
noopeptSecond <- read.csv("https://www.gwern.net/docs/nootropics/2013-2014-gwern-noopept.csv", colClasses=c("Date","integer","integer","integer","logical"))
l <- lm(MP ~ Noopept +
LLLT +
as.logical(Magnesium.citrate) + as.integer(Date) + as.logical(Magnesium.citrate):as.integer(Date),
data=noopeptSecond)
summary(l)
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 24.254373177 14.252905125 1.70171 0.09043607
# Noopept 0.002069507 0.003937337 0.52561 0.59976836
# LLLTTRUE 0.330112028 0.096133360 3.43390 0.00072963
# as.logical(Magnesium.citrate)TRUE 27.058060337 19.655569654 1.37661 0.17024431
# as.integer(Date) -0.001313316 0.000886616 -1.48127 0.14018300
# as.logical(Magnesium.citrate)TRUE:as.integer(Date) -0.001699162 0.001222719 -1.38966 0.16625033
#
# Residual standard error: 0.640741 on 191 degrees of freedom
# (731 observations deleted due to missingness)
# Multiple R-squared: 0.154383, Adjusted R-squared: 0.132246
# F-statistic: 6.97411 on 5 and 191 DF, p-value: 5.23897e-06
As expected since most of the data overlaps with the previous LLLT analysis, the LLLT variable correlates strongly; the individual magnesium variables may look a little more questionable but were justified in the magnesium citrate analysis. The Noopept result looks a little surprising - almost zero effect? Let’s split by dose (which was the point of the whole rigmarole of changing dose levels):
l2 <- lm(MP ~ as.factor(Noopept) +
LLLT +
as.logical(Magnesium.citrate) + as.integer(Date) + as.logical(Magnesium.citrate):as.integer(Date),
data=noopeptSecond)
summary(l2)
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 27.044709119 14.677995235 1.84253 0.06697191
# as.factor(Noopept)10 0.099920147 0.139287051 0.71737 0.47403711
# as.factor(Noopept)15 0.526389063 0.297940313 1.76676 0.07889108
# as.factor(Noopept)20 0.114943375 0.147994400 0.77667 0.43832733
# as.factor(Noopept)30 0.019029776 0.125504996 0.15163 0.87964479
# LLLTTRUE 0.329976497 0.096071943 3.43468 0.00072993
# as.logical(Magnesium.citrate)TRUE 25.615810606 20.397271406 1.25584 0.21073068
# as.integer(Date) -0.001488184 0.000913563 -1.62899 0.10499001
# as.logical(Magnesium.citrate)TRUE:as.integer(Date) -0.001610059 0.001269219 -1.26854 0.20617256
#
# Residual standard error: 0.639823 on 188 degrees of freedom
# (731 observations deleted due to missingness)
# Multiple R-squared: 0.170047, Adjusted R-squared: 0.13473
# F-statistic: 4.81487 on 8 and 188 DF, p-value: 2.08804e-05
This looks interesting: the Noopept effect is positive for all the dose levels, but it looks like a U-curve - low at 10mg, high at 15mg, lower at 20mg, and even lower at 30mg. 48mg and 60mg aren’t estimated because they are hit by the missingness problem: the magnesium citrate variable is unavailable for the days the higher doses were taken on, and so their days are omitted and those levels of the factor are not estimated. One way to fix this is to drop magnesium from the model entirely, at the cost of fitting the data much more poorly and losing a lot of R^2:
l3 <- lm(MP ~ as.factor(Noopept) + LLLT, data=noopeptSecond)
summary(l3)
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 3.0564075 0.0578283 52.85318 < 2.22e-16
# as.factor(Noopept)10 0.1079878 0.1255354 0.86022 0.39031118
# as.factor(Noopept)15 0.1835389 0.2848069 0.64443 0.51975512
# as.factor(Noopept)20 0.1314225 0.1301826 1.00952 0.31348347
# as.factor(Noopept)30 0.0125616 0.1091561 0.11508 0.90845401
# as.factor(Noopept)48 0.2302323 0.2050326 1.12291 0.26231647
# as.factor(Noopept)60 -0.1714377 0.1794377 -0.95542 0.34008626
# LLLTTRUE 0.2801608 0.0829625 3.37696 0.00082304
#
# Residual standard error: 0.685953 on 321 degrees of freedom
# (599 observations deleted due to missingness)
# Multiple R-squared: 0.0468695, Adjusted R-squared: 0.0260848
# F-statistic: 2.25499 on 7 and 321 DF, p-value: 0.0297924
This doesn’t fit the U-curve so well: while 60mg is substantially negative as one would extrapolate from 30mg being ~0, 48mg is actually better than 15mg. But we bought the estimates of 48mg/60mg at a steep price - we ignore the influence of magnesium which we know influences the data a great deal. And the higher doses were added towards the end, so may be influenced by the magnesium starting/stopping. Another fix for the missingness is to impute the missing data. In this case, we might argue that the placebo days of the magnesium experiment were identical to taking no magnesium at all and so we can classify each NA as a placebo day, and rerun the desired analysis:
noopeptImputed <- noopeptSecond
noopeptImputed[is.na(noopeptImputed$Magnesium.citrate),]$Magnesium.citrate <- 0
li <- lm(MP ~ as.factor(Noopept) +
LLLT +
as.logical(Magnesium.citrate) + as.integer(Date) + as.logical(Magnesium.citrate):as.integer(Date),
data=noopeptImputed)
summary(li)
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 10.430818153 8.189365582 1.27370 0.2036989
# as.factor(Noopept)10 0.049595514 0.122841008 0.40374 0.6866772
# as.factor(Noopept)15 0.405925320 0.281291053 1.44308 0.1499824
# as.factor(Noopept)20 0.088343999 0.127014107 0.69554 0.4872219
# as.factor(Noopept)30 0.029464990 0.106375169 0.27699 0.7819668
# as.factor(Noopept)48 0.190340419 0.207736878 0.91626 0.3602263
# as.factor(Noopept)60 -0.210638501 0.184357630 -1.14255 0.2540834
# LLLTTRUE 0.286295998 0.081098102 3.53024 0.0004765
# as.logical(Magnesium.citrate)TRUE 42.273941799 16.288481089 2.59533 0.0098882
# as.integer(Date) -0.000451814 0.000507568 -0.89015 0.3740561
# as.logical(Magnesium.citrate)TRUE:as.integer(Date) -0.002647546 0.001012691 -2.61437 0.0093648
#
# Residual standard error: 0.666405 on 318 degrees of freedom
# (599 observations deleted due to missingness)
# Multiple R-squared: 0.108827, Adjusted R-squared: 0.0808031
# F-statistic: 3.88332 on 10 and 318 DF, p-value: 5.4512e-05
The 48mg/60mg coefficients shift downwards as expected. If we plot the coefficients with arm’s coefplot(), and one squints, the confidence intervals/point-values for Noopept look sort of consistent with a U-curve. What if we switch to a quadratic term to try to turn the Noopept values into a curve?
li2 <- lm(MP ~ Noopept + I(Noopept^2) +
LLLT +
as.logical(Magnesium.citrate) + as.integer(Date) + as.logical(Magnesium.citrate):as.integer(Date),
data=noopeptImputed)
summary(li2)
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 9.172594278 8.112803113 1.13063 0.25905147
# Noopept 0.008079500 0.006074315 1.33011 0.18442378
# I(Noopept^2) -0.000178179 0.000122736 -1.45172 0.14755366
# LLLTTRUE 0.284419402 0.080959896 3.51309 0.00050627
# as.logical(Magnesium.citrate)TRUE 41.589054331 16.141539488 2.57652 0.01042501
# as.integer(Date) -0.000373812 0.000502850 -0.74339 0.45778931
# as.logical(Magnesium.citrate)TRUE:as.integer(Date) -0.002604384 0.001003433 -2.59547 0.00987860
#
# Residual standard error: 0.665408 on 322 degrees of freedom
# (599 observations deleted due to missingness)
# Multiple R-squared: 0.100316, Adjusted R-squared: 0.0835521
# F-statistic: 5.98394 on 6 and 322 DF, p-value: 6.02357e-06
Looks better, but I’m not sure how well it fits. The quadratic $y = 0.0080795x - 0.000178179x^2$ has its maximum around 23mg, though, which seems suspiciously high given that 15mg looked best in the dose-level regression; it seems that in order to fit the negative estimate for 60mg while keeping the 48mg estimate almost as big as 15mg’s, the top of the curve gets pulled rightward. I don’t find that entirely plausible.
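Double-checking that vertex arithmetic from the printed coefficients (my verification, not in the original):

b <- 0.008079500; a <- -0.000178179
-b / (2*a)   # ≈ 22.7: the fitted curve peaks near 23mg, where the marginal effect crosses zero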
A fancier method of imputation would be multiple imputation using, for example, the R library mice (Multivariate Imputation by Chained Equations) (guide), which will try to impute all missing values in a way which mimics the internal structure of the data and provide several possible datasets to give us an idea of what the underlying data might have looked like, so we can see how our estimates improve with no missingness & how much of the estimate is now due to the imputation:
library(mice)
## work around apparent error in MICE: can't handle Dates type
## even though no missing-values in that column...?
noopeptSecond$Date <- as.integer(noopeptSecond$Date)
nimp <- mice(noopeptSecond, m=200, maxit=200)
li3 <- with(nimp, lm(MP ~ Noopept + I(Noopept^2) +
LLLT +
as.logical(Magnesium.citrate) + as.integer(Date) +
as.logical(Magnesium.citrate):as.integer(Date)))
round(summary(pool(li3)), 4)
# est se t df Pr(>|t|)
# (Intercept) -8.3369 3.2520 -2.5636 296.1619 0.0109
# Noopept 0.0073 0.0057 1.2790 521.5756 0.2015
# I(Noopept^2) -0.0001 0.0001 -1.2808 583.2136 0.2008
# LLLT 0.3069 0.0910 3.3737 168.4541 0.0009
# as.logical(Magnesium.citrate)TRUE 7.0763 3.9584 1.7877 298.7476 0.0748
# as.integer(Date) 0.0007 0.0002 3.4911 299.6728 0.0006
# as.logical(Magnesium.citrate)TRUE:as.integer(Date) -0.0005 0.0002 -1.8119 300.7040 0.0710
# lo 95 hi 95 nmis fmi lambda
# (Intercept) -14.7369 -1.9368 NA 0.4974 0.4940
# Noopept -0.0039 0.0185 457 0.2852 0.2824
# I(Noopept^2) -0.0004 0.0001 NA 0.2411 0.2385
# LLLT 0.1273 0.4864 599 0.6954 0.6918
# as.logical(Magnesium.citrate)TRUE -0.7135 14.8662 NA 0.4942 0.4908
# as.integer(Date) 0.0003 0.0011 NA 0.4930 0.4897
# as.logical(Magnesium.citrate)TRUE:as.integer(Date) -0.0009 0.0000 NA 0.4918 0.4884
The coefficients & p-values agree, so it seems that it doesn’t make too much difference how we deal with missingness.
Finally, we can see if some weak priors/regularization changes the picture much by using a Bayesian regression instead:
library(arm)
bl1 <- bayesglm(MP ~ as.factor(Noopept) +
LLLT +
as.logical(Magnesium.citrate) + as.integer(Date) + as.logical(Magnesium.citrate):as.integer(Date),
data=noopeptImputed)
display(bl1)
# coef.est coef.se
# (Intercept) 20.86 7.18
# as.factor(Noopept)10 0.06 0.12
# as.factor(Noopept)15 0.32 0.28
# as.factor(Noopept)20 0.10 0.13
# as.factor(Noopept)30 0.04 0.11
# as.factor(Noopept)48 0.26 0.20
# as.factor(Noopept)60 -0.13 0.18
# LLLTTRUE 0.27 0.08
# as.logical(Magnesium.citrate)TRUE 0.28 1.33
# as.integer(Date) 0.00 0.00
# as.logical(Magnesium.citrate)TRUE:as.integer(Date) 0.00 0.00
# ---
# n = 329, k = 11
# residual deviance = 144.2, null deviance = 158.5 (difference = 14.3)
# overdispersion parameter = 0.5
# residual sd is sqrt(overdispersion) = 0.67
coefplot(bl1)
simulates <- as.data.frame(coef(sim(bl1, n.sims=100000)))
sapply(simulates[1:11], function(c) { quantile(c, c(.025, .975)) } )
# (Intercept) as.factor(Noopept)10 as.factor(Noopept)15 as.factor(Noopept)20
# 2.5% 6.80794518 -0.179116006 -0.218679929 -0.151787205
# 97.5% 34.85995773 0.304713370 0.865894125 0.348912986
# as.factor(Noopept)30 as.factor(Noopept)48 as.factor(Noopept)60 LLLTTRUE
# 2.5% -0.174273139 -0.143056371 -0.490166499 0.114146706
# 97.5% 0.247145243 0.660125966 0.221157470 0.433830363
# as.logical(Magnesium.citrate)TRUE as.integer(Date)
# 2.5% -2.29986917 -0.001966335149
# 97.5% 2.86557048 -0.000227816111
# as.logical(Magnesium.citrate)TRUE:as.integer(Date)
# 2.5% -0.000197411805
# 97.5% 0.000124153915
The 95% credible intervals emphasize that while the mean estimates of the posterior for the Noopept parameters are positive, there’s substantial uncertainty after updating on the data, and the effects are small.
Should I run another followup experiment? No; the implied effect is so small a confirmatory experiment would have to run a miserably long time, it seems:
library(boot)
library(rms)
newNoopeptPower <- function(dt, indices) {
d <- dt[sample(nrow(dt), n, replace=TRUE), ] # new dataset, possibly larger than the original
lmodel <- lm(MP ~ Noopept + I(Noopept^2) +
LLLT +
as.logical(Magnesium.citrate) + as.integer(Date) + as.logical(Magnesium.citrate):as.integer(Date),
data=d)
    return(anova(lmodel)[1:2,][5]$`Pr(>F)`) # p-values for the Noopept & Noopept^2 rows
}
alpha <- 0.05
for (n in seq(from = 100, to = 3000, by = 200)) {
    bs <- boot(data=noopeptImputed, statistic=newNoopeptPower, R=10000, parallel="multicore", ncpus=4)
    print(c(n, sum(bs$t<=alpha)/length(bs$t)))
}
# [1] 100.0000 0.0817
# [1] 300.0000 0.1145
# [1] 500.00000 0.15175
# [1] 700.00000 0.17825
# [1] 900.0000 0.2132
# [1] 1100.0000 0.2401
# [1] 1300.00000 0.26345
# [1] 1500.00000 0.28595
# [1] 1700.0000 0.3146
# [1] 1900.00000 0.33695
# [1] 2100.0000 0.3513
# [1] 2300.00000 0.37485
# [1] 2500.00000 0.39065
# [1] 2700.0000 0.4068
# [1] 2900.0000 0.4238

(I am not running a blind randomized self-experiment for 8 years just to get barely 40% power.)

### Conclusion

So on net, I think there may be an effect but it’s small and I don’t know whether the optimal dose would be lower (~10mg) or much higher (~40mg). I don’t find this a particularly good reason to continue taking Noopept: it seems to either not be helpful in a noticeable way or to be redundant with the piracetam.

# Oxiracetam

Oxiracetam is one of the 3 most popular -racetams; less popular than piracetam but seemingly more popular than aniracetam. Prices have come down substantially since the early 2000s, and stand at around 1.2g/$ or roughly 50 cents a dose, which was low enough to experiment with; the key question: does it stack with piracetam, or is it redundant for me? (Oxiracetam can’t compete on price with my piracetam stockpile: the latter is now a sunk cost and hence free.)
I bought 60 grams from Smart Powders and combined it with the DMAE; I couldn’t compare oxiracetam+DMAE vs oxiracetam+choline-bitartrate because I had capped all the choline with the piracetam. One immediate advantage of oxiracetam: it is not unbelievably foul tasting like piracetam, but slightly sweet.
Regardless, while I did notice some stimulant effects in the absence of piracetam (somewhat negative - more aggressive than usual while driving) and similar effects to piracetam, I did not notice any mental performance beyond piracetam when using them both. The most I can say is that on some nights, I seemed to be less easily tired when writing or editing or n-backing (and I felt less tired at ICON 2011 than at ICON 2010), but those were also often nights I was also trying out all the other things I had gotten in that order from Smart Powders, and I am still disentangling what was responsible. (Probably the l-theanine or sulbutiamine.)
In other words, for me, the two -racetams did not seem to stack. The following are a number of n-back scores from before (piracetam only) and after (piracetam and oxiracetam):
1. [28,39,26,48,34]; [34,60]; [37,53,55] (▁▂▁▄▁▁▆▂▄▅▆)
2. [56,66,44,46,30,24,50,56,34,39,34]; [30,50,31,37,41,23]; [53,35,40] (▅▇▃▃▁▁▄▅▁▂▁▁▄▁▂▂▁▄▁▂)
There may be some improvement hidden in there, but nothing jumps out to my eye. Oxiracetam has smaller recommended doses than piracetam, true, but even after taking that into account, oxiracetam is still more expensive per dose. When I finished it off, I decided it hadn’t shown any benefits so there was no point in continuing it.
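If one wants a number rather than an eyeball, a quick unpaired test on those scores is easy (assuming list 1 is the before-sample and list 2 the after-sample; this test is my addition, not part of the original writeup):

before <- c(28,39,26,48,34, 34,60, 37,53,55)
after  <- c(56,66,44,46,30,24,50,56,34,39,34, 30,50,31,37,41,23, 53,35,40)
t.test(after, before, alternative = "greater")   # one-tailed: did oxiracetam+piracetam score higher?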
# Piracetam
I bought 500g of piracetam (Examine.com; FDA adverse events) from Smart Powders (piracetam is one of the cheapest nootropics and SP was one of the cheapest suppliers; the others were much more expensive as of October 2010), and I’ve tried it out for several days (started on 7 September 2009, and used it steadily up to mid-December). I’ve varied my dose from 3 grams to 12 grams (at least, I think the little scoop measures in grams), taking them in my tea or bitter fruit juice. Cranberry worked the best, although orange juice masks the taste pretty well; I also accidentally learned that piracetam stings horribly when I got some on a cat scratch. 3 grams (alone) didn’t seem to do much of anything while 12 grams gave me a nasty headache. I also ate 2 or 3 eggs a day.
Subjectively, I didn’t notice drastic changes. Here’s what I did notice:
• My thinking seems a little clearer
• I’m not so easy to tire - I went through a month’s worth of my Wikipedia watchlist with less fatigue than usual, and n-backing doesn’t seem so tiring.
• DNB-wise, eyeballing my stats file seems to indicate a small increase: when I compare peak D4B scores, I see mostly 50s and a few 60s before piracetam, and after starting piracetam, a few 70s mixed into the 50s and 60s. Natural increase from training? Dunno - I’ve been stuck on D4B since June, so 5 or 10% in a week or 3 seems a little suspicious. A graph of the score series:
▁▅▂▁▅▅▂▄▁▂▁▄▄▁▄▂▁▃▃▂▂▂▁▆▁▂▁▄▃▁▃▄▁▄▁▂▅▅▂▃▁▃▃▂▄▂▄▇▄▄▄▅▃▄▂▄▅▅▁▅▃▃▄▅▅▃▃▂▄▄▃▄▆▃▅▃▄▅ ▃▅▄▄▄▂▄▂▄▃▄▄▃▄▄▂▃▆▂▁
vs
▆▅▆▄▄▅▃▅▁▁▃▄▅▃▁▅▃▅▂▃▄▃▁▄▅▅▂▃▁▁▆▃▁▄▄▃▁▅▄▄▃▃▄▂▅▃▁▄▂▅▃▆▆▂▃▃▆▄▃▃▂▂▂▁▄▃▃▄▄▂
• The other day, I also noticed I was fidgeting less
• After a week or two, I think I noticed better reflexes - both in catching falling cups and the saccading in BW seems slightly easier. But I could be imagining this since I just saw an Erowid report mentioning better reflexes & I may’ve read that one before I started. (Darn those subconscious impressions and memories! :)
After 7 days, I ordered a kg of choline bitartrate from Bulk Powders. Choline is standard among piracetam-users because it is pretty universally supported by anecdotes about piracetam headaches, has support in rat/mice experiments, and also some human-related research. So I figured I couldn’t fairly test piracetam without some regular choline - the eggs might not be enough, might be the wrong kind, etc. It has a quite distinctly fishy smell, but the actual taste is more citrus-y, and it seems to neutralize the piracetam taste in tea (which makes things much easier for me).
The first day (22 September) I took ~10g since I was taking 5g of piracetam; I wound up with some diarrhea & farting. Oops.
On the plus side:
• I noticed the less-fatigue thing to a greater extent, getting out of my classes much less tired than usual. (Caveat: my sleep schedule recently changed for the saner, so it’s possible that’s responsible. I think it’s more the piracetam+choline, though.)
• One thing I wasn’t expecting was a decrease in my appetite - nobody had mentioned that in their reports. I don’t like being bothered by my appetite (I know how to eat fine without it reminding me), so I count this as a plus.
• Fidgeting was reduced further
The second day I went with ~6g of choline; much less intestinal distress, but similar effects vis-a-vis fidgeting, loss of appetite, & reduced fatigue. So in general I thought this was a positive experience, but I’m not sure it was worth $40 for ~2 months’ worth, and it was tedious consuming it dissolved. Fortunately for me, the FDA decided Smart Powder’s advertising was too explicit and ordered its piracetam sales stopped; I was equivocal at the previous price point, but then I saw that between the bulk discount and the fire-sale coupon, 3kg was only$99.99 (shipping was amortized over that, the choline, caffeine, and tryptophan). So I ordered in September 2010. As well, I had decided to cap my own pills, eliminating the inconvenience and bad taste. 3kg goes a very long way so I am nowhere close to running out of my pills; there is nothing to report since, as the pills are simply part of my daily routine.
## Piracetam natural experiment
I take my piracetam in the form of capped pills consisting (in descending order) of piracetam, choline bitartrate, anhydrous caffeine, and l-tyrosine. On 8 December 2012, I happened to run out of them and couldn’t fetch more from my stock until 27 December. This forms a sort of (non-randomized, non-blind) short natural experiment: did my daily 1-5 mood/productivity ratings fall during 8-27 December compared to November 2012 & January 2013? The graphed data suggests to me a decline.
The BEST results give a small effect size of -0.26 and only partial exclusion of a zero effect size (which a one-tailed two-sample test agrees with).
So the answer is yes, M/P did fall as I expected; but also as one would expect given daily variation and the small sample of off days (19 days), the result is not very statistically robust (even ignoring the low quality of data from a natural experiment). But it was an easy experiment to run and the result had the right sign, as they say.
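For concreteness, the one-tailed two-sample test alluded to above has this shape; the vectors here are hypothetical placeholders, not the actual ratings data:

offDays <- c(3, 2, 3, 3, 2)   # hypothetical M/P ratings during the 8-27 December gap
onDays  <- c(3, 4, 3, 3, 4)   # hypothetical ratings for November & January
t.test(offDays, onDays, alternative = "less")   # one-tailed: were the off-pill days worse?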
# Potassium
In the 2011-2012 Quantified Health Prize, potassium (FDA adverse events) came up twice as a recommendation. Potassium is vital to nerve conduction, since nerve impulses are nothing but potassium and sodium rushing around, but it didn’t seem like a priority to investigate since I am not an athlete nor do I sweat a great deal.
A LessWrong user Kevin claimed it worked well for him:
By which I mean that simple potassium is probably the most positively mind altering supplement I’ve ever tried…About 15 minutes after consumption, it manifests as a kind of pressure in the head or temples or eyes, a clearing up of brain fog, increased focus, and the kind of energy that is not jittery but the kind that makes you feel like exercising would be the reasonable and prudent thing to do. I have done no tests, but feel smarter from this in a way that seems much stronger than piracetam or any of the conventional weak nootropics. It is not just me – I have been introducing this around my inner social circle and I’m at 7/10 people felt immediately noticeable effects. The 3 that didn’t notice much were vegetarians and less likely to have been deficient. Now that I’m not deficient, it is of course not noticeable as mind altering, but still serves to be energizing, particularly for sustained mental energy as the night goes on…Potassium chloride initially, but since bought some potassium gluconate pills… research indicates you don’t want to consume large amounts of chloride (just moderate amounts).
…The first time I took supplemental potassium (50% US RDA in a lot of water), it was like a brain fog lifted that I never knew I had, and I felt profoundly energized in a way that made me feel exercise was reasonable and prudent, which resulted in me and the roommate that had just supplemented potassium going for an hour long walk at 2AM. Experiences since then have not been quite so profound (which probably was so stark for me as I was likely fixing an acute deficiency), but I can still count on a moderately large amount of potassium to give me a solid, nearly side effect free performance boost for a few hours…I had been doing Bikram yoga on and off, and I think I wasn’t keeping up the practice because I wasn’t able to properly rehydrate myself.
One claim was partially verified in passing by Eliezer Yudkowsky (Supplementing potassium (citrate) hasn’t helped me much, but works dramatically for Anna, Kevin, and Vassar…About the same as drinking a cup of coffee - i.e., it works as a perker-upper, somehow. I’m not sure, since it doesn’t do anything for me except possibly mitigate foot cramps.)
I largely ignored this since the discussions were of sub-RDA doses, and my experience has usually been that RDAs are a poor benchmark and frequently far too low (consider the RDA for vitamin D). This time, I checked the actual RDA - and was immediately shocked and sure I was looking at a bad reference: there was no way the RDA for potassium was seriously 3700-4700mg or 4-5 grams daily, was there? Just as an American, that implied that I was getting less than half my RDA. (How would I get 4g of potassium in the first place? Eat a dozen bananas a day⸮) I am not a vegetarian, nor is my diet that fantastic: I figured I was getting some potassium from the ~2 fresh tomatoes I was eating daily, but otherwise my diet was not rich in potassium sources. I have no blood tests demonstrating deficiency, but given the figures, I cannot see how I could not be deficient.
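The banana arithmetic checks out, assuming the standard USDA figure of ~420mg of potassium per medium banana (a figure I am supplying, not the text):

c(3700, 4700) / 420   # 8.8-11.2 medium bananas per day to span the RDA range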
Potassium is not the safest supplement ever, but it’s reasonably safe (kidneys can filter out overdoses), and between the anecdotes and my sudden realization that I was highly likely deficient, I decided to try it out.
# Sulbutiamine
2 experiences with sulbutiamine (Examine.com) on Reddit moved me to check it out.
My general impression is positive; it does seem to help with endurance and to extend the effect of piracetam+choline, but it is not as effective as that combo. At $20 for 30g (bought from Smart Powders), I’m not sure it’s worthwhile, but I think at $10-15 it would probably be worthwhile. Sulbutiamine seems to affect my sleep negatively, like caffeine. I bought 2 or 3 canisters for my third batch of pills along with the theanine. For a few nights in a row, I slept terribly and stayed awake thinking until the wee hours of the morning; eventually I realized it was because I was taking the theanine pills along with the sleep-mix pills, and the only stimulant ingredient in the batch was - sulbutiamine. I cut out the theanine pills at night, and my sleep went back to normal. (While very annoying, this, like the creatine & taekwondo example, does tend to prove to me that sulbutiamine was doing something and it is not pure placebo effect.)
It’s worth noting that sulbutiamine reports vary dramatically, and it seems possible that some people are thiamine-deficient and so would benefit disproportionately; SilasBarta noticed little to nothing (like me), but Jimrandomh reports his life was transformed (and he suspects that his diabetes caused or exacerbated a deficiency).
# Taurine
Taurine (Examine.com) was another gamble on my part, based mostly on its inclusion in energy drinks. I didn’t do as much research as I should have: it came as a shock to me when I read in Wikipedia that taurine has been shown to prevent oxidative stress induced by exercise and was an antioxidant - oxidative stress is a key part of how exercise creates health benefits and antioxidants inhibit those benefits.
So now I have to be careful about when I take it so it isn’t near a session of exercise or just accept whatever damage taurine does me. I’m not sure what I’ll do with it when I cap my current supply of powders. (It would make little sense to cap it with the creatine since I would often take the creatine before exercise.)
And the effects? Well, if you look through the WP article or other places, you see it justified in part due to supposed long-term benefits or effects on blood sugar. I can’t say I’ve noticed any absence of crashes, taking it on alternate days or alone. (At least it wasn’t too expensive - $9 for 500g.)

# Testosterone

The hormone testosterone (Examine.com; FDA adverse events) needs no introduction. This is one of the scariest substances I have considered using: it affects so many bodily systems in so many ways that it seems almost impossible to come up with a net summary, either positive or negative. With testosterone, the problem is not the usual nootropics problem that there is a lack of human research; the problem is that the summary constitutes a textbook - or two. That said, the 2011 review The role of testosterone in social interaction (excerpts) gives me the impression that testosterone does indeed play into risk-taking, motivation, and social status-seeking; some useful links and a representative anecdote:

• The Manly Molecule, Steve Sailer 2000
• Wedrifid, 2012: While the primary effect of the drug is massive muscle growth the psychological side effects actually improved his sanity by an absurd degree. He went from barely functional to highly productive. When one observes that the decision to not attempt to fulfill one’s CEV at a given moment is a bad decision it follows that all else being equal improved motivation is improved sanity. Elaborating on why the psychological side effects of testosterone injection are individual dependent: Not everyone get the same amount of motivation and increased goal seeking from the steroid and most people do not experience periods of chronic avolition. Another psychological effect is a potentially drastic increase in aggression which in turn can have negative social consequences. In the case of counterfactual Wedrifid he gets a net improvement in social consequences. He has observed that aggression and anger are a prompt for increased ruthless self-interested goal seeking. Ruthless self-interested goal seeking involves actually bothering to pay attention to social politics. People like people who do social politics well. Most particularly it prevents acting on contempt which is what Wedrifid finds prompts the most hostility and resentment in others. Point is, what is a sanity promoting change in one person may not be in another.

As it happens, these are areas I am distinctly lacking in. When I first began reading about testosterone I had no particular reason to think it might be an issue for me, but it increasingly sounded plausible: an aunt independently suggested I might be deficient, a biological uncle turned out to be severely deficient with levels around 90 ng/dl (where the normal range for 20-49yo males is 249-839), and finally my blood test in August 2013 revealed that my actual level was 305 ng/dl; inasmuch as I was 25 and not 49, this is a tad low.

One idea I’ve been musing about is the connections between IQ, Conscientiousness, and testosterone. IQ and Conscientiousness do not correlate to a remarkable degree - even though one would expect IQ to at least somewhat enable a long-term perspective, self-discipline, metacognition, etc.! There are indications in studies of gifted youth that they have lower testosterone levels. The studies I’ve read on testosterone indicate no improvements to raw ability.
So, could there be a self-sabotaging aspect to human intelligence whereby greater intelligence depends on lack of testosterone, but this same lack also holds back Conscientiousness (despite one’s expectation that intelligence would produce greater self-discipline and planning), undermining the utility of greater intelligence? Could cases of high-IQ types who suddenly stop slacking and accomplish great things sometimes be due to changes in testosterone? Studies on the correlations between IQ, testosterone, Conscientiousness, and various measures of accomplishment are confusing and don’t always support this theory, but it’s an idea to keep in mind.

One might suggest just going to the gym or doing other activities which may increase endogenous testosterone secretion. This would be unsatisfying to me as it introduces confounds: the exercise may be doing all the work in any observed effect, and certainly can’t be blinded. And blinding is especially important because the 2011 review discusses how some studies report that the famed influence of testosterone on aggression (eg. Wedrifid’s anecdote above) is a placebo effect caused by the folk wisdom that testosterone causes aggression & rage!

I have a needle phobia, so injections are right out; but from the images I have found, it looks like testosterone enanthate gels using DMSO resemble other gels like Vaseline. This suggests an easy experimental procedure: spoon an appropriate dose of testosterone gel into one opaque jar, spoon some Vaseline gel into another, and pick one randomly to apply while not looking. If one gel evaporates but the other doesn’t, or they have some other difference in behavior, the procedure can be expanded to something like: apply the chosen gel while not looking, and then half an hour later, take a shower to remove all visible traces of the gel. Testosterone itself has a fairly short half-life of 2-4 hours, but the gel or effects might linger. (Injections apparently operate on a time-scale of weeks; I’m not clear on whether this is because the oil takes that long to be absorbed by surrounding materials or something else.) Experimental design will depend on the specifics of the obtained substance. As a controlled substance (Schedule III in the US), supplies will be hard to obtain; I may have to resort to the Silk Road.

Power-wise, the effects of testosterone are generally reported to be strong and unmistakable. Even a short experiment should work. I would want to measure DNB scores & Mnemosyne review averages as usual, to verify no gross mental deficits; the important measures would be physical activity, so either pedometer or miles on treadmill, and general productivity/mood. The former 2 variables should remain the same or increase, and the latter 2 should increase. Either prescription or illegal, daily use of testosterone would not be cheap. On the other hand, if I am one of the people for whom testosterone works very well, it would be even more valuable than modafinil, in which case it is well worth even arduous experimenting. Since I am on the fence on whether it would help, this suggests the value of information is high.

# Theanine

l-theanine (Examine.com) is occasionally mentioned on Reddit or Imminst or LessWrong but is rarely a top-level post or article; this is probably because theanine was discovered a very long time ago (>61 years ago), and it’s a pretty straightforward substance.
It’s a weak relaxant/anxiolytic (Google Scholar) which is possibly responsible for a few of the health benefits of tea, and which works synergistically with caffeine (and is probably why caffeine delivered through coffee feels different from the same amount consumed in tea - in one study, separate caffeine and theanine were a mixed bag, but the combination beat placebo on all measurements). The half-life in humans seems to be pretty short, with van der Pijl 2010 putting it at ~60 minutes. This suggests to me that regular tea consumption over a day is best, or at least that one should lower caffeine use - combining caffeine and theanine into a single-dose pill has the problem of caffeine’s half-life being much longer so the caffeine will be acting after the theanine has been largely eliminated. The problem with getting it via tea is that teas can vary widely in their theanine levels and the variations don’t seem to be consistent either, nor is it clear how to estimate them. (If you take a large dose of theanine like 400mg in water, you can taste the sweetness, but it’s subtle enough I doubt anyone can actually distinguish the theanine levels of tea; incidentally, r-theanine - the useless racemic other version - anecdotally tastes weaker and less sweet than l-theanine.)

On 8 April 2011, I purchased from Smart Powders (20g for $8); as before, some light searching seemed to turn up SP as the best seller given shipping overhead; it was on sale and I planned to cap it so I got 80g. This may seem like a lot, but I was highly confident that theanine and I would get along since I already drink so much tea and was a tad annoyed at the edge I got with straight caffeine.

So far I’m pretty happy with it. My goal was to eliminate the physical & mental twitchiness of caffeine, which subjectively it seems to do.
## 3 years supply in pill form (2010)
Manually mixing powders is too annoying, and pre-mixed pills are expensive in bulk. So if I’m not actively experimenting with something, and not yet rich, the best thing is to make my own pills, and if I’m making my own pills, I might as well make a custom formulation using the ones I’ve found personally effective. And since making pills is tedious, I want to not have to do it again for years. 3 years seems like a good interval - 1095 days. Since one is often busy and mayn’t take that day’s pills (there are enough ingredients it has to be multiple pills), it’s safe to round it down to a nice even 1000 days. What sort of hypothetical stack could I make? What do the prices come out to be, and what might we omit in the interests of protecting our pocketbook?
We omit tryptophan and melatonin, of course, because they are most useful for sleeping and this is a stimulus pill for daytime usage. That leaves from the above the following, with some basic commercial specs from the usual retailers:
| Ingredient | Dose (g) | g/day | Days | Price | Supplier |
|------------|----------|-------|------|-------|----------|
| aniracetam | 180 | 1 | 180 | $50 | SmartPowders.com |
| caffeine | 400 | 2 | 200 | $18 | SmartPowders.com |
| choline citrate | 500 | 2 | 250 | $17 | SmartPowders.com |
| creatine | 1000 | 4 | 250 | $17 | SmartPowders.com |
| lithium orotate | 25 | 0.2 | 125 | $11 | Amazon |
| modafinil | 2 | 0.2 | 10 | $8 | United Pharmacies36 |
| sulbutiamine | 30 | 0.25 | 120 | $20 | SmartPowders.com |
| theanine | 20 | 0.1 | 200 | $10 | SmartPowders.com |
We calculate how many days each unit gets us simply by dose divided by dose per day. We get quite a range; with some products, we only need 4 units to cover at least 1000 days, but we need 100 units for modafinil!
| Ingredient | Units | Cost |
|------------|-------|------|
| aniracetam | 6 | $300 |
| caffeine | 5 | $90 |
| choline citrate | 4 | $68 |
| creatine | 4 | $68 |
| lithium orotate | 8 | $88 |
| modafinil | 100 | $800 |
| sulbutiamine | 9 | $180 |
| theanine | 5 | $50 |
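To make the Units column reproducible, here is a small R check of my own (doses copied from the first table; not part of the original notes):

```r
# days of supply per purchased unit = grams per unit / grams consumed per day
dose       <- c(180, 400, 500, 1000, 25, 2, 30, 20)  # aniracetam..theanine, g/unit
dosePerDay <- c(1, 2, 2, 4, 0.2, 0.2, 0.25, 0.1)     # g/day
daysPerUnit <- dose / dosePerDay    # 180 200 250 250 125 10 120 200
ceiling(1000 / daysPerUnit)         # units to cover 1000 days: 6 5 4 4 8 100 9 5
```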
Sum total, $1644, or$1.65 per day for the ingredients.
But how many pills does this make and how much do those pills cost?
Capsule Connection sells 1,000 ‘00’ pills (the largest size) for $9. I already have a pill machine, so that doesn’t count (a sunk cost). If we sum the grams-per-day column from the first table, we get 9.75 grams a day. Each 00 pill can take around 0.75 grams, so we need 13 pills. (Creatine is very bulky, alas.) 13 pills per day for 1000 days is 13,000 pills, and 1,000 pills is $9 so we need 13 units and 13 times 9 is $117. Redoing the above, the total expense is $1761 or $1.76 per day.

13 pills a day sounds like a lot, and $1.76 is actually a fair amount per day compared to what most people take. If I couldn’t swing a round $1800 (even to cover years of consumption), how would I economize? Looking at the prices, the overwhelming expense is for modafinil. It’s a powerful stimulant - possibly the single most effective ingredient in the list - but dang expensive. Worse, there’s anecdotal evidence that one can develop tolerance to modafinil, so we might be wasting a great deal of money on it. (And for me, modafinil isn’t even very useful in the daytime: I can’t even notice it.) If we drop it, the cost drops by a full $800 from $1761 to $961 (almost halving) and to $0.96 per day. A remarkable difference, and if one were genetically insensitive to modafinil, one would definitely want to remove it.

On the other metric, suppose we removed the creatine? Dropping 4 grams of material means we only need to consume 5.75 grams a day, covered by 8 pills (compared to 13 pills). We save 5,000 pills, which would have cost $45 and also don’t spend the $68 for the creatine; assuming a modafinil formulation, that drops our $1761 down to $1648 or $1.65 a day. Or we could remove both the creatine and modafinil, for a grand total of $848 or $0.85 a day, which is pretty reasonable.
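The same bottom-line arithmetic can be scripted; a minimal R sketch (mine, using only numbers already given above):

```r
unitCost <- c(aniracetam=300, caffeine=90, choline=68, creatine=68,
              lithium=88, modafinil=800, sulbutiamine=180, theanine=50)
ingredients <- sum(unitCost)        # $1644 for the raw powders
pillsPerDay <- ceiling(9.75 / 0.75) # 9.75 g/day at ~0.75 g per '00' pill = 13 pills
pillUnits   <- pillsPerDay * 1000 / 1000   # 13 bags of 1,000 pills
ingredients + pillUnits * 9         # $1761, ~$1.76/day over 1000 days
ingredients + pillUnits * 9 - 800   # $961 if modafinil is dropped
```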
1. Stewart Brand on the ’60s:
…The drugs didn’t work. Or at least only for a bit. We believed there was no hope without dope but we were wrong. I’m always amazed there aren’t drugs by now, but there aren’t. They didn’t get any better, whereas computers never stopped getting better.
2. More than once I have seen results indicating that high-IQ types benefit the least from random nootropics; nutritional deficits are the premier example, because high-IQ types almost by definition suffer from no major deficiencies like iodine. But a stimulant like modafinil may be another such nootropic (see Cognitive effects of modafinil in student volunteers may depend on IQ, Randall et al 2005), which mentions:
Similarly, Mehta et al 2000 noted that the positive effects of methylphenidate (40 mg) on spatial working memory performance were greatest in those volunteers with lower baseline working memory capacity. In a study of the effects of ginkgo biloba in healthy young adults, Stough et al 2001 found improved performance in the Trail-Making Test A only in the half with the lower verbal IQ.
3. From Why Aren’t We Smarter Already: Evolutionary Trade-Offs and Cognitive Enhancements, Hills & Hertwig 2011:
For illustration, consider amphetamines, Ritalin, and modafinil, all of which have been proposed as cognitive enhancers of attention. These drugs exhibit some positive effects on cognition, especially among individuals with lower baseline abilities. However, individuals of normal or above-average cognitive ability often show negligible improvements or even decrements in performance following drug treatment (for details, see de Jongh, Bolt, Schermer, & Olivier, 2008). For instance, Randall, Shneerson, and File (2005) found that modafinil improved performance only among individuals with lower IQ, not among those with higher IQ. [See also Finke et al 2010 on visual attention.] Farah, Haimm, Sankoorikal, & Chatterjee 2009 found a similar nonlinear relationship of dose to response for amphetamines in a remote-associates task, with low-performing individuals showing enhanced performance but high-performing individuals showing reduced performance. Such ∩-shaped dose-response curves are quite common (see Cools & Robbins, 2004)
Among other things, these considerations warn us against expecting much from nootropics whose principal justification comes from their results in the ill or the old (since we could call being old an illness) - they are already brain-damaged.
4. For example, I have used my Zeo to measure the effects of melatonin or of double-blinded4 vitamin D on my Zeo sleep data; the latter is novel and interesting.
5. This is reportedly the result of Ilieva, I., Boland, J., Chatterjee, A. & Farah, M.J. (2010). Adderall’s perceived and actual effects on healthy people’s cognition. Poster presented at the annual meeting of the Society for Neuroscience, San Diego, CA; blogger Casey Schwartz describes it:
6. Much better than I had expected. One of the best superhero movies so far, better than Thor or Watchmen (and especially better than the Iron Man movies). I especially appreciated how it didn’t launch right into the usual hackneyed creation of the hero plot-line but made Captain America cool his heels performing & selling war bonds for 10 or 20 minutes. The ending left me a little nonplussed, although I sort of knew it was envisioned as a franchise and I would have to admit that showing Captain America wondering at Times Square is much better an ending than something as cliche as a close-up of his suddenly-opened eyes and then a fade out. (The movie continued the lamentable trend in superhero movies of having a strong female love interest… who only gets the hots for the hero after they get muscles or powers. It was particularly bad in CA because she knows him and his heart of gold beforehand! What is the point of a feminist character who is immediately forced to do that?)
7. With just 16 predictions, I can’t simply bin the predictions and say yep, that looks good. Instead, we can treat each prediction as equivalent to a bet and see what my winnings (or losses) were; the standard such proper scoring rule is the logarithmic rule, which is pretty simple: you earn the logarithm of the probability if you were right, and the logarithm of the negation if you were wrong; he who racks up the fewest negative points wins. We feed in a list and get back a number:
```haskell
logScore ps = sum $ map (\(result,p) -> if result then log p else log (1-p)) ps
logScore [(True,0.95),(False,0.30),(True,0.85),(True,0.75),(False,0.50),(False,0.25),
          (False,0.60),(True,0.70),(True,0.65),(True,0.60),(False,0.30),(True,0.50),
          (True,0.90),(True,0.40)]
-- -6.125
```

In this case, a blind guesser would guess 50% every time (roughly half the days were Adderall and roughly half were not) so the question is, did the 50% guesser beat me?

```haskell
logScore [(True,0.5),(False,0.5),(True,0.5),(True,0.5),(False,0.5),
          (False,0.5),(False,0.5),(True,0.5),(True,0.5),(True,0.5),
          (False,0.5),(True,0.50),(True,0.5),(True,0.5)]
-- -9.7
(-9.7) > logScore [(True,0.95),(False,0.30),(True,0.85),(True,0.75),(False,0.50),
                   (False,0.25),(False,0.60),(True,0.70),(True,0.65),(True,0.60),
                   (False,0.30),(True,0.50),(True,0.90),(True,0.40)]
-- False
```

We can also express this as a single function by using a base-2 log (higher numbers are better):

```haskell
logBinaryScore = sum . map (\(result,p) -> if result then 1 + logBase 2 p else 1 + logBase 2 (1-p))
logBinaryScore [(True,0.95),(False,0.30),(True,0.85),(True,0.75),(False,0.50),(False,0.25),
                (False,0.60),(True,0.70),(True,0.65),(True,0.60),(False,0.30),(True,0.50),
                (True,0.90),(True,0.40)]
-- 5.16
```

So I had a palpable edge over the random guesser, although the sample size is not fantastic.

8. For example, Alexander Shulgin’s famous PiHKAL book on derivatives of PEA comments on PEA proper that:

• (with 200, 400, 800 and 1600 mg) No effects.
• (with 500 mg) No effects.
• (with 800 and 1600 mg) No effects.
• (with 25 and 50 mg i.v.) No effects.

…It is without activity in man! Certainly not for the lack of trying, as some of the dosage trials that are tucked away in the literature (as abstracted in the Qualitative Comments given above) are pretty heavy duty. Actually, I truly doubt that all of the experimenters used exactly that phrase, No effects, but it is patently obvious that no effects were found. It happened to be the phrase I had used in my own notes.

…Phenethylamine is intrinsically a stimulant, although it doesn’t last long enough to express this property. In other words, it is rapidly and completely destroyed in the human body. It is only when a number of substituent groups are placed here or there on the molecule that this metabolic fate is avoided and pharmacological activity becomes apparent.

9. The abuse liability of caffeine has been evaluated.147,148 Tolerance development to the subjective effects of caffeine was shown in a study in which caffeine was administered at 300 mg twice each day for 18 days.148 Tolerance to the daytime alerting effects of caffeine, as measured by the MSLT, was shown over 2 days on which 250 mg of caffeine was given twice each day48 and to the sleep-disruptive effects (but not REM percentage) over 7 days of 400 mg of caffeine given 3 times each day.7 In humans, placebo-controlled caffeine-discontinuation studies have shown physical dependence on caffeine, as evidenced by a withdrawal syndrome.147 The most frequently observed withdrawal symptom is headache, but daytime sleepiness and fatigue are also often reported. The withdrawal-syndrome severity is a function of the dose and duration of prior caffeine use…At higher doses, negative effects such as dysphoria, anxiety, and nervousness are experienced. The subjective-effect profile of caffeine is similar to that of amphetamine,147 with the exception that dysphoria/anxiety is more likely to occur with higher caffeine doses than with higher amphetamine doses.
Caffeine can be discriminated from placebo by the majority of participants, and correct caffeine identification increases with dose.147 Caffeine is self-administered by about 50% of normal subjects who report moderate to heavy caffeine use. In post-hoc analyses of the subjective effects reported by caffeine choosers versus nonchoosers, the choosers report positive effects and the nonchoosers report negative effects. Interestingly, choosers also report negative effects such as headache and fatigue with placebo, and this suggests that caffeine-withdrawal syndrome, secondary to placebo choice, contributes to the likelihood of caffeine self-administration. This implies that physical dependence potentiates behavioral dependence to caffeine.

10. Flavonoids and cognitive function: a review of human randomized controlled trial studies and recommendations for future studies:

Evidence in support of the neuroprotective effects of flavonoids has increased significantly in recent years, although to date much of this evidence has emerged from animal rather than human studies. Nonetheless, with a view to making recommendations for future good practice, we review 15 existing human dietary intervention studies that have examined the effects of particular types of flavonoid on cognitive performance. The studies employed a total of 55 different cognitive tests covering a broad range of cognitive domains. Most studies incorporated at least one measure of executive function/working memory, with nine reporting significant improvements in performance as a function of flavonoid supplementation compared to a control group. However, some domains were overlooked completely (e.g. implicit memory, prospective memory), and for the most part there was little consistency in terms of the particular cognitive tests used making across study comparisons difficult. Furthermore, there was some confusion concerning what aspects of cognitive function particular tests were actually measuring. Overall, while initial results are encouraging, future studies need to pay careful attention when selecting cognitive measures, especially in terms of ensuring that tasks are actually sensitive enough to detect treatment effects.

11. The abstract:

Cocoa flavanols (CF) positively influence physiological processes in ways which suggest that their consumption may improve aspects of cognitive function. This study investigated the acute cognitive and subjective effects of CF consumption during sustained mental demand. In this randomized, controlled, double-blinded, balanced, three period crossover trial 30 healthy adults consumed drinks containing 520 mg, 994 mg CF and a matched control, with a 3-day washout between drinks. Assessments included the state anxiety inventory and repeated 10-min cycles of a Cognitive Demand Battery comprising of two serial subtraction tasks (Serial Threes and Serial Sevens), a Rapid Visual Information Processing (RVIP) task and a mental fatigue scale, over the course of 1 h. Consumption of both 520 mg and 994 mg CF significantly improved Serial Threes performance. The 994 mg CF beverage significantly speeded RVIP responses but also resulted in more errors during Serial Sevens. Increases in self-reported mental fatigue were significantly attenuated by the consumption of the 520 mg CF beverage only. This is the first report of acute cognitive improvements following CF consumption in healthy adults.
While the mechanisms underlying the effects are unknown they may be related to known effects of CF on endothelial function and blood flow.

12. If we assume the variances of the daily scores are equal and we exclude the hypothesis that fish oil might make scores worse, then we get a smaller p-value:

```r
before <- c(54,69,42,54,44,62,44,35,85,50,44,42,57,65,51,56,42,53,40,47,
            45,51,57,57,56,76,66,60,46,52,59,48,28,45,43,47,50,40,57,46,33,19,43,58,36,52,44,64)
after <- c(55,76,56,55,44,41,44,45,65,70,46,65,46,52,68,52,57,50,64,43,
           41,50,69,44,47,63,34,57)
wilcox.test(before, after, alternative="less")
#  Wilcoxon rank sum test with continuity correction
#
# data:  before and after
# W = 570.5, p-value = 0.1381
# alternative hypothesis: true location shift is less than 0
(mean(after) - mean(before)) / sd(append(before, after)) # the effect size
# 0.28
```

A Bayesian MCMC analysis using the BEST library gives a similar answer - too much overlap, not enough data:

```bash
$ sudo apt-get install jags r-cran-rjags
```
```bash
$ R
```

```r
install.packages("rjags")
source("BEST.R") # assumed downloaded & unzipped BEST to ./
before <- c(54,69,42,54,44,62,44,35,85,50,44,42,57,65,51,56,42,53,40,47,
            45,51,57,57,56,76,66,60,46,52,59,48,28,45,43,47,50,40,57,46,33,19,43,58,36,52,44,64)
after <- c(55,76,56,55,44,41,44,45,65,70,46,65,46,52,68,52,57,50,64,43,41,50,69,44,47,63,34,57)
mcmcChain = BESTmcmc(before, after)
postInfo = BESTplot(before, after, mcmcChain) # the generated image
show(postInfo)
# SUMMARY.INFO
# PARAMETER        mean     median       mode     HDIlow    HDIhigh pcgtZero
# mu1        50.1419390 50.1377127 50.1913377 46.8630997 53.6056828       NA
# mu2        53.3331611 53.3335072 53.4984856 49.0140883 57.5923759       NA
# muDiff     -3.1912221 -3.1790710 -2.8965497 -8.6114644  2.2571314 12.11276
# sigma1     11.1989483 11.1365632 11.0708164  8.3699263 14.0987125       NA
# sigma2     10.7999759 10.6280744 10.3198861  7.7835957 14.2214647       NA
# sigmaDiff   0.3989724  0.4697451  0.5809042 -3.8825471  4.5266108 59.15182
# nu         31.2485911 22.6401577  9.1936838  2.3043610 86.5712602       NA
# nuLog10     1.3484496  1.3548794  1.3570830  0.6442172  2.0117475       NA
# effSz      -0.2917182 -0.2898252 -0.2621231 -0.7942141  0.1909223 12.11276
```

13. This metric is a little troubling since working memory is trainable and that’s the point of dual n-back - but my own scores have been stagnant for a long time and the blocking should reduce the impact of any very slow linear growth in scores.

14. That is, perhaps light of the right wavelength can indeed save the brain some energy by making it easier to generate ATP. Would 15 minutes of LLLT create enough ATP to make any meaningful difference, which could possibly cause the claimed benefits? The problem here is like that of the famous blood-glucose theory of willpower - while the brain does indeed use up more glucose while active, high activity uses up very small quantities of glucose/energy which doesn’t seem like enough to justify a mental mechanism like weak willpower.

15. Kurzban, in a blog post, puts it well:

In my last post, I talked about the idea that there is a resource that is necessary for self-control…I want to talk a little bit about the candidate for this resource, glucose. Could willpower fail because the brain is low on sugar? Let’s look at the numbers. A well-known statistic is that the brain, while only 2% of body weight, consumes 20% of the body’s energy. That sounds like the brain consumes a lot of calories, but if we assume a 2,400 calorie/day diet - only to make the division really easy - that’s 100 calories per hour on average, 20 of which, then, are being used by the brain. Every three minutes, then, the brain - which includes memory systems, the visual system, working memory, the emotion systems, and so on - consumes one (1) calorie. One. Yes, the brain is a greedy organ, but it’s important to keep its greediness in perspective… Suppose, for instance, that a brain in a person exerting their willpower - resisting eating brownies or what have you - used twice as many calories as a person not exerting willpower. That person would need an extra one third of a calorie per minute to make up the difference compared to someone not exerting willpower. Does exerting self control burn more calories?

16. Kurzban gives some additional skeptics:

• Clarke and Sokoloff (1998) remarked that although [a] common view equates concentrated mental effort with mental work…there appears to be no increased energy utilization by the brain during such processes (p.
664), and …the areas that participate in the processes of such reasoning represent too small a fraction of the brain for changes in their functional and metabolic activities to be reflected in the energy metabolism of the brain… (p. 675).

• Gibson and Green (2002), talking about a possible link between glucose and cognition, wrote that research in the area …is based on the assumption that, since glucose is the major source of fuel for the brain, alterations in plasma levels of glucose will result in alterations in brain levels of glucose, and thus neuronal function. However, the strength of this notion lies in its common-sense plausibility, not in scientific evidence… (p. 185).

• Lennie (2003) concluded that [t]he brain’s energy consumption does not change with normal variations in mental activity and that overall energy consumption is essentially constant (p. 495).

• Messier (2004) concluded that it is unlikely that the blood glucose changes observed during and after a difficult cognitive task are due to increased brain glucose uptake (p. 39).

• Gibson (2007) concluded that task-induced changes in human peripheral blood glucose are unlikely to reflect changes in relevant areas of brain glucose supply (p. 75).

17. And in his followup work, An opportunity cost model of subjective effort and task performance (discussion), Kurzban seems to have successfully refuted the blood-glucose theory, with few dissenters from commenting researchers. The more recent opinion seems to be that the sugar interventions serve more as a reward-signal indicating more effort is a good idea, not refueling the engine of the brain (which would seem to fit well with research on procrastination).

18. This calculation - reaping only $\frac{7}{9}$ of the naive expectation - gives one pause. How serious is the sleep rebound? In another article, I point to a mouse study that sleep deficits can take 28 days to repay. What if the gain from modafinil is entirely wiped out by repayment and all it did was defer sleep? Would that render modafinil a waste of money? Perhaps. Thinking on it, I believe deferring sleep is of some value, but I cannot decide whether it is a net profit.

That it is somewhat valuable is clear if we consider it under another guise. Imagine you received the same salary you do, but paid every day. Accounting systems would incur considerable costs handling daily payments, since they would be making so many more and so much smaller payments, and they would have to know instantly whether you showed up to work that day and all sorts of other details, and the recipients themselves would waste time dealing with all these checks or looking through all the deposits to their account, and any errors would be that much harder to track down. (And conversely, expensive payday loans are strong evidence that for poor people, a bi-weekly payment is much too infrequent.)

One might draw a comparison to batching or buffers in computers: by letting data pile up in buffers, the computer can then deal with them in one batch, amortizing overhead over many items rather than incurring the overhead again and again. The downside, of course, is that latency will suffer and performance may drop based on that or the items becoming outdated & useless. The right trade-off will depend on the specifics; one would not expect random buffer-sizes to be optimal, but one would have to test and see what works best.
Similarly, we could try applying Nick Bostrom’s reversal test and ask ourselves, how would we react to a virus which had no effect but to eliminate sleep from alternating nights and double sleep in the intervening nights? We would probably grouch about it for a while and then adapt to our new hedonistic lifestyle of partying or working hard. On the other hand, imagine the virus had the effect of eliminating normal sleep but instead, every 2 minutes, a person would fall asleep for a minute. This would be disastrous! Besides the most immediate problems like safely driving vehicles, how would anything get done? You would hold a meeting and at any point, a third of the participants would be asleep. If the virus made it instead 2 hours on, one hour off, that would be better but still problematic: there would be constant interruptions. And so on, until we reach our present state of 16 hours on, 8 hours off. Given that we rejected all the earlier buffer sizes, one wonders if 16:8 can be defended as uniquely suited to circumstances. Is that optimal? It may be, given the synchronization with the night-day cycle, but I wonder; rush hour alone stands as an argument against synchronized sleep - wouldn’t our infrastructure be much cheaper if it only had to handle the average daily load rather than cope with the projected peak loads? Might not a longer cycle be better? The longer the day, the less we are interrupted by sleep; it’s a hoary cliche about programmers that they prefer to work in long sustained marathons during long nights rather than sprint occasionally during a distraction-filled day, to the point where some famously adopt a 28 hour day (which evenly divides a week into 6 days). Are there other occupations which would benefit from a 20 hour waking period? Or 24 hour waking period? We might not know because without chemical assistance, circadian rhythms would overpower anyone attempting such schedules. It certainly would be nice if one had long time chunks in which one could read a challenging book in one sitting, without heroic arrangements.

19. As before in the Adderall trial, we use a binary logarithmic proper scoring rule:

```haskell
logBinaryScore = sum . map (\(result,p) -> if result then 1 + logBase 2 p else 1 + logBase 2 (1-p))
logBinaryScore [(True,0.40),(True,0.50),(False,0.65),(False,0.50),
                (True,0.75),(False,0.40),(False,0.35),(False,0.60)]
-- 0.007
```

Compare 0.007 to the 5.16 I racked up guessing Adderall! My score is essentially 0.

20. I don’t understand how Sun can produce any armodafinil, as the armodafinil patents are recent enough that the modafinil loophole shouldn’t apply.

21. From slide 6 in the second link: Kinetic Profiles (Darwish et al.) [Darwish et al 2009, Armodafinil and Modafinil have substantially different pharmacokinetic profiles despite having the same terminal half-lives]

• S-modafinil has a relatively short half-life (4-5 hours)
• R-modafinil has a 3-4 fold longer half-life (~15 hours)
• R-modafinil has 43% higher concentrations 7-11 hours after dosing
• Greater systemic exposure to R-modafinil; AUC∞ was 40% higher
• R-modafinil’s plasma fluctuation was 28% less than S-modafinil over 24-hours
• More linear, monophasic elimination of R-modafinil

Slide 8: Patients report a more profound & sustained wakefulness with armodafinil. Slightly better side-effect profile?*

• Slightly less incidence of headache/anxiety
• Longer lasting armodafinil = more insomnia?
• Reduced medication-load on the body, since it does not have to metabolize S-modafinil.
*Doses compared may influence the reliability of this data (400mg modafinil vs 250mg armodafinil)

22. Specifically, the film is completely unintelligible if you had not read the book. The best I can say for it is that it delivers the action and events one expects in the right order and with basic competence, but its artistic merits are few. It seems generally devoid of the imagination and visual flights of fancy that animated movies 1 and 3 especially (although Mike Darwin disagrees), copping out on standard imagery like a Star Wars-style force field over Hogwarts Castle, or luminescent white fog when Harry was dead and in his head; I was deeply disappointed to not see any sights that struck me as novel and new. (For example, the aforementioned dead scene could have been done in so many interesting ways, like why not show Harry & Dumbledore in a bustling King’s Cross shot in bright sharp detail, but with not a single person in sight and all the luggage and equipment animatedly moving purposefully on their own?)

The ending in particular boggles me. I actually turned to the person next to me and asked them whether that really was the climax and Voldemort was dead, his death was so little dwelt upon or laden with significance (despite a musical score that beat you over the head about everything else). In the book, I remember it feeling like a climactic scene, with everyone watching and little speeches explaining why Voldemort was about to be defeated, and a suitable victory celebration; I read in the paper the next day a quote from the director or screenwriter who said one scene was cut because Voldemort would not talk but simply try to efficiently kill Harry. (This is presumably the explanation for the incredible anti-climax. Hopefully.) I was dumbfounded by the depths of dishonesty or delusion or disregard: Voldemort not only does that in Deathly Hallows multiple times, he does it every time he deals with Harry, exactly as the classic villains (he is numbered among) always do! How was it possible for this man to read the books many times, as he must have, and still say such a thing?

23. This was using Brain Workshop, D5B, 45 trials over 157 seconds.

24. Cognitive effects of nicotine in humans: an fMRI study, Kumari et al 2003:

…Four subjects correctly stated when they received nicotine, five subjects were unsure, and the remaining two stated incorrectly which treatment they received on each occasion of testing. These numbers are sufficiently close to chance expectation that even the four subjects whose statements corresponded to the treatments received may have been guessing.

25. On the Quantified Self forum, Christian Kleineidam asked:

As I see you didn’t control for the training effect of dual-n-back. Are your dual-n-back scores generally stable enough that you don’t have a strong training effect anymore?

I don’t believe there’s any need to control for training with repeated within-subject sampling, since there will be as many samples on both control and active days drawn from the later trained period as with the initial untrained period. But yes, my D5B scores seem to have plateaued pretty much and only very slowly increase; you can look at the stats file yourself.
But to investigate, let’s look at a graph of my last ~200 D5B scores:

```r
dnb <- c(30,34,41,45,44,33,30,38,48,52,37,50,45,30,53,46,50,25,20,52,40,54,36,58,10,32,
         33,36,43,36,41,29,40,29,28,36,25,27,38,50,25,34,30,40,57,34,41,51,36,26,34,62,
         33,22,40,28,37,50,25,37,42,40,45,31,24,38,40,47,42,44,58,47,55,35,31,27,66,25,
         38,35,43,60,47,17,43,46,50,36,38,58,50,23,50,31,38,33,66,30,68,42,40,29,69,45,
         60,37,22,28,40,41,45,37,18,50,20,41,42,47,44,60,31,46,46,55,47,42,35,40,45,27,
         35,45,30,29,47,56,37,50,44,40,33,44,19,58,38,41,52,41,33,47,45,45,55,20,31,42,
         53,27,45,50,65,33,33,30,52,36,28,43,33,40,47,41,25,55,40,31,30,45,50,20,25,30,
         70,45,50,27,29,55,47,47,42,40,35,45,60,37,22,38,36,54,64,25,28,31,15,47,64,35,
         33,60,38,28,60,45,64,50,44,38,50,42,31,50,30,35,61,56,30,44,37,43,38)
```

The point about randomization is key, BTW, because the theoretical training effect is actually greater than the observed improvement between randomized days. Watch:

```r
lm(dnb ~ c(1:231))
# Coefficients:
# (Intercept)     c(1:231)
#    38.37041      0.01752
## 0.017 is a positive slope!
```

It’s not much of a slope but it’s there. Now, I spent 200 rounds of n-back doing the randomized nicotine experiment, and those would be the latter 200 rounds graphed; how much of an improvement should I expect? The model is: $y = 38.37041 + 0.01752 \times x$. We want the endpoint, score 231, and what is 200 before 231? 31:

```r
(38.37041 + 0.01752*231) - (38.37041 + 0.01752*31)
# 3.504
```

Notice that 3.5 > 1.1. So if this was just training effect, why isn’t the benefit from nicotine greater?

26. The full series:

28,61,36,25,61,57,39,56,23,37,24,50,54,32,50,33,16,42,41,40,34,33,31,65,23,36,29,51,46,31,45,52,30,
50,29,36,57,60,34,48,32,41,48,34,51,40,53,73,56,53,53,57,46,50,35,50,60,62,30,60,48,46,52,60,60,48,
47,34,50,51,45,54,70,48,61,43,53,60,44,57,50,50,52,37,55,40,53,48,50,52,44,50,50,38,43,66,40,24,67,
60,71,54,51,60,41,58,20,28,42,53,59,42,31,60,42,58,36,48,53,46,25,53,57,60,35,46,32,26,68,45,20,51,
56,48,25,62,50,54,47,42,55,39,60,44,32,50,34,60,47,70,68,38,47,48,70,51,42,41,35,36,39,23,50,46,44,56,50,39

27. That study is also interesting for finding benefits to chronic piracetam+choline supplementation in the mice, which seems connected to a Russian study which reportedly found that piracetam (among other more obscure nootropics) increased secretion of BDNF in mice. See also Drug heuristics on a study involving choline supplementation in pregnant rats.

28. Graphing each time period:

```r
pone <- c(4,3,4,3,4,3,4,4,3,3,2,3,2,4,4,3,4,2,3,4,2,3,3,2,2,2,3,2,3,3,4,2,3,4,3,4,3)
poff <- c(3,2,2,3,4,4,3,4,2,2,3,2,3,2,2,2,4,3,3)
ptwo <- c(4,2,2,3,3,3,4,4,3,2,3,2,2,2,3,3,3,4,3,4,3,3,3,2,2,3,3,3,4,4,3,2,2,2,3,3)
plot(1:92, rep(3, 92), type="n", ylab="mood/productivity (1-4)", xlab="days")
points(1:37, pone, col="blue")
points(38:56, poff, col="red")
points(57:92, ptwo, col="blue")
```

29. The usual:

```r
source("BEST.R")
mcmcChain = BESTmcmc(poff, c(pone, ptwo))
postInfo = BESTplot(poff, c(pone, ptwo), mcmcChain); postInfo
# SUMMARY.INFO
# PARAMETER      mean    median      mode   HDIlow  HDIhigh pcgtZero
# mu1         2.78153   2.78130   2.77061   2.3832   3.1752       NA
# mu2         2.98579   2.98566   2.98369   2.8103   3.1606       NA
# muDiff     -0.20426  -0.20463  -0.21982  -0.6315   0.2316    17.07
# sigma1      0.83778   0.81619   0.78042   0.5665   1.1559       NA
# sigma2      0.73900   0.73476   0.73031   0.6158   0.8690       NA
# sigmaDiff   0.09877   0.08114   0.05378  -0.2103   0.4443    70.61
# nu         50.19929  42.00024  28.00379   5.8283 115.9430       NA
# nuLog10     1.61236   1.62325   1.63557   1.0515   2.1480       NA
# effSz      -0.26083  -0.26144  -0.28521  -0.7992   0.2774    17.07
```

30.
We do a one-tailed test because the original hypothesis was that M/P would fall, certainly not that it would increase:

```r
wilcox.test(poff, c(pone,ptwo), alternative="less")
#  Wilcoxon rank sum test with continuity correction
#
# data:  poff and c(pone, ptwo)
# W = 593, p-value = 0.1502
```

31. One might expect some sort of catch - surely there’s a massive quality difference to go with the massive price difference? But there could well not be; I would not be surprised to learn that the dog selegiline and the human selegiline came out of the same vat. It’s basic economics: the price of a good must be greater than cost of producing said good, but only under perfect competition will price = cost. Otherwise, the price is simply whatever maximizes profit for the seller. (Bottled water doesn’t really cost $2 to produce.) This can lead to apparently counter-intuitive consequences involving price discrimination & market segmentation - such as damaged goods which are the premium product which has been deliberately degraded and sold for less (some Intel CPUs, some headphones etc.). The most famous examples were railroads; one notable passage by French engineer-economist Jules Dupuit describes the motivation for the conditions in 1849:
It is not because of the few thousand francs which would have to be spent to put a roof [!] over the third-class carriages or to upholster the third-class seats that some company or other has open carriages with wooden benches. What the company is trying to do is to prevent the passengers who can pay the second class fare from traveling third class; it hits the poor, not because it wants to hurt them, but to frighten the rich. And it is again for the same reason that the companies, having proved almost cruel to the third-class passengers and mean to the second-class ones, become lavish in dealing with first-class passengers. Having refused the poor what is necessary, they give the rich what is superfluous.
Price discrimination is aided by barriers such as ignorance and oligopolies. An example of the former would be when I went to a Food Lion grocery store in search of spices, and noticed that there was a second selection of spices in the Hispanic/Latino ethnic food aisle, with unit prices perhaps a fourth of the regular McCormick-brand spices; I rather doubt that regular cinnamon varies that much in quality. An example of the latter would be using veterinary drugs on humans - any doctor to do so would probably be guilty of medical malpractice even if the drugs were manufactured in the same factories (as well they might be, considering economies of scale). Similarly, we can predict that whenever there is a veterinary drug which is chemically identical to a human drug, the veterinary drug will be much cheaper, regardless of actual manufacturing cost, than the human drug because pet owners do not value their pets more than themselves. Human drugs are ostensibly held to a higher standard than veterinary drugs; so if veterinary prices are higher, then there will be an arbitrage incentive to simply buy the cheaper human version and downgrade them to veterinary drugs.
As with any thesis, there are exceptions to this general practice. For example, theanine for dogs, sold under the brand Anxitane, is sold at almost a dollar a pill, and apparently a month’s supply costs $50+ vs $13 for human-branded theanine; on the other hand, this thesis predicts downgrading if the market priced pet versions higher than human versions, and that Reddit poster appears to be doing just that with her dog.
32. See for example the mentions in A rationalist’s guide to psychoactive drugs or the discussion in the post Coffee: When it helps, when it hurts; see also the description of a rare bad experience with theanine.
33. It’s important one uses D-3 and not vitamin D-2, alfacalcidol, or calcitriol: the Cochrane review found mortality benefits only with D-3. (And use with calcium doesn’t look too good either.)
34. It’s been suggested that caffeine interferes with production or absorption of vitamin D and this may be a bad thing; eg. Medpedia or blogger Chris Hunt (HN discussion):
Caffeine keeps you awake, which keeps you coding. It may also be a nootropic, increasing brain-power. Both desirable results. However, it also inhibits vitamin D receptors, and as such decreases the body’s uptake of this much-needed vitamin. OK, that’s not so bad, you’re not getting the maximum dose of vitamin D. So what? Well, by itself caffeine may not cause you any problems, but combined with cutting off a major source of the vitamin - the production via sunlight - you’re leaving yourself open to deficiency in double-quick time.
Too much caffeine may be bad for bone health because it can deplete calcium. Overdoing the caffeine also may affect the vitamin D in your body, which plays a critical role in your body’s bone metabolism. However, the roles of vitamin D as well as caffeine in the development of osteoporosis continue to be a source of debate. Significance: Caffeine may interfere with your body’s metabolism of vitamin D, according to a 2007 Journal of Steroid Biochemistry & Molecular Biology study. You have vitamin D receptors, or VDRs, in your osteoblast cells. These large cells are responsible for the mineralization and synthesis of bone in your body. They create a sheet on the surface of your bones. The D receptors are nuclear hormone receptors that control the action of vitamin D-3 by controlling hormone-sensitive gene expression. These receptors are critical to good bone health. For example, a vitamin D metabolism disorder in which these receptors don’t work properly causes rickets.
The only study ever cited is Caffeine decreases vitamin D receptor protein expression and 1,25(OH)2D3 stimulated alkaline phosphatase activity in human osteoblast cells, Rapuri et al 2007:
Caffeine dose dependently decreased the 1,25(OH)(2)D(3) induced VDR expression and at concentrations of 1 and 10mM, VDR expression was decreased by about 50-70%, respectively. In addition, the 1,25(OH)(2)D(3) induced alkaline phosphatase activity was also reduced at similar doses thus affecting the osteoblastic function. The basal ALP activity was not affected with increasing doses of caffeine. Overall, our results suggest that caffeine affects 1,25(OH)(2)D(3) stimulated VDR protein expression and 1,25(OH)(2)D(3) mediated actions in human osteoblast cells.
One should note the serious caveats here: it is a small in vitro study of a single category of human cells with an effect size that is not clear on a protein which feeds into who-knows-what pathways. It is not a result in a whole organism on any clinically meaningful endpoint, even if we take it at face-value (many results never replicate). A look at followup work citing Rapuri et al 2007 is not encouraging: Google Scholar lists no human studies of any kind, much less high-quality studies like RCTs; just some rat followups on the calcium effect. This is not to say Rapuri et al 2007 is a bad study, just that it doesn’t bear the weight people are putting on it: if you enjoy caffeine, this is close to zero evidence that you should reduce or drop caffeine consumption; if you’re taking too much caffeine, you already have plenty of reasons to reduce; if you’re drinking lots of coffee, you already have plenty of reasons to switch to tea; etc.
If we go looking for meaningful human studies, what we find is that there’s clear evidence that caffeine damages bone density via calcium uptake, especially in old women, but there is little or no interaction between vitamin D and caffeine, and some reports of correlations entirely opposite the claimed correlation.
• Results: Women with high caffeine intakes had significantly higher rates of bone loss at the spine than did those with low intakes (−1.90 ± 0.97% compared with 1.19 ± 1.08%; P = 0.038). When the data were analyzed according to VDR genotype and caffeine intake, women with the tt genotype had significantly (P = 0.054) higher rates of bone loss at the spine (−8.14 ± 2.62%) than did women with the TT genotype (−0.34 ± 1.42%) when their caffeine intake was >300 mg/d…In 1994, Morrison et al (22) first reported an association between vitamin D receptor gene (VDR) polymorphism and BMD of the spine and hip in adults. After this initial report, the relation between VDR polymorphism and BMD, bone turnover, and bone loss has been extensively evaluated. The results of some studies support an association between VDR polymorphism and BMD (23-,25), whereas other studies showed no evidence for this association (26,27)…At baseline, no significant differences existed in serum parathyroid hormone, serum 25-hydroxyvitamin D, serum osteocalcin, and urinary N-telopeptide between the low- and high-caffeine groups (Table 1⇑). In the longitudinal study, the percentage of change in serum parathyroid hormone concentrations was significantly lower in the high-caffeine group than in the low-caffeine group (Table 2⇑). However, no significant differences existed in the percentage of change in serum 25-hydroxyvitamin D
• In simple and multiple regression analyses, the only significant variable that affected Ad-SOS and nutrient intake was vitamin D (p<0.0001). Phalangeal bone Ad-SOS was influenced only by the intake of vitamin D, not of caffeine or other nutrients.
• In this large population-based cohort, we saw consistent robust associations between cola consumption and low BMD in women. The consistency of pattern across cola types and after adjustment for potential confounding variables, including calcium intake, supports the likelihood that this is not due to displacement of milk or other healthy beverages in the diet. The major differences between cola and other carbonated beverages are caffeine, phosphoric acid, and cola extract. Although caffeine likely contributes to lower BMD, the result also observed for decaffeinated cola, the lack of difference in total caffeine intake across cola intake groups, and the lack of attenuation after adjustment for caffeine content suggest that caffeine does not explain these results. A deleterious effect of phosphoric acid has been proposed (26). Cola beverages contain phosphoric acid, whereas other carbonated soft drinks (with some exceptions) do not.
• Compared with those reporting no use, subjects drinking >4 cups/day of decaffeinated coffee were at increased risk of RA [rheumatoid arthritis] (RR 2.58, 95% CI 1.63-4.06). In contrast, women consuming >3 cups/day of tea displayed a decreased risk of RA (RR 0.39, 95% CI 0.16-0.97) compared with women who never drank tea. Caffeinated coffee and daily caffeine intake were not associated with the development of RA.
• see also Vitamin D intake is inversely associated with rheumatoid arthritis: results from the Iowa Women’s Health Study, Merlino et al 2004
• Since coffee drinking may lead to a worsening of calcium balance in humans, we studied the serial changes of serum calcium, PTH, 1,25-dihydroxyvitamin D (1,25(OH)2D) vitamin D and calcium balance in young and adult rats after daily administration of caffeine for 4 weeks. In the young rats, there was an increase in urinary calcium and endogenous fecal calcium excretion after four days of caffeine administration that persisted for the duration of the experiment. Serum calcium decreased on the fourth day of caffeine administration and then returned to control levels. In contrast, the serum PTH and 1,25(OH)2D remained unchanged initially, but increased after 2 weeks of caffeine administration…In the adult rat group, an increase in the urinary calcium and endogenous fecal calcium excretion and serum levels of PTH was found after caffeine administration. However, the serum 1,25(OH)2D levels and intestinal absorption coefficient of calcium remained the same as in the adult control group.
• Vitamin D Receptor Genotype and the Risk of Bone Fractures in Women, Feskanich et al 1998:
The addition of body mass index, physical activity, calcium intake, and alcohol consumption to the regression model raised the effect estimate slightly. The further addition of vitamin D, protein, and caffeine intakes had little effect on the results.
• Tea and coffee consumption in relation to vitamin D and calcium levels in Saudi adolescents, Al-Othman et al 2012 (emphasis added):
A total of 330 randomly selected Saudi adolescents were included. Anthropometrics were recorded and fasting blood samples were analyzed for routine analysis of fasting glucose, lipid levels, calcium, albumin and phosphorous. Frequency of coffee and tea intake was noted. 25-hydroxyvitamin D levels were measured using enzyme-linked immunosorbent assays…Vitamin D levels were significantly highest among those consuming 9-12 cups of tea/week in all subjects (p-value 0.009) independent of age, gender, BMI, physical activity and sun exposure.
35. Although there have been large trials with the elderly using much higher Vitamin D doses, such as 4 doses every year of 100,000 IU, or a single annual dose of up to 300,000 IU, without observed problems.
http://math.stackexchange.com/questions/181909/why-do-you-need-tensors-of-rank-2/451868 | # Why do you need tensors of rank $>2$?
Question from someone just starting to study tensors (sorry if it's silly):
So I understand (maybe?) that tensors are basically about coordinate transformations (and things that are invariant under said transformations), and that in writing out a tensor, we use notation that represents functions that perform the coordinate transformations.
But when I see a tensor of, say, rank 3 or 4 written out, it looks like we're jumping from one coordinate system to another to another, before we arrive from the original coordinate system to the final one. If you're only really starting at one coordinate system, and ending at another, why can't you treat each tensor as just a single coordinate transformation, i.e. a function that takes you directly from the original coordinate system to the final one?
I'm not sure what you mean. You don't need to think about tensors in terms of changing coordinates at all. – Qiaochu Yuan Aug 13 '12 at 2:12
One place where higher order tensors occur in analysis is Stokes' theorem. Generally, high rank tensors are just general multilinear maps, so they are quite natural... – tomasz Aug 13 '12 at 2:13
(just expanding on Qiaochu Yuan's comment) $A = A_m dx^m$ or $A = \bar{A}_j d\bar{x}^j$: the one-form $A$ could be written in either the $(x^m)$ or the $(\bar{x}^j)$ coordinate charts. The object $A$ is itself coordinate-free. The coordinate transformations of the differentials then force particular transformations on the components of $A$ in barred or unbarred coordinate charts. It's not that $A$ is a coordinate change; rather, its components change. $A$, in contrast, is what it is no matter how you observe it. – James S. Cook Aug 13 '12 at 2:28
In representation theory, new representations can be created from old ones by forming tensor powers of the original representation, and often such powers higher than the second are important. – KCd Aug 13 '12 at 6:13
Tensors of rank 2 that change coordinates are only one particular type of tensor. Not all tensors do that!
In particular:
• Tensors of rank 0 are scalars;
• Tensors of rank 1 are vectors or differential forms;
• Tensors of rank 2 are quadratic forms, coordinate transformations, or any other linear object that can be represented by a single matrix;
• Tensors of higher rank are multilinear maps of higher rank. They are not changes of coordinates!
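For example (my own illustration, not from the original answer): a rank-3 tensor $T$ is a multilinear map taking three vectors to a number,

$$T(u,v,w) = \sum_{i,j,k} T_{ijk}\, u^i v^j w^k,$$

linear in each argument separately - clearly not a change of coordinates.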
By the way, in the physics literature, it is often said that the metric tensor is a change of coordinates (between covariant and contravariant). This is mathematically incorrect. Covariant and contravariant vectors are mathematically distinct objects!
To add to what is written above, an early and well-known application of higher-order tensors was in the mechanics of deformable bodies (more particularly, in the linearized theory of elasticity), due to Woldemar Voigt (circa 1898).
For instance, $\sigma$, a stress tensor, and $\epsilon$, a strain tensor, are related by the tensor equation
$$\mathbf{\sigma} = \mathbf{C \epsilon}$$
Here $\sigma$ and $\epsilon$ are second-order tensors and $C$ is a fourth-order tensor.
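Written out in components (standard index notation, added here for clarity rather than taken from the original answer), the equation above reads

$$\sigma_{ij} = \sum_{k,l=1}^{3} C_{ijkl}\, \epsilon_{kl},$$

so relating two second-order tensors linearly requires an object with four indices - $3^4 = 81$ components, cut down to 21 independent elastic constants by the usual symmetries.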
https://esgp.eu/en/produktas/kstar-3/ | # Kstar
The 2.2-5.2 kW KSG 3.2 ~ 5K series PV inverter is a single-phase inverter for residential PV systems. It has the characteristics of high efficiency, high reliability, small size, and easy installation.
https://www.physicsforums.com/threads/pseudospin-operator.649111/ | # Pseudospin operator
1. Nov 2, 2012
### Niles
Hi
Often in the context of multi-atom systems, such as in cavity QED, it is customary to introduce a so-called "collective pseudospin operator". An example of this is the inversion for some atom j, $\sigma_{z, j}$, which becomes
$\sum_{j} \sigma_{z, j} = \sigma_z$
To me this seems very reasonable: we just try to describe the collective behavior via a single operator. But what makes it "pseudospin"?
Best,
Niles.
2. Nov 5, 2012
### Cthugha
From the historical point of view, the first detailed study of a two-level system was given by Bloch (F. Bloch, "Nuclear Induction", Phys. Rev. 70, 460–474 (1946)). This was a study of a spin 1/2 NMR system. In this paper the famous Bloch equations were presented first. Afterwards it could be shown that any ensemble of noninteracting two-level systems subject to external perturbation behaves similarly and follows equations having the same structure as the Bloch equations (I think it was shown in J. Appl. Phys. 28, 49 (1957) by Feynman et al. first, but I am not sure about that).
So as these two-level systems behave in the same manner as the spin systems which were well known at that time, but obviously are not necessarily spin systems, they were termed pseudospin systems.
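To see concretely why the mapping works (notation mine, not from the cited papers): for a two-level system with ground state $|g\rangle$ and excited state $|e\rangle$, define

$$\sigma_z = |e\rangle\langle e| - |g\rangle\langle g|, \qquad \sigma_+ = |e\rangle\langle g|, \qquad \sigma_- = |g\rangle\langle e|.$$

These satisfy $[\sigma_+,\sigma_-] = \sigma_z$ and $[\sigma_z,\sigma_\pm] = \pm 2\sigma_\pm$ - the same algebra as the spin-1/2 Pauli operators - hence "pseudospin": the mathematics of a spin is there even though no actual angular momentum is involved.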
3. Nov 5, 2012
### Niles
Ah, I see, that makes good sense actually. Thanks for taking the time to write all that and also for the links!
Best,
Niles.
Last edited: Nov 5, 2012
https://www.cs.colostate.edu/~asa/courses/linux/fall17/doku.php?id=wiki:navigating2016 | NSCI 580A4 fall 2017
NSCI 580A4
Instructors
Tai Montgomery
Erin Nishimura
NAVIGATING THE FILE SYSTEM
Who, What, When, Where commands
In your previous exercise you did the following:
$ whoami
$ hostname
$ pwd
$ ls
$ date

Hopefully, when you typed in these commands, the shell responded more intelligibly than when you typed gibberish. This is because commands are programs that the shell has installed. When you type the command, you are instructing the shell to run a program it recognizes.

$ whoami      #WHO am I?
$ hostname    #WHAT computer is this?
$ pwd         #(Path to Working Directory) WHERE on the computer am I?
$ ls          #(List Segments) WHAT are the contents of this directory?
$ date        #WHEN? What is the date and time?
Quick tip: Everything written after a “#” sign is a comment or annotation. The shell will not read things written after “#”.
Quick tip: A directory is just the Linux-y term for “folder”. Just as there are slight terminology differences between MAC and PC, so too are there differences in Linux.
Dissecting the path
Let's take a closer look at what spits out when you type the command pwd. When I type this out on my MAC computer, I get something that looks like this:

$ pwd
/Users/erinnishimura
This notation is called a path and it describes the location, or the address, of my working directory within the file structure of a computer system.
When I look in this directory, I see…
$ ls
Applications    Library
Desktop         Music
Documents       Movies
Downloads       Pictures
Dropbox

These are the items I have inside the working directory. This directory corresponds to the same directory I can locate in my MAC Finder (or PC Explorer).

Exercise: Open your Finder or Explorer and navigate to the same directory you're in on the terminal. Double check that the contents are the same. Check that the path is similar.

MAC tip: If you don't see your path in the Finder, pull down the View menu and select Show Path Bar.

The directory you find yourself in when you first open up your terminal is called your home directory. This is a special place where the shell starts up by default.

Moving around -- Up to the root

We will learn how to move into different directories and at the same time learn more about paths. To move to a new directory we use the command cd, for Change Directory.

Exercise: Try the cd command like so:

$ cd
$ pwd
$ ls

What happened? Well, if you started in your home directory, nothing. This is because typing cd without anything after it defaults to changing you into your home directory. To give more instruction as to where we want to change into, we need to add an argument. Arguments are additional user-specific information we supply to a command.

$ cd <directoryname>

A cool quirk of Linux is that a period, ".", is shorthand for the current working directory. And two periods, "..", is shorthand for the directory a level up from my current directory. The directory one-level-up is called the parent directory.

Exercise: Write down the path of your current working directory somewhere so you can remember it. Next, try the following:

$ pwd
$ ls
$ cd .
$ pwd
$ ls
$ cd ..
$ pwd
$ ls

Exercise: Open Finder or Explorer and navigate to the same location.

Exercise: Keep navigating up and up through your path using cd .. until you get to the top. Do the same in the Finder or Explorer. You should get to a place where you eventually see this:

$ pwd
/
This location is known as the root. This is the uppermost directory of your computer's file structure (that you are allowed to be in).
Moving around -- Down a path
Now that we are in the upper most directory, let's navigate back down to where we were before. To do this, we'll browse the contents of our root directory using ls and then select a specific sub-directory to change into using:
cd <subdirectoryname>
Exercise: Navigate back down to your path. To do this, consult the path you wrote down above. Then, execute the following set of commands in which you substitute <subdirectoryname> with the first directory name in your path.

$ ls
$ cd <subdirectoryname>
$ pwd
$ ls
Exercise: Continue to come down your path until you are in your original home directory.
Common pitfall: Many new users have trouble navigating directories when they first start out. It is something that you'll get used to over time. One thing that can help make the process easier is to continually execute pwd and ls commands. Just imagine that anytime you want to look at something in your Finder/Explorer, you are in effect issuing an ls command. So you should be typing ls as often as you look at your files!
Common pitfall: Linux does NOT like spaces in directory or filenames. If one of your directories contains a space, you'll need to type a backslash followed by the space (\ ) instead of just a bare space. This is called escaping a character.
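For example (an illustrative name, not one of the course directories):

$ cd My\ Documents     # changes into the directory named "My Documents"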
Quick tip: When you're typing out the name of a directory or sub-directory, instead of typing out the whole thing, start typing it a few characters and then autocomplete by typing TAB. If the characters you've typed so far limit you down to one option, the name will autocomplete. If it narrows it down into a few options, press TAB again and those options will be listed.
Moving around -- jumping to a new directory
The types of paths we've used for navigating up to this point (.. and <subdirectory>) are called relative paths. This means that they only make sense from the perspective of the current working directory. In contrast, we can use cd to take us to absolute paths that would make sense anywhere on the computer system. Absolute paths always begin with the root directory, /. When we execute pwd, the shell spits out our current working directory as an absolute path, because it starts with a /, such as /Users/erinnishimura/.
Exercise: Open your Finder/Explorer and navigate to some directory on your computer where you keep a piece of data you've recently gathered. Look at the path bar to get a sense of where this directory is located. Now, using the terminal, try to change into this directory using an absolute path as the argument for a cd command. Use TAB to autocomplete to save time and improve accuracy.
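For instance, using the example home directory from earlier (/Users/erinnishimura), these two commands land in the same place; the first only works from the home directory, the second works from anywhere:

$ cd Documents                         # relative path
$ cd /Users/erinnishimura/Documents    # absolute path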
Shortcuts
TAB autocomplete
CTRL+u erase the current line
CTRL+l (that's a lower case “L”), clear the terminal screen
CTRL+a go to the beginning of the line
CTRL+e to go the end of the line
CTRL+c cancel out of a program or command that is being executed
CTRL+d log out of the terminal
UP arrow print out the last command executed (even if it failed). | 2020-07-12 19:38:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.665557861328125, "perplexity": 2005.2048819891145}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657139167.74/warc/CC-MAIN-20200712175843-20200712205843-00095.warc.gz"} |
https://denman1720.exblog.jp/27740973/ | # Mystery of Dimension(PART 1)
Mystery of Dimension(PART 1)
(god031.jpg)
(dimanim3.gif)
(dimanim6.gif)
(dimanim.gif)
(dimanim4.gif)
(dimanim5.gif)
(diane02.gif)
Kato. . . , what kind of mystery is that?
(kato3.gif)
Diane, have you ever heard of a four-dimensional world?
Of course, I have. . . We're living in the four-dimensional world or spacetime world, aren't we?
Yes, most of us believe we are, but actually we're living in a 10-dimensional world.
You gotta be kidding!
But what about other six dimensions? What the heck are they? . . . And where in the world do we find those six dimensions?
Good question! . . . Tell me, Diane, what the dimension is.
Well. . . a dimension is a measurable extent of some kind, such as length, breadth, depth, or height. . . So usually we can move along the length, width and height axes, plus the time axis. . . Therefore, it is said that we're living in a four-dimensional world.
(dimanim6.gif)
You're darn right on that. . . But according to some mathematicians, our world has ten dimensions, which is mathematically proven.
Are you pulling my leg?
Oh no. . . I'm profoundly dead serious. . . If you're in doubt, you should view the following clip.
(dim04.jpg)
In other words, the 10-dimensional world has something to do with the birth of the universe, huh?
That's right. . . If you look carefully at the clip above, you can understand it. . .
Kato, can you understand it by only watching the above clip?
I think I can. . .
You can understand because you've got an engineering background, but I'm a student of liberal arts. . . Please explain it easily so that I can understand. . .
Well, first of all, I'll talk about two theoretical physicists. . .
(god014.jpg)
The above two men are said to be the creators of the so-called superstring theory, but Joël Scherk, a French physicist, died at the age of 34. . .
Was he killed by a traffic accident?
Oh no. . . He is said to have died because he suffered from severe diabetes. They say he died of an overdose. . .
So, that superstring theory is related to the 10-dimensional world, isn't it?
That's right. . . Currently in his 70s, John Schwarz teaches theoretical physics at the California Institute of Technology. . .
(god001.jpg)
(god002.jpg)
This photo shows that Professor Schwarz teaches students in the lecture room, huh?
Oh no, he doesn't. . . He's writing mathematical formulas on the blackboard in his own office. . .
Is he doing research on a blackboard without using a computer? It would be much faster to do research using a personal computer. . . He could get better results, I suppose.
No, not really, because no matter how great the software he uses on his personal computer, he cannot get an original idea.
Why is that?
It is because all software and programs were originally created by humans. . . As long as you use such software, you will not come up with original ideas. . . So, in order to derive original formulas, it is best to do research while writing formulas by hand on a blackboard or whiteboard. . .
I see. . .
By the way, the big bang that occurred 13.8 billion years ago was predicted by a mathematical formula. . .
What is that formula?
That is Einstein's general theory of relativity. . .
(dim05.jpg)
Is Einstein's general theory of relativity flawed compared to the superstring theory?
Yes, it is. . . General relativity cannot unravel the bottom of the black hole, that is, the profoundly microscopic world. . .
why. . . ?
It turns out that the birth of the universe has a deep connection with black holes. . .
(god009.jpg)
The big bang and the black hole are very similar in structure. . . The big bang and the bottom of the black hole are mathematically the only keys to the birth of the universe. . . In other words, in order to explore the birth of the universe, it is necessary to elucidate black holes mathematically. . . However, the general theory of relativity cannot unravel the bottom of the black hole. . .
Why can't general relativity unravel it?
If you push general relativity all the way down to the bottom of a black hole, a denominator in the equations becomes zero. . . This means infinity!
(god006.jpg)
(god007.jpg)
What's wrong with the denominator becoming zero and the result becoming infinite?
When the denominator becomes zero, the formula can no longer be evaluated. . . So, if you try to unravel the depths of a black hole with a mathematical formula, you cannot apply general relativity; it is simply not valid there. . .
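(A concrete example of such a vanishing denominator, a standard textbook formula rather than anything from the original post, is the Schwarzschild metric of general relativity:

ds^2 = -\left(1 - \frac{r_s}{r}\right) c^2\, dt^2 + \left(1 - \frac{r_s}{r}\right)^{-1} dr^2 + r^2\, d\Omega^2

where the factor (1 - r_s/r)^{-1} has a zero denominator at the Schwarzschild radius r = r_s, and the curvature itself diverges at the center r = 0.)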
In short, in order to unravel the depth of a black hole with a mathematical formula, you must understand the birth of the universe, huh?
That's right. . .
(dim06.jpg)
"Zoom In"
In other words, unless the infinity problem is solved, the birth of the universe cannot be unraveled, huh?
You're telling me. . .
So what happened?
Somebody came up with an idea. . . How about combining quantum mechanics with general relativity? . . .
(god010.jpg)
(god011.jpg)
This idea came to the mind of Matvei Bronstein (1906-1938) in the Soviet era. . .
(blonstein2.jpg)
He was a genius of theoretical physics. . . It is said that at the age of 19 he had understood the mathematical formulas of quantum mechanics and general relativity completely.
Wow! . . . So why did he die so young?
Just when he was immersed in his research, in 1937, he was arrested in Stalin's purges and executed soon after. . . Intellectuals doing suspicious-looking research seemed to Stalin enemies who had to be purged. . .
(stalin02.jpg)
Stalin did a terrible thing, didn't he? . . . So what happened to Matvei Bronstein's research?
Unfortunately, combining the two mathematical formulas did not get rid of the infinities. . . As the research progressed, theoretical physicists found a great deal of infinities in the profoundly microscopic world. . .
(god012.jpg)
So the problem of infinity had deepened, huh?
That's right. . . So, most researchers gave up and set the infinity mystery aside.
What happened then?
About 40 years later, precisely in 1974, two unknown theoretical physicists published a paper that solved the mystery. . . They are Joël Scherk and John Schwarz introduced above. . .
(god013.jpg)
(god014.jpg)
They were studying a string theory that no one paid attention to. . . In this string theory, elementary particles are not points but strings, like threads.
(god015.jpg)
John Schwarz went further and discovered the superstring theory, which solves the infinity problem. . . If an elementary particle is a point, quantities become infinite when two elementary particles collide.
So . . . ?
In the superstring theory, the distance does not become zero even when two particles collide, because elementary particles are strings---not points, so there is no infinity. . .
(dim07.jpg)
However, there was a problem here. . . The condition that supports the superstring theory seems impossible in reality!
What is that condition?
The superstring theory is valid only in a 10-dimensional world.
(dim08.jpg)
That means that the universe was born as a 10-dimensional universe, huh?
That's right. . . However, from a common-sense point of view, our world is not a 10-dimensional world. . .
According to Einstein, we're living in a four-dimensional world or spacetime world. . . What are the other six dimensions?
So, many researchers came to the conclusion that this theory was absurd, and they gave up.
How about the followers of the superstring theory?
They combined two formulas, namely those of general relativity and quantum mechanics.
(god016.jpg)
(god022b.jpg)
(god022.jpg)
When they checked whether the two formulas were included in the superstring theory formula, they were surprised to see the superstring formula contain both of these seemingly unrelated formulas. . . And they found the perfect number (496) appearing again and again.
(god023.jpg)
Perfect number
In number theory, a perfect number is a positive integer that is equal to the sum of its positive divisors, excluding the number itself.
For instance, 6 has divisors 1, 2 and 3 (excluding itself), and 1 + 2 + 3 = 6, so 6 is a perfect number.
The sum of divisors of a number, excluding the number itself, is called its aliquot sum, so a perfect number is one that is equal to its aliquot sum.
Equivalently, a perfect number is a number that is half the sum of all of its positive divisors including itself, i.e. σ₁(n) = 2n.
For instance, 28 is perfect as 1 + 2 + 4 + 7 + 14 + 28 = 56 = 2 × 28.
This definition is ancient, appearing as early as Euclid's Elements (VII.22) where it is called τέλειος ἀριθμός (perfect, ideal, or complete number).
Euclid also proved a formation rule (IX.36) whereby q(q+1)/2 is an even perfect number whenever q is a prime of the form 2^p − 1 for prime p—what is now called a Mersenne prime.
Two millennia later, Euler proved that all even perfect numbers are of this form.
This is known as the Euclid–Euler theorem.
It is not known whether there are any odd perfect numbers, nor whether infinitely many perfect numbers exist.
The first few perfect numbers are 6, 28, 496 and 8128 (sequence A000396 in the OEIS).
Source: “Perfect number”
Free encyclopedia "Wikipedia"
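A quick way to check the quoted definition (a small sketch of mine, not part of the article):

def is_perfect(n):
    """True iff n equals its aliquot sum (the sum of its proper divisors)."""
    return n > 1 and sum(d for d in range(1, n) if n % d == 0) == n

print([n for n in range(2, 10000) if is_perfect(n)])   # [6, 28, 496, 8128]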
In other words, it was thought for ages that perfect numbers symbolize completeness. . .
That's right. . . Anyway, the news that the two formulas are included in the formula of the superstring theory became known to researchers all over the world. . . It was a sensation!
That's how the superstring theory attracted attention, huh?
You're telling me, Diane.
But what are the other six dimensions?
Well, for example, a world that is one-dimensional for humans can actually be a two-dimensional or three-dimensional world for a ladybug. . .
(god024.jpg)
(god025.jpg)
(god026.jpg)
In other words, as shown above, the person on a tightrope is confined to one dimension on the cable. . . The person can only move forward and backward. . . However, there are dimensions invisible to the human eye, because for the ladybug the cable is a two-dimensional or three-dimensional world. . .
I see. . . But what exactly do you mean by a 10-dimensional world?
The remaining 6 dimensions are hidden in an ultra-microscopic world, one trillionth of a trillionth the size of an atom!
(god027b.jpg)
(god027.jpg)
(dimanim.gif)
This solved the 10-dimensional problem, but there were more problems. . . That genius in the wheelchair proposed the “Hawking Paradox”.
(hawking.jpg)
“Why is heat generated at the bottom
of a black hole where even elementary particles
cannot move?”
So who solved it?
Joseph Polchinski, who appeared in the photo above, succeeded in calculating the heat of the black hole by adding a group of strings called a “D-brane”.
(god028.jpg)
The film-like elementary particles move around in six dimensions, generating heat. . .
(god029.jpg)
In other words, Joseph Polchinski solved all the problems?
No, not really. . . In the latest research, some scientists insist that this universe is not 10-dimensional but 11-dimensional, and there is a hypothesis that there are a great number of universes---10 to the 500th power. . . The problem is deepening. . .
I wonder if scientists are trying to experimentally confirm that the remaining 6 dimensions actually exist.
Yes, they are. . . At the famous European Organization for Nuclear Research (CERN), on the France–Switzerland border near Geneva, scientists are running experiments to actually verify the remaining six dimensions using a giant accelerator. . .
(cern02.jpg)
(cern03.jpg)
Talking about the European Organization for Nuclear Research (CERN), the World Wide Web (WWW) was born there, huh?
That's right. . . You're telling me. . .
(cern01.jpg)
(dianelin3.jpg)
(laughx.gif)
(To be continued)
by denman1720 | 2019-12-07 12:16:26 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8300434350967407, "perplexity": 972.0317930955373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540499389.15/warc/CC-MAIN-20191207105754-20191207133754-00379.warc.gz"} |
http://tex.stackexchange.com/questions/106106/proper-way-to-align-labels-with-nodes-tikz/106120 | # Proper way to align labels with nodes tikz
Consider the following code:
\documentclass{standalone}
\usepackage{tikz}
\usetikzlibrary{calc}
\begin{document}
\begin{tikzpicture}
\coordinate (A) at (1,1);
\coordinate (B) at (2,2);
\draw[->] (A) -- (B);
\draw[dashed] (0,0) rectangle (3,3) node (C) {};
\draw [solid] ($(A.south west)-(0,1.5)$) -- ++(0.5, 0) node [label =right:Signalling] {};
\end{tikzpicture}
\end{document}
How can I properly align the end of label, with the edge of rectangle in a automated way?
-
@cacamailg I added the picture. I hope that is okay. – hpesoj626 Mar 30 '13 at 23:34
I am interested in the pgfplots solution. Can you add a MWE? – cacamailg Mar 30 '13 at 23:34
The corners of the rectangle seem to have a problem. They should look like L-shape broken lines actually. – kiss my armpit Mar 30 '13 at 23:38
@hpesoj626 that is ok, thanks. – cacamailg Mar 30 '13 at 23:40
@Karl'sstudents I didn't understand what you mean. Instead of dashed you can use solid for the rectangle. – cacamailg Mar 30 '13 at 23:41
Two options;
With TikZ, you can use the name of the node for a later use in this context;
\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
\coordinate (A) at (1,1);
\coordinate (B) at (2,2);
\draw[->] (A) -- (B);
\draw[dashed] (0,0) rectangle (3,3) node (C) {};
\draw node[% Note that we are inside a path not inside a node declaration
anchor=east,
append after command={([xshift=-2mm]\tikzlastnode.west) -- ++ (-0.5,0)},
inner sep=0] at (3,-0.5) {Signalling};
\end{tikzpicture}
\end{document}
Or with pgfplots by actually drawing a function instead of giving it as a path;
\documentclass{standalone}
\usepackage{pgfplots}
\pgfplotsset{compat=1.8}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
axis line style=dashed,
xtick=\empty,
ytick=\empty,
legend pos={south east}
]
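% Illustrative completion (an assumption of mine, not the answer's exact
% plot commands): one plot plus a legend entry named like the question's label.
\addplot {x};
\addlegendentry{Signalling}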
\end{axis}
\end{tikzpicture}
\end{document}
-
Thank you very much. Just a question in the 1st solution: [xshift=-2mm]\tikzlastnode.west is the same as ($(\tikzlastnode.west) + (-2mm,0)$)? – cacamailg Mar 31 '13 at 0:29
@cacamailg Yes, but then you don't need calc library – percusse Mar 31 '13 at 0:41 | 2015-07-31 23:27:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9819852709770203, "perplexity": 3648.583496956724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988317.67/warc/CC-MAIN-20150728002308-00035-ip-10-236-191-2.ec2.internal.warc.gz"} |
http://math.stackexchange.com/tags/cryptography/hot | # Tag Info
3
Let's assume both have a password length of $N$, and an alphabet size of $s$. Then the first has a possible $s^N - (s-1)^N$ many passwords: all passwords except those that are all from the alphabet without an a. The other has $(s-1)^N$ many passwords. Now compare...
1
You must have $b-1\ne 0 \pmod{26}$ and the restrictions $b-1\ne 2n$ and $b-1\ne 13m$ because $b-1\ne 0\pmod{2}$ and $b-1\ne 0\pmod{13}$. In other words $b$ even and $b-1$ not multiple of $13$.
1
Well, it depends on two things:
First, the length of the password.
Second, the size of the alphabet used for the password.
Assume that the passwords have length 1. Bob did give away his password but Alice didn't. However, if you have only 2 possible letters (say a and b), then it is Alice that gave away her password (a sequence of b's).
1
$M^{17}$ is encoded as $(((M^2)^2)^2)^2\times M$ because it's faster that way. (I don't know why the $(1)^2$ is in there.) The naive way of obtaining $M^{17}$ requires $16$ successive multiplications of $M$. By comparison, if you apply the square function to $M$ (and then its results) four times, you reduce the count to five times. If $M$ is large ...
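For illustration (my sketch, not part of the answer), the same square-and-multiply idea in code; computing $M^{17}$ costs four squarings plus one extra multiplication:

def pow_by_squaring(m, e, mod=None):
    """Compute m**e (optionally mod `mod`) by repeated squaring."""
    result, base = 1, m
    while e > 0:
        if e & 1:   # low bit of the exponent set: fold the current base into the result
            result = result * base if mod is None else result * base % mod
        base = base * base if mod is None else base * base % mod
        e >>= 1
    return result

assert pow_by_squaring(3, 17) == 3**17
assert pow_by_squaring(3, 17, 1000) == pow(3, 17, 1000)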
1
The issue is that $13^{-1}=4\pmod{17}$, since $13\cdot 4\equiv 1\pmod{17}$. "Division" does not exist here, only multiplication by reciprocals. Now, we calculate $6\cdot 4\pmod{17}$, and indeed we get $7$, as desired. By the way, this is not a very good cryptosystem. Suppose we encode each byte of the message this way. Well, most bytes of a text ...
1
If perfect means as few collisions as possible, you can just do $f(n)=(n \pmod N)/10$ where the divide is integer division. You have to have $10$ of each number in the range $[1,N]$ mapped to each hash value, which this does. Often we also ask that hash functions be such that one cannot reasonably predict the hash from the number to be hashed, nor invert ...
1
I do have information after a permutation: I know exactly what the distribution of characters (bytes / alphabet members..) of the plain text was. If I see "trapa" and "olleh", I can certainly tell which one came from "apart".... So it's pretty trivial to win the distinguisher game here (so no perfect secrecy). Added: Another, more formal, way to see ...
1
(a) you get $g^{a-b}=1$ mod $p$. By definition, $r$ "the order of $g$" is the smallest (for the divisor relation) positive integer verifying $g^k=1$ mod $p$. Since $a-b$ also verifies this relation and since $r$ is the smallest, it follows that $r$ must divide $a-b$. No need for Fermat's little theorem, it would only give you that $r$ divides $p-1$. (b) ok. ...
1
A small table will do: $$\begin{array}{r*{5}{c}} x &\pm2&\pm3&\pm4&\pm5\\ \hline x^2& 4&-2&5&3\\ x^4 &5&4&3&-2\\ x^5&\mp1&\pm1&\pm1&\pm1\\ \hline \end{array}$$ Hence the solutions are $$\alpha= 1,\;-2,\;3,\;4,\;5.$$
Only top voted, non community-wiki answers of a minimum length are eligible | 2016-02-14 12:54:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8525900840759277, "perplexity": 460.9734351397884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701962902.70/warc/CC-MAIN-20160205195242-00162-ip-10-236-182-209.ec2.internal.warc.gz"} |
http://bootmath.com/relation-between-the-two-probability-densities.html | Relation between the two probability densities
Suppose $X_1, X_2, Y_1, Y_2$ are independent random variables on the same probability space with densities $f_1,f_2,g_1,g_2$ respectively.
If $$\int_{x} f_1(x)f_2(z-x) \,dx = \int_{y} g_1(y)g_2(z-y) \,dy \quad (*)$$ for all feasible $z$ and $$\frac{f_i(x)}{g_i(x)}$$ is non-decreasing in $x$ for all $x$ in the support of $X_i$ and $Y_i$ for both $i\in\{1,2\}$
then can we say something about the relationship between $f_i$ and $g_i$ for both $i\in\{1,2\}$?
Thanks in advance for any kind of help.
The support of the random variables $X_1, X_2, Y_1, Y_2$ is $[0,a]$ for some $a>0$.
Also I know that $f_i(x)=g_i(a-x)$ for all $x$ and for both $i\in\{1,2\}$.
I concluded that $g_i(0) = g_i(a)$ using $(*)$ with $z=2a$ and hence $f_i(0) = f_i(a)$. I think this is correct, just wanted to scrutinise this statement over stackexchange. | 2018-08-15 15:27:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8868806958198547, "perplexity": 52.90740119373061}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210133.37/warc/CC-MAIN-20180815141842-20180815161842-00445.warc.gz"} |
http://maps.thefullwiki.org/Muon | # Muon: Map
The muon (from the Greek letter mu (μ) used to represent it) is an elementary particle similar to the electron, with negative electric charge and a spin of 1/2. Together with the electron, the tauon, and the three neutrinos, it is classified as a lepton. It is the unstable subatomic particle with the second longest mean lifetime (about 2.2 µs), behind the neutron (about 15 minutes). Like all elementary particles, the muon has a corresponding antiparticle of opposite charge but equal mass and spin: the antimuon (also called a positive muon). Muons are denoted by μ− and antimuons by μ+. Muons were sometimes referred to as mu mesons in the past, even though they are not classified as mesons by modern particle physicists (see History).
Muons have a mass of 105.7 MeV/c², which is about 200 times the mass of the electron. Even so, muons are the lightest particles of ordinary matter, after the electrons. Since the muon's interactions are very similar to those of the electron, a muon can be thought of in most ways as simply a much heavier version of the electron. Due to their greater mass, muons are not as sharply accelerated when they encounter electromagnetic fields, and do not emit as much bremsstrahlung radiation. For this reason, muons of a given energy are far more highly penetrating of matter than electrons, since slowing of these particles in matter to capture velocities is primarily due to energy loss from the bremsstrahlung mechanism. Muons generated by cosmic rays in the atmosphere are capable of penetrating to the ground and into deep mines.
As with the case of the other charged leptons, the muon has an associated muon neutrino. Muon neutrinos are denoted by ν_μ.
## History
Muons were discovered by Carl D. Anderson in 1936 while he studied cosmic radiation. He had noticed particles that curved in a manner distinct from that of electrons and other known particles, when passed through a magnetic field. In particular, these new particles were negatively charged but curved to a smaller degree than electrons, but more sharply than protons, for particles of the same velocity. It was assumed that the magnitude of their negative electric charge was equal to that of the electron, and so to account for the difference in curvature, it was supposed that these particles were of intermediate mass (lying somewhere between that of an electron and that of a proton). The discovery of the muon seemed so incongruous and surprising at the time that Nobel laureate I. I. Rabi famously quipped, "Who ordered that?"
For this reason, Anderson initially called the new particle a mesotron, adopting the prefix meso- from the Greek word for "mid-". Shortly thereafter, additional particles of intermediate mass were discovered, and the more general term meson was adopted to refer to any such particle. Faced with the need to differentiate between different types of mesons, the mesotron was in 1947 renamed the mu meson (with the Greek letter μ (mu) used to approximate the sound of the Latin letter m).
However, it was soon found that the mu meson significantly differed from other mesons; for example, its decay products included a neutrino and an antineutrino, rather than just one or the other, as was observed in other mesons. Other mesons were eventually understood to be hadrons—that is, particles made of quarks—and thus subject to the residual strong force. In the quark model, a meson is composed of exactly two quarks (a quark and antiquark), unlike baryons, which are composed of three quarks. Mu mesons, however, were found to be fundamental particles (leptons) like electrons, with no quark structure. Thus, mu mesons were not mesons at all (in the new sense and use of the term meson), and so the term mu meson was abandoned, and replaced with the modern term muon.
## Muon sources
Since the production of muons requires an available center of momentum frame energy of 105.7 MeV, neither ordinary radioactive decay events nor nuclear fission and fusion events (such as those occurring in nuclear reactors and nuclear weapons) are energetic enough to produce muons. Only nuclear fission produces single-nuclear-event energies in this range, but due to conservation constraints, muons are not produced.
On Earth, all naturally occurring muons are apparently created by cosmic rays, which consist mostly of protons, many arriving from deep space at very high energy.
When a cosmic ray proton impacts atomic nuclei of air atoms in the upper atmosphere, pions are created. These decay within a relatively short distance (meters) into muons (the pion's preferred decay product), and neutrinos. The muons from these high energy cosmic rays, generally continuing essentially in the same direction as the original proton, do so at very high velocities. Although their lifetime without relativistic effects would allow a half-survival distance of only about 0.66 km at most, the time dilation effect of special relativity allows cosmic ray secondary muons to survive the flight to the earth's surface. Indeed, since muons are unusually penetrative of ordinary matter, like neutrinos, they are also detectable deep underground and underwater, where they form a major part of the natural background ionizing radiation. Like cosmic rays, as noted, this secondary muon radiation is also directional.
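As a rough illustration of the time-dilation argument (my numbers; the 3 GeV muon energy is just an assumed typical value):

c   = 2.998e8      # speed of light, m/s
tau = 2.197e-6     # muon mean lifetime at rest, s
gamma = 3000.0 / 105.7   # Lorentz factor: 3 GeV energy over the 105.7 MeV/c^2 mass

print(c * tau)           # ~660 m : decay length without time dilation
print(gamma * c * tau)   # ~19 km : dilated decay length, enough to reach the ground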
The same nuclear reaction described above (i.e., hadron-hadron impacts to produce pion beams, which then quickly decay to muon beams over short distances) is used by particle physicists to produce muon beams, such as the beam used for the muon g − 2 experiment. In naturally-produced muons, the very high-energy protons to begin the process are thought to originate from acceleration by electromagnetic fields over long distances between stars or galaxies, in a manner somewhat analogous to the mechanism of proton acceleration used in laboratory particle accelerators.
## Muon decay
The most common decay of the muon
Muons are unstable elementary particles and are heavier than electrons and neutrinos but lighter than all other matter particles. They decay via the weak interaction to an electron, two neutrinos and possibly other particles with a net charge of zero. Nearly all of the time, they decay into an electron, an electron-antineutrino, and a muon-neutrino. Antimuons decay to a positron, an electron-neutrino, and a muon-antineutrino:
\mu^-\to e^- + \bar\nu_e + \nu_\mu,~~~\mu^+\to e^+ + \nu_e + \bar\nu_\mu.
The mean lifetime of the (positive) muon is 2.197 019 ± 0.000 021 μs. The equality of the muon and anti-muon lifetimes has been established to better than one part in 10⁴.
The tree-level muon decay width is
\Gamma=\frac{G_F^2 m_\mu^5}{192\pi^3}I\left(\frac{m_e^2}{m_\mu^2}\right), where I(x)=1-8x-12x^2\ln x+8x^3-x^4.
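Plugging numbers into this tree-level width (a quick consistency check of mine, not from the article) reproduces the ~2.2 µs lifetime via τ = ħ/Γ:

import math

hbar = 6.582e-25    # reduced Planck constant, GeV s
G_F  = 1.166e-5     # Fermi constant, GeV^-2
m_mu = 0.10566      # muon mass, GeV
m_e  = 5.11e-4      # electron mass, GeV

x = (m_e / m_mu)**2
I = 1 - 8*x - 12*x**2*math.log(x) + 8*x**3 - x**4
Gamma = G_F**2 * m_mu**5 / (192 * math.pi**3) * I   # decay width in GeV
print(hbar / Gamma)   # ~2.19e-6 s, matching the measured 2.197 us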
A photon or electron-positron pair is also present in the decay products about 1.4% of the time.
The decay distributions of the electron in muon decays have been parametrized using the so-called Michel parameters. The values of these four parameters are predicted unambiguously in the Standard Model of particle physics, thus muon decays represent an excellent laboratory to test the space-time structure of the weak interaction. No deviation from the Standard Model predictions has yet been found.
Certain neutrino-less decay modes are kinematically allowed but forbidden in the Standard Model. Examples, forbidden by lepton flavour conservation, are
\mu^-\to e^- + \gamma and \mu^-\to e^- + e^+ + e^-.
Observation of such decay modes would constitute clear evidence for physics beyond the Standard Model (BSM). Upper limits for the branching fractions of such decay modes are in the range 10⁻¹¹ to 10⁻¹².
## Muonic atoms
The muon was the first elementary particle discovered that does not appear in ordinary atoms. Negative muons can, however, form muonic atoms by replacing an electron in ordinary atoms. Muonic atoms are much smaller than typical atoms because the larger mass of the muon gives it a smaller ground-state wavefunction than the electron.
A positive muon, when stopped in ordinary matter, can also bind an electron and form an exotic atom known as muonium (Mu) atom, in which the muon acts as the nucleus. The positive muon, in this context, can be considered a pseudo-isotope of hydrogen with one ninth of the mass of the proton. Because the reduced mass of muonium, and hence its Bohr radius, is very close to that of hydrogen, this short-lived "atom" behaves chemically — to a first approximation — like hydrogen, deuterium and tritium.
## Anomalous magnetic dipole moment
The anomalous magnetic dipole moment is the difference between the experimentally observed value of the magnetic dipole moment and the theoretical value predicted by the Dirac equation. The measurement and prediction of this value is very important in the precision tests of QED (quantum electrodynamics). The E821 experiment at Brookhaven National Laboratory (BNL) studied the precession of muon and anti-muon in a constant external magnetic field as they circulated in a confining storage ring. The E821 Experiment reported the following average value (from the July 2007 review by Particle Data Group)
a = \frac{g-2}{2} = 0.00116592080(54)(33)
where the first errors are statistical and the second systematic.
The difference between the g-factors of the muon and the electron is due to their difference in mass. Because of the muon's larger mass, contributions to the theoretical calculation of its anomalous magnetic dipole moment from Standard Model weak interactions and from contributions involving hadrons are important at the current level of precision, whereas these effects are not important for the electron. The muon's anomalous magnetic dipole moment is also sensitive to contributions from new physics beyond the Standard Model, such as supersymmetry. For this reason, the muon's anomalous magnetic moment is normally used as a probe for new physics beyond the Standard Model rather than as a test of QED ( Phys.Lett. B649, 173 (2007)).
## References
• S.H. Neddermeyer and C.D. Anderson, "Note on the Nature of Cosmic-Ray Particles", Phys. Rev. 51, 884–886 (1937). Full text available in [3176].
• J.C. Street and E.C. Stevenson, "New Evidence for the Existence of a Particle of Mass Intermediate Between the Proton and Electron", Phys. Rev. 52, 1003-1004 (1937). Full text available in [3177].
• Serway & Faughn, College Physics, Fourth Edition (Fort Worth TX: Saunders, 1995) page 841
• Emanuel Derman, My Life As A Quant (Hoboken, NJ: Wiley, 2004) pp. 58-62.
• Marc Knecht ; The Anomalous Magnetic Moments of the Electron and the Muon, Poincaré Seminar (Paris, Oct. 12, 2002), published in : Duplantier, Bertrand; Rivasseau, Vincent (Eds.) ; Poincaré Seminar 2002, Progress in Mathematical Physics 30, Birkhäuser (2003) [ISBN 3-7643-0579-7]. Full text available in PostScript. | 2019-10-16 13:23:18 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8628942966461182, "perplexity": 1072.6149893096124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986668569.22/warc/CC-MAIN-20191016113040-20191016140540-00534.warc.gz"} |
http://www.maa.org/publications/books/differential-geometry-and-its-applications?device=desktop | # Differential Geometry and Its Applications
### By John Oprea
2nd Edition
Catalog Code: DGA
Print ISBN: 978-0-88385-748-9
Electronic ISBN: 978-1-61444-608-8
510 pp., Hardbound, 2007
List Price: $77.00
MAA Member: $62.00
Series: MAA Textbooks
Differential geometry has a wide range of applications, going far beyond strictly mathematical pursuits to include architecture, engineering, and just about every scientific discipline. John Oprea’s second edition of Differential Geometry and Its Applications illuminates a wide range of ideas that can be beneficial to students majoring not only in mathematics but also in other fields.
The textbook touches on many different mathematical concepts, including aspects of linear algebra, the Gauss-Bonnet Theorem, and geodesics. It also encourages students to visualize and experiment with the ideas they are studying through their use of the computer program Maple. This allows students to develop a better understanding of the mathematics involved in differential geometry.
Preface
Note to Students
1. The Geometry of Curves
2. Surfaces
3. Curvatures
4. Constant Mean Curvature Surfaces
5. Geodesics, Metrics and Isometries
6. Holonomy and the Gauss-Bonnet Theorem
7. The Calculus of Variations and Geometry
8. A Glimpse at Higher Dimensions
List of Examples
Hints and Solutions to Selected Problems
Suggested Projects for Differential Geometry
Bibliography
Index
John Oprea was born in Cleveland, Ohio and was educated at Case Western Reserve University and at Ohio State University. He received his PhD at OSU in 1982 and, after a post-doc at Purdue University, he began his tenure at Cleveland State in 1985. Oprea is a member of the Mathematical Association of America and the American Mathematical Society. He is an Associate Editor of the Journal of Geometry and Symmetry in Physics. In 1996, Oprea was awarded the MAA’s Lester R. Ford Award for his Monthly article, “Geometry and the Foucault Pendulum.” Besides various journal articles on topology and geometry, he is also the author of The Mathematics of Soap Films (AMS Student Math Library, volume 10), Symplectic Manifolds with no Kähler Structure (with A. Tralle, Springer Lecture Notes in Mathematics, volume 1661), Lusternik-Schnirelmann Category (with O. Cornea, G. Lupton and D. Tanré, AMS Mathematical Surveys and Monographs, volume 103) and the forthcoming Algebraic Models in Geometry (with Y. Felix and D. Tanré, for Oxford University Press).
### MAA Review
John Oprea begins Differential Geometry and Its Applications with the notion that differential geometry is the natural next course in the undergraduate mathematics sequence after linear algebra. He argues that once students have studied some multivariable calculus and linear algebra, a differential geometry course provides an attractive transition to more advanced abstract or applied courses. His thoughtful presentation in this book makes an excellent case for this. As he says, the natural progression of concepts in differential geometry allows the student to progress gradually from calculator to thinker.
This edition of the text is over a hundred pages longer than the first edition. Evidently Oprea has incorporated many suggestions from those who have taught from the text. There is a good deal to like about this book: the writing is lucid, drawings and diagrams are plentiful and carefully done, and the author conveys a contagious sense of enthusiasm for his subject. Continued...
Book Series: | 2015-04-01 10:48:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23585836589336395, "perplexity": 1298.635925969514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131304444.86/warc/CC-MAIN-20150323172144-00144-ip-10-168-14-71.ec2.internal.warc.gz"} |
http://blog.mdda.net/ai/2017/02/27/estimator-extra-outputs | ## Estimator Output Smuggling
While the TensorFlow Estimator framework has a lot of appeal, since it can hide a lot of the training / evaluation / prediction mechanics, the price of this kind of convenience is often paid in flexibility in how one can work with the models dynamically (i.e. in research mode). In particular, it would be very convenient to be able to look at the values of several different output tensors created by a model (other than just the ones designated ‘label’, etc).
The Estimator framework has now bifurcated, based on using either the ‘OLD style’ x, y, batch_size input parameters, or feeding information to the model using the ‘NEW style’ input_fn method (which is more flexible, and doesn’t complain about DEPRECATION).
This post shows how to ‘smuggle’ out tensor results from a model that has been integrated into the Estimator framework. The key parts are the SMUGGLE TENSORS OUT HERE sections in the model, and the subsequent .evaluate or .predict call (depending on which style you’re using).
NB: It seems that once you’ve run with a specific batch_size ‘NEW style’, the model becomes specialized w.r.t. that batch_size and so no longer accepts ‘OLD style’ batches. This issue probably warrants further exploration - except that ‘NEW style’ is clearly the better, more modern and more flexible way to go.
### OLD STYLE runs (uses features and integer_labels PLAIN)
The following also illustrates the logical process required to find the magic incantation tf.contrib.metrics.streaming_concat that pulls all the right stuff together.
The predictions['input_grad'] becomes the value of labels that gets concatenated into the mnist_classifier.evaluate() results :
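The original code block has not survived in this copy; a minimal sketch of the idea (my reconstruction, not the post's exact code, using the TF 1.x-era names the post mentions; the gradient-of-logits choice for input_grad is a hypothetical stand-in):

import tensorflow as tf   # TF 1.x era APIs

def model_fn(features, labels, mode):
    x = features['x'] if isinstance(features, dict) else features
    logits = tf.layers.dense(x, 10)   # stand-in for the post's MNIST classifier

    # SMUGGLE TENSORS OUT HERE: an extra tensor we want to look at
    input_grad = tf.gradients(tf.reduce_sum(logits), x)[0]
    predictions = {'class': tf.argmax(logits, 1), 'input_grad': input_grad}

    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)

    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    return tf.estimator.EstimatorSpec(
        mode=mode, predictions=predictions, loss=loss,
        train_op=tf.train.GradientDescentOptimizer(0.1).minimize(
            loss, global_step=tf.train.get_or_create_global_step()),
        # streaming_concat returns the (value, update_op) pair that
        # eval_metric_ops expects, so the tensor, concatenated across all
        # evaluation batches, comes back in the dict from .evaluate()
        eval_metric_ops={
            'input_grad': tf.contrib.metrics.streaming_concat(input_grad)})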
### NEW STYLE runs (uses features dictionary and integer_labels PLAIN)
This is considerably easier, since the features dictionary allows one to smuggle more values IN, and the (non-DEPRECATED) new style also allows one to use the outputs parameter of Estimator.predict(), which means that only the tensors specified get calculated…
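Again the post's code block is missing here; a plausible sketch (mine) of the NEW-style flow. Note that outputs= is the tf.contrib.learn predict signature the post refers to; the later tf.estimator equivalent keyword is predict_keys:

import numpy as np

def predict_input_fn():
    # extra values are smuggled IN through the features dictionary
    x = tf.constant(np.random.rand(32, 784), dtype=tf.float32)
    return {'x': x}, None   # (features, labels); labels unused for predict

# mnist_classifier: the Estimator wrapping a model_fn like the one above.
# Naming the tensor in `outputs` means only the graph nodes it needs get run.
for result in mnist_classifier.predict(input_fn=predict_input_fn,
                                       outputs=['input_grad']):
    print(result['input_grad'])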
And here is the function that does a .predict() to get the extra tensor value out. Because the outputs is defined, no superfluous computations are done : | 2017-09-22 15:05:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36274078488349915, "perplexity": 2467.382057832602}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818688997.70/warc/CC-MAIN-20170922145724-20170922165724-00262.warc.gz"} |
http://www.physicsforums.com/showthread.php?t=446838 | ## double slit experiment with electrons and larger particles
From where the interference fringes appear on the film in the double-slit experiment, the conclusion was made that the wavelength λ of red light is λ = (that much). For blue light, λ = (that much), and so on. But I cannot find the λ = (that much) for the experiment with electrons and larger particles. Where is it? A link please?
Mentor: The wavelength depends on the momentum of the particle: $$\lambda = \frac{h}{p}$$
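For example (my numbers, just to show the order of magnitude), an electron accelerated through 100 V:

import math

h  = 6.626e-34   # Planck constant, J s
me = 9.109e-31   # electron mass, kg
qe = 1.602e-19   # elementary charge, C

v   = math.sqrt(2 * qe * 100 / me)   # ~5.9e6 m/s, still non-relativistic
lam = h / (me * v)                   # de Broglie wavelength lambda = h/p
print(lam)                           # ~1.2e-10 m, about one atomic spacing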
I am asking for the λ which is derived from the interference fringes by the method of Thomas Young alone. No uncertainty, E=hf and all such stuff.
## double slit experiment with electrons and larger particles
$$n\lambda = d\sin(\theta)$$
Why are you giving me equations? I don't have any data to use any equation anyway.
That depends on the velocity (momentum) of the electron, C60, or whatever large particle you are using. If you don't have any specific velocity or data, the question is meaningless.
The same goes for the width of each slit?
what kind of environment will you use in double-slit experiment with C60 ? somekind of liquid ? | 2013-05-20 19:23:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40616631507873535, "perplexity": 1275.2240107833682}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699186520/warc/CC-MAIN-20130516101306-00095-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://lists.gnu.org/archive/html/lilypond-user/2020-09/msg00157.html | lilypond-user
## Re: left-hand marking
From: Martín Rincón Botero
Subject: Re: left-hand marking
Date: Wed, 16 Sep 2020 14:53:37 +0200
Hi Tom,
perhaps \arpeggioBracket does what you want?
https://lilypond.org/doc/v2.20/Documentation/notation/expressive-marks-as-lines#arpeggio
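(A minimal sketch of that suggestion, untested and not from the docs page, combined with a simple "L.H." text mark:)

\version "2.20.0"
\relative c' {
  \arpeggioBracket
  <c e g>1\arpeggio^\markup { \small "L.H." }
}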
Cheers,
Martín.
On Wed 16. Sep 2020 at 14:30 Tom Sgouros <tomfool@as220.org> wrote:
Hello all:
How does one mark notes that should be played with the other hand in piano music?
I'm thinking about notes in the treble clef that should be played with the left hand crossed over, or notes at the top of a bass clef chord that should be played with the right hand.
I've seen brackets extending from the other clef, for the chord, as well as little "L.H" marks. (Or "M.S." or "M.D." depending.) Is there a Lilypond-canonical way to indicate this?
Thank you,
-Tom
-- | 2020-10-27 22:09:29 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8303281664848328, "perplexity": 10267.456506741943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107894759.37/warc/CC-MAIN-20201027195832-20201027225832-00553.warc.gz"} |
https://mathschallenge.net/library/number/rational_powers | #### When is a square root irrational?
The answer is very simple: if the square root of a natural number is non-integer, then it is irrational.
From y² = n, y = √n (taking the positive root).
Alternatively, (y²)^(1/2) = n^(1/2), so y = n^(1/2).
Therefore √n = n^(1/2).
In general, n^(1/q) is the qth root of n, and from this there is a remarkable and general result which we shall prove.
Theorem
If n, q are natural numbers and n^(1/q) (the qth root of n) is non-integer, then it is irrational.
Proof
Given that n^(1/q) is non-integer and n, q are natural numbers, let us assume that it is rational.
Let n^(1/q) = a/b, where a and b are a pair of natural numbers with no common factors; clearly b cannot equal one.
Therefore, (n^(1/q))^q = (a/b)^q, which gives n = a^q/b^q.
As the LHS is integer, the RHS must also be integer. But a and b have no common factors, so b^q cannot divide a^q (see note), unless b^q = 1, which is a contradiction. Hence there can be no ratio of natural numbers such that n^(1/q) is rational, and so we prove it must be irrational.
NOTE: The statement, b^q cannot divide a^q, is based on the Fundamental Theorem of Arithmetic. For example, if a = 5 and b = 2 (which have no common factors), then no amount of multiplying 2 by itself (2^q) will produce a factor of 5, and so it will never divide into 5^q.
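The theorem also gives a clean computational test (a sketch of mine, not part of the page): n^(1/q) is rational precisely when n is a perfect qth power, which exact integer arithmetic can decide.

def has_integer_qth_root(n, q):
    """Exact check via binary search: is there an integer y with y**q == n?"""
    lo, hi = 0, n
    while lo <= hi:
        mid = (lo + hi) // 2
        p = mid ** q
        if p == n:
            return True
        if p < n:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

print(has_integer_qth_root(8, 3))   # True : 8^(1/3) = 2
print(has_integer_qth_root(5, 2))   # False: so 5^(1/2) is irrational by the theorem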
Corollary
If n, p, q are natural numbers and (n^p)^(1/q) is non-integer then it must be irrational (by the result just proved). As (n^p)^(1/q) = n^(p/q), it follows that if the result of raising n to any rational power, p/q, is non-integer it must be irrational. | 2020-01-28 21:02:48 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9147632122039795, "perplexity": 828.9577358280085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251783000.84/warc/CC-MAIN-20200128184745-20200128214745-00405.warc.gz"}
https://www.physicsforums.com/threads/can-i-get-surface-density-from-volume-density.52806/ | Can I get Surface Density from Volume Density?
1. Nov 15, 2004
Farina
I'm working on a vibration frequency problem
involving a thin, circular aluminum membrane
I know the volume density of Al.
How do I arrive at a surface density for this circular
membrane -- especially since I'm not given the
thickness (I'm told that frequencies for thin membranes
are independent of thickness).
I could see how to do this if I had a rectangular membrane,
but I have a circular membrane instead.
??
2. Nov 15, 2004
Gokul43201
Staff Emeritus
Yes, you can calculate the surface density from the volume density. It's just $\sigma = \rho ^ {2/3}$
3. Nov 15, 2004
AKG
No, I don't think you can. If something has surface density $\sigma$, and you stack 3 thin sheets on top of each other, the total mass will be the mass of the three sheets. Now, something with finite thickness would be like having an infinite number of thin sheets stacked on top of each other, so the mass would be infinite (and so would the volume density).
Conversely, assume something has volume density $\rho$. Let's say that we take a very bad approximation of its surface density by taking a 1cm thick piece of the substance, and approximating its surface density to be its mass/surface area = mass/(volume/1cm) = 1cm * $\rho$. Now, the "true" surface density would be this number as the thickness approaches zero. If we start with a thickness t = 1cm, then we have that its "bad-approximate" surface density is $t\rho$. What we need to do, obviously, is evaluate the limit as t approaches zero, and since $\rho$ is just some positive finite number, the limit is zero, so its surface density is zero, which is what we have in real life (because objects are 3-d).
I'm not sure how to go about solving your problem, but the best suggestion I can give is to treat "thin" as having the thickness of the atomic radius of aluminum. You can then treat the membrane as a zero-thickness membrane with surface density (approximated to) $t\rho$, where t is the radius of aluminum atom, and $\rho$ is its density.
Gokul is saying something else, I'm not sure where he's getting that from.
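Following the suggestion above, a minimal numeric sketch of sigma = t * rho; the thickness here is an assumed illustrative value, not something given in the thread:

```python
# Surface density of a thin aluminum membrane approximated as sigma = t * rho.
rho_al = 2700.0     # volume density of aluminum, kg/m^3
t = 0.5e-3          # assumed membrane thickness, m (0.5 mm, purely illustrative)

sigma = t * rho_al  # surface density, kg/m^2
print(f"sigma = {sigma:.3f} kg/m^2")  # sigma = 1.350 kg/m^2
```

Whatever thickness is chosen, the point stands: the surface density scales linearly with t, and only the product t * rho enters.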
4. Nov 15, 2004
rayjohn01
I assume you mean a spherical membrane, not circular -- the volume density then tells you the mass of the membrane -- thickness assumed at some value -- so you have the details; the rest is up to you -- I would not know how to solve this offhand. | 2018-07-16 22:36:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9225714206695557, "perplexity": 489.03079038125657}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589470.9/warc/CC-MAIN-20180716213101-20180716233101-00081.warc.gz"}
https://mathoverflow.net/questions/323513/optimal-control-problem-with-spike-source-and-split-state | # Optimal control problem with spike source and “split” state
For $$p \in \mathbb{R}$$, consider the following problem: $$\begin{cases} \operatorname{div}(a \nabla u ) = p\delta_{x_0} \quad \text{in } \Omega \\ u=0 \quad \text{on } \partial \Omega; \end{cases}$$ under the assumption that $$a \in L^\infty$$ is constant in some neighbourhood of $$x_0$$, i.e. $$a(\mathbf{x})= a_0 \text{Id}$$ for $$\mathbf{x} \in B=B_{r_0}(x_0)$$, $$a_0 \in \mathbb{R}$$, we can look for a solution in the form $$u(x) = \psi(x) + K(x-x_0),$$ where $$K(\cdot)$$ is the fundamental solution (up to the constants $$a_0,p$$) of the Laplace operator and $$\psi \in H^1(\Omega)$$ satisfies a classical, well-posed, Neumann problem with data depending on $$K|_{\Omega \setminus B}$$. Note that the solution $$u$$ is not quite regular globally, since it reads the singularity of $$K$$ at $$x_0$$.
Nevertheless, we can set up a control problem "away" from $$x_0$$ with the number $$p$$ as control and the quadratic tracking cost functional $$\min_{p} \left( \frac{1}{2} \| u(p) - u_{d} \|_{0, \Omega \setminus B}^2 + \frac{1}{2} |p|^2 \right),$$ for some desired state $$u_d \in L^2$$, $$u(p)$$ being the solution of the above problem (in the above sense!) corresponding to the control $$p$$.
I see some problems arising while trying to formulate go-to results like necessary optimality conditions: it is not clear what should be a suitable adjoint problem, since a weak formulation is only available for $$\psi=\psi_p$$, but the state $$u$$ also depends on $$K=K_p$$, making $$u(p)$$ not a trivial translation of $$\psi$$. Moreover, the choice of the $$L^2(\Omega \setminus B)$$ norm in the optimization was made to somehow regularize $$u$$, on the other hand:
• Is the control problem still meaningful, as we are trying - in principle - to approximate a global a priori chosen desired state taking into account only the behavior away from a fixed point?
• Working with integrals in $$\Omega \setminus B$$ rather than $$\Omega$$ gives rise to unwanted boundary terms in integrations by parts.
Are there any references for optimal control problems of this kind?
Note: I know that it is possible to set up a global weak formulation for this type of Dirac-source problems (see reference) using sharp functional analysis results on weighted spaces, but this is not known to be possible for larger classes of operators, like those I have to deal with in my research. Therefore, this is a model example and the "split" solution is most likely the only option.
Reference: Allendes, Alejandro, et al. "An a posteriori error analysis for an optimal control problem with point sources." ESAIM: Mathematical Modelling and Numerical Analysis 52.5 (2018): 1617-1650. | 2019-10-14 14:33:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 30, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9776353240013123, "perplexity": 233.31035754937648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986653247.25/warc/CC-MAIN-20191014124230-20191014151730-00119.warc.gz"} |
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=33&t=22881&p=67062 | ## Finding the Formal Charge
$FC=V-(L+\frac{S}{2})$
Kathleen Vidanes 1E
Posts: 62
Joined: Fri Sep 29, 2017 7:07 am
### Finding the Formal Charge
How do we know when to stop changing the numbers within the formula for formal charge in order to obtain the best representation of a certain molecule? For example, SO4^2-
Lily Sperling 1E
Posts: 49
Joined: Tue Oct 10, 2017 7:14 am
### Re: Finding the Formal Charge
In the case of SO4^2-, the total charge should add up to 2-. Since "S" has the lowest ionization energy, it is the central atom and should therefore have a formal charge of zero, which would leave two of the oxygens with a formal charge of 1-. Oxygen is much more likely to have a negative tendency so we would write the correct lewis structure with two double bonds to make this happen. Lowest formal charge and octet followed is ideal.
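As a worked check of the formula above (a sketch, not from the thread), applying FC = V - (L + S/2) to the structure Lily describes, with two S=O double bonds and two S-O single bonds:

```python
def formal_charge(valence, lone, shared):
    # FC = V - (L + S/2): valence electrons, lone-pair electrons, shared (bonding) electrons
    return valence - (lone + shared // 2)

print("S:", formal_charge(6, 0, 12))                # 0 (two double + two single bonds = 12 shared e-)
print("O, double-bonded:", formal_charge(6, 4, 4))  # 0
print("O, single-bonded:", formal_charge(6, 6, 2))  # -1
total = formal_charge(6, 0, 12) + 2 * formal_charge(6, 4, 4) + 2 * formal_charge(6, 6, 2)
print("sum:", total)                                # -2, matching the ion's 2- charge
```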
605011646
Posts: 14
Joined: Fri Sep 29, 2017 7:07 am
### Re: Finding the Formal Charge
why is it that the central atom in a molecule usually doesn't hold a formal charge well?
Jacquelyn Hill 1
Posts: 41
Joined: Sat Jul 22, 2017 3:01 am
### Re: Finding the Formal Charge
If a central atom has a formal charge, then the overall molecule will have an unbalanced charge and be more unstable than if it did not have a formal charge. | 2020-08-11 01:43:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6418213248252869, "perplexity": 1972.2352939704206}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738723.55/warc/CC-MAIN-20200810235513-20200811025513-00170.warc.gz"} |
http://2a4math.weebly.com/geometry/archives/08-2012 | 8/20/2012
## Question
Given that there are 6 people: p, q, r, s, t, and u, and they each start from one of the points A, B, C, D, E, F, G or H, such that no 2 or more people start from the same point, how should these people move such that they all end up at the same point, and:
a) Each person covers the same distance (don't have to be exactly same, just optimize the results to match this condition)
b) The sum of the distance covered is minimized
c) All the points must be visited by at least one person
## Solution
(see map: http://sdrv.ms/P4z7Ul)
We see that amongst the lines converging at D, AD is the longest, where AD = 11
If we try and find another destination (end point) whereby the longest line converging there is smaller than AD, we find it impossible:
1) A must be visited, so we look at the lines converging there
2) Find the lines shorter than AD: AC or AH
3) If we make C the destination, CG > AD. If we make H as the destination, BH > AD
So D should be the destination.
I know this doesn't really answer question (a) or (b), but it does help to make sure that we reduce the maximum walking distance for one person.
p.s. we can make the person starting at F go to E before going to D, so EF + ED < AD
8/19/2012
## Some Geometry Questions
Some friends have requested for some help for EOY questions in the HCI paper. Here are the solutions.
Q15)
a)
angle EAB = 65 (base angle of isosceles triangle)
angle AEF = angle EAB = 65 (alt. angles)
cos 65 = FE/EA
6 cos 65 = FE = roughly 2.54 (3sf)
therefore FE = roughly 2.54 (3sf)
b)
AF = roughly 5.44 (3sf)
sin angle D = sin 55 = 5.44/AD
5.44/sin 55 = AD
therefore AD = roughly 6.64 (3sf)
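A quick numeric check of both parts, added for verification, using Python's math module:

```python
import math

EA = 6.0
angle_AEF = math.radians(65)            # angle AEF = 65 degrees (alternate angles)

FE = EA * math.cos(angle_AEF)           # part (a)
AF = EA * math.sin(angle_AEF)
AD = AF / math.sin(math.radians(55))    # part (b): sin 55 = AF/AD

print(f"FE = {FE:.3f}")                 # FE = 2.536, i.e. roughly 2.54 (3sf)
print(f"AD = {AD:.3f}")                 # AD = 6.638, i.e. roughly 6.64 (3sf)
```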
Q16)
The question:
ABC is a right-angled triangle with angle ABC = 90 degrees. Given that AD = 10cm, DB = 6cm, BE = 8cm and EC = 10cm, find the shaded area.
See the solution below. We'll make use of menelaus theorem
## Geometry
8/19/2012
This is the page where we'll be posting all our geometry stuff so please stay tuned. :)
| 2018-11-12 22:17:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7906392812728882, "perplexity": 1803.239316413363}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741151.56/warc/CC-MAIN-20181112215517-20181113001517-00449.warc.gz"}
https://www.physicsforums.com/threads/definition-of-current-density-j.950657/ | # Definition of current density J?
I am confused by the definition of current density in Maxwell electrodynamics. Perhaps someone can help me out?
As I understand it, the current density function can be written as
$$\vec{J} = \rho \vec{v}_S$$
where ρ is the charge density function and v_S is the continuous source charge velocity function. What I am confused about is why there isn't another part involving the test charge (or detector, or observation point) velocity? For example
$$\vec{J} = \rho ( \vec{v}_S - \vec{v}_T)$$
where v_T is the test charge (or detector or observation point) velocity in the arbitrary coordinate system chosen.
If you have a test charge, and source charges, since you can't tell if the source charges are moving with a constant velocity versus the test charges moving in the opposite direction at constant velocity, it seems that the current density J should involve the difference of these two independently moving objects (test and sources). What am I missing?
Dale
Mentor
2021 Award
The velocity is defined with respect to some specified inertial frame, not with respect to a test charge. The fields can be (but do not need to be) defined with respect to hypothetical test charges, but the current density is not.
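To make the textbook definition concrete, a minimal sketch (with illustrative, roughly copper-like numbers) of computing J = ρ v_S in a chosen inertial frame:

```python
import numpy as np

n = 8.5e28                    # carrier number density, 1/m^3 (copper-like, illustrative)
q = -1.602e-19                # carrier charge, C (electrons)
rho = n * q                   # charge density of the mobile carriers, C/m^3
v_drift = np.array([1e-4, 0.0, 0.0])  # drift velocity in the chosen lab frame, m/s

J = rho * v_drift             # current density, A/m^2
print(J)                      # roughly [-1.36e6, 0, 0] A/m^2
```

Note that v_drift here is measured relative to the specified frame, not relative to any test charge.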
marcusl
No, I can't agree with that without a lot more info. Yes, velocity is defined with respect to some specified inertial frame. Why do you say that the current density does not need to be defined with respect to the test charge?
For example, consider a test charge "T" and a source charge "S" in the chosen inertial frame of reference. We take "T" to be our detector. Let us put T and S on the x-axis and any motion will be on the x-axis, for this example. If T is at rest and S is moving in the +x direction at a constant speed v_S, the detector T will record some amount of force over time. Now, reset the situation. Have S at rest and T move in the -x direction at a constant speed v_T = -v_S. Then the detector should record the same force over time as in the first case. So, with many S charges making up an (approximate) current density, we need to include both v_S and v_T.
For example, v_S and v_T could have the same value, both S and T moving in the same direction at the same speed. In this case, the detector should see no change in force over time, because S and T remain stationary with respect to one another, but not with respect to the frame of reference. If
$$\vec{J} = \rho \vec{v}_S$$
then this case would indicate that there is a current even when S is not moving with respect to T. But with the definition
$$\vec{J} = \rho (\vec{v}_S - \vec{v}_T) = \vec{0}$$
because there is no relative motion between S and T. No relative motion means no current. And the detector reads a constant value.
You need to include both T and S velocities because the physics says it doesn't matter which one is stationary and which one is in motion, because you can go to another inertial frame of reference in which both are in motion in the new inertial frame of reference.
Dale
Mentor
2021 Award
Why do you say that the current density does not need to be defined with respect to the test charge?
Why would you say that it does? Do you have any reference which defines current density using a test charge? The MIT text book does not, nor has any other text book I have seen.
Dale
Mentor
2021 Award
For example, consider a test charge "T" and a source charge "S" in the chosen inertial frame of reference. We take "T" to be our detector. Let us put T and S on the x-axis and any motion will be on the x-axis, for this example. If T is at rest and S is moving in the +x direction at a constant speed v_S, the detector T will record some amount of force over time.
The force on T is mediated by the fields at T’s position. The concept of a test charge can be introduced for the fields. It does not need to be introduced twice, and in fact doing so would be problematic as you would now have two sets of test charges, one for the fields and another for the sources, and with no particular requirement that they be the same.
No, the idea of test charges for defining current density is not only unnecessary for your above example, it is a fundamentally bad idea and inconsistent with the literature. | 2022-05-19 12:42:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8309748768806458, "perplexity": 266.76018990193273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662527626.15/warc/CC-MAIN-20220519105247-20220519135247-00386.warc.gz"} |
https://www.activeml.net/wp/?p=32&replytocom=114 | # Design of Experiments 101: Cross Validation
## What is an experiment?
An experiment is a procedure that you perform in order to validate (or to reject) your hypothesis.
Your hypothesis might be that the selection strategy, the classifier (regressor), or a smart combination of those that you developed performs better than others. Or maybe you just want to let your approaches in the wild (on your data) and assess the results.
For the sake of simplicity, let’s assume that you have a paradigm H (your hypothesis), a data set X, and a performance measure E (this is how you assess the performance of your approach numerically; e.g. classification accuracy).
The following approach works for supervised learning too, not just for active learning.
## A simple example
The main idea behind design of experiments is:
the design of the experiment is similar to a contest.
The Contest: Alice has a dataset consisting of 100 data points and wants to know if Bob or Carl is the better data scientist. So, she gives Bob and Carl 75 data points and asks each of them to provide the best model they can achieve. After that, Alice will compare both models on the 25 data points she held back.
The Optimization: Now, both data scientists try to find the best parameters for their model. They also split the data: 60 for training and 15 for validation. After training several models with different parameters on the 60 data points, each of them chooses the model which performed best on the remaining 15 data points.
The Comparison: Finally, Alice will evaluate the final models of both data scientists on her held-out data. Bob wins if his model performs better; otherwise Carl wins.
## Our terminology
In the following, we use these terms to describe the different kinds of subsets (see also wikipedia):
• Outer training set: the data Bob and Carl are given by Alice to find their best approach (75 data points)
• Outer test set (often: test or evaluation set): the data Alice held back to test Bob’s and Carl’s approach (25 data points)
• Inner training set (often: training set): the data Bob and Carl used to train a model with specific parameters of their approach (60 data points)
• Inner test set (often: validation set): the data Bob and Carl used to determine the best parameter set (15 data points)
## How can Bob and Carl do better (improve the generalization of their training procedure)?
So far, both data scientists just had one fixed training set (inner training set) and one validation set (inner test set). By chance it could happen that one validation set is particularly difficult for one parameter setting and easy for another. Hence, we should ensure that every instance has been used for testing.
In k-fold cross validation, the data given by Alice (75 data points) is split into $$k=5$$ folds. Hence, they have 5 subsets with 15 instances each. To predict the labels of the first fold, the data from folds 2, 3, 4, 5 is used for training. For the second fold, the algorithm is trained on folds 1, 3, 4, 5, etc. This methodology is much more robust and therefore leads to better results. Hence, it is more probable that the parameter setting which performed best actually is the best for the given data.
But now, one problem occurs. For the best parameter setting, each data scientist has 5 different models because of the k-fold cross validation. As Carl did not know what to do, he chose one at random. Bob had a better idea: he used the parameter setting he found to be best, and trained the model on all the data he was given.
## How can Alice do better?
Alice is faced with a similar situation as Bob and Carl. Maybe, someone just got lucky or the selection of training resp. test instances has been better for one of the competitors. Hence, Alice also performs k-fold cross validation (here $$k=4$$). Hence, Bob and Carl are asked to provide 4 different models and Alice checks if the results are consistent.
To be even more certain, she calculates only one performance value for one k-fold cross validation. Then she repeats the selection of instances multiple times to be certain that the results are not random.
## Summary: How do you split your data?
The main idea of cross validation is to prevent the model from having seen the test data during training. This means that test data has been used for neither training nor tuning. If we want to rank different algorithms with their best parameter settings, we need the two-staged cross validation. Hence, algorithm selection is the outer cross validation and, on each training set, we perform a separate inner cross validation. More details can be found in the wikipedia pages mentioned above.
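As a concrete sketch of this two-staged (nested) scheme, here is a minimal Python example with scikit-learn, mirroring the 4-fold outer / 5-fold inner split above; the dataset and parameter grid are placeholders:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, random_state=0)  # stand-in for Alice's data

# Inner loop (parameter tuning, Bob's job): 5-fold CV over a small grid.
inner_cv = KFold(n_splits=5, shuffle=True, random_state=0)
model = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=inner_cv)

# Outer loop (algorithm comparison, Alice's job): 4-fold CV. Each outer
# training set is re-tuned from scratch, so no test fold leaks into tuning.
outer_cv = KFold(n_splits=4, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=outer_cv)
print(scores.mean(), scores.std())
```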
If you are interested how to evaluate active learning algorithms, please see the paper:
Challenges of Reliable, Realistic and Comparable Active Learning Evaluation by Kottke, Calma et al.
| 2022-08-07 16:04:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2337902933359146, "perplexity": 3079.0234544971545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570651.49/warc/CC-MAIN-20220807150925-20220807180925-00304.warc.gz"}
https://stats.stackexchange.com/questions/406466/cox-proportional-hazard-analysis-with-non-uniform-samples-power-analysis | # Cox proportional hazard analysis with non-uniform samples; power analysis
We have a study involving 10,000 patients, 5,000 of them treated with drug A and 5,000 with drug B. We want to know if drug A is more effective than B. The median time to event (death) after treatment with drug A is 1,000 days and for drug B it is 2,000 days. Suppose we wish to subsample 100 patients receiving either treatment (200 total) from this larger sample to test the same hypothesis.
There are two proposed subsampling strategies:
1. We are given data (time to event) for 100 patients who received drug A and 100 patients who received drug B.
2. We are given data for 100 patients who died between 800-1200 days after treatment (some of them received A, some B, we pick randomly) and data for 100 patients who died between 1800-2200 days after treatment.
For the two cases above, which are the appropriate analytical tools to consider? Which has more power?
• Is data missing in scenario 2 (for patients who died between 1201-1799 days afeter treatment)? Then I think we have better power in scenario 1. May 14 '19 at 23:31
• The total number of samples in the two scenarios are equal. In the second scenario we pick samples at specific time periods. May 15 '19 at 0:38
• Just to confirm you don't actually pick the subjects based on their treatment in case 2, you rather mean to say that selection is not conditional on treatment assignment. May 15 '19 at 17:45
• Yes. In this model we don’t know the treatment before selecting patients. May 15 '19 at 21:11
These scenarios should be remarkably similar. In survival analysis, it is helpful to think about the cumulative observation time as informative regarding the precision and standard errors. For instance, in a sample where 200 die, if another 5,000 are observed who do not die, they can contribute to the precision of the estimates of HRs. Deaths remain the main (but not sole) driver of power. In Cox models this is made explicit by the risk set.
Considering that approximately half the sample lives past 1,500 days, it's perhaps not a coincidence that in scenario 2 you subsample in an arbitrary interval around the median survival for the two balanced groups (e.g. $$\pm 200$$). That does not mean that, on average, the number of person-days of observation will or will not be the same in the two scenarios. The more powerful sampling strategy will be the one which has the highest person-years of observation, provided all the appropriate modeling assumptions are met.
Suppose we consider person-days of observation for one treatment group. The actual distribution of event times will determine whether the subsample of events falling in a symmetric interval around the median actually comprises a greater survival time. As an example, consider survival times exponentially distributed.
set.seed(123)
surv <- rexp(5000)                          # simulated exponential survival times
i <- surv[abs(surv-median(surv)) < 0.1]     # scenario 2: events within 0.1 of the median
n <- length(i)
sum(i) ## PD-fu in scenario 2
mean(replicate(1000, sum(sample(surv, n)))) ## average PD-fu in scenario 1
gives
> sum(i) ## PD-fu in scenario 2
[1] 338.1843
> mean(replicate(1000, sum(sample(surv, n)))) ## average PD-fu in scenario 1
[1] 485.7077
so the unconditional sample includes more persons having longer event time follow-up, and hence would be slightly more powerful.
• Could you elaborate a bit about the follow up time metric that you used in your analysis. For example if on the third row of your code you change median with 2*median would that give better sampling?? May 15 '19 at 21:17
• @mghandi not necessarily. That was just an arbitrary decision on my part. I compare sampling all individuals within a range of the median vs sampling the same number unconditional on survival time. Sampling some individuals* in a larger range of the median lies somewhere between the two methods in terms of efficiency. May 16 '19 at 13:39
Scenario 2 can't in general be counted on to demonstrate a survival difference between drugs A and B.
For example:
Say that with drug A 1/3 of patients die before 800 days, 1/3 die between 800 and 1200 days (with half of those before and half after 1000 days), and 1/3 die between 1800 and 2200 days.
Say that with drug B 1/3 of patients die between 800 and 1200 days, 1/3 die between 1800 and 2200 days (with half of those before and half after 2000 days), and 1/3 die after 2200 days.
Then you have the specified median survivals of 1000 days for A and 2000 days for B. Median survival is clearly better for B.
Nevertheless, 1/3 of both groups die between 800 and 1200 days, and 1/3 of both groups die between 1800 and 2200 days. Sampling scenario 2 would show no difference in deaths between drugs A and B despite the longer median survival with drug B.
Yes, the above example might be extreme. You don't, however, want study design to depend on untestable assumptions about the shapes of survival curves, hidden assumptions that underlie the proposal of sampling scenario 2.
With scenario 1 you directly get information about the shape of the survival curves. That would be important for testing the proportional-hazards hypothesis that underlies the Cox regression proposed in an earlier version of this question. (Note that for a simple 2-treatment comparison you could use a log-rank test in scenario 1 and avoid such assumptions.)
Scenario 1 also gets your results faster. This calculator for survival studies with 2 equal groups at the start shows that, under your assumptions (200 patients in total with data, all starting at day 0, 1000-day median time to failure with drug A, a hazard ratio of 2) you will have 80% power to detect (at p < 0.05, log-rank test) a survival difference after 850 days of follow up. There will only be 46 deaths total at 850 days. You will barely have started collecting data under Scenario 2 at that time.
It's frankly hard to think of a situation in which scenario 2 would be preferable, although that might just indicate the limits of my imagination. For example, under your hypotheses with exponential survival curves, I calculate that 59% of the deaths between 800 and 1200 days would be with drug A, while 51% between 1800 and 2200 would still be with drug A. If you wanted, say, to compare those proportions between the 2 time intervals as a test of differences in survival, Russ Lenth's power calculator indicates that you would have less than 20% power to detect that difference in A/B proportions with only 100 deaths sampled within each time interval.
If you would like to explore different sampling times like sampling scenario 2, the following plot shows the fraction of all deaths that occur in patients who took drug B, as a function time into the study, within 400-day windows, with exponential survival times.
Remember that scenario 1 gets you useful results by 850 days. That takes advantage of the low proportion of drug B deaths at early times so it's easiest and fastest to see differences between A and B at early times. As the sampling time gets later the difference in deaths between drugs A and B in the types of time windows you propose gets smaller (making it harder to detect B/A differences) until they are about equal at 2000 days, after which increasingly more deaths are associated with drug B and B/A differences become easier to detect. Again, it's hard to see what that type of sampling approach would provide over the standard scenario, scenario 1.
• Thanks for the answer. What if the survival curves for A and B have a simple exponential form (with Poisson distribution for the time of event with median 1000 for A and 2000 for B).? May 15 '19 at 16:32 | 2021-09-19 17:10:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5221096277236938, "perplexity": 1244.9682260782802}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056892.13/warc/CC-MAIN-20210919160038-20210919190038-00062.warc.gz"} |
https://educationwithfun.com/course/view.php?id=19§ion=7 | ## Topic outline
• ### Like and Unlike Fractions
Like Fractions
All fractions with the same denominators (i.e. number written below the horizontal line) are called like fractions.
Example: 2/14, 3/14, 4/14, 5/14 are like fractions as all fractions have the same bottom number, i.e. denominator.
Unlike Fractions
All fractions with the different denominators (i.e. number written below the horizontal line) are called unlike fractions.
Example: 2/3, 5/14, 7/8, 9/13 are unlike fractions as all fractions have different bottom numbers, i.e. denominators.
Conversion of Unlike Fractions to Like Fractions
Step1. Find the LCM of the denominators of the given fractions.
Step2. Find the quotient by dividing the LCM by the denominator of each given fraction.
Step3. Multiply the numerator by the corresponding quotient.
Example: Convert 2/9 and 5/6 to like fractions.
Step 1: LCM of 9 and 6 = 2 x 3 x 3 = 18
2 | 9, 6
3 | 9, 3
3 | 3, 1
  | 1, 1
Now, we have to adjust the numerators to the LCM =18
Step 2:
Convert 2/9
Numerator to be multiplied by (18/9 = 2)
Changed numerator = 2 x 2 = 4
2/9 (old fraction) = 4/18 (new fraction)
Convert 5/6
Numerator to be multiplied by (18/6 = 3)
Changed numerator = 5 x 3 = 15
5/6 (old fraction) = 15/18 (new fraction)
Therefore; 4/18 and 15/18 are the required like fractions.
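The same conversion can be automated; a short Python sketch using math.lcm (available in Python 3.9+):

```python
from math import lcm

def to_like_fractions(fracs):
    """Convert a list of (numerator, denominator) pairs to like fractions."""
    common = lcm(*(d for _, d in fracs))                      # Step 1: LCM of denominators
    return [(n * (common // d), common) for n, d in fracs]    # Steps 2-3

print(to_like_fractions([(2, 9), (5, 6)]))   # [(4, 18), (15, 18)]
```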
Comparing Like Fractions
In like fractions, the fraction with the greater numerator is greater.
Example: Compare 4/5 and 3/5
Here numerators are 4 and 3.
And 4 > 3
Therefore, 4/5 > 3/5
Comparing Unlike Fractions
Step 1: Find the denominators of the fractions and find their LCM.
Step 2: Convert each given fraction to equivalent fraction with denominator equal to the LCM.
Step 3: Now compare the numerators of the equivalent fractions whose denominators are same.
Example: Compare 3/8 and 4/6.
Step 1: LCM of 8 and 6 = 2 x 2 x 2 x 3 = 24
2 | 8, 6
2 | 4, 3
2 | 2, 3
3 | 1, 3
  | 1, 1
Now, we have to adjust the numerators to the LCM =24
Step 2:
Convert 3/8
Numerator to be multiplied by (24/8 = 3)
Changed numerator = 3 x 3 = 9
3/8 (old fraction) = 9/24 (new fraction)
Convert 4/6
Numerator to be multiplied by (24/6 = 4)
Changed numerator = 4 x 4 = 16
4/6 (old fraction) = 16/24 (new fraction)
Therefore; 9/24 and 16/24 are the required like fractions.
Now compare numerators of the two like fractions
16 > 9
Therefore; 16/24 > 9/24
i.e. 4/6 > 3/8 | 2023-03-23 20:41:40 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9039182066917419, "perplexity": 2922.5759434409524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945183.40/warc/CC-MAIN-20230323194025-20230323224025-00108.warc.gz"} |
http://icpc.njust.edu.cn/Problem/Pku/1692/ | # Crossed Matchings
Time Limit: 1000MS
Memory Limit: 10000K
## Description
There are two rows of positive integer numbers. We can draw one line segment between any two equal numbers with value r, if one of them is located in the first row and the other one is located in the second row. We call this line segment an r-matching segment. The following figure shows a 3-matching and a 2-matching segment.
We want to find the maximum number of matching segments possible to draw for the given input, such that:
1. Each a-matching segment should cross exactly one b-matching segment, where a != b.
2. No two matching segments can be drawn from a number. For example, the following matchings are not allowed.
Write a program to compute the maximum number of matching segments for the input data. Note that this number is always even.
## Input
The first line of the input is the number M, which is the number of test cases (1 <= M <= 10). Each test case has three lines. The first line contains N1 and N2, the number of integers on the first and the second row respectively. The next line contains N1 integers which are the numbers on the first row. The third line contains N2 integers which are the numbers on the second row. All numbers are positive integers less than 100.
## Output
Output should have one separate line for each test case. The maximum number of matching segments for each test case should be written in one separate line.
## Sample Input
3
6 6
1 3 1 3 1 3
3 1 3 1 3 1
4 4
1 1 3 3
1 1 3 3
12 11
1 2 3 3 2 4 1 5 1 3 5 10
3 1 2 3 2 4 12 1 5 5 3
## Sample Output
6
0
8
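One standard dynamic-programming approach (a sketch, not an official solution): let dp[i][j] be the maximum number of segments using the first i numbers of row 1 and the first j of row 2; a crossing pair is added through the nearest earlier occurrence of b[j] in row 1 and of a[i] in row 2.

```python
def max_crossed_matchings(a, b):
    n1, n2 = len(a), len(b)
    dp = [[0] * (n2 + 1) for _ in range(n1 + 1)]  # dp[i][j]: best for a[:i], b[:j]
    for i in range(1, n1 + 1):
        for j in range(1, n2 + 1):
            best = max(dp[i - 1][j], dp[i][j - 1])
            if a[i - 1] != b[j - 1]:
                k = i - 1                          # nearest k < i with a[k] == b[j]
                while k >= 1 and a[k - 1] != b[j - 1]:
                    k -= 1
                l = j - 1                          # nearest l < j with b[l] == a[i]
                while l >= 1 and b[l - 1] != a[i - 1]:
                    l -= 1
                if k >= 1 and l >= 1:              # add the two crossing segments
                    best = max(best, dp[k - 1][l - 1] + 2)
            dp[i][j] = best
    return dp[n1][n2]

print(max_crossed_matchings([1, 3, 1, 3, 1, 3], [3, 1, 3, 1, 3, 1]))  # 6
print(max_crossed_matchings([1, 1, 3, 3], [1, 1, 3, 3]))              # 0
```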
Tehran 1999 | 2020-10-24 20:10:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4221893846988678, "perplexity": 262.76083278432213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107884755.46/warc/CC-MAIN-20201024194049-20201024224049-00500.warc.gz"} |
https://optimization-online.org/2010/01/2523/ | Copositivity detection by difference-of-convex decomposition and omega-subdivision
We present three new copositivity tests based upon difference-of-convex (d.c.) decompositions, and combine them into a branch-and-bound algorithm of $\omega$-subdivision type. The tests employ LP or convex QP techniques, but can also be used heuristically with appropriate test points. We also discuss the selection of efficient d.c. decompositions and propose some preprocessing ideas based on the spectral d.c. decomposition. We report on first numerical experience with this procedure, which is very promising.
Citation
AM Preprint Series No. 333, Univ. Erlangen-Nuremberg; and Technical Report 2011-05, ISOR, Univ. Wien. To appear in: Math. Programming A | 2023-02-05 08:09:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19700780510902405, "perplexity": 4109.866123983524}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500250.51/warc/CC-MAIN-20230205063441-20230205093441-00825.warc.gz"} |
https://webwork.maa.org/moodle/mod/forum/discuss.php?d=181 | WeBWorK Main Forum
by Lars Jensen -
Number of replies: 8
Hi,
The login timeout in webwork seems to be quite short - I was testing a gateway quiz, and had it configured to allow 60 minutes to take the quiz. I also had it configured so all problems appear on a single page (I haven't tested whether this actually makes any difference). After about 39 minutes I clicked the "Grade" button, and up popped the webwork login screen. I got nervous that I'd lost everything, and I'm sure some students would panic too. After I logged in, however, it turned out that the quiz had been properly submitted.
Where is the login timeout defined? (The only thing on timeout I could find in global.conf had to do with ldap, which we're not using.) I'd like to change the login timeout - 2 hours seems like a more reasonable period.
Thanks,
Lars.
by Gavin LaRose -
Hi Lars,
The variable is defined in the global.conf file: it's the $sessionKeyTimeout variable, around line 720.
Gavin
by Miguel-Angel Manrique -
In case it is helpful to someone working with the modern WeBWorK -- now the variable $sessionKeyTimeout can be found in defaults.config in the folder /opt/webwork/webwork2/conf/
by Sean Fitzpatrick -
I'd like to resurrect this, since more people are probably looking at doing WeBWorK exams as part of their remote teaching.
The advice from Miguel-Angel works. But defaults.config is typically not a file one wants to edit: doing so will cause problems when updating via git.
Two questions on how to do this "better":
Is there a line we can add to localOverrides.conf or elsewhere (one of the files we're meant to edit)?
And can we adjust this on a course by course basis?
by Danny Glin -
One should not edit defaults.config ever.
If you want to change any of the default settings server-wide, you can put the same command in localOverrides.conf. In this case that would mean adding the following line to localOverrides.conf:
$sessionKeyTimeout = 7200;
Note that the time is in seconds, so adjust accordingly.
To change a setting for a single course, you can add the command to the course.conf file in that course's directory.
Note that some configuration options (including this one) can be set from within the WeBWorK interface by clicking on "Course Configuration".
These are the four places for WeBWorK configuration, in order of precedence (where the later ones override earlier ones):
defaults.config - Do not edit this file. It will cause problems when you upgrade
localOverrides.conf - For global changes to any of the settings in defaults.config
course.conf - Configuration specific to a course not available through the web interface
simple.conf - Don't edit this file. It is created by the "Course Configuration" page in the web interface
by Ever Barbero -
What is the full path to localOverrides.conf ?
by Ever Barbero -
/opt/webwork/webwork2/conf/
by Glenn Rice -
You should not edit defaults.config, as it will cause problems when updating via git as you said. Anything that is set in defaults.config can be set in localOverrides.conf, and that is where it should be done. That will override anything in defaults.config.
Many of these settings can also be set on a course by course basis by editing the course.conf file in the course directory. Of course that still requires admin access (not really though). | 2021-04-21 08:17:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2376752346754074, "perplexity": 3046.7705552542757}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039526421.82/warc/CC-MAIN-20210421065303-20210421095303-00359.warc.gz"} |
https://unapologetic.wordpress.com/2012/02/page/2/ | # The Unapologetic Mathematician
## The Electromagnetic Wave Equations
Maxwell’s equations give us a collection of differential equations to describe the behavior of the electric and magnetic fields. Juggling them, we can come up with other differential equations that give us more insight into how these fields interact. And, in particular, we come up with a familiar equation that describes waves.
Specifically, let’s consider Maxwell’s equations in a vacuum, where there are no charges and no currents:
\displaystyle\begin{aligned}\nabla\cdot E&=0\\\nabla\times E&=-\frac{\partial B}{\partial t}\\\nabla\cdot B&=0\\\nabla\times B&=\epsilon_0\mu_0\frac{\partial E}{\partial t}\end{aligned}
Now let’s take the curl of both of the curl equations:
\displaystyle\begin{aligned}\nabla\times(\nabla\times E)&=-\frac{\partial}{\partial t}(\nabla\times B)\\&=-\frac{\partial}{\partial t}\left(\epsilon_0\mu_0\frac{\partial E}{\partial t}\right)\\&=-\epsilon_0\mu_0\frac{\partial^2 E}{\partial t^2}\\\nabla\times(\nabla\times B)&=\epsilon_0\mu_0\frac{\partial}{\partial t}(\nabla\times E)\\&=\epsilon_0\mu_0\frac{\partial}{\partial t}\left(-\frac{\partial B}{\partial t}\right)\\&=-\epsilon_0\mu_0\frac{\partial^2 B}{\partial t^2}\end{aligned}
We also have an identity for the double curl:
$\displaystyle\nabla\times(\nabla\times F)=\nabla(\nabla\cdot F)-\nabla^2F$
But for both of our fields we have $\nabla\cdot F=0$, meaning we can rewrite our equations as
\displaystyle\begin{aligned}\frac{\partial^2 E}{\partial t^2}-\frac{1}{\epsilon_0\mu_0}\nabla^2E&=0\\\frac{\partial^2 B}{\partial t^2}-\frac{1}{\epsilon_0\mu_0}\nabla^2B&=0\end{aligned}
which are the wave equations we were looking for.
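These wave equations propagate at speed $1/\sqrt{\epsilon_0\mu_0}$, which is exactly the speed of light; a quick numerical check in Python, using scipy's physical constants:

```python
from scipy.constants import epsilon_0, mu_0

c = (epsilon_0 * mu_0) ** -0.5
print(c)   # about 2.998e8 m/s, the speed of light
```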
February 7, 2012
## Deriving Physics from Maxwell’s Equations
It’s important to note at this point that we didn’t have to start with our experimentally-justified axioms. Maxwell’s equations suffice to derive all the physics we need.
In the case of Faraday’s law, we’re already done, since it’s exactly the third of Maxwell’s equations in integral form. So far, so good.
Coulomb’s law is almost as simple. If we have a point charge $q$ it makes sense that it generate a spherically symmetric, radial electric field. Given this assumption, we just need to calculate its magnitude at the radius $r$. To do this, set up a sphere of that radius around the point; Gauss’ law in integral form tells us that the flow of $E$ out through this sphere is the total charge $q$ inside. But it’s easy to calculate the integral, getting
$\displaystyle4\pi r^2\lvert E(\lvert r\rvert)\rvert=\frac{q}{\epsilon_0}$
or
$\displaystyle\lvert E(\lvert r\rvert)\rvert=\frac{1}{4\pi\epsilon_0}\frac{q}{r^2}$
which is the magnitude given by Coulomb’s law.
To get the Biot-Savart law, we can use Ampère's law to calculate the magnetic field around an infinitely long straight current $I$. We again argue on geometric grounds that the magnitude of the magnetic field should only depend on the distance from the current and should point directly around the current. If we set up a circle of radius $r$, then the total circulation around the circle is, by Ampère's law:
$\displaystyle2\pi r\lvert B(\lvert r\rvert)\rvert=\mu_0I$
or
$\displaystyle\lvert B(\lvert r\rvert)\rvert=\frac{\mu_0}{2\pi}\frac{I}{r}$
Now, we can compare this to the last time we computed the magnetic field of the straight infinite current by integrating the Biot-Savart law directly and got essentially the same answer.
Finally, we can derive conservation of charge from Ampère’s law, with Maxwell’s correction by taking its divergence:
$\displaystyle\nabla\cdot(\nabla\times B)=\mu_0\nabla\cdot J+\epsilon_0\mu_0\frac{\partial}{\partial t}(\nabla\cdot E)$
The quantity on the left is the divergence of a curl, so it automatically vanishes. Meanwhile, Gauss' law tells us that $\epsilon_0\nabla\cdot E=\rho$, so we conclude
$\displaystyle0=\mu_0\left(\nabla\cdot J+\frac{\partial\rho}{\partial t}\right)$
or
$\displaystyle\nabla\cdot J+\frac{\partial\rho}{\partial t}=0$
which is the “continuity equation” expressing the conservation of charge.
The importance is that while we originally derived Maxwell’s equations from four experimentally-justified laws, those laws are themselves essentially derivable from Maxwell’s equations. Thus any reformulation of Maxwell’s equations is just as sufficient a basis for all of electromagnetism as our original physical axioms.
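The key vanishing step above, that the divergence of a curl is identically zero, can also be checked symbolically; a sketch with sympy.vector:

```python
from sympy import Function, simplify
from sympy.vector import CoordSys3D, curl, divergence

N = CoordSys3D('N')
# A generic smooth vector field with arbitrary component functions.
Bx = Function('Bx')(N.x, N.y, N.z)
By = Function('By')(N.x, N.y, N.z)
Bz = Function('Bz')(N.x, N.y, N.z)
B = Bx * N.i + By * N.j + Bz * N.k

print(simplify(divergence(curl(B))))   # 0, because mixed partials commute
```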
February 3, 2012
## Maxwell’s Equations (Integral Form)
It is sometimes easier to understand Maxwell’s equations in their integral form; the version we outlined last time is the differential form.
For Gauss’ law and Gauss’ law for magnetism, we’ve actually already done this. First, we write them in differential form:
\displaystyle\begin{aligned}\nabla\cdot E&=\frac{1}{\epsilon_0}\rho\\\nabla\cdot B&=0\end{aligned}
We pick any region $V$ we want and integrate both sides of each equation over that region:
\displaystyle\begin{aligned}\int\limits_V\nabla\cdot E\,dV&=\int\limits_V\frac{1}{\epsilon_0}\rho\,dV\\\int\limits_V\nabla\cdot B\,dV&=\int\limits_V0\,dV\end{aligned}
On the left-hand sides we can use the divergence theorem, while the right sides can simply be evaluated:
\displaystyle\begin{aligned}\int\limits_{\partial V}E\cdot dS&=\frac{1}{\epsilon_0}Q(V)\\\int\limits_{\partial V}B\cdot dS&=0\end{aligned}
where $Q(V)$ is the total charge contained within the region $V$. Gauss’ law tells us that the flux of the electric field out through a closed surface is (basically) equal to the charge contained inside the surface, while Gauss’ law for magnetism tells us that there is no such thing as a magnetic charge.
Faraday’s law was basically given to us in integral form, but we can get it back from the differential form:
$\displaystyle\nabla\times E=-\frac{\partial B}{\partial t}$
We pick any surface $S$ and integrate the flux of both sides through it:
$\displaystyle\int\limits_S\nabla\times E\cdot dS=\int\limits_S-\frac{\partial B}{\partial t}\cdot dS$
On the left we can use Stokes’ theorem, while on the right we can pull the derivative outside the integral:
$\displaystyle\int\limits_{\partial S}E\cdot dr=-\frac{\partial}{\partial t}\Phi_S(B)$
where $\Phi_S(B)$ is the flux of the magnetic field $B$ through the surface $S$. Faraday's law tells us that a changing magnetic field induces an electromotive force around a circuit.
A similar analysis helps with Ampère’s law:
$\displaystyle\nabla\times B=\mu_0J+\epsilon_0\mu_0\frac{\partial E}{\partial t}$
We pick a surface and integrate:
$\displaystyle\int\limits_S\nabla\times B\cdot dS=\int\limits_S\mu_0J\cdot dS+\int\limits_S\epsilon_0\mu_0\frac{\partial E}{\partial t}\cdot dS$
Then we simplify each side.
$\displaystyle\int\limits_{\partial S}B\cdot dr=\mu_0I_S+\epsilon_0\mu_0\frac{\partial}{\partial t}\Phi_S(E)$
where $\Phi_S(E)$ is the flux of the electric field $E$ through the surface $S$, and $I_S$ is the total current flowing through the surface $S$. Ampère’s law tells us that a flowing current induces a magnetic field around the current, and Maxwell’s correction tells us that a changing electric field behaves just like a current made of moving charges.
We collect these together into the integral form of Maxwell’s equations:
\displaystyle\begin{aligned}\int\limits_{\partial V}E\cdot dS&=\frac{1}{\epsilon_0}Q(V)\\\int\limits_{\partial V}B\cdot dS&=0\\\int\limits_{\partial S}E\cdot dr&=-\frac{\partial}{\partial t}\Phi_S(B)\\\int\limits_{\partial S}B\cdot dr&=\mu_0I_S+\epsilon_0\mu_0\frac{\partial}{\partial t}\Phi_S(E)\end{aligned}
February 2, 2012
## Maxwell’s Equations
Okay, let’s see where we are. There is such a thing as charge, and there is such a thing as current, which often — but not always — arises from charges moving around.
We will write our charge distribution as a function $\rho$ and our current distribution as a vector-valued function $J$, though these are not always “functions” in the usual sense. Often they will be “distributions” like the Dirac delta; we haven’t really gotten into their formal properties, but this shouldn’t cause us too much trouble since most of the time we’ll use them — like we’ve used the delta — to restrict integrals to smaller spaces.
Anyway, charge and current are “conserved”, in that they obey the conservation law:
$\displaystyle\nabla\cdot J=-\frac{\partial\rho}{\partial t}$
which states that the amount of current “flowing out of a point” is the rate at which the charge at that point is decreasing. This is justified by experiment.
Coulomb’s law says that electric charges give rise to an electric field. Given the charge distribution $\rho$ we have the differential contribution to the electric field at the point $r$:
$\displaystyle dE(r)=\frac{1}{4\pi\epsilon_0}\rho\frac{r}{\lvert r\rvert^3}dV$
and we get the whole electric field by integrating this over the charge distribution. This, again, is justified by experiment.
The Biot-Savart law says that electric currents give rise to a magnetic field. Given the current distribution $J$ we have the differential contribution to the magnetic field at the point $r$:
$\displaystyle dB(r)=\frac{\mu_0}{4\pi}J\times\frac{r}{\lvert r\rvert^3}dV$
which again we integrate over the current distribution to calculate the full magnetic field at $r$. This, again, is justified by experiment.
The electric and magnetic fields give rise to a force by the Lorentz force law. If a test particle of charge $q$ is moving at velocity $v$ through electric and magnetic fields $E$ and $B$, it feels a force of
$\displaystyle F=q(E+v\times B)$
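To see the force law at work (a minimal sketch with made-up parameters, and using Newton’s second law $F=ma$, which we haven’t stated explicitly), we can integrate the motion of a charge in a uniform magnetic field and check that it circles with radius $m\lvert v\rvert/q\lvert B\rvert$:

```python
import numpy as np

q, m = 1.0, 1.0                # made-up charge and mass
E = np.array([0.0, 0.0, 0.0])  # no electric field
B = np.array([0.0, 0.0, 2.0])  # uniform magnetic field along z

def accel(v):
    # Lorentz force law combined with Newton's second law: a = (q/m)(E + v x B)
    return (q / m) * (E + np.cross(v, B))

# Classical RK4 for the coupled system x' = v, v' = accel(v).
x = np.array([0.0, 0.0, 0.0])
v = np.array([1.0, 0.0, 0.0])
dt, steps = 1e-3, 10000
xs = []
for _ in range(steps):
    k1v, k1x = accel(v), v
    k2v, k2x = accel(v + dt / 2 * k1v), v + dt / 2 * k1v
    k3v, k3x = accel(v + dt / 2 * k2v), v + dt / 2 * k2v
    k4v, k4x = accel(v + dt * k3v), v + dt * k3v
    x = x + dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
    v = v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    xs.append(x)

xs = np.array(xs)
radius = (xs[:, 0].max() - xs[:, 0].min()) / 2
print(radius, m * 1.0 / (q * 2.0))  # both ~0.5: the orbit radius m|v|/(q|B|)
```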
But we don’t work explicitly with force as much as we do with the fields. We do have an analogue for work, though — electromotive force:
$\displaystyle\mathcal{E}=-\int\limits_CE\cdot dr$
One unexpected source of electromotive force comes from our fourth and final experimentally-justified axiom: Faraday’s law of induction
$\displaystyle\mathcal{E}=\frac{\partial}{\partial t}\int\limits_\Sigma B\cdot dS$
This says that the electromotive force around a circuit is equal to the rate of change of magnetic flux through any surface bounded by the circuit.
Using these four experimental results and definitions, we can derive Maxwell’s equations:
\displaystyle\begin{aligned}\nabla\cdot E&=\frac{1}{\epsilon_0}\rho\\\nabla\cdot B&=0\\\nabla\times E&=-\frac{\partial B}{\partial t}\\\nabla\times B&=\mu_0J+\epsilon_0\mu_0\frac{\partial E}{\partial t}\end{aligned}
The first is Gauss’ law and the second is Gauss’ law for magnetism. The third is directly equivalent to Faraday’s law of induction, while the last is Ampère’s law, with Maxwell’s correction.
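As an exercise in using the equations (a symbolic sketch, assuming the standard vacuum plane-wave ansatz), we can check that a wave traveling at speed $c=1/\sqrt{\epsilon_0\mu_0}$ satisfies all four of them with $\rho=0$ and $J=0$:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
eps0, mu0, E0, k = sp.symbols('epsilon_0 mu_0 E_0 k', positive=True)
c = 1 / sp.sqrt(eps0 * mu0)
w = c * k  # dispersion relation: omega = c k

# A plane wave moving in +z: E along x, B along y, with |B| = |E| / c.
E = sp.Matrix([E0 * sp.cos(k * z - w * t), 0, 0])
B = sp.Matrix([0, (E0 / c) * sp.cos(k * z - w * t), 0])

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

# With rho = 0 and J = 0, all four equations reduce to zero:
print(sp.simplify(div(E)))                                # Gauss' law
print(sp.simplify(div(B)))                                # Gauss' law for magnetism
print(sp.simplify(curl(E) + sp.diff(B, t)))               # Faraday's law
print(sp.simplify(curl(B) - eps0 * mu0 * sp.diff(E, t)))  # Ampere's law
```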
February 1, 2012
## Conservation of Charge
When we worked out Ampère’s law in the case of magnetostatics, we used a certain identity:
$\displaystyle\nabla\cdot J+\frac{\partial\rho}{\partial t}=0$
which we often write as
$\displaystyle\frac{\partial\rho}{\partial t}=-\nabla\cdot J$
That is, the rate at which the charge at a point is increasing is the negative of the divergence of the current at that point, which measures how much current is “flowing out” from that point. This may be clearer if we integrate this equation over some macroscopic region $V$:
\displaystyle\begin{aligned}\frac{\partial}{\partial t}\int\limits_V\rho\,dV&=\int\limits_V\frac{\partial}{\partial t}\rho\,dV\\&=\int\limits_V-\nabla\cdot J\,dV\\&=-\int\limits_{\partial V}J\cdot dA\\&=\int\limits_{-\partial V}J\cdot dA\end{aligned}
The rate of change of the total amount of the charge within $V$ is equal to the amount of current flowing inwards across the boundary of $V$, so this flow of current is the only way that the charge in a region can change. This is another physical law, borne out by experiment, and we take it as another axiom.
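As a toy example (a sketch, with $f$ an arbitrary hypothetical profile), consider a one-dimensional lump of charge drifting rigidly at speed $v$, so that $\rho(x,t)=f(x-vt)$ and $J=v\rho$; the conservation law then holds identically:

```python
import sympy as sp

x, t, v = sp.symbols('x t v')
f = sp.Function('f')  # an arbitrary charge profile

rho = f(x - v * t)    # a rigidly drifting charge distribution
J = v * rho           # the current it carries

# d(rho)/dt + dJ/dx = -v f' + v f' = 0
print(sp.simplify(sp.diff(rho, t) + sp.diff(J, x)))  # 0
```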
But we might note something interesting if we couple this with Gauss’ law:
$\displaystyle0=\nabla\cdot J+\frac{\partial\rho}{\partial t}=\nabla\cdot J+\epsilon_0\frac{\partial}{\partial t}(\nabla\cdot E)$
Or, to put it slightly differently:
$\displaystyle\nabla\cdot\left(J+\epsilon_0\frac{\partial E}{\partial t}\right)=0$
Recall that in deriving Ampère’s law we had to assume that $J$ was divergence-free; when things are not static, the above equation shows that the composite quantity
$\displaystyle J+\epsilon_0\frac{\partial E}{\partial t}$
is always divergence-free. The derivative term isn’t associated with any electric charge moving around, and yet it still behaves like a current for all intents and purposes. We call it the “displacement current”, and we add it into Ampère’s law to see how things work without the magnetostatic assumption:
$\displaystyle\nabla\times B=\mu_0J+\epsilon_0\mu_0\frac{\partial E}{\partial t}$
This additional term is known as Maxwell’s correction to Ampère’s law.
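The classic illustration (a standard textbook example, sketched here for concreteness) is a charging parallel-plate capacitor: no charge actually crosses the gap between the plates, but the field there is $E=\frac{Q}{\epsilon_0A}$ for plates of area $A$ carrying charge $Q$, so the displacement current through the gap is

$\displaystyle\epsilon_0\frac{\partial E}{\partial t}A=\epsilon_0\frac{\partial}{\partial t}\left(\frac{Q}{\epsilon_0A}\right)A=\frac{dQ}{dt}=I$

which exactly matches the conduction current $I$ flowing in the wires, so the corrected Ampère’s law gives a consistent answer no matter which surface we use to span a loop around the wire.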
February 1, 2012
https://www.physicsforums.com/threads/how-to-deduct-the-gradient-in-spherical-coordinates.703515/

# How to derive the gradient in spherical coordinates?
1. Jul 30, 2013
### igorronaldo
2. Jul 30, 2013
### vanhees71
The one given in Wikipedia is correct, or what is your question?
3. Jul 30, 2013
### yungman
Should be this one:
$$\nabla f(r, \theta, \phi) = \frac{\partial f}{\partial r}\mathbf{e}_r+ \frac{1}{r}\frac{\partial f}{\partial \theta}\mathbf{e}_\theta+ \frac{1}{r \sin\theta}\frac{\partial f}{\partial \phi}\mathbf{e}_\phi$$
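A quick SymPy sketch (not from the thread) that checks this formula: assemble the claimed gradient in Cartesian components and verify that $\mathrm{d}f=\nabla f\cdot\mathrm{d}\vec{r}$ holds coordinate by coordinate.

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
f = sp.Function('f')(r, th, ph)

# Position vector in Cartesian components.
pos = sp.Matrix([r * sp.sin(th) * sp.cos(ph),
                 r * sp.sin(th) * sp.sin(ph),
                 r * sp.cos(th)])

# Normalized spherical basis vectors.
e_r  = pos.diff(r)
e_th = pos.diff(th) / r
e_ph = pos.diff(ph) / (r * sp.sin(th))

# The claimed gradient, assembled in Cartesian components.
grad = (f.diff(r) * e_r
        + f.diff(th) / r * e_th
        + f.diff(ph) / (r * sp.sin(th)) * e_ph)

# df = grad(f) . dr must hold for each coordinate direction:
for u in (r, th, ph):
    print(sp.simplify(grad.dot(pos.diff(u)) - f.diff(u)))  # 0, 0, 0
```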
4. Jul 30, 2013
### vanhees71
That's correct. Maybe you want to know, how to derive it?
The point is to write
$$\mathrm{d} \phi=\mathrm{d} \vec{r} \cdot \vec{\nabla} \phi=\mathrm{d} r \partial_r \phi + \mathrm{d} \vartheta \partial_{\vartheta} \phi + \mathrm{d} \varphi \partial_{\varphi}\phi$$
in terms of the normalized coordinate basis $(\vec{e}_r,\vec{e}_{\vartheta},\vec{e}_{\varphi})$. The term from the variation of $r$ is
$$\mathrm{d} r \frac{\partial \vec{r}}{\partial r} \cdot \vec{\nabla} \phi=\mathrm{d} r \vec{e}_r \cdot \vec{\nabla} \phi.$$
Comparing the coefficients of $\mathrm{d} r$ gives
$$\vec{e}_r \cdot \vec{\nabla} \phi=\partial_r \phi.$$
For the $\vartheta$ component
$$\mathrm{d} \vartheta \frac{\partial \vec{r}}{\partial \vartheta} \cdot \vec{\nabla} \phi = \mathrm{d} \vartheta \, r \, \vec{e}_{\vartheta} \cdot \vec{\nabla} \phi, \;\Rightarrow \; \vec{e}_{\vartheta} \cdot \vec{\nabla} \phi=\frac{1}{r} \partial_{\vartheta} \phi,$$
and for $\mathrm{d} \varphi$
$$\mathrm{d} \varphi \frac{\partial \vec{r}}{\partial \varphi} \cdot \vec{\nabla} \phi = \mathrm{d} \varphi \, r \sin \vartheta \, \vec{e}_{\varphi} \cdot \vec{\nabla} \phi, \; \Rightarrow \; \vec{e}_{\varphi} \cdot \vec{\nabla} \phi=\frac{1}{r \sin \vartheta} \partial_{\varphi} \phi.$$
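The scale factors used in this derivation can also be checked with a few lines of SymPy (same spherical parametrization as the sketch above):

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
pos = sp.Matrix([r * sp.sin(th) * sp.cos(ph),
                 r * sp.sin(th) * sp.sin(ph),
                 r * sp.cos(th)])

# Squared norms of the tangent vectors come out as 1, r**2, r**2*sin(theta)**2,
# i.e. |d(pos)/dr| = 1, |d(pos)/dtheta| = r, |d(pos)/dphi| = r sin(theta).
for u in (r, th, ph):
    print(sp.simplify(pos.diff(u).dot(pos.diff(u))))
```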
5. Jul 31, 2013
### igorronaldo
Now clear, thanks.