| url | text | date | metadata |
|---|---|---|---|
http://www.cukrovavata.info/orcnhs/article.php?9e251b=elementary-backstroke-steps
|
Backstroke can initially seem like a daunting stroke to learn, especially if you are new to swimming, but don't let that deter you. The elementary backstroke is a relatively relaxed and easy stroke: it is a slower stroke that does not use up much energy, sometimes called the relaxation stroke. Because your face is out of the water the entire time, you don't have to worry about the timing or frequency of your breathing, which can be one of the more complicated parts of other strokes. Sometimes used as a recovery or rescue stroke, the elementary backstroke helps you relax, and relaxing helps you float. It is a great relaxing stroke, but not a racing stroke.
The elementary backstroke was performed in competition for one of the first times at the 1900 Paris Olympics, and it was used in the 1900 and 1908 Olympics. However, it never became a mainstay of competition: after 1908 the back crawl supplanted it as the competitive back swim, and the back crawl is now what is referred to as the backstroke. Today the elementary backstroke is not a competitive stroke; it is meant to give the swimmer time to relax and gain confidence while still covering significant water. Many swimmers enjoy it because it requires no extreme effort, it can be done at one's own pace, and it lets them move through the water while keeping the face relatively dry, making it well suited to anyone who wants to swim without submerging the face.
To swim the elementary backstroke, float flat on your back, then raise your arms and legs and squeeze them down for a relaxing glide:
• STEP 1: Float on your back, with your arms and legs at your side.
• STEP 2: Draw your knees up toward your chest.
Both arms move in a synchronized pattern with a whip kick: they begin out like an airplane, then go beside the body like a soldier, then run up the sides and back out to the airplane position. The kick is the same as the breaststroke kick, although when it comes to coordinating arm and leg movements the two strokes differ greatly. (A related variation, the inverted butterfly, is similar to the elementary backstroke, but with a …) Since you are on your back, you will not be able to see whether you are near the wall, so be careful not to hit your head on the side of the pool and always know where you are in the water. Make sure you take it slow and think about each step; that way, when you get to the next step, it won't feel like you have to think about a million things at once. Don't get discouraged if it takes you a while to get the hang of things, and as you advance, more steps will be added to your foundation.
Jeff Pease, a longtime swim coach who is in charge of North Coast Aquatics in Carlsbad, Calif., has a detailed plan for how he would teach the backstroke to first-timers, with a teaching progression covering the kick and then the arm action. To teach the kick, he starts on land with the swimmer on a mat, one or two pull buoys under her head and her arms by her sides. He asks her to engage her core and then kick with straight legs, from the hips, with the toes long and the feet slightly turned in and pointed. Standing swimmers engage the core, tuck the hips under, and make sure the ribs are pressed in; the shoulders will be slightly rounded. When improving your backstroke, aim to keep your body position as flat as you can for streamlining, with a slight slope down to the hips to keep the leg action underwater, and don't let your hips drop too low, as this will slow you down. Roll smoothly from side to side with your hips and shoulders united, and feel the movement of both arms initiated at once by a single action of the core. A useful drill is to swim backstroke with a cup on your forehead, increasing your stroke and kick rate while maintaining a stable, neutral head position; if your legs come out of the water, you are going too fast and losing your glide.
Because the elementary backstroke is so relaxed, you can burn extra calories by amping up the intensity as much as you can: work hard at the stroke, move quickly, and kick with power. You can also increase the amount of time you spend doing the elementary backstroke in the water each session.
For competitive backstroke, freestyle and backstroke have the same turn. Before attempting the turn, figure out the number of strokes it takes to get from the five-meter backstroke flags to the wall; this part is crucial, as you will be counting strokes. A fast backstroke turn helps you change direction and start a new lap quickly and effectively, continue training without a stop-start approach, and reach a more advanced level of proficiency. The backstroke is also the only one of the four swimming strokes where the athlete starts from in the water: backstrokers hop in when the referee blows the first whistle, grab a bar on the starting block, and plant their feet on the wall, usually covered with an electronic timing pad. Backstroke starts are tricky and can be among the most painful and embarrassing things to learn, and an incorrect start may result in a horrible back-flop, so follow step-by-step directions and take your time.
|
2021-05-14 13:30:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3202511966228485, "perplexity": 1540.3357405362678}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989526.42/warc/CC-MAIN-20210514121902-20210514151902-00270.warc.gz"}
|
https://math.stackexchange.com/questions/2849293/derivative-of-sqrtx-using-symmetric-derivative-formula
|
# Derivative of $\sqrt{x}$ Using Symmetric Derivative Formula [closed]
How do you find the derivative of $\sqrt{x}$ using the symmetric derivative formula?
$$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x-h)}{2h}.$$
I got stuck on trying to remove the h from the denominator.
## closed as off-topic by Xander Henderson, user99914, Did, Namaste, Nils Matthes Jul 13 '18 at 13:37
This question appears to be off-topic. The users who voted to close gave this specific reason:
• "This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level." – Xander Henderson, Community, Did, Nils Matthes
If this question can be reworded to fit the rules in the help center, please edit the question.
• You can find it here math.stackexchange.com/a/164713/568718 – tien lee Jul 13 '18 at 3:55
• I am unfamiliar with the "symmetric derivative formula." Perhaps you could improve your question by adding a definition, and letting us know what you have tried and where you are stuck? – Xander Henderson Jul 13 '18 at 3:59
• @tienlee the link is one-sided, OP needs two-sided – gt6989b Jul 13 '18 at 4:00
• $\sqrt{x}$ is differentiable, so the symmetric derivative is the same as the ordinary derivative. I don't think I understand what you're asking. – saulspatz Jul 13 '18 at 4:01
## 1 Answer
I am assuming the symmetric derivative formula (also known as the symmetric difference) means that $$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x-h)}{2h}.$$ Note that in your case you have to look at $$\begin{split} L(h) &= \frac{\sqrt{x+h} - \sqrt{x-h}}{2h} \\ &= \frac{\sqrt{x+h} - \sqrt{x-h}}{2h} \times \frac{\sqrt{x+h} + \sqrt{x-h}}{\sqrt{x+h} + \sqrt{x-h}} \\ &= \frac{(x+h) - (x-h)} {2h \left(\sqrt{x+h} + \sqrt{x-h}\right)} \end{split}$$ Can you take it from here?
• Thanks for the help. I'd just like clarification on the notation: L(h) – Nicolas Jul 13 '18 at 4:12
• @Nicolas $L(h)$ is just some name for a function depending on $h$, for convenient notation... – gt6989b Jul 13 '18 at 4:18
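For a numerical sanity check, the symmetric difference quotient for $f(x) = \sqrt{x}$ can be evaluated at a small $h$ and compared against $\frac{1}{2\sqrt{x}}$, the limit the algebra above leads to. A short Python sketch:

```python
import math

def symmetric_quotient(f, x, h):
    # The symmetric difference quotient (f(x+h) - f(x-h)) / (2h)
    return (f(x + h) - f(x - h)) / (2 * h)

x = 4.0
approx = symmetric_quotient(math.sqrt, x, 1e-6)
exact = 1 / (2 * math.sqrt(x))  # ordinary derivative of sqrt at x
print(approx, exact)
```

For differentiable functions like $\sqrt{x}$ (on $x > 0$), this agrees with the ordinary derivative, as noted in the comments above.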
|
2019-09-20 20:34:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7777991890907288, "perplexity": 736.1662424673355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574077.39/warc/CC-MAIN-20190920200607-20190920222607-00347.warc.gz"}
|
https://www.chemicalforums.com/index.php?topic=74247.0
|
### Topic: Solubility of silver phosphate (Read 6811 times)
• Regular Member
• Posts: 87
• Mole Snacks: +2/-0
##### Solubility of silver phosphate
« on: April 04, 2014, 08:35:59 AM »
I have a problem: 0.01 moles of Ag3PO4 are placed in 1 L of pure water. Knowing Ks(AgOH) = 4×10^-16, Ks(Ag3PO4) = 2.7×10^-18, and Ka1 = 10^-2, Ka2 = 10^-7, Ka3 = 10^-12 (for H3PO4), calculate:
a) the constant of the reaction: Ag3PO4 + 3H2O ⇌ 3AgOH + H3PO4.
b) the pH, [PO4^3-], and [Ag+] at equilibrium in the resulting solution.
c) ignoring the hydrolysis of Ag+, the pH of the solution.
Here's my approach:
a) I easily obtain K = 4.2×10^7.
b) Given this large constant for the hydrolysis of the salt, I can assume that all the silver phosphate has dissolved and that, at equilibrium, [Ag+] = Ks(AgOH)/[OH-] and C(H3PO4) ≈ 0.01 M. Since we expect a basic solution, I write the charge balance: 3[PO4^3-] + 2[HPO4^2-] + [OH-] = [H3O+] + Ks(AgOH)/[OH-]. Multiplying the relation by [OH-], I obtain: (some terms) + [OH-]^2 = Kw + Ks(AgOH) ≈ Kw, so (some terms) = Kw − [OH-]^2, which would mean [OH-] < 10^-7 (an acidic medium). Is this actually correct? Does silver phosphate hydrolysis create acidity? The correct answers are [Ag+] = 6.5×10^-14 M, [PO4^3-] = 3.8×10^-3 M, and pH = 11.792. If you look at these, you'll notice that Ks(AgOH) has been reached and that all the Ag3PO4 has dissolved.
c) It is very similar to the phosphoric acid problem (IChO; see the "Back with the phosphoric acid" topic), but applying the same method I obtain pH = 9.8, while they obtain pH = 10.62. One striking thing is that this pH, obtained theoretically by neglecting the hydrolysis of Ag+, is smaller than the one where Ag+ lowers the pH by forming the insoluble hydroxide. What might be wrong, and how can this problem be treated?
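(The constant in part a can be reproduced by combining the solubility and acid equilibria. The Python sketch below assumes Kw = 10^-14 and reads the third acid constant as Ka3 = 10^-12, a typical value for H3PO4; with those assumptions it reproduces the poster's K = 4.2×10^7.)

```python
# K for Ag3PO4 + 3 H2O <=> 3 AgOH + H3PO4, built from:
#   Ag3PO4 dissolution (Ks), 3x AgOH precipitation (1/Ks^3),
#   3x water autoionization (Kw^3), and PO4^3- protonation (1/(Ka1*Ka2*Ka3)).
Ks_Ag3PO4 = 2.7e-18
Ks_AgOH = 4e-16
Ka1, Ka2, Ka3 = 1e-2, 1e-7, 1e-12   # Ka3 = 1e-12 is an assumed reading
Kw = 1e-14                           # assumed water ion product

K = Ks_Ag3PO4 * Kw**3 / (Ks_AgOH**3 * Ka1 * Ka2 * Ka3)
print(K)  # about 4.2e7
```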
• Sr. Member
• Posts: 1177
• Mole Snacks: +28/-94
##### Re: Solubility of silver phosphate
« Reply #1 on: April 06, 2014, 08:23:50 AM »
Quote from the original post above.
You mean Ag3PO4 (s) + 3H2O (l) ⇌ 3AgOH (s) + H3PO4 (aq)? I want to be careful to avoid ambiguity.
If the constant is that high for this equilibrium, it implies that most of the PO4^3- ends up as H3PO4 and most of the Ag+ as AgOH. So how does the answer give a decently high [PO4^3-]? Of course, it is implied that Ksp(AgOH) is reached anyway; otherwise your equilibrium constant does not hold, since the equilibrium Ag3PO4 (s) + 3H2O (l) ⇌ 3AgOH (s) + H3PO4 (aq) does not exist unless AgOH saturation has been reached.
Where did you get the problem?
#### Guvanch
• Regular Member
• Posts: 9
• Mole Snacks: +0/-1
##### Re: Solubility of silver phosphate
« Reply #2 on: April 06, 2014, 08:48:33 AM »
Is it from icho?
#### kaya67
• Very New Member
• Posts: 1
• Mole Snacks: +0/-0
##### Re: Solubility of silver phosphate
« Reply #3 on: October 16, 2014, 11:45:44 AM »
Look up Woodward's Nobel lecture from 1966 I think it was published in Science.
|
2020-07-09 20:07:28
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.830561637878418, "perplexity": 10248.29165200782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655901509.58/warc/CC-MAIN-20200709193741-20200709223741-00344.warc.gz"}
|
https://puzzling.stackexchange.com/questions/50261/robbers-make-24/50263
|
# Robbers - Make 24 [closed]
The make-24 puzzle is an oldie, but a very fun one at that.
Given four different numbers, produce—through a sequence of operations upon only those four numbers—the number twenty-four.
For example, given $2,2,3,8$ you can make $24$ by: $2\times 3\times\frac82$.
Note that each of the given numbers must be used exactly one time in the solution, and no other digits may appear anywhere.
This question is for the robbers to submit solutions to the problems posed by the cops.
The cops' thread should contain posts which specify the
• four numbers you are allowed to use
• operations you are allowed to use
Have a look at the problems people have submitted on the cops' thread and see if you can solve any of them. If you do manage to solve any, show it off here!
Make sure that your answer contains a link back to the problem's post, you can get the url by clicking the "share" link at the bottom of an answer.
I had a lot of fun solving Chris' problem! Here's his post: https://puzzling.stackexchange.com/a/404. I can use plus, minus, division, multiplication, square roots, and factorials to get $24$ from $3,8,12,50$, and I finally did it!
And my solution:
$$\sqrt{50\times8}+\frac{12}3=24$$
## closed as too broad by Deusovi ♦ Mar 22 '17 at 16:32
Please edit the question to limit it to a specific problem with enough detail to identify an adequate answer. Avoid asking multiple distinct questions at once. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question.
$8/(3-8/3) = 24$
I hope this is how it's done.
• Wow... that's awesome xD – theonlygusti Mar 22 '17 at 16:18
$6/(1-3/4) = 24$
$1^3*4*6$
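All of the posted solutions can be verified mechanically; a short Python sketch evaluating each expression:

```python
import math

# The solutions posted above, written as Python expressions.
candidates = [
    math.sqrt(50 * 8) + 12 / 3,  # sqrt(400) + 4
    8 / (3 - 8 / 3),             # 8 divided by 1/3
    6 / (1 - 3 / 4),             # 6 divided by 1/4
    1**3 * 4 * 6,                # 1 * 4 * 6
]
for value in candidates:
    print(value)  # each should be 24, up to floating-point error
```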
|
2019-08-24 23:06:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6387313008308411, "perplexity": 1019.7709431404245}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027321786.95/warc/CC-MAIN-20190824214845-20190825000845-00079.warc.gz"}
|
http://en.wikipedia.org/wiki/Regular_languages
|
# Regular language
For natural language that is regulated, see List of language regulators.
"Kleene's theorem" redirects here. For his theorems for recursive functions, see Kleene's recursion theorem.
In theoretical computer science and formal language theory, a regular language (also called a rational language[1][2]) is a formal language that can be expressed using a regular expression, in the strict sense of the latter notion used in theoretical computer science. (Many regular expressions engines provided by modern programming languages are augmented with features that allow recognition of languages that cannot be expressed by a classic regular expression.)
Alternatively, a regular language can be defined as a language recognized by a finite automaton. The equivalence of regular expressions and finite automata is known as Kleene's theorem.[3] In the Chomsky hierarchy, regular languages are defined to be the languages that are generated by Type-3 grammars (regular grammars).
Regular languages are very useful in input parsing and programming language design.
## Formal definition
The collection of regular languages over an alphabet Σ is defined recursively as follows:
• The empty language Ø is a regular language.
• For each a ∈ Σ (a belongs to Σ), the singleton language {a} is a regular language.
• If A and B are regular languages, then A ∪ B (union), A • B (concatenation), and A* (Kleene star) are regular languages.
• No other languages over Σ are regular.
See regular expression for its syntax and semantics. Note that the above cases are in effect the defining rules of regular expression.
## Examples
All finite languages are regular; in particular the empty string language {ε} = Ø* is regular. Other typical examples include the language consisting of all strings over the alphabet {a, b} which contain an even number of a's, or the language consisting of all strings of the form: several a's followed by several b's.
A simple example of a language that is not regular is the set of strings $\{a^nb^n\,\vert\; n\ge 0\}$.[4] Intuitively, it cannot be recognized with a finite automaton, since a finite automaton has finite memory and it cannot remember the exact number of a's. Techniques to prove this fact rigorously are given below.
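For instance, the even-number-of-a's language above is recognized by a two-state finite automaton, while no such automaton can exist for $\{a^nb^n\}$. A minimal Python sketch of that DFA (the state names are illustrative):

```python
def run_dfa(delta, start, accepting, s):
    """Run a DFA given as a transition dict (state, symbol) -> state."""
    state = start
    for ch in s:
        state = delta[(state, ch)]
    return state in accepting

# Two states suffice: they track the parity of the a's read so far.
delta = {('even', 'a'): 'odd',  ('even', 'b'): 'even',
         ('odd',  'a'): 'even', ('odd',  'b'): 'odd'}

print(run_dfa(delta, 'even', {'even'}, 'abba'))  # True: two a's
print(run_dfa(delta, 'even', {'even'}, 'ab'))    # False: one a
```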
## Equivalent formalisms
A regular language satisfies the following equivalent properties:
1. it is the language of a regular expression (by the above definition)
2. it is the language accepted by a nondeterministic finite automaton (NFA)[note 1][note 2]
3. it is the language accepted by a deterministic finite automaton (DFA)[note 3][note 4]
4. it can be generated by a regular grammar[note 5][note 6]
5. it is the language accepted by an alternating finite automaton
6. it can be generated by a prefix grammar
7. it can be accepted by a read-only Turing machine
8. it can be defined in monadic second-order logic (Büchi-Elgot-Trakhtenbrot theorem[5])
9. it is recognized by some finite monoid, meaning it is the preimage of a subset of a finite monoid under a homomorphism from the free monoid on its alphabet[note 7]
Some authors use one of the above properties other than "1." as an alternative definition of regular languages.
Some of the equivalences above, particularly those among the first four formalisms, are called Kleene's theorem in textbooks. Precisely which one (or which subset) is called such varies between authors. One textbook calls the equivalence of regular expressions and NFAs ("1." and "2." above) "Kleene's theorem".[6] Another textbook calls the equivalence of regular expressions and DFAs ("1." and "3." above) "Kleene's theorem".[7] Two other textbooks first prove the expressive equivalence of NFAs and DFAs ("2." and "3.") and then state "Kleene's theorem" as the equivalence between regular expressions and finite automata (the latter said to describe "recognizable languages").[2][8] A linguistically oriented text first equates regular grammars ("4." above) with DFAs and NFAs, calls the languages generated by (any of) these "regular", after which it introduces regular expressions which it terms to describe "rational languages", and finally states "Kleene's theorem" as the coincidence of regular and rational languages.[9] Other authors simply define "rational expression" and "regular expressions" as synonymous and do the same with "rational languages" and "regular languages".[1][2]
## Closure properties
The regular languages are closed under the various operations, that is, if the languages K and L are regular, so is the result of the following operations:
• the set theoretic Boolean operations: union $K \cup L$, intersection $K \cap L$, and complement $\bar{L}$. From this also relative complement $K-L$ follows.[10]
• the regular operations: union $K \cup L$, concatenation $K\circ L$, and Kleene star $L^*$.[11]
• the trio operations: string homomorphism, inverse string homomorphism, and intersection with regular languages. As a consequence they are closed under arbitrary finite state transductions, like quotient $K / L$ with a regular language. Even more, regular languages are closed under quotients with arbitrary languages: If L is regular then L/K is regular for any K.
• the reverse (or mirror image) $L^R$.
## Deciding whether a language is regular
Regular language in classes of Chomsky hierarchy.
To locate the regular languages in the Chomsky hierarchy, one notices that every regular language is context-free. The converse is not true: for example the language consisting of all strings having the same number of a's as b's is context-free but not regular. To prove that a language such as this is not regular, one often uses the Myhill–Nerode theorem or the pumping lemma among other methods.[12]
There are two purely algebraic approaches to define regular languages. If:
• Σ is a finite alphabet,
• Σ* denotes the free monoid over Σ consisting of all strings over Σ,
• f : Σ* → M is a monoid homomorphism where M is a finite monoid,
• S is a subset of M
then the set $\{ w \in \Sigma^* \, | \, f(w) \in S \}$ is regular. Every regular language arises in this fashion.
If L is any subset of Σ*, one defines an equivalence relation ~ (called the syntactic relation) on Σ* as follows: u ~ v is defined to mean
uw ∈ L if and only if vw ∈ L for all w ∈ Σ*
The language L is regular if and only if the number of equivalence classes of ~ is finite (A proof of this is provided in the article on the syntactic monoid). When a language is regular, then the number of equivalence classes is equal to the number of states of the minimal deterministic finite automaton accepting L.
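As an illustration, the classes of ~ can be computed for a small example by grouping strings according to their acceptance behavior under extensions. For the language of strings over {a, b} with an even number of a's, there are exactly two classes, matching a two-state minimal DFA. A Python sketch (testing extensions only up to length 3 is an assumption, but it suffices for this language):

```python
from itertools import product

def words(alphabet, max_len):
    # All strings over `alphabet` of length 0..max_len.
    for n in range(max_len + 1):
        for tup in product(alphabet, repeat=n):
            yield ''.join(tup)

def in_L(w):
    # L = strings over {a, b} containing an even number of a's.
    return w.count('a') % 2 == 0

extensions = list(words('ab', 3))

def signature(u):
    # u ~ v iff u and v behave identically under every extension w;
    # here we check all extensions up to length 3.
    return tuple(in_L(u + w) for w in extensions)

classes = {signature(u) for u in words('ab', 4)}
print(len(classes))  # 2 equivalence classes -> minimal DFA has 2 states
```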
A similar set of statements can be formulated for a monoid $M\subset\Sigma^*$. In this case, equivalence over M leads to the concept of a recognizable language.
## Complexity results
In computational complexity theory, the complexity class of all regular languages is sometimes referred to as REGULAR or REG and equals DSPACE(O(1)), the decision problems that can be solved in constant space (the space used is independent of the input size). REGULAR ⊈ AC0, since REGULAR (trivially) contains the parity problem of determining whether the number of 1 bits in the input is even or odd, and this problem is not in AC0.[13] On the other hand, REGULAR does not contain AC0, because the nonregular language of palindromes and the nonregular language $\{0^n 1^n : n \in \mathbb N\}$ can both be recognized in AC0.[14]
If a language is not regular, it requires a machine with at least Ω(log log n) space to recognize (where n is the input size).[15] In other words, DSPACE(o(log log n)) equals the class of regular languages. In practice, most nonregular problems are solved by machines taking at least logarithmic space.
## Subclasses
Important subclasses of regular languages include
• Finite languages - those containing only a finite number of words.[16] These are regular languages, as one can create a regular expression that is the union of every word in the language.
• Star-free languages, those that can be described by a regular expression constructed from the empty symbol, letters, concatenation and all boolean operators including complementation but not the Kleene star: this class includes all finite languages.[17]
• Cyclic languages, satisfying the conditions $uv \in L \Leftrightarrow vu \in L$ and $w \in L \Leftrightarrow w^n \in L$.[18][19]
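The first subclass above has a one-line witness of regularity: the union of the words as a regular expression. A sketch (the word list is an arbitrary example):

```python
# A finite language is regular: match it with the union (alternation)
# of its words as a regular expression.
import re

words = ["cat", "dog", "bird"]
pattern = "|".join(re.escape(w) for w in words)

assert all(re.fullmatch(pattern, w) for w in words)
print(re.fullmatch(pattern, "fish") is None)  # True: "fish" is not in L
```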
## The number of words in a regular language
Let $s_L(n)$ denote the number of words of length $n$ in $L$. The ordinary generating function for L is the formal power series
$S_L(z) = \sum_{n \ge 0} s_L(n) z^n \ .$
The generating function of a language L is a rational function if L is regular.[18] Hence for any regular language $L$ there exist an integer constant $n_0$, complex constants $\lambda_1,\,\ldots,\,\lambda_k$ and complex polynomials $p_1(x),\,\ldots,\,p_k(x)$ such that for every $n \geq n_0$ the number $s_L(n)$ of words of length $n$ in $L$ is $s_L(n)=p_1(n)\lambda_1^n+\dotsb+p_k(n)\lambda_k^n$.[20][21][22][23]
Thus, non-regularity of certain languages $L'$ can be proved by counting the words of a given length in $L'$. Consider, for example, the Dyck language of strings of balanced parentheses. The number of words of length $2n$ in the Dyck language is equal to the Catalan number $C_n\sim\frac{4^n}{n^{3/2}\sqrt{\pi}}$, which is not of the form $p(n)\lambda^n$, witnessing the non-regularity of the Dyck language. Care must be taken since some of the eigenvalues $\lambda_i$ could have the same magnitude. For example, the number of words of length $n$ in the language of all even binary words is not of the form $p(n)\lambda^n$, but the number of words of even or odd length are of this form; the corresponding eigenvalues are $2,-2$. In general, for every regular language there exists a constant $d$ such that for all $a$, the number of words of length $dm+a$ is asymptotically $C_a m^{p_a} \lambda_a^m$.[24]
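The eigenvalue form of $s_L(n)$ can be computed from a DFA's transition matrix, since $s_L(n)$ is a sum of entries of its $n$-th power. An illustrative sketch (not from the article; the example language is binary strings avoiding "11", whose counts are Fibonacci numbers and whose eigenvalues are the golden ratio and its conjugate):

```python
# s_L(n) for a regular language from its DFA transition-count matrix.
# L = binary strings with no "11": s_L(n) follows a Fibonacci recurrence.
import numpy as np

# States: 0 = last symbol was not '1', 1 = last symbol was '1'.
M = np.array([[1, 1],    # from state 0: '0' -> state 0, '1' -> state 1
              [1, 0]])   # from state 1: only '0' -> state 0 ('11' forbidden)

def s_L(n):
    # Start in state 0; both states are accepting, so sum row 0 of M^n.
    return int(np.linalg.matrix_power(M, n)[0].sum())

print([s_L(n) for n in range(1, 8)])  # [2, 3, 5, 8, 13, 21, 34]
```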
The zeta function of a language L is[18]
$\zeta_L(z) = \exp \left({ \sum_{n \ge 1} s_L(n) \frac{z^n}{n} }\right) \ .$
The zeta function of a regular language is not in general rational, but that of a cyclic language is.[25][26]
## Generalizations
The notion of a regular language has been generalized to infinite words (see ω-automata) and to trees (see tree automaton).
Rational set generalizes the notion (of regular/rational language) to monoids that are not necessarily free. Likewise, the notion of a recognizable language (by a finite automaton) has namesake as recognizable set over a monoid that is not necessarily free. Howard Straubing notes in relation to these facts that “The term "regular language" is a bit unfortunate. Papers influenced by Eilenberg's monograph[27] often use either the term "recognizable language", which refers to the behavior of automata, or "rational language", which refers to important analogies between regular expressions and rational power series. (In fact, Eilenberg defines rational and recognizable subsets of arbitrary monoids; the two notions do not, in general, coincide.) This terminology, while better motivated, never really caught on, and "regular language" is used almost universally.”[28]
Rational series is another generalization, this time in the context of a formal power series over a semiring. This approach gives rise to weighted rational expressions and weighted automata. In this algebraic context, the regular languages (corresponding to Boolean-weighted rational expressions) are usually called rational languages.[29][30] Also in this context, Kleene's theorem finds a generalization called the Kleene-Schützenberger theorem.
## Notes
1. ^ 1. ⇒ 2. by Thompson's construction algorithm
2. ^ 2. ⇒ 1. by Kleene's algorithm
3. ^ 2. ⇒ 3. by the powerset construction
4. ^ 3. ⇒ 2. since the former definition is stronger than the latter
5. ^ 2. ⇒ 4. see Hopcroft, Ullman (1979), Theorem 9.2, p.219
6. ^ 4. ⇒ 2. see Hopcroft, Ullman (1979), Theorem 9.1, p.218
7. ^ 3. ⇔ 9. by the Myhill–Nerode theorem
## References
1. ^ a b Ruslan Mitkov (2003). The Oxford Handbook of Computational Linguistics. Oxford University Press. p. 754. ISBN 978-0-19-927634-9.
2. ^ a b c Mark V. Lawson (2003). Finite Automata. CRC Press. pp. 98–103. ISBN 978-1-58488-255-8.
3. ^ Sheng Yu (1997). "Regular languages". In Grzegorz Rozenberg and Arto Salomaa. Handbook of Formal Languages: Volume 1. Word, Language, Grammar. Springer. p. 41. ISBN 978-3-540-60420-4.
4. ^ Eilenberg (1974), p. 16 (Example II, 2.8) and p. 25 (Example II, 5.2).
5. ^ M. Weyer: Chapter 12 - Decidability of S1S and S2S, p. 219, Theorem 12.26. In: Erich Grädel, Wolfgang Thomas, Thomas Wilke (Eds.): Automata, Logics, and Infinite Games: A Guide to Current Research. Lecture Notes in Computer Science 2500, Springer 2002.
6. ^ Robert Sedgewick; Kevin Daniel Wayne (2011). Algorithms. Addison-Wesley Professional. p. 794. ISBN 978-0-321-57351-3.
7. ^ Jean-Paul Allouche; Jeffrey Shallit (2003). Automatic Sequences: Theory, Applications, Generalizations. Cambridge University Press. p. 129. ISBN 978-0-521-82332-6.
8. ^ Kenneth Rosen (2011). Discrete Mathematics and Its Applications 7th edition. McGraw-Hill Science. pp. 873–880.
9. ^ Horst Bunke; Alberto Sanfeliu (January 1990). Syntactic and Structural Pattern Recognition: Theory and Applications. World Scientific. p. 248. ISBN 978-9971-5-0566-0.
10. ^ Salomaa (1981) p.28
11. ^ Salomaa (1981) p.27
12. ^ How to prove that a language is not regular?
13. ^ Furst, M.; Saxe, J. B.; Sipser, M. (1984). "Parity, circuits, and the polynomial-time hierarchy". Math. Systems Theory 17: 13–27. doi:10.1007/bf01744431.
14. ^ Cook, Stephen; Nguyen, Phuong (2010). Logical foundations of proof complexity (1. publ. ed.). Ithaca, NY: Association for Symbolic Logic. p. 75. ISBN 0-521-51729-X.
15. ^ J. Hartmanis, P. L. Lewis II, and R. E. Stearns. Hierarchies of memory-limited computations. Proceedings of the 6th Annual IEEE Symposium on Switching Circuit Theory and Logic Design, pp. 179–190. 1965.
16. ^ A finite language shouldn't be confused with a (usually infinite) language generated by a finite automaton.
17. ^ Volker Diekert, Paul Gastin (2008). "First-order definable languages". In Jörg Flum, Erich Grädel, Thomas Wilke. Logic and automata: history and perspectives (PDF). Amsterdam University Press. ISBN 978-90-5356-576-6.
18. ^ a b c Honkala, Juha (1989). "A necessary condition for the rationality of the zeta function of a regular language". Theor. Comput. Sci. 66 (3): 341–347. doi:10.1016/0304-3975(89)90159-x. Zbl 0675.68034.
19. ^ Berstel & Reutenauer (2011) p.220
20. ^ Flajolet & Sedgewick, section V.3.1, equation (13).
21. ^ Proof of theorem for irreducible DFAs
22. ^ http://cs.stackexchange.com/a/11333/683 Proof of theorem for arbitrary DFAs
23. ^ Number of words of a given length in a regular language
24. ^ Flajolet & Sedgewick (2002) Theorem V.3
25. ^ Berstel, Jean; Reutenauer, Christophe (1990). "Zeta functions of formal languages". Trans. Am. Math. Soc. 321 (2): 533–546. doi:10.1090/s0002-9947-1990-0998123-x. Zbl 0797.68092.
26. ^ Berstel & Reutenauer (2011) p.222
27. ^ Samuel Eilenberg. Automata, languages, and machines. Academic Press. in two volumes "A" (1974, ISBN 9780080873749) and "B" (1976, ISBN 9780080873756), the latter with two chapters by Bret Tilson.
28. ^ Straubing, Howard (1994). Finite automata, formal logic, and circuit complexity. Progress in Theoretical Computer Science. Basel: Birkhäuser. p. 8. ISBN 3-7643-3719-2. Zbl 0816.68086.
29. ^ Berstel & Reutenauer (2011) p.47
30. ^ Sakarovitch, Jacques (2009). Elements of automata theory. Translated from the French by Reuben Thomas. Cambridge: Cambridge University Press. p. 86. ISBN 978-0-521-84425-3. Zbl 1188.68177.
https://codereview.stackexchange.com/questions/257316/generate-a-find-command-with-pruned-directories-based-on-a-config-file/257441
# Generate a find command with pruned directories based on a config file
The idea is to perform indexing of filetrees while excluding certain directories. I want to give the user a simple conf file instead of having to edit the script itself. This is what I came up with:
conf.file:
cache
home/files
program:
#!/usr/bin/env bash
function get_opts {
    while read opt
    do
        echo "-path /data/data/com.termux/$opt -prune -o \\" >> fn_temp
    done < conf.file
}

function build_search_fn {
    touch fn_temp
    echo "function do_search {" > fn_temp
    echo "find /data/data/com.termux \\" >> fn_temp
    get_opts
    echo "-print" >> fn_temp
    echo "}" >> fn_temp
}

build_search_fn
source fn_temp
do_search >> output

It does what I want, but I have a strong feeling that it's not the 'proper' way of doing it. Besides the obvious lack of putting the base path into a variable and some error handling, I'm eager to learn about other approaches to do this.

## 1 Answer

You can use subcommands in a command:

echo "This $(date) is a date calculated in the echo command"
When you don't need additional control statements, you can use combined code like:
find /data/data/com.termux \
$(awk '{printf("-path %s%s -prune -o ", "/data/data/com.termux/",$0)}' conf.file) \
-print >> output
The awk command might be new for you, it is a replacement for your loop over the conf file. The same result can be found with sed:
sed 's#.*#/data/data/com.termux/& -prune -o #' conf.file
I think awk is the best here.
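A pure-bash alternative is also possible: collect the prune options in an array instead of generating and sourcing a temp file. This is a sketch only; the base path and example directories are taken from the question, and a temp file stands in for conf.file.

```shell
#!/usr/bin/env bash
# Build the -path/-prune arguments in a bash array rather than a temp script.
base=/data/data/com.termux
conf=$(mktemp)
printf '%s\n' cache home/files > "$conf"   # stand-in for conf.file

prune_args=()
while IFS= read -r dir; do
    prune_args+=( -path "$base/$dir" -prune -o )
done < "$conf"
rm -f "$conf"

# Run as: find "$base" "${prune_args[@]}" -print >> output
echo find "$base" "${prune_args[@]}" -print
```

Arrays keep each argument as a separate word, so paths containing spaces survive intact, which the string-building approach cannot guarantee.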
• I left awk out because I wanted a pure bash version. I don't know if it adds any value to throw a regex in just for the sake of it; certainly for more elaborate things. I'll accept yours as the answer anyway since it's for sure the more common way. Edit: Strike that regex remark, since awk is used for formatting the string, not matching. – chalybeum Mar 21 at 17:05
https://www.gamedev.net/forums/topic/657284-problems-with-resetting-device/
# Problems with 'resetting' device.
## Recommended Posts
Hi Guys,
I am having a small issue handling 'lost' devices.
The problem is something to do with the surfaces I am using. If I take out all of my surface code (from the entire program), my device handler works fine.
But, I am using surfaces to render my application.
MSDN states that everything that is created on the device ie, surfaces, textures, etc.. need to be released before 'resetting' the device. So, I am handling this like so;
HRESULT hr=d3dDevice->TestCooperativeLevel();
if(hr==D3DERR_DEVICELOST)
{
Sleep(10);
return 0;
}
else if(hr==D3DERR_DRIVERINTERNALERROR)
return 133;
else if(hr==D3DERR_DEVICENOTRESET)
{
if(surfaceApplication)
{
surfaceApplication->Release();
surfaceApplication=NULL;
}
if(surfaceWindow)
{
surfaceWindow->Release();
surfaceWindow=NULL;
}
if(surfaceBackbuffer)
{
surfaceBackbuffer->Release();
surfaceBackbuffer=NULL;
}
if(FAILED(d3dDevice->Reset(&d3dpp)))
return E_FAIL;
}
I have no geometry or textures loaded in my application. I am just trying to get the renderer to work properly first.
So, I have released everything (except for the device itself) but the handler still returns E_FAIL (unless I get rid of all surface code throught the app).
Any guidance would be awesome
##### Share on other sites
You likely have other surfaces alive in your executable somewhere. PIX will show you these objects in Direct3D 9.
Instead of trying to keep track of all the objects that need to be released manually (which will soon become impractical to manage and a major waste of time in trying to do so), wrap all the resource objects (textures, surfaces, index buffers, and vertex buffers) behind classes and use the constructor of the class to register that instance with some kind of “release manager” (and unregister in each object’s destructor).
When you want to release all the objects that need to be released just tell the release manager to send said notification to all objects it has and let them release anything they need to release. A similar notification is sent to have those objects recreate their resources when the device comes back.
It is really quite simple. Here is an example.
LSGDirectX9LosableResourceManager.h
class CDirectX9LosableResourceManager {
public :
// == Functions.
/**
* Destroy the losable resource manager. Should be called when shutting down.
*/
static LSVOID LSE_CALL Destroy();
/**
* Register a resource (also gives the resource a unique ID).
*
* \param _plrRes The resource to register. Losable resources call this on
* themselves directly, so this function should never be called by the user.
* \return Returns false if a memory error occurred. If false is returned, the
* engine must shut down.
*/
static LSBOOL LSE_CALL RegisterRes( CDirectX9LosableResource * _plrRes );
/**
* Remove a resource by its ID.
*
* \param _ui32Id Unique ID of the resource to remove from the list.
*/
static LSVOID LSE_CALL RemoveRes( LSUINT32 _ui32Id );
/**
* Notify all objects that the device has been lost.
*/
static LSVOID LSE_CALL OnLostDevice();
/**
* Notify all objects that the device has been reset.
*/
static LSVOID LSE_CALL OnResetDevice();
protected :
// == Members.
/** List of resources. */
static std::vector<CDirectX9LosableResource *> m_vResources;
/** Unique resource ID. */
static LSUINT32 m_ui32ResId;
static CCriticalSection m_csCrit;
};
LSGDirectX9LosableResource.h
class CDirectX9LosableResource {
friend class CDirectX9LosableResourceManager;
public :
// == Various constructors.
LSE_CALLCTOR CDirectX9LosableResource();
LSE_CALLCTOR ~CDirectX9LosableResource();
// == Functions.
/**
* Must perform some action when the device is lost.
*/
virtual LSVOID LSE_CALL OnDeviceLost() = 0;
/**
* Must renew resources when the device is reset.
*
* \return Return true if the renewal is successful, false otherwise.
*/
virtual LSBOOL LSE_CALL OnDeviceReset() = 0;
protected :
// == Members.
/** Do we need to reset the resource? */
LSBOOL m_bResourceCanBeLost;
private :
/** An ID that lets us remove ourselves from the global resource list. */
LSUINT32 m_ui32UniqueLosableResourceId;
};
LSGDirectX9LosableResource.cpp
LSE_CALLCTOR CDirectX9LosableResource::CDirectX9LosableResource() :
m_bResourceCanBeLost( false ) {
CDirectX9LosableResourceManager::RegisterRes( this );
}
LSE_CALLCTOR CDirectX9LosableResource::~CDirectX9LosableResource() {
CDirectX9LosableResourceManager::RemoveRes( m_ui32UniqueLosableResourceId );
}
LSGDirectX9IndexBuffer.h
class CDirectX9IndexBuffer : public CIndexBufferBase, public CDirectX9LosableResource
…
// == Functions.
/**
* Must perform some action when the device is lost.
*/
virtual LSVOID LSE_CALL OnDeviceLost();
/**
* Must renew resources when the device is reset.
*
* \return Return true if the renewal is successful, false otherwise.
*/
virtual LSBOOL LSE_CALL OnDeviceReset();
…
LSGDirectX9IndexBuffer.cpp
LSVOID LSE_CALL CDirectX9IndexBuffer::OnDeviceLost() {
if ( !m_bResourceCanBeLost || !m_pibIndexBuffer ) { return; }
// Release the existing index buffer.
CDirectX9::SafeRelease( m_pibIndexBuffer );
}
LSBOOL LSE_CALL CDirectX9IndexBuffer::OnDeviceReset() {
if ( !m_bResourceCanBeLost ) { return true; }
return CreateApiIndexBuffer();
}
CDirectX9LosableResource automatically registers itself with the CDirectX9LosableResourceManager and provides OnDeviceLost() and OnDeviceReset() as pure virtual methods.
CDirectX9IndexBuffer overrides them appropriately.
Because every object that inherits from CDirectX9LosableResource is automatically registered as a system-wide losable resource, there are never any forgotten objects hanging around causing your device reset to fail.
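The pattern can be condensed into a compilable, API-agnostic sketch. Names here are illustrative, not the engine's actual classes, and MockBuffer stands in for a real D3D9 resource such as an index buffer or surface.

```cpp
// Register-on-construct pattern: every losable resource enrolls itself
// with a manager that broadcasts lost/reset notifications.
#include <algorithm>
#include <vector>

class LosableResource;

class LosableResourceManager {
public:
    static std::vector<LosableResource*>& resources() {
        static std::vector<LosableResource*> v;  // registry of live resources
        return v;
    }
    static void registerRes(LosableResource* r) { resources().push_back(r); }
    static void removeRes(LosableResource* r) {
        auto& v = resources();
        v.erase(std::remove(v.begin(), v.end(), r), v.end());
    }
    static void onLostDevice();   // broadcast "release your D3D objects"
    static void onResetDevice();  // broadcast "recreate your D3D objects"
};

class LosableResource {
public:
    LosableResource() { LosableResourceManager::registerRes(this); }
    virtual ~LosableResource() { LosableResourceManager::removeRes(this); }
    virtual void onDeviceLost() = 0;
    virtual void onDeviceReset() = 0;
};

void LosableResourceManager::onLostDevice() {
    for (auto* r : resources()) r->onDeviceLost();
}
void LosableResourceManager::onResetDevice() {
    for (auto* r : resources()) r->onDeviceReset();
}

// Mock resource: 'alive' models holding a live GPU object.
struct MockBuffer : LosableResource {
    bool alive = true;
    void onDeviceLost() override { alive = false; }   // would call Release()
    void onDeviceReset() override { alive = true; }   // would recreate buffer
};
```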
L. Spiro
##### Share on other sites
Thanks L. Spiro
Yeah, this is all in a class at the moment.
The following are private and all that need releasing (at the end of the app).
// Device
IDirect3D9* d3dObject;
IDirect3DDevice9* d3dDevice;
D3DPRESENT_PARAMETERS d3dpp;
bool bVSync;
// Application surface
IDirect3DSurface9* surfaceApplication;
IDirect3DSurface9* surfaceWindow;
IDirect3DSurface9* surfaceBackbuffer;
RECT DstRect;
So, as it stands I should only have to release the three surfaces in order to reset the device. And if I remove all references to these surfaces, the device does reset happily.
I'll do some more digging though in case I do have a leak somewhere. Edited by lonewolff
##### Share on other sites
Use PIX to determine which of them has an unnecessary reference.
L. Spiro
##### Share on other sites
Thanks again.
It appears that I do have a memory leak somewhere. Thanks for putting me on the right track
It is related to the surfaces somewhere. Just gotta find it ;) Edited by lonewolff
##### Share on other sites
Double check the documentation of all your surface-related functions to see which function calls are incrementing the reference counters. The most common cause is that you call a Get function somewhere and don't realize that it has to be paired with a matching Release -- suddenly you're calling Get every frame and the object's ref-count is over 9000!
##### Share on other sites
Double check the documentation of all your surface-related functions to see which function calls are incrementing the reference counters. The most common cause is that you call a Get function somewhere and don't realize that it has to be paired with a matching Release -- suddenly you're calling Get every frame and the object's ref-count is over 9000!
That could be it you know.
I have a constant 501,636-byte leak when I start the application and then immediately close it.
I have no geometry coded yet, so it just leaves only the surfaces.
I am cleaning up in the end with this. But, obviously missing something somewhere.
if(surfaceApplication)
surfaceApplication->Release();
if(surfaceWindow)
surfaceWindow->Release();
if(surfaceBackbuffer)
surfaceBackbuffer->Release();
if(d3dObject)
d3dObject->Release();
if(d3dDevice)
d3dDevice->Release();
Yep, definitely to do with surfaces. I have remarked out all surface code and no leak. So, at least I am headed in the right direction. Edited by lonewolff
##### Share on other sites
the object's ref-count is over 9000!
“lonewolf, what does the PIX say about his reference count?”
It’s over 9,000!!!!
L. Spiro
##### Share on other sites
the object's ref-count is over 9000!
“lonewolf, what does the PIX say about his reference count?”
It’s over 9,000!!!!
L. Spiro
No, that was an example by Hodgman. It isn't over 9000 at all. ;)
##### Share on other sites
Also, the Release function returns what the new value of the ref-count is after the call. You can add some assertions in your release-on-lost-device code, checking that those values are zero -- if they're greater than zero, then you've leaked some references somewhere.
##### Share on other sites
Also, the Release function returns what the new value of the ref-count is after the call. You can add some assertions in your release-on-lost-device code, checking that those values are zero -- if they're greater than zero, then you've leaked some references somewhere.
Nice one! Didn't know that
##### Share on other sites
the object's ref-count is over 9000!
“lonewolf, what does the PIX say about his reference count?”
It’s over 9,000!!!!
L. Spiro
No, that was an example by Hodgman. It isn't over 9000 at all. ;)
He was making a reference of his own to this:
That video’s reference count is over 10,000,000!!!!
L. Spiro
##### Share on other sites
Hehe sorry, I didn't get the reference
After some work I traced it down to this and amended as per the comment
//This is called every frame
if(!FAILED(d3dDevice->GetBackBuffer(0,0,D3DBACKBUFFER_TYPE_MONO,&surfaceBackbuffer)))
{
DstRect.left=0;
DstRect.top=0;
DstRect.right=nWindowWidth;
DstRect.bottom=nWindowHeight;
if(FAILED(d3dDevice->StretchRect(surfaceApplication,NULL,surfaceBackbuffer,&DstRect,D3DTEXF_LINEAR )))
return 141;
// ADDED THIS TO THE CODE
if(surfaceBackbuffer)
{
surfaceBackbuffer->Release();
surfaceBackbuffer=NULL;
}
}
If anything still looks nasty here, please let me know Edited by lonewolff
##### Share on other sites
Does StretchRect ever fail? If so, you're not releasing the surface reference.
##### Share on other sites
Does StretchRect ever fail? If so, you're not releasing the surface reference.
True, I'll modify that.
Although if it does fail it is the end of the road for the application. So at this stage it never does seem to fail. But still worth cleaning up that part of code though
Actually that gets released straight after the return call as it gets handled by the destructor immediately after. Edited by lonewolff
##### Share on other sites
the object's ref-count is over 9000!
“lonewolf, what does the PIX say about his reference count?”
It’s over 9,000!!!!
L. Spiro
I would rate this up just on comedic value, but I think that trivializes the ratings system.
https://gazebosim.org/api/math/6.10/pythongetstarted.html
# Ignition Math
## API Reference
6.10.0
Python Get Started
Previous Tutorial: C++ Get Started
## Overview
This tutorial describes how to get started using Ignition Math with Python.
NOTE: If you have compiled Ignition Math from source, you should export your PYTHONPATH.
We will run through an example that determines the distance between two points in 3D space. Start by creating a bare-bones main file using the editor of your choice.
def main():
pass
if __name__ == "__main__":
main()
The easiest way to include Ignition Math is through import ignition.math.
At this point your main file should look like
import ignition.math
def main():
pass
if __name__ == "__main__":
main()
Now let's create two 3D points with arbitrary values. We will use the ignition.math.Vector3 class to represent these points. Ignition Math provides some Vector3 types which are: Vector3d (Vector3 using doubles), Vector3f (Vector3 using floats) and Vector3i (Vector3 using integers). The result of this addition will be a main file similar to the following.
from ignition.math import Vector3d
def main():
point1 = Vector3d(1, 3, 5)
point2 = Vector3d(2, 4, 6)
if __name__ == "__main__":
main()
Finally, we can compute the distance between point1 and point2 using the ignition.math.Vector3.distance() function and output the distance value.
from ignition.math import Vector3d
def main():
point1 = Vector3d(1, 3, 5)
point2 = Vector3d(2, 4, 6)
distance = point1.distance(point2)
print("Distance from {} to {} is {}".format(point1, point2, distance))
if __name__ == "__main__":
main()
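For reference, the expected output can be checked by hand: the distance between (1, 3, 5) and (2, 4, 6) is √3. A plain-Python sketch of the same computation, with no Ignition dependency:

```python
# Check the distance the tutorial computes with Vector3d.distance().
import math

p1, p2 = (1, 3, 5), (2, 4, 6)
distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))
print(distance)  # 1.7320508075688772, i.e. sqrt(3)
```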
http://www.emis.de/classics/Erdos/cit/37910027.htm
## Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 379.10027
Autor: Erdös, Paul; Pomerance, Carl
Title: On the largest prime factors of n and n+1. (In English)
Source: Aequationes Math. 17, 311-321 (1978).
Review: The authors prove some interesting results which give a comparison of the largest prime factors of n and n+1. Let P(n) denote the largest prime factor of n. Then one of the impressive results proved is that the number of n \leq x for which P(n) > P(n+1) is >> x for all large x. Another of them is about numbers n for which f(n) = f(n+1), where by f(n) we mean $\sum_i a_i p_i$ for $n = \prod_i p_i^{a_i}$. Such numbers are called Aaron numbers. The authors prove that the number of Aaron numbers \leq x is $O_\epsilon(x(\log x)^{-1+\epsilon})$. The reader can find other attractive results in the body of the paper.
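Here f(n) is the sum of the prime factors of n counted with multiplicity. A short Python sketch (not from the paper) enumerates the Aaron numbers below 1000; 714 is the eponymous example, since f(714) = f(715) = 29.

```python
# f(n) = sum of a*p over prime powers p^a exactly dividing n.
# Aaron numbers are n with f(n) == f(n+1).
def f(n):
    s, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            s += p
            n //= p
        p += 1
    if n > 1:
        s += n  # leftover prime factor
    return s

aaron = [n for n in range(2, 1000) if f(n) == f(n + 1)]
print(714 in aaron)  # True: f(714) == f(715) == 29
```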
Reviewer: K.Ramachandra
Classif.: * 11N05 Distribution of primes
11N37 Asymptotic results on arithmetic functions
11A41 Elementary prime number theory
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
https://dsp.stackexchange.com/questions/28701/analysis-of-a-lti-system-using-dft
# Analysis of a LTI system using DFT
Consider an LTI system $$H(z)=1-\frac{1}{2}z^{-1}+\frac{3}{4}z^{-2}$$ Let $x[n]=(\frac{1}{3})^n\cdot u[n]$ be the input signal. It is desired to determine the output for $n=0,1,\ldots,N_a$. To achieve that, a relevant part of the DFT will be used together with a uniform sampling of $H(z)$ on the unit circle with a separation of $\frac{2\pi}{N_b}$. a) Explain the process to achieve this. b) Are there any restrictions on $N_a$ and $N_b$? c) For the same $x[n]$, would the method described in a) work if $H(z)=\frac{1}{1-\frac{1}{2}z^{-1}}$? If not, correct the previous method to find the correct result.
$H(z)$ is sampled in $N_b$ points. $\hat{h}[n]$ (the sequence whose DFT is the sampled version of $H(z)$) has only 3 non-zero values, so... the sampled version of it would have $N_b -3$ zeros (I'm not sure about this). About the size of $X[k]$, I thought that, if I want to know the first $N_a +1$ values of $y[n]$, maybe it is enough to grab the first $N_a +1$ points of $x[n]$ to do the DFT and add the necessary amount of zeros to avoid aliasing? This may be false but I just don't know how to determine the relevant part of the input signal that has to be used. If everything I said above is correct (I highly doubt it), then I thought that the only restriction on $N_a$ and $N_b$ is that $N_a +3 = N_b$.
About c), I have no idea of how to do that.
Any suggestion? Thanks for your time!
Since the system described by $H(z)$ is causal, it's sufficient to consider the input sequence in the range $0\le n\le N_a$ in order to compute the output sequence in the same range. Each output sample can only depend on the current input sample and on past input samples. Since the length of the truncated input sequence is $N_a+1$, and the length of the system's impulse response is $3$, the length of the convolution of these two sequences is $N_a+3$. If you want to compute this linear convolution using circular convolution (i.e., using the DFT), the DFT length $N_b$ must satisfy $N_b\ge N_a+3$. So you must zero-pad the truncated input sequence as well as the impulse response to that length, then compute the DFTs of both zero-padded sequences, multiply them, and compute the IDFT of the resulting sequence, the first $N_a+1$ elements of which equal the desired output sequence. This answers a) and b).
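The procedure for a) and b) can be sanity-checked numerically. A sketch with NumPy; $N_a = 8$ is an arbitrary choice:

```python
# Part a): compute y[n] for n = 0..Na by sampling H(z) on the unit circle
# (an Nb-point DFT of h) and multiplying with the DFT of the truncated input.
import numpy as np

Na = 8
h = np.array([1.0, -0.5, 0.75])              # impulse response of H(z)
x = (1.0 / 3.0) ** np.arange(Na + 1)         # x[n] truncated to n = 0..Na
Nb = Na + 3                                  # DFT length must be >= Na + 3

Y = np.fft.fft(x, Nb) * np.fft.fft(h, Nb)    # zero-padding to Nb is implicit
y = np.fft.ifft(Y).real[: Na + 1]

y_direct = np.convolve(x, h)[: Na + 1]       # reference linear convolution
print(np.allclose(y, y_direct))  # True
```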
For part c) it will not suffice to sample $H(z)$ on the unit circle, because the corresponding impulse response is infinitely long. However, for computing the first $N_a+1$ output samples it is sufficient to consider the first $N_a+1$ samples of the impulse response. Just like before, this works because the system's impulse response as well as the input signal are both zero for $n<0$. So you need to consider a system $\hat{H}(z)$ the impulse response of which is a truncated version of the infinite impulse response corresponding to $H(z)$. The length of the convolution of the truncated input signal and the truncated impulse response is $2N_a+1$, so the DFT length $N_b$ needs to satisfy $N_b\ge 2N_a+1$. The rest of the procedure is the same as before.
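Part c) can be checked the same way, against the recursion $y[n] = x[n] + \tfrac{1}{2}y[n-1]$ implied by $H(z)$. A sketch, again with $N_a = 8$ chosen arbitrarily:

```python
# Part c): truncate the IIR impulse response h[n] = (1/2)^n u[n] to
# n = 0..Na and use a DFT of length Nb >= 2*Na + 1.
import numpy as np

Na = 8
x = (1.0 / 3.0) ** np.arange(Na + 1)
h = (1.0 / 2.0) ** np.arange(Na + 1)     # truncated impulse response
Nb = 2 * Na + 1

y = np.fft.ifft(np.fft.fft(x, Nb) * np.fft.fft(h, Nb)).real[: Na + 1]

# Reference: direct recursion y[n] = x[n] + y[n-1]/2.
y_ref = np.zeros(Na + 1)
for n in range(Na + 1):
    y_ref[n] = x[n] + (0.5 * y_ref[n - 1] if n > 0 else 0.0)

print(np.allclose(y, y_ref))  # True
```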
https://socratic.org/questions/how-do-you-evaluate-3-4-1-3-1-2
|
# How do you evaluate 3/4 + 1/3 + 1/2?
Jun 6, 2018
$\implies \frac{19}{12}$
#### Explanation:
The fractions need to be expressed with a common denominator. There are a few ways to obtain one, the easiest being to multiply all the denominators together and then multiply each fraction by an expression equivalent to $1$, since this does not change the value of the expression.
$\frac{3}{4} + \frac{1}{3} + \frac{1}{2}$
$= \frac{3}{4} \cdot \frac{3}{3} \cdot \frac{2}{2} + \frac{1}{3} \cdot \frac{4}{4} \cdot \frac{2}{2} + \frac{1}{2} \cdot \frac{4}{4} \cdot \frac{3}{3}$
$= \frac{18}{24} + \frac{8}{24} + \frac{12}{24}$
$= \frac{38}{24}$
$= \frac{19}{12}$
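A quick check with Python's `fractions` module confirms the result:

```python
from fractions import Fraction

# Exact rational arithmetic reproduces the hand computation.
total = Fraction(3, 4) + Fraction(1, 3) + Fraction(1, 2)
assert total == Fraction(19, 12)

# The same sum expressed over the common denominator 24:
assert Fraction(18, 24) + Fraction(8, 24) + Fraction(12, 24) == Fraction(38, 24)
assert Fraction(38, 24) == Fraction(19, 12)  # Fraction reduces automatically
```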
https://math.stackexchange.com/questions/805981/every-element-in-well-ordered-set-can-uniquely-be-expressed-as-y-snx
|
# Every element in well-ordered set can uniquely be expressed as $y = S^n(x)$
Let $U$ be a well-ordered set. If $y \in U$ then $y$ can be expressed uniquely on the form,
$y = S^n(x)$
Where $x$ is either the least element of $U$ or a limit point, $n \in \mathbb{N}$ and $S$ is a recursive function defined by
$S^0(x) = x, \ S^{i+1}(x) =S(S^i(x))$
I wish I could show some effort I'm not even sure where to start. The result feels intuitively very good since well-ordered sets behaves in a good manner and feels in my mind somewhat similar to $\mathbb{N}$ so it's maybe not a big surprise that one could express every element via a recursive successor function, but how do I go about proving this?
Should I go for a contradiction by assuming there is some $y \in U : y \neq S^{n}(x)\ \forall \ n \in \mathbb{N}$? Can I invoke the Transfinite Recursion Theorem?
For uniqueness, I guess I should do something like this: Let $y\in U:y = S^{n}(x) \land y = S^{n'}(x)$ and then somehow derive that we must have $n=n'$.
I would really appreciate any help on this.
Prove this by transfinite induction. (I suppose that $S(x)$ returns the successor of $x$, otherwise you might want to actually define what $S$ is in the question.)
If $y$ is the least element or a limit ordinal, then certainly this is true since $y=S^0(y)$.
Now suppose that $y$ is neither the least element nor a limit ordinal. What is it?
To show uniqueness, show with a similar argument, that $S$ is injective (and therefore $S^n$ is injective too).
On a side remark, I gave my students a similar exercise last semester. This is truly something that should be done after talking about von Neumann ordinals and ordinal arithmetic. Doing that before these two topics is difficult, filled with bad and ad hoc notations and terminology, and unclear.
After knowing what is the von Neumann ordinal assignment, what is a successor ordinal, and the basics of ordinal addition, this becomes an incredibly straightforward exercise.
We have the successor function
$$\quad S: x \mapsto \text{LeastElement(}\{y \gt x\}\text{)}$$
defined on all elements of $$U \setminus \{\text{max(}U\text{)}\}$$.
Let $$L$$ denote the elements of $$U$$ that do not have an immediate predecessor, i.e. elements of $$U$$ not in the range of $$S$$.
Let $$y$$ be any element in $$U$$ and define
$$\tag 1 C_y = \{ x \in U \, | \, \text{There exists an integer } n \ge 0 \text{ such that } y = S^n(x) \}$$
Since $$y \in C_y$$, the set $$C_y$$ is non-empty and has a least element that must also belong to $$L$$.
So $$y$$ has the form $$S^n(\alpha)$$ with $$\alpha \in L$$.
Exercise: For each $$\alpha \in L$$ define
$$\tag 2 L_\alpha = \{ S^n(\alpha) \, | \, \text{integer } n \ge 0\}$$
Show that the family $$(L_\alpha)_{\, \alpha \in L}$$ of sets is a partition of $$U$$.
Hint: Show that if $$\alpha, \beta \in L$$ then $$S^n(\alpha) = S^m(\beta)$$ implies that $$\alpha = \beta$$.
We conclude that for any element $$y \in U$$ there corresponds a unique $$\alpha \in L$$ and $$n \ge 0$$ such that
$$\tag 3 y = S^n(\alpha)$$
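Not a proof, but the decomposition can be illustrated on a finite order that mimics an initial segment of $\omega \cdot 2$: pairs $(i, n)$ under lexicographic order, where the elements of $L$ are the least element $(0,0)$ and the "limit" $(1,0)$. The sketch below recovers the unique $(\alpha, n)$ by walking back through immediate predecessors:

```python
# Finite illustration of y = S^n(alpha): model an initial segment of
# omega*2 as pairs (i, n) in lexicographic order. The least element (0, 0)
# and the limit (1, 0) are exactly the elements with no predecessor.
U = [(i, n) for i in range(2) for n in range(10)]
L = [(0, 0), (1, 0)]

def decompose(y):
    """Walk back through immediate predecessors until an element of L is
    reached; returns the unique (alpha, n) with alpha in L, y = S^n(alpha)."""
    n = 0
    while y not in L:
        i, m = y
        y = (i, m - 1)   # immediate predecessor within this copy of omega
        n += 1
    return y, n

reps = {decompose(y) for y in U}
assert all(alpha in L for alpha, _ in reps)
assert len(reps) == len(U)          # distinct elements get distinct (alpha, n)
assert decompose((1, 7)) == ((1, 0), 7)
```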
http://math.stackexchange.com/questions/234237/lim-limits-x-to-inftyfx1-x-where-fx-sum-limits-k-0-infty-cfra
|
# $\lim\limits_{x\to\infty}f(x)^{1/x}$ where $f(x)=\sum\limits_{k=0}^{\infty}\cfrac{x^{a_k}}{a_k!}$.
Does the following limit exist? What is the value of it if it exists? $$\lim\limits_{x\to\infty}f(x)^{1/x}$$ where $f(x)=\sum\limits_{k=0}^{\infty}\cfrac{x^{a_k}}{a_k!}$ and $\{a_k\}\subset\mathbb{N}$ satisfies $a_k<a_{k+1},k=0,1,\cdots$
$\bf{EDIT:}$ I'll show that $f(x)^{1/x}$ is not necessarily monotonically increasing for $x>0$.
Since $\lim\limits_{x\to+\infty}\big(x+2\big)^{1/x}=1$, for any $M>0$, we can find some $L > M$ such that $\big(2+L\big)^{1/L}<\sqrt{3}$. It is easy to see that: $$\sum_{k=N}^\infty \frac{x^k}{k!} = \frac{e^{\theta x}}{N!}x^N\leq \frac{x^N}{N!}e^x,\quad \theta\in(0,1)$$ Hence we can choose $N$ big enough such that for any $x\in[0,L]$ $$\sum_{k=N}^\infty \frac{x^k}{k!} \leq 1$$ Now, we let $$a_k=\begin{cases}k,& k=0,1\\ 0,& 2\leq k <N\\ k,& k\geq N\end{cases}$$ Then $f(x)= 1+x+\sum\limits_{k=N}^\infty\frac{x^k}{k!}$ and $$f(2)^{1/2} \geq \sqrt{3} > (2+L)^{1/L} \geq f(L)^{1/L}$$ which shows that $f(x)^{1/x}$ is not monotonically increasing on $[2,L]$.
I give +50 bounty – Leitingok Nov 14 '12 at 6:14
It does not have to exist. Recall that $e^x= \sum_{n=0}^\infty x^n/ n!$.
Now (using the convention that summing from or to a non-integer number means summing to or from their floor) $$\sum_0^{\sqrt{x}-1} x^n/ n! < \sum_0^{\sqrt{x}-1} x^n< x^{\sqrt{x}}$$ for large $x$,
and
$$\sum_{x^2}^{\infty} x^n/ n! < \sum_{x^2}^{\infty} x^n/ (x^2/e)^n< \sum_{x^2}^{\infty} (e/x)^n < 1$$ for large $x$.
So if we take the subsequence of $1,2,...$ consisting of those integers between all $2^{4k+2}$ and $2^{4k+4}$, but not the rest (that is from 5 to 16, then all from 65 to 256 etc) , then for $x=2^{4k+1}$ (like 2, 32 etc) we will get $\sum_0^\infty x^{a_k}/(a_k)! <x^{\sqrt{x}}+1$, so that $f(x)^{1/x}$ limits to $1$, and for $x=2^{4k+3}$ (like 8, 128 etc) we will get $\sum_0^\infty x^{a_k}/(a_k)! >e^x -x^{\sqrt{x}}-1$ so that $f(x)^{1/x}$ limits to $e$.
Edit: How to see the inequalities for $x=2^{4k+1}$ and $x=2^{4k+3}$, also known as "what is going on?"
The idea is that for each $x$ the sum $e^x=\sum x^n/n!$ is divided into 3 parts - $n$ from $1$ to $\sqrt{x}$, then $n$ from $\sqrt{x}$ to $x^2$ and then from $x^2$ to $\infty$, and the first and last parts are relatively small.
Now $\sum_0^\infty x^{a_k}/(a_k)!$ is the sum over only those $n$ that the sequence $a_k$ picks out from $\mathbb{N}$. So if we make a sequence that does not pick out anything from the middle part for some $x$, that is, avoids the $n$ from $\sqrt{x}$ to $x^2$, then the sum for that $x$ would be no larger than the two remaining parts, that is $x^{\sqrt{x}}+1$.
Similarly, if the first and the last parts of the summ are small, then the middle part is big, at least $e^x-x^{\sqrt{x}}-1$. So if for some $x$ the subsequence hits all the integers in the middle part, then the sum will be at least that big for that $x$.
The sequence picking out the integers between $2^{4k+2}$ and $2^{4k+4}$ avoids the middle part for $x=2^{4k+1}$ and hits all of the middle part for $2^{4k+3}$. Hence the bounds, providing two subsequences where limit is 1 and $e$ correspondingly.
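Not part of the argument, but the size claims behind the three-part split are easy to sanity-check numerically for a concrete value (here $x = 32$, one of the values of the form $2^{4k+1}$):

```python
import math

# Sanity check of the three-part split of e^x = sum x^n/n! at x = 32:
# the head (n < sqrt(x)) is below x**sqrt(x), the tail (n >= x^2) is
# below 1, so the middle block carries almost all of e^x.
x = 32
r = math.isqrt(x)   # floor of sqrt(x)

head = sum(x**n / math.factorial(n) for n in range(r))
# Terms beyond n ~ 4x are astronomically small, so this captures e^x:
total = sum(x**n / math.factorial(n) for n in range(4 * x))
# The tail terms starting at n = x^2 are already vanishingly small:
tail = sum(x**n / math.factorial(n) for n in range(x * x, x * x + 50))

assert head < x ** math.sqrt(x)
assert tail < 1
assert total - head - tail > math.exp(x) - x ** math.sqrt(x) - 1
```

The middle block of indices therefore dominates $e^x$, which is exactly what the subsequence construction exploits to push $f(x)^{1/x}$ toward $1$ or toward $e$ along different subsequences.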
For the first case I think you mean "$f(x)^{1/x}$ limits to 1", not to 0. – Lukas Geyer Nov 14 '12 at 23:21
Lukas - Thanks! – Max Nov 14 '12 at 23:57
@Max, how to get the inequalities when $x=2^{4k+1}$ and $x=2^{4k+3}$ – Leitingok Nov 16 '12 at 5:30
I have added an edit explaining what is going on. – Max Nov 16 '12 at 7:29
This limit does not exist in general. First observe that for any polynomial $P$ with non-negative coefficients we have $$\lim_{x\to\infty} P(x)^{1/x} = 1$$ and $$\lim_{x\to\infty} (e^x - P(x))^{1/x} = \lim_{x\to\infty} e (1-e^{-x}P(x))^{1/x} = e.$$ For ease of notation let $$e_n(x) = \sum_{k=n}^\infty \frac{x^k}{k!} = e^x - \sum_{k=0}^{n-1} \frac{x^k}{k!}.$$ Note that $\lim\limits_{n\to\infty} e_n(x) = 0$ for every fixed $x$.
Now define a power series of the form $$f(x) = \sum_{i=1}^\infty \sum_{k=m_i}^{n_i} \frac{x^k}{k!},$$ along with partial sums $$P_j(x) = \sum_{i=1}^j \sum_{k=m_i}^{n_i} \frac{x^k}{k!},$$ where $1 \le m_1 \le n_1 < m_2 \le n_2 < \ldots$ are chosen inductively below. We want to find increasing sequences $(x_i)$ and $(y_i)$ with $x_i \to \infty$, $y_i \to \infty$, and $f(x_i)^{1/x_i} \le \frac32$ and $f(y_i)^{1/y_i} \ge 2$, which obviously implies non-existence of $\lim\limits_{x\to\infty} f(x)^{1/x}$.
Having already defined $m_i$, $n_i$, $x_i$, $y_i$ for $i < j$, we know that $\lim\limits_{x\to\infty} P_{j-1}(x)^{1/x} = 1$, so there exists $x_{j}>j+x_{j-1}$ such that $P_{j-1}(x_j)^{1/x_j} \le \frac54$. Then there exists $m_{j}>n_{j-1}$ such that $$(P_{j-1}(x_j)+e_{m_{j}}(x_{j}))^{1/x_j} \le \frac32,$$ which implies that whatever choices we make for $n_j$, $m_{j+1}$, etc., we always get $$f(x_j)^{1/x_j} \le (P_{j-1}(x_j)+e_{m_{j}}(x_{j}))^{1/x_j} \le \frac32.$$ We also know that $$\lim\limits_{x\to\infty} (P_{j-1}(x) + e_{m_j}(x))^{1/x} = e>2,$$ so there exists $y_j > x_j$ with $$(P_{j-1}(y_j) + e_{m_j}(y_j))^{1/y_j} >2.$$ Furthermore, there exists $n_j > m_j$ with $$(P_{j-1}(y_j) + e_{m_j}(y_j)- e_{n_j+1}(y_j))^{1/y_j} >2.$$ Lastly, this implies $$f(y_j)^{1/y_j} \ge P_j (y_j)^{1/y_j} = (P_{j-1}(y_j) + e_{m_j}(y_j)- e_{n_j+1}(y_j))^{1/y_j} >2.$$ By pushing this idea a little further, one can achieve $\liminf\limits_{x\to\infty} f(x)^{1/x} = 1$ and $\limsup\limits_{x\to\infty} f(x)^{1/x} = e$.
https://docs.acquia.com/acquia-cloud/manage/files/transfer-files/rsync/windows/
|
Information for:
# rsync and Drush with Windows¶
Drush is one of the most useful tools a Drupal developer or site builder can have in their arsenal. On Unix-based systems, it is reasonably straightforward to set up and get started using. Windows users have a more involved setup.
Note
If you are using a Unix-based system for development, see rsyncing files on Acquia Cloud for Unix-specific directions.
## Acquia Dev Desktop¶
Acquia Dev Desktop comes with Drush, rsync, and Composer pre-installed. Using it is one of the fastest ways to set up a working installation on Windows. When using Acquia Dev Desktop, you’ll need a separate client for version control (VCS). We recommend Git Bash, which includes an SSH client.
When your version control client is set up, you can directly clone your Acquia Cloud website. For information about cloning your website, see Starting with an Acquia Cloud site. After cloning your website, you should configure your Drush aliases, and test your setup using the Windows command line or your version control client.
## Configuring Drush aliases¶
Drush aliases make it easier to specify which websites, and in which environments, Drush should operate on.
2. Download your Drush aliases, and follow the instructions on that page to extract the file into the proper directory on your local machine. If the directory does not exist, you may need to create it.
When the Drush aliases are saved properly, the path to the aliases file should look like this, where username is your Windows username:
c:\Users\username\.drush\sitename.aliases.drushrc.php
3. Edit the aliases file and add this array entry at the top, changing any details to match your local Drupal website setup:
// Site sitename, environment local
$aliases['local'] = array(
  'site' => 'sitename',
  'env' => 'loc',
  'uri' => 'localhost:8082:',
  'root' => 'c:/Users/username/Sites/mysite',
);
4. Save the file.
5. Verify your site aliases are functioning properly using the following command to return a list of available aliases:
drush sa --full
## rsync files using Drush aliases¶
Acquia Dev Desktop will rsync files for your Acquia Cloud websites, and these commands can be customized to suit your needs.
### Manual rsync of all files¶
If you need to sync your website’s files manually, use the following command, adapting it where necessary, to rsync your entire development environment to your local computer:
drush -vd --mode=ruLtvz rsync @mysite.dev @mysite.local
The -vd option displays verbose debugging for troubleshooting. After you have confirmed the command works as you expect, you can remove the -vd flag.
### Manual rsync of selected directories¶
By default, the drush rsync command rsyncs both your website’s code and files. If you want to transfer only the files directory, add the following lines to your Drush aliases file, adjusting them to match your actual alias and system file paths:
// Site mysite, environment local
$aliases['local'] = array(
  'site' => 'mysite',
  'env' => 'loc',
  'uri' => 'localhost:8082:',
  'root' => 'c:/Users/[username]/Sites/mysite',
  'path-aliases' => array(
    '%files' => 'sites/default/files',
  ),
);
With this alias, you can run your working Drush command with the files path alias added, as in this example:
drush -vd rsync @mysite.dev:%files @mysite.local:%files
For more examples, see Drush Tip: Quickly Sync Files Between Your Environments With Rsync.
https://math.stackexchange.com/questions/3123324/predicate-logic-what-are-the-differences-between-%E2%88%80-and-%E2%88%83-when-it-comes-to-compa
|
# Predicate logic: What are the differences between ∀ and ∃ when it comes to comparing two variables?
Say I have the four following logical statements, all over the domain of all integers.
1. (∀a,∀b)[a>b]
2. (∀a,∃b)[a>b]
3. (∃a,∀b)[a>b]
4. (∃a,∃b)[a>b]
I feel like they're all asking practically similar things, but I'm getting confused about what exactly "for all" means. I'm writing what I think each statement is asserting literally; please correct me if I'm wrong:
1. All integers are greater than each other? (This is the one I'm struggling the most with)
2. There is an integer b that is less than all other integers
3. There is an integer a that is greater than all other integers
4. There is an integer a that is greater than an integer b
So if this is the case, I'm assuming that all are false except for (4). But in the event that I have correctly understood all four of these statements, how would you express something like how for all known integers, there is another integer that is greater and/or lesser than it?
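One way to build intuition is to evaluate the four statements mechanically over a finite stand-in for the integers (a sketch; note the caveat in the comments — statement 2's truth genuinely requires the domain to be unbounded below):

```python
from itertools import product

D = range(-5, 6)   # finite stand-in for the integers (caveat below)

s1 = all(a > b for a, b in product(D, D))   # (forall a, forall b)[a > b]
s3 = any(all(a > b for b in D) for a in D)  # (exists a, forall b)[a > b]
s4 = any(a > b for a, b in product(D, D))   # (exists a, exists b)[a > b]

assert not s1   # false: e.g. a = 0, b = 0
assert not s3   # false: a > a can never hold, so no a exceeds ALL b
assert s4       # true:  e.g. a = 1, b = 0

# (forall a, exists b)[a > b] IS true over all of Z (witness b = a - 1),
# but a finite truncation breaks it at the minimum element -- evidence
# that the chosen b may depend on the universally quantified a.
assert all((a - 1) < a for a in D)
```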
• (2) Is instead read as "For all $a$ there exists a $b$ such that $a>b$" In an attempt to reword it in more natural English it is saying that if Adam chooses an integer, regardless which one he chooses his friend Bill can look at Adam's choice and then with that knowledge pick an integer which is smaller than Adam's choice. For example, if Adam chose $50$ as his integer then Bill can come along and choose a smaller integer such as $49$. – JMoravitz Feb 23 at 3:21
• I would translate 1 like "for every arbitrary pair of integers, the first one is always bigger than the second" – kimchi lover Feb 23 at 3:22
• @JMoravitz In that case, would it be possible to write a logical statement asserting what my original interpretation was (even though it's untrue)? – user2709168 Feb 23 at 3:37
• The statement "There is an integer $b$ that is less than all other integers" can be written as $(\exists b~\forall a)[a>b]$. Note the order is different, $\exists b~\forall a$ has different meaning than $\forall a~\exists b$. – JMoravitz Feb 23 at 3:42
• This answer is about the same question: https://math.stackexchange.com/a/1130755/25554 – MJD Feb 23 at 4:13
https://www.storyofmathematics.com/when-the-current-i-is-positive-the-capacitor-charge-q-is-decreasing/
|
# When the current i is positive, the capacitor charge q is decreasing.
From the given Figure, answer the questions either True or False based on the circuit’s behavior:
– After the RELAY is switched to either the N.O. (“normally open”) or N.C. (“normally closed”) state, the transient response of the circuit lasts for a short time.
– In this experiment, the transient current flow has an exponential decay to zero.
– The charge Q of the capacitor decays exponentially when the relay moves to the N.O. state.
– The capacitor charge Q decreases while the current I is positive.
– The negative voltage measured in VOLTAGE IN 2 is due to positive current I.
– VOLTAGE IN 1 is measured to be positive when the charge Q on the capacitor is positive.
– The given quantity $t_{1/2}=\tau \ln 2$ is the half-life of an exponential decay, where $\tau = RC$ is the time constant in an RC circuit. The current in a discharging RC circuit drops by half whenever t increases by $t_{1/2}$. For a circuit with $R=2k\Omega$ and $C=3uF$, if at t=5 ms the current is 6 mA, find the time (in ms) at which the current would be 3 mA.
Figure 1
This question aims to find the current, charge, and voltage in the RC circuit. There are multiple statements given and the task is to find the correct one.
Moreover, this question is based on the concepts of physics. In the RC circuit, the capacitor is charged when it is connected to the source. However, when the source is disconnected, the capacitor discharges through the resistor.
1) As the capacitor is initially uncharged, its voltage cannot change instantaneously. Hence, when the switch is closed, the initial current is
$i =\dfrac{V_s}{R}$
So, the statement is true.
2) At any instant the current is:
$i =\dfrac{(V_s – V_c)}{R}$
Furthermore, as the capacitor voltage rises, the current decays to $i=0$, at which point:
$V_c = V_s$
So, the statement is true.
3) When $V_s$ is connected, the voltage across a capacitor increases exponentially till it reaches a steady state. Therefore, the charge is:
$q = CV_s$
So, the statement is false.
4) The direction of the current shown in the figure proves that the charge on the capacitor is increasing.
So, the statement is false.
5) The voltage across the capacitor and the resistor is positive, therefore, Voltage IN 2 will be positive.
So, the statement is false.
6) According to Kirchhoff's voltage law, Voltage OUT 1 and Voltage IN 1 are equal.
So, the statement is false.
7) For a discharging RC circuit, the current decays exponentially:
$I(t) = I_0 \exp(-t/\tau), \quad \tau = RC$
Here the time constant is:
$\tau = 2k\Omega \times 3uF = 6ms$
so the half-life is:
$t_{1/2} = \tau \ln 2 \approx 4.16ms$
Since the current is $I=6mA$ at $t=5ms$, it falls to $3mA$ exactly one half-life later:
$t = 5ms + 4.16ms \approx 9.16ms$
## Numerical Results
The time when the current is 3mA is:
$t \approx 9.16ms$
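The half-life computation in part 7 can be sketched in a few lines, assuming the discharging-circuit decay $I(t) = I_0 e^{-t/\tau}$ stated in the problem:

```python
import math

# Half-life of an exponentially decaying RC current, assuming
# I(t) = I0 * exp(-t / tau) with tau = R*C, as the problem states
# for a discharging RC circuit.
R = 2e3                      # ohms
C = 3e-6                     # farads
tau = R * C                  # 6 ms time constant
t_half = tau * math.log(2)   # ~4.16 ms: time for the current to halve

t1 = 5e-3                    # given: current is 6 mA at t = 5 ms
t2 = t1 + t_half             # current is 3 mA one half-life later

assert abs(t2 * 1e3 - 9.16) < 0.01   # about 9.16 ms
```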
## Example
When the current through a $10k\Omega$ resistor is $5mA$, find the voltage across it.
Solution:
The voltage can be found as:
$V = IR = 5mA \times 10k\Omega$
$V = 50V$
Images/mathematical drawings are created with GeoGebra.
http://mathematica.stackexchange.com/questions/38585/subscripts-to-a-variable-why-a-letter-shoud-come-before-a-digit
|
Subscripts to a variable; why should a letter come before a digit?
I noticed that I cannot create a variable $f_{2s}$, since Mathematica seems to insert a space between the subscript characters 2 and s, meaning that $f_{2s}$ is not recognised in later calculations. But I see that I can create the variable $f_{s2}$ instead without any problem. Is there a way to create the variable $f_{2s}$ in Mathematica?
Thank you.
Maybe Subscript[f, 2, s] will fit your requirements. It puts a comma between 2 and s, as in two-variable sequences. – tchronis Dec 10 '13 at 7:47
There are many issues with working with subscripts, but leaving them aside: use Subscript[f, Row[{2, s}]], then select it and evaluate in place via the menu or Ctrl+Shift+Enter. – Kuba Dec 10 '13 at 9:47
What about for the subscript type 2 "escape" "comma" "escape" and then s? – Chrissy Dec 11 '13 at 12:00
https://chemistry.stackexchange.com/questions/74420/molar-absorptivity-of-copperii-sulfate
|
# Molar Absorptivity of Copper(II) Sulfate
What is the molar absorptivity of copper sulfate? I am trying to find the molarity of copper sulfate solution by absorption in a spectrophotometer with a cuvette.
However, I can't find $\varepsilon$, the molar absorptivity coefficient of $\ce{CuSO4}$. I searched online, and I found experimental data saying that it is $\pu{2.81 L mol-1 cm-1}$, but another source says that it is $0.91$.
I have searched on various databases including NIST's, but I couldn't find anything. Is there a definite molar absorptivity for everything, or is it all experimental?
• The value will depend on wavelength so both the values above could be correct; it would be worth checking. Also if you can, measure absorbance vs. concentration. The eqn. is $A=\epsilon [C]l$ where l is the cell path length and $\epsilon$ the extinction coefficient in $\pu{dm^2mol^{-1}cm^{-1}}$ – porphyrin May 13 '17 at 12:11
• Right, I searched up 635nm, but I forgot to make sure that it was actually that. Thanks for reminding me – abcdefg May 13 '17 at 20:08
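For reference, the Beer–Lambert computation itself is one line, $c = A/(\varepsilon l)$. The sketch below uses the quoted $\varepsilon = 2.81$ and a hypothetical absorbance reading — both the value of $\varepsilon$ (which is wavelength-dependent) and the reading are assumptions, not vetted constants:

```python
# Beer-Lambert law: A = eps * c * l, so c = A / (eps * l).
A = 0.45      # measured absorbance (hypothetical reading)
eps = 2.81    # L mol^-1 cm^-1, one of the literature values quoted above
l = 1.0       # cuvette path length in cm (typical cuvette)

c = A / (eps * l)   # molar concentration in mol/L
assert abs(c - 0.1601) < 0.001
```

In practice, measuring absorbance against several known concentrations and fitting the slope (as the comment suggests) gives a far more reliable $\varepsilon$ than any single literature value.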
https://www.quizover.com/precalculus/section/using-like-bases-to-solve-exponential-equations-by-openstax
|
# 4.6 Exponential and logarithmic equations
Page 1 / 8
In this section, you will:
• Use like bases to solve exponential equations.
• Use logarithms to solve exponential equations.
• Use the definition of a logarithm to solve logarithmic equations.
• Use the one-to-one property of logarithms to solve logarithmic equations.
• Solve applied problems involving exponential and logarithmic equations.
In 1859, an Australian landowner named Thomas Austin released 24 rabbits into the wild for hunting. Because Australia had few predators and ample food, the rabbit population exploded. In fewer than ten years, the rabbit population numbered in the millions.
Uncontrolled population growth, as in the wild rabbits in Australia, can be modeled with exponential functions. Equations resulting from those exponential functions can be solved to analyze and make predictions about exponential growth. In this section, we will learn techniques for solving exponential functions.
## Using like bases to solve exponential equations
The first technique involves two functions with like bases. Recall the one-to-one property of exponential functions: for any real numbers $S$ and $T$ and any positive real number $b \ne 1,$ ${b}^{S}={b}^{T}$ if and only if $S=T.$
In other words, when an exponential equation has the same base on each side, the exponents must be equal. This also applies when the exponents are algebraic expressions. Therefore, we can solve many exponential equations by using the rules of exponents to rewrite each side as a power with the same base. Then, we use the fact that exponential functions are one-to-one to set the exponents equal to one another, and solve for the unknown.
For example, consider the equation ${3}^{4x-7}=\frac{{3}^{2x}}{3}.$ To solve for $x,$ we use the division property of exponents to rewrite the right side so that both sides have the common base, $3.$ Then we apply the one-to-one property of exponents by setting the exponents equal to one another and solving for $x.$
## Using the one-to-one property of exponential functions to solve exponential equations
For any algebraic expressions $S$ and $T$ and any positive real number $b\ne 1,$ ${b}^{S}={b}^{T}$ if and only if $S=T.$
Given an exponential equation of the form ${b}^{S}={b}^{T},$ where $S$ and $T$ are algebraic expressions with an unknown, solve for the unknown.
1. Use the rules of exponents to simplify, if necessary, so that the resulting equation has the form ${b}^{S}={b}^{T}.$
2. Use the one-to-one property to set the exponents equal.
3. Solve the resulting equation, $S=T,$ for the unknown.
## Solving an exponential equation with a common base
Solve $\text{\hspace{0.17em}}{2}^{x-1}={2}^{2x-4}.$
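Applying the steps above to this example:

```latex
\begin{aligned}
2^{x-1} &= 2^{2x-4} && \text{The common base is } 2. \\
x-1 &= 2x-4 && \text{By the one-to-one property, set the exponents equal.} \\
x &= 3 && \text{Solve for } x.
\end{aligned}
```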
Solve $\text{\hspace{0.17em}}{5}^{2x}={5}^{3x+2}.$
$x=-2$
#### Questions & Answers
how to understand calculus?
Hey I am new to precalculus, and wanted clarification please on what sine is as I am floored by the terms in this app? I don't mean to sound stupid but I have only completed up to college algebra.
I don't know if you are looking for a deeper answer or not, but the sine of an angle in a right triangle is the length of the opposite side to the angle in question divided by the length of the hypotenuse of said triangle.
Marco
can you give me sir tips to quickly understand precalculus. Im new too in that topic. Thanks
Jenica
if you remember sine, cosine, and tangent from geometry, all the relationships are the same but they use x y and r instead (x is adjacent, y is opposite, and r is hypotenuse).
Natalie
The standard equation of the ellipse that has vertices (0,-4) & (0,4) and foci (0,-√15) & (0,√15) is x^2 + y^2/16 = 1. Tell me why is it only x^2? Why is there no a^2?
what is foci?
This term is plural for a focus, it is used for conic sections. For more detail or other math questions. I recommend researching on "Khan academy" or watching "The Organic Chemistry Tutor" YouTube channel.
Chris
how to determine the vertex,focus,directrix and axis of symmetry of the parabola by equations
I want to be sure my answer to the exercise is right
what is the diameter of (x-2)² + (y-3)² = 25?
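For the circle question above, (x-2)² + (y-3)² = 25 is a circle centered at (2, 3) with r² = 25, so the diameter follows directly (a quick sketch, variable names are my own):

```python
import math

# (x - 2)^2 + (y - 3)^2 = 25 is a circle with r^2 = 25
radius = math.sqrt(25)
diameter = 2 * radius
print(diameter)  # 10.0
```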
how to solve the Identity ?
what type of identity
Jeffrey
Cofunction Identity
Barcenas
how to solve the sums
meena
hello guys
meena
For each year t, the population of a forest of trees is represented by the function A(t) = 117(1.029)^t. In a neighboring forest, the population of the same type of tree is represented by the function B(t) = 86(1.025)^t.
by how many trees did forest "A" have a greater number?
Shakeena
32.243
Kenard
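Kenard's figure of 32.243 corresponds to t = 1 year. A quick check (a sketch; the function names follow the problem statement):

```python
def A(t):
    # population of forest A after t years
    return 117 * 1.029 ** t

def B(t):
    # population of the neighboring forest after t years
    return 86 * 1.025 ** t

# after one year, forest A is larger by about 32.243 trees
difference = A(1) - B(1)
print(round(difference, 3))  # 32.243
```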
how solve standard form of polar
what is a complex number used for?
It's just like any other number. The important thing to know is that they exist and can be used in computations like any number.
Steve
I would like to add that they are used in AC signal analysis for one thing
Scott
Good call Scott. Also radar signals I believe.
Steve
They are used in any profession where the phase of a waveform has to be accounted for in the calculations. Imagine two electrical signals in a wire that are out of phase by 90°. At some times they will interfere constructively, others destructively. Complex numbers simplify those equations
Tim
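A small illustration of Tim's point (a sketch, not from the thread): modelling two unit-amplitude signals as phasors, the magnitude of their sum depends on the phase difference.

```python
import cmath

z_ref = cmath.rect(1, 0)              # reference signal
z_quad = cmath.rect(1, cmath.pi / 2)  # 90 degrees out of phase
z_opp = cmath.rect(1, cmath.pi)       # 180 degrees out of phase

print(abs(z_ref + z_ref))   # 2.0, fully constructive
print(abs(z_ref + z_quad))  # ~1.414 (sqrt(2)), partially constructive
print(abs(z_ref + z_opp))   # ~0.0, fully destructive
```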
Is there any rule we can use to get the nth term ?
how do you get the (1.4427)^t in the carp problem?
A hedge is constructed in the shape of a hyperbola near a fountain at the center of a yard. The hedge will follow the asymptotes y = x and y = -x, and its closest distance to the center fountain is 5 yards. Find the equation of the hyperbola.
A doctor prescribes 125 milligrams of a therapeutic drug that decays by about 30% each hour. To the nearest hour, what is the half-life of the drug?
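For the last question, the dose drops by 30% each hour, so the amount after t hours is A(t) = 125(0.70)^t, and the half-life solves 0.70^t = 0.5. A quick computation (a sketch; names are my own):

```python
import math

# solve 0.70**t == 0.5 for t
half_life = math.log(0.5) / math.log(0.70)
print(half_life)         # ~1.94 hours
print(round(half_life))  # 2, to the nearest hour
```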
http://mathhelpforum.com/calculus/96969-absolutely-convergent-conditionally-convergent-divergent.html
# Thread: absolutely convergent, conditionally convergent or divergent
1. ## absolutely convergent, conditionally convergent or divergent
the question is: determine whether the series is absolutely convergent, conditionally convergent or divergent.
$\sum_{n = 1}^{\infty} (-3)^n \frac{n+1}{\exp(n)}$
I found that if I do the ratio test it diverges, but I'm not sure how to do the alternating series test. Is $a_n$ just
$\frac{n+1}{\exp(n)}$
or does it include the 3?
2. Originally Posted by acosta0809
the question is: determine whether the series is absolutely convergent, conditionally convergent or divergent.
$\sum_{n = 1}^{\infty} (-3)^n \frac{n+1}{\exp(n)}$
I found that if I do the ratio test it diverges, but I'm not sure how to do the alternating series test. Is $a_n$ just
$\frac{n+1}{\exp(n)}$
or does it include the 3?
ratio test ...
$\lim_{n \to \infty} \left| \frac{(-3)^{n+1}(n+2)}{e^{n+1}} \cdot \frac{e^n}{(-3)^n (n+1)}\right|$
$\lim_{n \to \infty} \left| \frac{(-3)(n+2)}{e(n+1)}\right|$
$\left|\frac{-3}{e}\right| \lim_{n \to \infty} \frac{n+2}{n+1}$
$\left|\frac{-3}{e}\right| \cdot 1 > 1$
series diverges
3. For each $n\ge1$ it's $\frac{3^{n}(n+1)}{e^{n}}>n+1,$ so the terms do not even tend to zero; this is far from converging absolutely.
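A quick numerical sanity check (a sketch, not part of the original thread): the ratio of consecutive term magnitudes tends to 3/e > 1, so the terms grow without bound and the series diverges.

```python
import math

def term(n):
    # |(-3)^n (n + 1) / e^n|
    return 3 ** n * (n + 1) / math.e ** n

ratio = term(100) / term(99)
print(ratio)       # ~1.1147, equals (3/e) * (101/100)
print(3 / math.e)  # ~1.1036, the limiting ratio; > 1, so the series diverges
```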
https://www.csauthors.net/yuval-peres/
# Yuval Peres
According to our database, Yuval Peres authored at least 84 papers between 1995 and 2019.
Collaborative distances:
• Dijkstra number of four.
• Erdős number of two.
## Bibliography
2019
Online learning with an almost perfect expert.
Proc. Natl. Acad. Sci. U.S.A., 2019
Sorted Top-k in Rounds.
Proceedings of the Conference on Learning Theory, 2019
Random walks on graphs: new bounds on hitting, meeting, coalescing and returning.
Proceedings of the Sixteenth Workshop on Analytic Algorithmics and Combinatorics, 2019
2018
Optimal Control for Diffusions on Graphs.
SIAM J. Discrete Math., 2018
Sensitivity of Mixing Times in Eulerian Digraphs.
SIAM J. Discrete Math., 2018
Tractable near-optimal policies for crawling.
Proc. Natl. Acad. Sci. U.S.A., 2018
Exponentially slow mixing in the mean-field Swendsen-Wang dynamics.
Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, 2018
Estimating graph parameters via random walks with restarts.
Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, 2018
Comparing mixing times on sparse random graphs.
Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, 2018
Stabilizing a System with an Unbounded Random Gain Using Only Finitely Many Bits.
Proceedings of the 2018 IEEE International Symposium on Information Theory, 2018
Testing Graph Clusterability: Algorithms and Lower Bounds.
Proceedings of the 59th IEEE Annual Symposium on Foundations of Computer Science, 2018
Subpolynomial trace reconstruction for random strings and arbitrary deletion probability.
Proceedings of the Conference On Learning Theory, 2018
Exact minimum number of bits to stabilize a linear system.
Proceedings of the 57th IEEE Conference on Decision and Control, 2018
Electrostatic Methods for Perfect Matching and Safe Path Planning.
Proceedings of the 57th IEEE Conference on Decision and Control, 2018
Trace reconstruction with varying deletion probabilities.
Proceedings of the Fifteenth Workshop on Analytic Algorithmics and Combinatorics, 2018
2017
Competing first passage percolation on random regular graphs.
Random Struct. Algorithms, 2017
Trace reconstruction with exp(O(n^{1/3})) samples.
Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, 2017
Local max-cut in smoothed polynomial time.
Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, 2017
Random Walks in Polytopes and Negative Dependence.
Proceedings of the 8th Innovations in Theoretical Computer Science Conference, 2017
Tight Lower Bounds for Multiplicative Weights Algorithmic Families.
Proceedings of the 44th International Colloquium on Automata, Languages, and Programming, 2017
Average-Case Reconstruction for the Deletion Channel: Subpolynomially Many Traces Suffice.
Proceedings of the 58th IEEE Annual Symposium on Foundations of Computer Science, 2017
Cutoff for a Stratified Random Walk on the Hypercube.
Proceedings of the Approximation, 2017
The String of Diamonds Is Tight for Rumor Spreading.
Proceedings of the Approximation, 2017
2016
Four random permutations conjugated by an adversary generate S_n with high probability.
Random Struct. Algorithms, 2016
Almost Optimal Local Graph Clustering Using Evolving Sets.
J. ACM, 2016
Towards Optimal Algorithms for Prediction with Expert Advice.
Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, 2016
A tiger by the tail: When multiplicative noise stymies control.
Proceedings of the IEEE International Symposium on Information Theory, 2016
Rate-limited control of systems with uncertain gain.
Proceedings of the 54th Annual Allerton Conference on Communication, 2016
2015
Graphical balanced allocations and the (1 + β)-choice process.
Random Struct. Algorithms, 2015
Surprise probabilities in Markov chains.
Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, 2015
Perfect Bayesian Equilibria in Repeated Sales.
Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, 2015
Characterization of cutoff for reversible Markov chains.
Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, 2015
Approval Voting and Incentives in Crowdsourcing.
Proceedings of the 32nd International Conference on Machine Learning, 2015
Bandit Convex Optimization: $$\sqrt{T}$$ Regret in One Dimension.
Proceedings of The 28th Conference on Learning Theory, 2015
2014
Escape Rates for Rotor Walks in Z^d.
SIAM J. Discrete Math., 2014
Shortest-Weight Paths in Random Regular Graphs.
SIAM J. Discrete Math., 2014
The looping constant of ℤ^d.
Random Struct. Algorithms, 2014
Anatomy of the giant component: The strictly supercritical regime.
Eur. J. Comb., 2014
Concentration of Lipschitz Functionals of Determinantal and Other Strong Rayleigh Measures.
Combinatorics, Probability & Computing, 2014
Bandits with switching costs: T^{2/3} regret.
Proceedings of the Symposium on Theory of Computing, 2014
Adversarial hypothesis testing and a quantum stein's lemma for restricted measurements.
Proceedings of the Innovations in Theoretical Computer Science, 2014
Online Learning with Composite Loss Functions.
Proceedings of The 27th Conference on Learning Theory, 2014
Permuted Random Walk Exits Typically in Linear Time.
Proceedings of the 2014 Proceedings of the Eleventh Workshop on Analytic Algorithmics and Combinatorics, 2014
2013
Selling in Exclusive Markets: Some Observations on Prior-Free Mechanism Design.
ACM Trans. Economics and Comput., 2013
Noise Tolerance of Expanders and Sublinear Expansion Reconstruction.
SIAM J. Comput., 2013
All-pairs shortest paths in O(n^2) time with high probability.
J. ACM, 2013
2012
Hitting Times for Random Walks with Restarts.
SIAM J. Discrete Math., 2012
2011
Anatomy of a young giant component in the random graph.
Random Struct. Algorithms, 2011
The Evolution of the Cover Time.
Combinatorics, Probability & Computing, 2011
Cover times, blanket times, and majorizing measures.
Proceedings of the 43rd ACM Symposium on Theory of Computing, 2011
Mobile Geometric Graphs: Detection, Coverage and Percolation.
Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, 2011
Finding Hidden Cliques in Linear Time with High Probability.
Proceedings of the Eighth Workshop on Analytic Algorithmics and Combinatorics, 2011
2010
Pólya's Theorem on Random Walks via Pólya's Urn.
The American Mathematical Monthly, 2010
Critical percolation on random regular graphs.
Random Struct. Algorithms, 2010
Diameters in Supercritical Random Graphs Via First Passage Percolation.
Combinatorics, Probability & Computing, 2010
Local Dynamics in Bargaining Networks via Random-Turn Games.
Proceedings of the Internet and Network Economics - 6th International Workshop, 2010
The (1 + beta)-Choice Process and Weighted Balls-into-Bins.
Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, 2010
All-Pairs Shortest Paths in O(n^2) Time with High Probability.
Proceedings of the 51th Annual IEEE Symposium on Foundations of Computer Science, 2010
2009
Finding sparse cuts locally using evolving sets.
Proceedings of the 41st Annual ACM Symposium on Theory of Computing, 2009
The unreasonable effectiveness of martingales.
Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, 2009
Convergence of Local Dynamics to Balanced Outcomes in Exchange Networks.
Proceedings of the 50th Annual IEEE Symposium on Foundations of Computer Science, 2009
The Glauber Dynamics for Colourings of Bounded Degree Trees.
Proceedings of the Approximation, 2009
Zeros of Gaussian Analytic Functions and Determinantal Point Processes.
University Lecture Series 51, American Mathematical Society, ISBN: 978-0-8218-4373-4, 2009
2008
Maximum overhang.
Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms, 2008
Noise Tolerance of Expanders and Sublinear Expander Reconstruction.
Proceedings of the 49th Annual IEEE Symposium on Foundations of Computer Science, 2008
A Birthday Paradox for Markov Chains, with an Optimal Bound for Collision in the Pollard Rho Algorithm for Discrete Logarithm.
Proceedings of the Algorithmic Number Theory, 8th International Symposium, 2008
2007
Random-Turn Hex and Other Selection Games.
The American Mathematical Monthly, 2007
Mixing Time Power Laws at Criticality.
Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2007), 2007
2006
Bootstrap Percolation on Infinite Trees and Non-Amenable Groups.
Combinatorics, Probability & Computing, 2006
Trees and Markov convexity.
Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms, 2006
2005
New Coins From Old: Computing With Unknown Bias.
Combinatorica, 2005
2004
Identifying several biased coins encountered by a hidden random walk.
Random Struct. Algorithms, 2004
Shuffling by Semi-Random Transpositions.
Proceedings of the 45th Symposium on Foundations of Computer Science (FOCS 2004), 2004
2003
Fractals with Positive Length and Zero Buffon Needle Probability.
The American Mathematical Monthly, 2003
Evolving sets and mixing.
Proceedings of the 35th Annual ACM Symposium on Theory of Computing, 2003
The threshold for random k-SAT is 2^k ln 2 - O(k).
Proceedings of the 35th Annual ACM Symposium on Theory of Computing, 2003
On the Maximum Satisfiability of Random Formulas.
Proceedings of the 44th Symposium on Foundations of Computer Science (FOCS 2003), 2003
The Speed of Simple Random Walk and Anchored Expansion on Percolation Clusters: an Overview.
Proceedings of the Discrete Random Walks, 2003
2002
Decayed MCMC Filtering.
Proceedings of the UAI '02, 2002
2001
Glauber Dynamics on Trees and Hyperbolic Graphs.
Proceedings of the 42nd Annual Symposium on Foundations of Computer Science, 2001
2000
Percolation in a dependent random environment.
Random Struct. Algorithms, 2000
1999
Resistance Bounds for First-Passage Percolation and Maximum Flow.
J. Comb. Theory, Ser. A, 1999
1997
Self-Affine Carpets on the Square Lattice.
Combinatorics, Probability & Computing, 1997
1995
Fractional Products of Sets.
Random Struct. Algorithms, 1995
https://codereview.stackexchange.com/questions/205404/password-encrypting-tool
I wrote a tool that takes a domain name and a password, concatenates them, hashes the result with SHA-512, and returns the result encoded in base64, truncated to 32 characters. The main idea behind this is to prevent an abusive server owner who stores my password in plain text from using that password on my other accounts, and to limit the damage if the developer forgot to remove the password from logs and a hacker got into them.
A lot of my code was already written and used in another application. The base64 encoder was taken from https://github.com/superwills/NibbleAndAHalf and slightly modified. makeSyscallError used to throw a std::system_error but to keep things simple I just made it call std::perror and exit in this code.
My main question is, is this a secure way to protect my password? I know I should really be keeping a database of randomly generated passwords and encrypt it with my master password, but I don't want to lose that database and I don't want to put the encrypted database in the cloud.
here's mpass.cpp:
#include <iostream>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <signal.h>
#include <termios.h>
#include <unistd.h>
#include "hash.hpp"
#include "base64.h"
struct termios newTerm, oldTerm;
void setEcho(bool echo, bool icanon){
if(echo) newTerm.c_lflag |= ECHO;
else newTerm.c_lflag &= ~ECHO;
if(icanon) newTerm.c_lflag |= ICANON;
else newTerm.c_lflag &= ~ICANON;
if(tcsetattr(0, TCSANOW, &newTerm) == -1){
std::perror("tcsetattr");
std::exit(1);
}
}
//I hate it so much when my program crashes on a signal and my terminal gets screwed up, so
void cleanUp(int sn, siginfo_t *info, void *ctx){
tcsetattr(0, TCSANOW, &oldTerm);
if(sn != SIGINT) psiginfo(info, NULL);
signal(sn, SIG_DFL);
raise(sn);
std::exit(-1); //in case raise for some reason didn't kill us
}
//A list of signals to clean up on
const int signalsToCatch[] = {
SIGINT, SIGQUIT, SIGILL, SIGABRT, SIGFPE, SIGSEGV, SIGPIPE, SIGALRM,
SIGTERM, SIGUSR1, SIGUSR2, SIGBUS, SIGIO, SIGPROF, SIGSYS, SIGTRAP,
SIGVTALRM, SIGXCPU, SIGXFSZ, SIGPWR, 0
};
/*This struct is defined in an external C file as
struct sigaction sa = {
.sa_flags = SA_SIGINFO
};
Now sa is already filled with 0s, except for sa_flags. I wish C++ included this feature.
*/
extern struct sigaction sa;
int main(){
if(tcgetattr(0, &oldTerm) == -1){
std::perror("tcgetattr");
return 1;
}
std::memcpy(&newTerm, &oldTerm, sizeof(struct termios));
sa.sa_sigaction = cleanUp;
for(int i = 0; signalsToCatch[i] != 0; ++i){
if(sigaction(signalsToCatch[i], &sa, NULL) == -1){
std::perror("sigaction");
return 1;
}
}
std::string dom, pass;
(std::cout << "Enter domain: ").flush();
std::getline(std::cin, dom);
setEcho(false, true);
(std::cout << "Enter password: ").flush();
std::getline(std::cin, pass);
setEcho(true, true);
(std::cout << "\nPassword for " << dom << ": ").flush();
dom += pass;
char buf[64];
sha512sum(dom.c_str(), dom.length(), buf);
int useless;
char *ret = base64(buf, 64, &useless);
if(ret == NULL){ //Almost forgot to include this so if someone posts about this while I make this edit, don't look at them like they're stupid.
perror("malloc");
return 1;
}
ret[32] = '\n'; //I'm just gonna put the newline here
if(write(1, ret, 33) == -1){
std::perror("write");
return 1;
}
//I could std::free(ret), but it will get freed anyway by the program exit.
return 0;
}
//Must be defined for hash.cpp, but I wont be catching exceptions for sha512sum
//I don't want to edit hash.cpp either, as it is the same file used in another application
void makeSyscallError(const char *what){
std::perror(what);
std::exit(1);
}
Here's hash.hpp:
#ifndef HASH_HPP
#define HASH_HPP
#include <stddef.h>
//Args: input buffer, input buffer length, output buffer (output buffer must always be 64 bytes or more)
void sha512sum(const void *, size_t, void *);
#endif
Here's hash.cpp:
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/if_alg.h>
#include <cerrno>
#include <cstring>
//I won't show misc.hpp, it's just a definition for makeSyscallError(const char *msg);
#include "misc.hpp"
static int cryptoFd = -1;
extern "C"{
/*This is also defined in the C file:
struct sockaddr_alg sa = {
.salg_family = AF_ALG,
.salg_type = "hash",
.salg_name = "sha512"
};
*/
extern struct sockaddr_alg sa;
}
//This function checks if cryptoFd is equal to -1, and if it is, it will create it
static void checkCryptoFd(){
if(cryptoFd != -1) return;
int bindFd = socket(AF_ALG, SOCK_SEQPACKET, 0);
if(bindFd == -1)
makeSyscallError("Failed to create AF_ALG socket");
if(bind(bindFd, (struct sockaddr *)&sa, sizeof sa) == -1){
close(bindFd);
makeSyscallError("Failed to bind AF_ALG socket");
}
cryptoFd = accept(bindFd, 0, 0);
close(bindFd);
if(cryptoFd == -1)
makeSyscallError("Failed to create sha512 socket");
}
//Now, I am using linux AF_ALG not for speed (I believe this usage of it would actually be slower due to syscall overhead),
//but simply because it's there and it's the only interface I actually learned how to use. I'm not looking at portability in any way, and if I were, I'd rewrite this as a browser extension.
void sha512sum(const void *toHash, size_t len, void *result){
checkCryptoFd();
for(;;){
if(len < 128){ //Last 128 bytes to write
if(write(cryptoFd, toHash, len) == -1)
makeSyscallError("(odd) Failed to write to sha512 socket");
if(read(cryptoFd, result, 64) == -1) //Get result
makeSyscallError("(odd) Failed to read from sha512 socket");
return; //All done!
}
if(send(cryptoFd, toHash, 128, MSG_MORE)){
makeSyscallError("(odd) Failed to write to sha512 socket");
}
toHash += 128;
len -= 128;
}
}
base64.c:
#include <stdlib.h>
const static char* b64="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/" ;
// Converts binary data of length=len to base64 characters.
// Length of the resultant string is stored in flen
// (you must pass pointer flen).
//I did modify this
char* base64( const void* binaryData, int len, int *flen )
{
const unsigned char* bin = (const unsigned char*) binaryData ;
char* res ;
int rc = 0 ; // result counter
int byteNo ; // I need this after the loop
int modulusLen = len % 3 ;
int pad = ((modulusLen&1)<<1) + ((modulusLen&2)>>1) ; // 2 gives 1 and 1 gives 2, but 0 gives 0.
*flen = 4*(len + pad)/3 ;
res = (char*) malloc( *flen + 1 ) ; // and one for the null
/* if( !res )
{
puts( "ERROR: base64 could not allocate enough memory." ) ;
puts( "I must stop because I could not get enough" ) ;
return 0;
}*/
if(!res) return NULL; //Much better
for( byteNo = 0 ; byteNo <= len-3 ; byteNo+=3 )
{
unsigned char BYTE0=bin[byteNo];
unsigned char BYTE1=bin[byteNo+1];
unsigned char BYTE2=bin[byteNo+2];
res[rc++] = b64[ BYTE0 >> 2 ] ;
res[rc++] = b64[ ((0x3&BYTE0)<<4) + (BYTE1 >> 4) ] ;
res[rc++] = b64[ ((0x0f&BYTE1)<<2) + (BYTE2>>6) ] ;
res[rc++] = b64[ 0x3f&BYTE2 ] ;
}
if( pad == 2 )
{
res[rc++] = b64[ bin[byteNo] >> 2 ] ;
res[rc++] = b64[ (0x3&bin[byteNo])<<4 ] ;
res[rc++] = '=';
res[rc++] = '=';
}
else if( pad == 1 )
{
res[rc++] = b64[ bin[byteNo] >> 2 ] ;
res[rc++] = b64[ ((0x3&bin[byteNo])<<4) + (bin[byteNo+1] >> 4) ] ;
res[rc++] = b64[ (0x0f&bin[byteNo+1])<<2 ] ;
res[rc++] = '=';
}
res[rc]=0; // NULL TERMINATOR! ;)
return res ;
}
//I removed the decoder
The base64 encoder came as a header library. I moved it to a .c file and added this .h file (base64.h):
#ifndef BASE64_H
#define BASE64_H
#ifdef __cplusplus
#define MYEXTERN extern "C"
#else
#define MYEXTERN
#endif
//To encode, length, returned length
MYEXTERN char *base64(const void *, int, int *);
#undef MYEXTERN
#endif
I'm sure many will ask, here's the original definition of makeSyscallError(const char *):
void makeSyscallError(const char *what){
throw std::system_error(std::make_error_code(std::errc(errno)), what);
}
Edit: forgot to mention, this code is mainly for my personal use. I will rewrite it and release it if things look good.
Looks pretty good. I have no experience with AF_ALG sockets, so can't comment on the usage there (but it's a relief not to have to review home-implemented crypto, so kudos for avoiding that trap!)
Most of my suggestions are somewhat style orientated, so don't feel that there are any "must do" actions here.
This pattern is unusual:
(std::cout << string).flush();
While there's nothing functionally incorrect, most C++ authors would include <ostream> and then write that more poetically:
std::cout << string << std::flush;
The last flush (before printing the retrieved password) isn't needed.
When using std::memcpy it's easier to see that the size argument is correct if you use sizeof expression rather than sizeof (type):
std::memcpy(&newTerm, &oldTerm, sizeof newTerm);
Assuming that base64.c is C code rather than C++, then the return from malloc() shouldn't be cast, nor should binaryData when it's assigned to bin. And don't use all-caps for variables - convention is that they should be used for macros, to alert the reader to take special care.
The MYEXTERN macro is another questionable style point. Convention says to just wrap the header in an extern "C" block, which is no more code:
#ifdef __cplusplus
extern "C" {
#endif
/* definitions */
#ifdef __cplusplus
}
#endif
In sha512sum(), there's a special case for the last iteration of the loop - can it be reorganised so that it just comes after the loop? Something like
for (; len >= 128; len -= 128, toHash += 128) {
if(send(cryptoFd, toHash, 128, MSG_MORE)){
makeSyscallError("(odd) Failed to write to sha512 socket");
}
}
if (len > 0) { //Last few bytes to write
if(write(cryptoFd, toHash, len) == -1)
makeSyscallError("(odd) Failed to write to sha512 socket");
}
if (read(cryptoFd, result, 64) == -1) //Get result
makeSyscallError("(odd) Failed to read from sha512 socket");
return; //All done!
If hash.hpp is a C++ header, then prefer to include <cstddef> to define std::size_t.
std::perror("malloc") might not do what you expect - malloc() doesn't set errno on failure. You might be able to test allocation failures by using ulimit to reduce the virtual memory available to the process (it will take some trial and error), or you might be able to find a debugging malloc() that can be primed to fail at the right point.
The code is a little inconsistent - in some places, we have if (!value) and others we explicitly if (value == nullptr). It's easier reading if we choose one style and stick with it.
It might be a good idea to free the allocated memory - that lets you run the code under Valgrind without having to filter false positives of memory still in use.
That's all for now; I might be able to return to this later.
• Thank you! So, if I were to remove the flush for "\nPassword for " ..., wouldn't that cause the output to be printed after the write? Write is the direct system call for output, which means that at exit I would see Enter password: (hash)\n\nPassword for (domain name): (bash prompt) – user233009 Oct 11 '18 at 22:00
• I did also find something wrong with my code: send can either return -1 (on error) or the number of bytes written. if(someSyscall(args)){/*handle error*/} only works in cases where someSyscall can only return 0 on success. I apparently never had input to sha512sum longer than 128 bytes so I tried writing a very long domain name and password to it and sure enough I got Failed to ...: Success. Thank you for pointing out the if(!value) thing. – user233009 Oct 11 '18 at 22:14
http://www.entsendung.pl/4uufs/kth-row-of-pascal%27s-triangle-9bc0b9
|
# Kth Row of Pascal's Triangle

Given a non-negative index k, where k ≤ 33, return the kth row of Pascal's triangle. Note that k is 0-based: k = 0 corresponds to the row [1], and for k = 3 the answer is [1, 3, 3, 1]. Follow-up: could you optimize your algorithm to use only O(k) extra space?

Pascal's triangle is a triangular array of the binomial coefficients, which arise in probability theory, combinatorics, and algebra. Each number is the sum of the two numbers directly above it, and row n is the set of coefficients in the binomial expansion of (1 + x)^n; for example, (x + y)² = x² + 2xy + y², whose coefficients 1, 2, 1 are row two. By convention, both row numbers and column numbers start with 0, so the apex of the triangle is row 0 and the first number in each row is column 0. The triangle is named after Blaise Pascal, born at Clermont-Ferrand in the Auvergne region of France in 1623, who wrote the Treatise on the Arithmetical Triangle in 1653, although mathematicians in Persia and China had independently discovered the triangle centuries earlier.

One simple method is to generate the triangle row by row down to row k and return the last row, but we do not need to calculate all k rows. Consecutive entries within a single row satisfy the recurrence

r(j) = r(j - 1) * (n + 1 - j) / j,

where r(j) is the jth element of row n and r(0) = 1, so the kth row can be built directly in O(k) time and O(k) space. For example, row 7 is 1, 7, 21, 35, 35, 21, 7, 1: index 0 = 1, index 1 = 7/1 = 7, index 2 = 7*6/(1*2) = 21, index 3 = 7*6*5/(1*2*3) = 35, and so on.

A few patterns worth noticing: the entries of the kth row sum to 2^k (provable by induction); reading row n as digits gives 11^n, so row 4 (1, 4, 6, 4, 1) gives 14641 = 11^4, and from row 5 onward the two-digit entries must be carried, so row 5 (1, 5, 10, 10, 5, 1) gives 161051 = 11^5; and the "hockey stick" pattern: start with any number on the edge and proceed down the diagonal, and the running sum of those entries appears just below the end of the diagonal, for example landing on the number 35 in the 8th row.
|
2021-03-05 13:59:26
|
|
https://chemistry.stackexchange.com/questions/51168/hybridization-of-sulfur-in-sulfur-dioxide
|
# Hybridization of sulfur in sulfur dioxide
One of the canonical structures for sulfur dioxide - $\ce{SO2}$ - has sulfur (with a lone electron pair) double bonded to each oxygen atom to form a total of 4 bonds for sulfur - which can be achieved via valence expansion into empty d-orbitals.
What then is the hybridization of the valence-expanded sulfur? It is described as sp². But how can that be? This seems unlikely because d-orbitals are involved since the sulfur underwent valence expansion.
One might imagine a pair of electrons from the 3s/3p orbital(s) being promoted to an empty d-orbital, and the 3s and 3p orbitals then hybridizing into sp². If this were true, it would mean that the lone electron pair of the valence-expanded sulfur consists of 2 electrons occupying an unhybridized d-orbital. But is this correct?
• The role of d orbitals in sulfur bonding is a controversial issue. It seems to me that the question is not whether the d orbital helps, but whether its role should be regarded as that of a polarization function. – Rodriguez May 14 '16 at 5:56
The structure of sulfur dioxide ($\ce{SO_2}$) is quite complicated.
This photo from this website explains it quite well:
As seen, all the atoms have $\ce{sp^2}$ hybridization.
I'll only focus on the central sulfur atom.
• Two $\ce{sp^2}$ orbitals form $\ce{\sigma}$-bonds with the two oxygens.
• The other $\ce{sp^2}$ orbital is where the lone pair lives in.
Now we have dealt with 4 of sulfur's 6 valence electrons, and only have 2 electrons left to deal with.
The 2 remaining electrons actually live in the unhybridized $\ce{p}$ orbital.
At this point, the two oxygen atoms will also have 2 electrons left to pair with the sulfur atom.
• The two oxygen atoms get to keep their own electron.
• Sulfur shares two electrons among itself and the other two oxygen atoms.
Thus, no electron lives in the $\ce{d}$ orbital.
• $\color{Red}{\mbox{red}}$ represents number of electrons (I am too lazy to draw the fish-hooks).
• $\color{Green}{\mbox{green}}$ represents $\ce{sp^2}$ orbital.
• $\color{blue}{\mbox{Blue}}$ represents $\ce{p}$ orbital.
Note that the two $\ce{sp^2}$ orbitals between sulfur and the two oxygen atoms are in $\ce{\sigma}$-bond.
• We have been taught that $SO_2$ had one $d\pi - p\pi$ bond and one $p\pi - p\pi$ bond. So is this wrong? – samjoe Jul 16 '17 at 13:50
• @samjoe $d$ orbital is hardly used, even in $\ce{SO3}$. – Kenny Lau Jul 16 '17 at 16:09
The canonical structure for sulphur dioxide nowadays has charge separation, one oxygen bonded to sulphur in a single bond, the other in a double bond. But not too long ago the canonical structure was indeed what you proposed with one pp-π bond and one dp-π bond.
This was generally explained by one of sulphur's 3d-orbitals taking part in hybridisation, giving rise to an 'sp²d' state. Going by Kenny's scheme, though, sulphur's 3d-orbitals are a good step above the 3p orbitals in energy; technically, the 4s orbital should almost come before them. So this hybridisation is highly unlikely.
|
2020-01-20 01:25:27
|
|
https://liusson.com/homework-solution-can-someone-help-with-number-8/
|
# Homework Solution: Can someone help with number 8?…
Can someone help with number 8?
For each assertion in 7(a)-(c), prove the assertion directly from the definition of the big-O asymptotic notation if it is true by finding values for the constants $c$ and $n_0$. On the other hand, if the assertion is false, give a counter-example. Then answer the question in 7(d). $F$ denotes the set of all functions from $\mathbb{Z}^+$ to $\mathbb{R}^+$.

(a) Let $f(n): \mathbb{Z}^+ \rightarrow \mathbb{R}^+$. A relation on a set is reflexive if each element is related to itself. The relation "is big-O of" is reflexive over $F$; in other words, $f(n) \in O(f(n))$.

(b) Let $f(n): \mathbb{Z}^+ \rightarrow \mathbb{R}^+$ and $g(n): \mathbb{Z}^+ \rightarrow \mathbb{R}^+$. A relation on a set is antisymmetric if whenever an element $X$ is related to an element $Y$ and $Y$ is related to $X$, then $X = Y$. The relation "is big-O of" is antisymmetric over $F$; in other words, if $f(n) \in O(g(n))$ and $g(n) \in O(f(n))$, then $f(n) = g(n)$.

(c) Let $e(n)$, $f(n)$ and $g(n)$ be functions from $\mathbb{Z}^+$ to $\mathbb{R}^+$. A relation on a set is transitive if whenever an element $X$ is related to $Y$ and $Y$ is related to $Z$, then $X$ is related to $Z$. The relation "is big-O of" is transitive over $F$; in other words, if $e(n) \in O(f(n))$ and $f(n) \in O(g(n))$, then $e(n) \in O(g(n))$.

(d) Is "is big-O of" a partial order on $F$? A relation is a partial order on a set if it is reflexive, antisymmetric and transitive.

8. Given an infinite series $s = \sum_{n=1}^{\infty} f(n)$, where $f(n)$ is a continuous, positive, monotonically decreasing function that converges and $n \in \mathbb{Z}^+$, $s$ can be bounded using improper integrals as follows:

$$\sum_{n=1}^{k} f(n) + \int_{k+1}^{\infty} f(n)\,dn \;\le\; s \;\le\; \sum_{n=1}^{k} f(n) + \int_{k}^{\infty} f(n)\,dn, \quad k \in \mathbb{Z}^+ \tag{1}$$

Using the inequality in (1) and $k = 5$, prove that $\sum_{n=1}^{\infty} \frac{1}{n^3} \in \Theta(1)$.
## Expert Answer

$\int_{5}^{\infty} \frac{1}{n^3}\,dn$ converges to a constant: $\frac{-1}{2n^2}$ is an antiderivative of $\frac{1}{n^3}$, so plugging in the limits gives $\int_{5}^{\infty} \frac{1}{n^3}\,dn = \frac{1}{50}$ and, likewise, $\int_{6}^{\infty} \frac{1}{n^3}\,dn = \frac{1}{72}$. The finite sum $\sum_{n=1}^{5} \frac{1}{n^3}$ is a constant as well.

So, applying inequality (1) with $k = 5$,

$$\sum_{n=1}^{5} \frac{1}{n^3} + \frac{1}{72} \;\le\; s \;\le\; \sum_{n=1}^{5} \frac{1}{n^3} + \frac{1}{50},$$

which bounds $s$ above and below by positive constants. Hence

$$s = \text{constant} = \Theta(1).$$
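As a numerical sanity check (a sketch, not part of the original answer), one can compare a large partial sum of the series against the two constant bounds obtained from inequality (1) with k = 5:

```python
# Bounds from inequality (1) with k = 5:
s5 = sum(1 / n**3 for n in range(1, 6))  # finite partial sum, a constant
lower = s5 + 1 / 72                      # + integral of 1/n^3 from 6 to infinity
upper = s5 + 1 / 50                      # + integral of 1/n^3 from 5 to infinity

# Approximate the full series s with a large partial sum
# (its true value is Apery's constant, zeta(3) ≈ 1.2020569).
approx = sum(1 / n**3 for n in range(1, 100_001))

print(lower <= approx <= upper)  # True
```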
|
2020-09-23 19:57:40
|
|
https://gmatclub.com/forum/jane-has-to-paint-a-cylindrical-column-that-is-14-feet-high-and-that-201042.html
|
# Jane has to paint a cylindrical column that is 14 feet high and that
Math Expert
Joined: 02 Sep 2009
Posts: 47898
Jane has to paint a cylindrical column that is 14 feet high and that [#permalink]
### Show Tags

02 Jul 2015, 01:39

Difficulty: 45% (medium)
Question Stats: 66% (01:08) correct, 34% (01:18) wrong, based on 116 sessions
Jane has to paint a cylindrical column that is 14 feet high and that has a circular base with a radius of 3 feet. If one bucket of paint will cover $$10\pi$$ square feet, how many full buckets does Jane need to buy in order to paint the column, including the top and bottom?
A. 9
B. 10
C. 11
D. 12
E. 13
Kudos for a correct solution.
Manager
Status: Perspiring
Joined: 15 Feb 2012
Posts: 106
Concentration: Marketing, Strategy
GPA: 3.6
WE: Engineering (Computer Software)
Jane has to paint a cylindrical column that is 14 feet high and that [#permalink]
02 Jul 2015, 03:14
3
Concept: total surface area of a cylinder (2 bases + curved surface) = 2πrh + 2πr² --------- (1)
Using (1),
Total surface area to be painted = 2π*3*14 + 2π*3*3 = 84π + 18π = 102π -------------- (2)
We need 1 bucket to paint 10π square feet ------------------ (3)
From (2) & (3),
to paint 102π we need 102π/10π = 10.2 buckets.
Thus we need to buy 11 full paint buckets in all!!
Hence C
SVP
Joined: 08 Jul 2010
Posts: 2132
Location: India
GMAT: INSIGHT
WE: Education (Education)
Re: Jane has to paint a cylindrical column that is 14 feet high and that [#permalink]
02 Jul 2015, 03:36
Bunuel wrote:
Jane has to paint a cylindrical column that is 14 feet high and that has a circular base with a radius of 3 feet. If one bucket of paint will cover $$10\pi$$ square feet, how many full buckets does Jane need to buy in order to paint the column, including the top and bottom?
A. 9
B. 10
C. 11
D. 12
E. 13
Kudos for a correct solution.
The surface area to be painted = 2πr² + 2πrh = 2πr(r + h) = 2π*3*(3 + 14) = 102π
Buckets of paint needed = total area to be painted / area painted by one bucket = 102π / 10π = 10.2, i.e. 11 buckets (because 10 buckets are insufficient, we need 11)
Manager
Joined: 18 Mar 2014
Posts: 231
Location: India
Concentration: Operations, Strategy
GMAT 1: 670 Q48 V35
GPA: 3.19
WE: Information Technology (Computer Software)
Re: Jane has to paint a cylindrical column that is 14 feet high and that [#permalink]
03 Jul 2015, 09:05
Surface area = 2πr(r + h) = 2π*3*17 = 102π
Buckets = 102π / 10π = 10.2
Hence we need 11 buckets ...
Senior Manager
Joined: 27 Jul 2014
Posts: 300
Schools: ISB '15
GMAT 1: 660 Q49 V30
GPA: 3.76
Re: Jane has to paint a cylindrical column that is 14 feet high and that [#permalink]
04 Jul 2015, 12:31
C is correct.
Surface area = 2πr(r + h) = 102π
Buckets = 102π / 10π = 10.2,
which rounds up to 11.
Math Expert
Joined: 02 Sep 2009
Posts: 47898
Re: Jane has to paint a cylindrical column that is 14 feet high and that [#permalink]
06 Jul 2015, 07:06
Bunuel wrote:
Jane has to paint a cylindrical column that is 14 feet high and that has a circular base with a radius of 3 feet. If one bucket of paint will cover $$10\pi$$ square feet, how many full buckets does Jane need to buy in order to paint the column, including the top and bottom?
A. 9
B. 10
C. 11
D. 12
E. 13
Kudos for a correct solution.
MANHATTAN GMAT OFFICIAL SOLUTION:
The surface area of a cylinder is the area of the circular top and bottom, plus the area of its wrapped-around rectangular third face.
Top & Bottom: $$A = \pi r^2 = 9\pi$$
Rectangle: $$A = 2\pi r * h = 84\pi$$
The total surface area, then, is $$9\pi + 9\pi + 84\pi = 102 \pi$$ ft^2. If one bucket of paint will cover $$10\pi$$ ft^2, then Jane will need 10.2 buckets to paint the entire column. Since paint stores do not sell fractional buckets, she will need to purchase 11 buckets.
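The computation above can be checked with a short script (a sketch; the function name and round-up helper are my own, not from the original posts):

```python
import math

def buckets_needed(radius, height, coverage_per_bucket):
    """Full paint buckets needed to cover a closed cylinder's surface."""
    # Surface area: two circular ends plus the wrapped-around rectangle.
    area = 2 * math.pi * radius ** 2 + 2 * math.pi * radius * height
    # Fractional buckets cannot be bought, so round up.
    return math.ceil(area / coverage_per_bucket)

# Jane's column: r = 3 ft, h = 14 ft, one bucket covers 10*pi sq ft.
print(buckets_needed(3, 14, 10 * math.pi))  # -> 11 (answer C)
```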
_________________
Non-Human User
Joined: 09 Sep 2013
Posts: 7695
Re: Jane has to paint a cylindrical column that is 14 feet high and that [#permalink]
14 Sep 2017, 06:37
Hello from the GMAT Club BumpBot!
Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos).
Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email.
_________________
https://math.stackexchange.com/questions/1264476/solving-fracxyxy-2-fracx-yxy-6/1264486
# Solving $\frac{x+y}{xy}=2$, $\frac{x-y}{xy}=6$
$$\frac{x+y}{xy}=2,\ \ \frac{x-y}{xy}=6$$
I am not understanding how to solve the system. I tried dividing the whole equation by $xy$, but that didn't work either. Any hint or help would be much appreciated.
• Divide the 1st equation by the 2nd, then apply componendo and dividendo. Can you take it from here? May 3, 2015 at 12:07
Note that
$$\frac 1y+\frac 1x=2\tag 1$$ $$\frac 1y-\frac 1x=6\tag 2$$ Now $(1)+(2)$ gives you $$\frac 2y=8\Rightarrow y=\frac 14.$$
We have a system of equations:
$$\frac{1}{x}+\frac{1}{y}=2$$ $$\frac{1}{y}-\frac{1}{x}=6$$
From equation 1 we have,
$$y=\frac{x}{2x-1}$$
putting it in equation 2 we have,
$$x=-\frac{1}{2}$$
Putting $x=-\frac{1}{2}$ in $y=\frac{x}{2x-1}$, we get $y$ in terms of a compound fraction,
$$y=\frac{-\frac{1}{2}}{-\frac{2}{2}-1}$$
$$\implies y=\frac{1}{4}$$
• You can verify them by substituting into the original equations and seeing whether they satisfy the system. I have checked already, so I will state the solution set: $$S=\left\{\left(-\tfrac{1}{2}, \tfrac{1}{4}\right)\right\}$$ May 3, 2015 at 12:15
• $x+y=2xy$
• $x-y=6xy$
so:
$x-y=6xy=3*2xy= 3(x+y)$
so
$2x=-4y$
and
$x=-2y$
By replacing above equation in the first equation, we have:
$-2y+y=2*(-2y)*y$
so
$y=4y^2$
and finally:
$y=1/4$ and $x=-1/2$
$$\begin{cases}\frac{x+y}{xy}=2\\ \frac{x-y}{xy}=6\end{cases}\iff x=-\frac{1}{2},\ y=\frac{1}{4}$$
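As a quick sanity check, the solution can be verified with exact rational arithmetic (a sketch using Python's `fractions` module):

```python
from fractions import Fraction

x = Fraction(-1, 2)
y = Fraction(1, 4)

# Substitute back into both original equations.
assert (x + y) / (x * y) == 2  # (x+y)/(xy) = 2
assert (x - y) / (x * y) == 6  # (x-y)/(xy) = 6
print("(x, y) =", (x, y), "satisfies the system")
```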
https://math.emory.edu/events/seminars/all/index.php?PAGE=10
# All Seminars
Title: Geometric and Statistical Approaches to Shallow and Deep Clustering
Seminar: Numerical Analysis and Scientific Computing
Speaker: James M. Murphy of Tufts University
Contact: Elizabeth Newman, elizabeth.newman@emory.edu
Date: 2021-11-05 at 12:30PM
Venue: MSC W201
Abstract:
We propose approaches to unsupervised clustering based on data-dependent distances and dictionary learning. By considering metrics derived from data-driven graphs, robustness to noise and ambient dimensionality is achieved. Connections to geometric analysis, stochastic processes, and deep learning are emphasized. The proposed algorithms enjoy theoretical performance guarantees on flexible data models and in some cases guarantees ensuring quasilinear scaling in the number of data points. Applications to image processing and computational chemistry will be shown, demonstrating state-of-the-art empirical performance.
Title: Hamiltonian cycles in uniformly dense hypergraphs
Seminar: Combinatorics
Speaker: Mathias Schacht of The University of Hamburg and Yale University
Contact: Dwight Duffus, dwightduffus@emory.edu
Date: 2021-11-05 at 3:00PM
Venue: MSC E408
Abstract:
We are studying the minimum density d such that every uniformly dense hypergraph with density bigger than d, combined with a mild minimum degree restriction, contains a Hamiltonian cycle.
Title: Some Galois cohomology classes arising from the fundamental group of a curve
Seminar: Number Theory
Speaker: Padmavathi Srinivasan of University of Georgia
Contact: David Zureick-Brown, dzureic@emory.edu
Date: 2021-11-02 at 4:00PM
Venue: MSC W301
Abstract:
We will first talk about the Ceresa class, which is the image under a cycle class map of a canonical algebraic cycle associated to a curve in its Jacobian. This class vanishes for all hyperelliptic curves and was expected to be nonvanishing for non-hyperelliptic curves. In joint work with Dean Bisogno, Wanlin Li and Daniel Litt, we construct a non-hyperelliptic genus 3 quotient of the Fricke–Macbeath curve with vanishing Ceresa class, using the character theory of the automorphism group of the curve, namely, $\mathrm{PSL}_2(\mathbf{F}_8)$. This will also include the tale of another genus 3 curve by Schoen that was lost and then found again!

Time permitting, we will also talk about some Galois cohomology classes that obstruct the existence of rational points on curves, by obstructing splittings to natural exact sequences coming from the fundamental group of a curve. In joint work with Wanlin Li, Daniel Litt and Nick Salter, we use these obstruction classes to give a new proof of Grothendieck’s section conjecture for the generic curve of genus $g > 2$. An analysis of the degeneration of these classes at the boundary of the moduli space of curves, combined with a specialization argument, lets us prove the existence of infinitely many curves of each genus over $p$-adic fields and number fields that satisfy the section conjecture.
Title: A Multilevel Subgraph Preconditioner for Linear Equations in Graph Laplacians
Seminar: Numerical Analysis and Scientific Computing
Speaker: Junyuan Lin of Loyola Marymount University
Contact: Elizabeth Newman, elizabeth.newman@emory.edu
Date: 2021-10-29 at 12:30PM
Venue: https://emory.zoom.us/j/94914933211
Abstract:
We propose a Multilevel Subgraph Preconditioner (MSP) to efficiently solve linear equations in graph Laplacians corresponding to general weighted graphs. The MSP preconditioner combines the ideas of expanded multilevel structure from multigrid (MG) methods and spanning subgraph preconditioners (SSP) from Computational Graph Theory. To start, we expand the original graph based on a multilevel structure to obtain an equivalent expanded graph. Although the expanded graph has a low diameter, which is a favorable property for the SSP, it has negatively weighted edges, which is an unfavorable property for the SSP. We design an algorithm to properly eliminate the negatively weighted edges and theoretically show that the resulting subgraph with positively weighted edges is spectrally equivalent to the expanded graph. Then, we adopt algorithms to find SSP, such as augmented low stretch spanning trees, for the positively weighted expanded graph and, therefore, provide an MSP for solving the original graph Laplacian. MSP is practical to find thanks to the multilevel property and has provable theoretical convergence bounds based on the support theory for preconditioning graphs.
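The MSP construction itself is too involved for a snippet, but the problem it targets — a linear system in a weighted graph Laplacian — is easy to illustrate. The sketch below is not the MSP algorithm: the edge weights and the "grounding" trick (fixing one vertex to make the system nonsingular) are illustrative assumptions.

```python
import numpy as np

# Weighted edges (u, v, w) of a small example graph.
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 4.0), (3, 0, 1.0), (0, 2, 0.5)]
n = 4

# Graph Laplacian: L = D - A for weighted adjacency A and degree matrix D.
L = np.zeros((n, n))
for u, v, w in edges:
    L[u, u] += w
    L[v, v] += w
    L[u, v] -= w
    L[v, u] -= w

# L is singular (constant vectors span its null space), so "ground" vertex 0:
# fix x[0] = 0 and solve the SPD submatrix for the remaining vertices.
b = np.array([1.0, -1.0, 2.0, -2.0])  # right-hand side; entries sum to 0
x = np.zeros(n)
x[1:] = np.linalg.solve(L[1:, 1:], b[1:])

print(np.allclose(L @ x, b))  # -> True: x solves the full singular system
```

Because the right-hand side sums to zero (i.e., lies in the range of L), the grounded solve recovers a solution of the full singular system; preconditioners like MSP aim to make such solves fast at scale.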
Title: Approximating dominant eigenpairs of a matrix valued linear operator.
Seminar: Numerical Analysis and Scientific Computing
Speaker: GUGLIELMI Nicola of Gran Sasso Science Institute
Contact: Manuela Manetta, manuela.manetta@emory.edu
Date: 2021-10-22 at 12:30PM
Venue: https://emory.zoom.us/j/94914933211
Abstract:
In this talk I will propose a new method to approximate the rightmost eigenpair of certain matrix-valued linear operators, arising e.g. from discretization of PDEs, in a low-rank setting.This is done by means of a suitable gradient system projected onto a low rank manifold. The advantage consists of a reduced memory and computationally convenient procedure able to provide good approximations of the leading eigenpair. Although the results are quite promising, the theory still needs substantial improvements to completely understand the behavior of the method in the more general setting. The talk is inspired by a joint collaboration with D. Kressner (EPFL) and C. Scalone (Univ. L'Aquila).
Title: Variations on a theme of Schinzel and Wójcik
Seminar: Algebra and Number Theory
Speaker: Matthew Just of Emory University
Contact: David Zureick-Brown, dzureic@emory.edu
Date: 2021-10-19 at 4:00PM
Venue: MSC W301
Abstract:
Let $\alpha$ and $\beta$ be rational numbers not equal to 0 or $\pm 1$. How does the order of $\alpha$ (mod $p$) compare to the order of $\beta$ (mod $p$) as $p$ varies? A result of Schinzel and Wójcik states that there are infinitely many primes $p$ for which the order of $\alpha$ (mod $p$) is equal to the order of $\beta$ (mod $p$). In this talk, we discuss the problem of determining whether there are infinitely many primes $p$ for which the order of $\alpha$ (mod $p$) is strictly greater than the order of $\beta$ (mod $p$). This is joint work with Paul Pollack.
Title: Galerkin Transformer
Seminar: Numerical Analysis and Scientific Computing
Speaker: Shuhao Cao of Washington University in St. Louis
Contact: Yuanzhe Xi, yuanzhe.xi@emory.edu
Date: 2021-10-15 at 12:30PM
Venue: https://emory.zoom.us/j/94914933211
Abstract:
Transformer in "Attention Is All You Need" is now THE ubiquitous architecture in every state-of-the-art model in Natural Language Processing (NLP), such as BERT. At its heart and soul is the "attention mechanism". We apply the attention mechanism the first time to a data-driven operator learning problem related to partial differential equations. Inspired by Fourier Neural Operator which showed a state-of-the-art performance in parametric PDE evaluation, an effort is put together to explain the heuristics of, and to improve the efficacy of the attention mechanism. It is demonstrated that the widely-accepted "indispensable" softmax normalization in the scaled dot-product attention is sufficient but not necessary. Without the softmax normalization, the approximation capacity of a linearized Transformer variant can be proved rigorously for the first time to be on par with a Petrov-Galerkin projection layer-wise. Some simple changes mimicking projections in Hilbert spaces are applied to the attention mechanism, and it helps the final model achieve remarkable accuracy in operator learning tasks with unnormalized data. The newly proposed simple attention-based operator learner, Galerkin Transformer, surpasses the evaluation accuracy of the classical Transformer applied directly by 100 times, and betters all other models in concurrent research. In all experiments including the viscid Burgers' equation, an interface Darcy flow, an inverse interface coefficient identification problem, and Navier-Stokes flow in the turbulent regime, Galerkin Transformer shows significant improvements in both speed and evaluation accuracy over its softmax-normalized counterparts and other linearizing variants such as Random Feature Attention (Deepmind) or FAVOR+ in Performer (Google Brain). 
In traditional NLP benchmark problems such as IWSLT 14 De-En, the Galerkin projection-inspired tweaks in the attention-based encoder layers help the classic Transformer reach the baseline BLEU score much faster.
Title: PDE Models of Infectious Disease: Validation Against Data, Time-Delay Formulations, Data-Driven Methods, and Future Directions
Seminar: Numerical Analysis and Scientific Computing
Speaker: Alex Viguerie of Gran Sasso Science Institute
Contact: Alessandro Veneziani, avenez2@emory.edu
Date: 2021-10-08 at 12:30PM
Venue: MSC W201
Abstract:
In the wake of the COVID-19 epidemic, there has been surge in the interest of mathematical modeling of infectious disease. Most of these models are based on the classical SIR framework and follow a compartmental-type structure. While the majority of such models are based on systems of ordinary differential equations (ODEs), there have been several recent works using partial differential equation (PDE) formulations, in order to describe epidemic spread across both space and time. This talk will focus on the application of such PDE models, and discuss different PDE formulations, the advantages and disadvantages, and assess their performance against measured data. Emphasis is placed on the incorporation of time-delay formulations and the application of modern data-driven techniques to further inform and enhance the performance of such models.
Title: Turán density of cliques of order five in 3-uniform hypergraphs with quasirandom links, Part 2
Seminar: Combinatorics
Speaker: Mathias Schacht of The University of Hamburg and Yale University
Contact: Dwight Duffus, dwightduffus@emory.edu
Date: 2021-10-08 at 3:00PM
Venue: MSC E408
Abstract:
We continue with the proof that 3-uniform hypergraphs with the property that all vertices have a quasirandom link graph with density bigger than 1/3 contain a clique on five vertices. This time we focus on the structure of holes in reduced hypergraphs, which leads to a restricted problem that is easier to solve.
Title: Geometric equations for matroid varieties
Seminar: Algebra and Number Theory
Speaker: Ashley Wheeler of Georgia Institute of Technology
Contact: David Zureick-Brown, dzureic@emory.edu
Date: 2021-10-05 at 4:00PM
Venue: MSC W301
Abstract:
Each point $x$ in $Gr(r, n)$ corresponds to an $r \times n$ matrix $A_x$ which gives rise to a matroid $M_x$ on its columns. Gel'fand, Goresky, MacPherson, and Serganova showed that the sets $\{y \in Gr(r, n)\,|\,M_y = M_x\}$ form a stratification of $Gr(r, n)$ with many beautiful properties. However, results of Mnëv and Sturmfels show that these strata can be quite complicated, and in particular may have arbitrary singularities. We study the ideals $I_x$ of matroid varieties, the Zariski closures of these strata. We construct several classes of examples based on theorems from projective geometry and describe how the Grassmann–Cayley algebra may be used to derive non-trivial elements of $I_x$ geometrically when the combinatorics of the matroid is sufficiently rich.
https://www.nature.com/articles/s41599-020-00665-x?error=cookies_not_supported&code=2df7de10-7487-4e80-8875-b751bfed8bd7
# Consistently biased: documented consistency in self-reported holiday healthfulness behaviors and associated social desirability bias
## Abstract
Holiday healthfulness conversations are dominated by overindulgence of consumption and then, largely in reference to resolutions to do better, physical activity and exercise aspirations. Consistency was found in self-reported agreement with a series of holiday healthfulness statements, across time, holidays (Thanksgiving versus Christmas), and samples of respondents. The largest proportions of respondents displaying social desirability bias (SDB) were found in response to two statements, namely “I will consume more alcohol during the holiday season than at other times of the year” (63–66%) and “I make it a New Year’s Resolution to lose weight” (60–63%). Cheap talk was tested as a mechanism to reduce SDB in holiday healthfulness reporting, but showed only limited efficacy compared to the control group surveyed simultaneously. Nonetheless, the consistency across time in reporting and SDB is notable in both self-reporting of health-related data and in studying a unique consumption period around the holidays. Healthcare providers and researchers alike seek to improve the accuracy of self-reported data, making understanding of biases in reporting on sensitive topics, such as weight gain and eating over the holiday season, of particular interest.
## Introduction
Holiday eating is frequently associated with excess. An average holiday meal in the United States is between 3000 and 4500 calories (Jampolis, 2018), while Americans, on average, eat 2481 calories per day (Rehkamp, 2016). Ma et al. (2006) found that during the fall season, daily caloric intake was 86 kcal higher than in the spring. Holiday season indulgence results in an average annual gain in body weight between mid-November and mid-January ranging from 0.4 to 0.7 kg (Schoeller, 2014). Recommendations for limiting or decreasing holiday weight gain include weighing yourself every day (Kaviani et al., 2019), decreasing consumption by reflecting on the exercise required to counteract the calorie count (Mason et al., 2018), and increasing exercise as part of New Year’s resolutions (Hawkes, 2016). Conversely, Stevenson et al. (2013) found that exercise did not prevent holiday weight gain and was not a significant predictor of body weight changes. Researchers are interested in the behaviors of people during the holiday season. However, due to social desirability bias (SDB), it can be difficult to ensure that self-reporting on behaviors including eating, exercise, or even holiday spending is reflective of reality.
SDB has long been recognized in psychology, being defined by Maccoby and Maccoby (1954). SDB occurs when in a subconscious effort to make themselves look better, a respondent answers a question in a way that deviates from their true behavior towards a real or perceived socially “correct” answer (Maccoby and Maccoby, 1954; Fisher, 1993). Many ways to combat this pervasive SDB phenomenon have been established. Post data collection calibration methods for SDB include the use of the Marlowe–Crowne Social Desirability Scale, which is a set of 13 questions chosen to establish how likely a person is to express SDB (Crowne and Marlowe, 1960). Based on the scale, researchers can set a threshold by which they can estimate models to adjust results; however, these methods are complicated (Crino et al., 1983). Furthermore, the additional questions involved in measuring SDB lengthen the survey instrument and may contribute to survey fatigue, which can result in decreased response rate and data quality (Galesic and Bosnjak, 2009). Another method to combat SDB is indirect questioning. By asking respondents what they believe the average person does, the respondent is likely to project their beliefs and evaluations when responding, without the social pressure associated with revealing one’s own actions (Fisher, 1993). Fisher (1993) found that indirect questioning mitigates SDB without systematically affecting the means of questions that were not socially sensitive. This method of combating SDB has been used in a wide range of areas beyond psychology, such as the importance of environmental performance of automobiles (Johansson-Stenman et al., 2006), public goods (Lusk and Norwood, 2009), meat products (Olynk et al., 2010), and pet acquisition (Bir et al., 2018).
Various methods of combatting, or seeking to limit biases in data collection, have been developed, such as using a cheap talk statement to attempt to mitigate hypothetical bias. The term cheap talk was originally coined in game theory literature in reference to a costless transmission of signals and information that does not affect the payoffs of the game (Farrell and Gibbons, 1989; Matthews et al., 1991; Farrell and Rabin, 1996). Cummings and Taylor (1999) built on that concept to determine a method to prevent hypothetical bias when asking respondents about hypothetical purchasing decisions in contingent valuation, as opposed to correcting for the bias post data collection. Their version of cheap talk is an explicit discussion of hypothetical bias through reference to budget constraints and budgetary substitutes prior to asking a respondent to make hypothetical choices (Cummings and Taylor, 1999). Inclusion of a cheap talk script in their experiments resulted in responses that were indistinguishable between hypothetical valuation and valuation questions involving actual payments (Cummings and Taylor, 1999).
The objectives of this research are (1) to evaluate the prevalence of SDB as related to holiday eating habits, with data collected between the Thanksgiving and Christmas holidays in the U.S., and (2) to evaluate the impact of a cheap talk statement designed to increase awareness of SDB, and mitigate its impact, prior to the presentation of SDB-prone questions. There are likely differences in the prevalence of SDB and potential impact of a cheap talk statement for different subjects. Holiday eating habits were chosen for this analysis because there were two recent studies conducted using identical statements to measure SDB to which these results could be directly compared. Additionally, holiday eating and New Year’s resolution setting in the U.S. remain topics with consistent social interest, providing a good first measure for the cheap talk experiment. The benefits of a cheap talk statement for SDB mitigation would include decreasing the need for post-data-collection adjustments for SDB or the inclusion of SDB scale establishment questions, which would lengthen survey instruments and contribute to fatigue of respondents.
## Results
### General demographics and shopping behavior
The full sample, and the two subsamples (cheap talk and no cheap talk, henceforth control), all had higher proportions of respondents with residence in the South, and lower proportions of respondents from the Midwest or West and of respondents who did not graduate from high school, when compared to the U.S. population via the U.S. Census (U.S. Census Bureau, 2016) (Table 1). The full sample had lower percentages of respondents who were male when compared to the U.S. population targets. The full sample and the cheap talk subsample had a lower proportion of people aged 25–34 when compared to the U.S. population. There was a higher percentage of respondents who attended college or earned an Associate’s or Bachelor’s degree when compared to the U.S. population targets. Between the two subsamples, statistical differences were found between the percentage of respondents with an income of $0–$24,999 and those aged 55 to 64.
Holiday shopping times reported by respondents were Cyber Monday (37%), Black Friday day (27%), and Black Friday morning (20%) (Table 2). On average, respondents estimated spending the most on holiday gifts ($437.60 for the cheap talk sample, $562.25 for the control sample). Similar spending, in terms of description selection, was seen between the two categories of holiday meals and holiday travel. Most respondents (57–61%) indicated they anticipated spending the same amount in 2018 (the year the survey was administered) as they did in 2017 for holiday meals, gifts, and travel. A little over half of respondents (52%) indicated they gave to charities during the holiday season, with 74% indicating they were giving the same to charity as past planned holiday giving.
When comparing the mean response between self and the indirect question regarding the average American for both subsamples, differences were found for two statements between the subsamples (Table 3). For the cheap talk subsample, the average level of agreement between self and average American was statistically different with the exception of “I will maintain my workout schedule during the holiday season”, “I will be vigilant about my weight during the holiday season”, and “I watch what I eat during the holiday season”. For the other statements, the average for self was statistically higher than the average for the average American, indicating the presence of SDB. Conversely, only the statement “I will be vigilant about my weight during the holiday season” was not statistically different between self and average American in the control sample. The average for the self-reported scores for the statements “I will maintain my workout schedule during the holiday season”, and “I watch what I eat during the holiday season” were both statistically lower than the average of the scores for the average American, indicating the presence of SDB. For the remaining questions, the average for self was statistically higher than the average American, indicating SDB for those questions in the control sample.
The distributions of the difference between self and the average American were statistically compared using the Kolmogorov–Smirnov test (Kolmogorov, 1933) (Table 4). The distributions of the differences were statistically different between the control group and the cheap-talk group, and between the cheap-talk group and each of Bir et al. (2020) and Widmar et al. (2016), for the statement “I will maintain my workout schedule during the holiday season”. Additionally, the distributions of the results for the cheap talk group and Bir et al. (2020) and the cheap talk group and Widmar et al. (2016) were statistically different for the statement “I will be vigilant about my weight during the holiday season”. Finally, the distributions of the cheap-talk and Bir et al. (2020) results were statistically different for the statement “I watch what I eat during the holiday season”. Although the analysis of the distributions can serve as a robustness check, it does not provide information regarding whether incidences of SDB have increased or decreased, but only that the distributions are different.
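Two-sample Kolmogorov–Smirnov comparisons of this kind can be run in a few lines with SciPy. In the sketch below the score distributions are synthetic stand-ins, not the study's data:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
# Synthetic self-minus-average-American spreads on the -4..4 range.
control = rng.integers(-4, 5, size=300)
cheap_talk = np.clip(control + rng.integers(0, 2, size=300), -4, 4)

# Null hypothesis: both samples come from the same distribution.
result = ks_2samp(control, cheap_talk)
print(f"KS statistic = {result.statistic:.3f}, p-value = {result.pvalue:.3f}")
```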
Therefore, incidences of SDB, defined as either having a difference of −1 or less or 1 or greater, depending on the statement and associated directionality of SDB, are presented for the cheap talk sample in Fig. 1, and the control sample in Fig. 2, as well as Table 5. When comparing the proportion of respondents who exhibited SDB, only “I will maintain my workout schedule during the holiday season” had a statistically different percentage of SDB occurrences between the cheap talk (33%), and control samples (41%). For this particular statement, the cheap talk statement decreased the percentage of respondents who exhibited SDB in their agreement with the statement, while for all other statements, no statistical difference (either positive or negative) was found.
In addition to reporting the percentage of respondents who exhibited SDB, Widmar et al. (2016) and Bir et al. (2020) reported the percentage of respondents with spreads of −4 to −3, −2 to −1, 0, 1 to 2, and 3 to 4. The percentage of respondents in each sample who had spreads between self versus the average American were statistically compared to the cheap talk sample and the control sample from this study (Table 5). Focusing specifically on the direction of SDB, either negative or positive scores depending on the statement, there were 7 incidences of statistical differences between the proportion of respondents within a particular spread between the control sample and either the Widmar et al. (2016) sample or the Bir et al. (2020) sample. No discernible pattern emerged regarding whether there were higher or lower percentages of respondents in each SDB spread category. When comparing the cheap talk sample to Widmar et al. (2016) and Bir et al. (2020), there were four SDB-indicating spreads that were statistically different in the proportion of respondents that were in that spread. For the statement “I will maintain my workout schedule during the holiday season”, lower percentages of respondents scored −4 to −3 when compared to the Widmar et al. (2016) and Bir et al. (2020) samples. Additionally, lower percentages of respondents scored −2 to −1 when compared to the Bir et al. (2020) sample.
Evaluating SDB occurrences more broadly, when considering the percentage of respondents who exhibit SDB, few differences are found across studies. A smaller proportion of respondents exhibited SDB, defined as having an SDB score of less than −1, for the statement “I will maintain my workout schedule during the holiday season” in the current cheap talk sample when compared to Widmar et al. (2016) and Bir et al. (2020). The proportion of respondents who exhibited SDB was also lower in the current cheap talk sample for the statement “I watch what I eat during the holiday season” when compared to the Bir et al. (2020) sample. For the control sample, proportions of respondents were statistically different from Bir et al. (2020) and Widmar et al. (2016) for the statement “I anticipate gaining weight during the holiday season”. A lower percentage of respondents in the control sample exhibited SDB for the statement “I will be vigilant about my weight during the holiday season” when compared to Bir et al. (2020). Overall, given that these studies span four years, four samples, and two holidays, the level of consistency in SDB occurrences is notable.
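The SDB score and incidence definitions used in these comparisons reduce to a few lines of arithmetic. A sketch with made-up Likert ratings (the data and the negative-direction threshold here are illustrative assumptions, not the study's values):

```python
# (self rating, "average American" rating) pairs on a 1-5 Likert scale.
ratings = [(2, 5), (4, 4), (1, 3), (5, 2), (3, 5)]

# SDB score = self minus average American.  For a statement whose socially
# desirable direction is negative, a score of -1 or less flags SDB.
scores = [self_r - avg_r for self_r, avg_r in ratings]
sdb_share = sum(s <= -1 for s in scores) / len(scores)

print(scores)  # -> [-3, 0, -2, 3, -2]
print(f"{sdb_share:.0%} of respondents exhibit SDB")  # -> 60%
```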
## Discussion
Due to the holiday-associated nature of this study, the precise timing afforded by online surveys was instrumental. The comparison of demographics, shopping behavior, and holiday spending indicates that it is unlikely that there are systematic differences between the two subsamples. Subsample characteristics were similarly compared statistically to detect systematic differences by Rotko et al. (2000) in a study of European air pollution exposure. The amount of money spent during the holidays is notoriously difficult to determine. The holiday shopping season, defined in the U.S. as the time between Thanksgiving and Christmas, can vary from 26 to 32 days, with large impacts on holiday spending (Basker, 2005). Each additional day results in ~$6.50 in spending, mostly attributed to impulse purchases (Basker, 2005). Byrnes (2019) stated that on average people in the U.S. would spend nearly $1,050 on gifts, goodies, and travel. This is much higher than the combined averages found in this study. However, Byrnes (2019) was using data from the National Retail Federation; in self-stated data, such as the results of this study, people may under-estimate the amount spent or may have trouble remembering all purchases made. Although there is social pressure to spend during the holiday season, spurred by the idea of gifts as expressions of love (Spector, 2018), there is also social pressure to mitigate spending, as exhibited by the many advice articles and books regarding not overspending (Spector, 2018; Epperson and Dickler, 2019; Karp, 2010).
Evidence of SDB in self-reporting holiday healthfulness-related behaviors was found, which is unsurprising in itself, although perhaps notable for the consistency with which it was documented over time in this analysis along with those of Widmar et al. (2016) and Bir et al. (2020). The literature provides ample evidence of SDB in self-reported health behaviors and outcomes, including both under-reporting of negative behaviors and over-reporting of positive ones. Hébert et al. (2001) found that women with college educations working in the health system tended to underreport caloric intake, and Simons et al. (2015) found under-reporting of sedentary gaming hours among non-active videogame playing youths. Klesges et al. (2004) found that overestimates of self-reported activity, underestimates of sweetened beverage preferences, and lower ratings of weight concerns and dieting behaviors were related to SDB in 8–10-year-old girls; they suggested that more research was needed into the role of SDB in complicating the relationships observed between self-reported diet and/or physical activity and health outcomes. Adams et al. (2005) suggested that SDB led to over-reporting of physical activity among women in self-reported data.
The use of cheap talk to mitigate the percentage of respondents exhibiting SDB was only effective for one of the eight holiday statements studied. Additionally, the inclusion of the cheap talk statement resulted in fewer statistical differences between the average self-score of all respondents and the score assigned to the average American. This decrease in statistical differences between self and average American indicates that steps toward convergence of the two scores were occurring for more than just one statement. Despite this only mild success rate, the incidences of SDB did not increase due to the inclusion of the cheap talk statement. The use of cheap talk to prevent hypothetical bias, as introduced by Cummings and Taylor (1999), has had mixed results with other products and scenarios. List (2001) found that the cheap talk script for hypothetical bias did not work on experienced bidders. Champ et al. (2009) found that the cheap talk script only worked for some offer amounts. Previous work has evaluated SDB in other health or food-related contexts for other regions of the world. Bergen and Labonté (2020) evaluated SDB in neonatal and child health care use in Ethiopia. Their identification strategy included using common cues, the nature of responses, and choice patterns. They warned that SDB is influenced by accepted attitudes and behaviors, social position, and affluence. Studying food-purchasing behaviors in Australia, Wheeler et al. (2019) found that SDB influenced responses regarding the purchase of organic food, increasing self-reported purchasing frequency. Accounting for SDB, they found that respondents were motivated to purchase organic for non-selfish reasons, including environmental and public-good motivations. Evaluating the effectiveness of cheap talk for such health- and food-related questions in other countries would be an interesting extension of this work.
Further evidence of the prevalence and consistency of SDB in holiday eating and exercise-related statements comes from the fact that few differences were found between the SDB results of this study, Widmar et al. (2016), and Bir et al. (2020). The consistency of responses across samples and time is noteworthy for those studying holiday health. The few statistical differences found between the cheap talk subsample and the previous samples mostly reflected decreases in the percentage of respondents who exhibited SDB. This decrease also supports the idea that incorporating a cheap talk statement prior to SDB-sensitive questions may result in a mild reduction in incidences of SDB. Limited evidence of cheap talk reducing SDB in sensitive questions related to holiday eating and healthfulness was found. However, notable consistency across time, samples of respondents, and holidays (Christmas versus Thanksgiving) in terms of both responses and SDB exhibited was documented.
### Limitations of research
Although the demographics of the full sample and subsamples closely mirrored the U.S. population, there were some statistical discrepancies. Samples of online survey respondents are often overeducated (Szolnoki and Hoffmann, 2013). However, the benefits of online data collection, including short completion time and affordable implementation, are often thought to outweigh this shortcoming (Louviere et al., 2000; Gao and Schroeder, 2009).
The use of cheap talk to mitigate the percentage of respondents exhibiting SDB was only effective for one of the eight holiday statements studied. The use of cheap talk to mitigate SDB may be more successful for other SDB-prone questions, aside from the holiday-focused statements investigated here. It may simply be that the prevalence of SDB in the holiday eating and exercise statements is so ingrained that the cheap talk statement had minimal effect. Or, perhaps there is an inherent difference in holiday-related reporting, given the various cultural, economic, and social reasons for which holiday spending and celebrations are so fraught with debate. Further research implementing cheap talk for other SDB-prone questions could shed light on the situations in which this type of intervention works to mitigate SDB.
## Methods
### Survey instruments and data collection
The research project (#60460205) was approved by the Purdue University Institutional Review Board. Informed consent was obtained from all participants. Data collection took place during the peak of the 2018 holiday season, December 18–26, 2018, to correspond with the winter holiday season surrounding Christmas Day (December 25) in the U.S., given the holiday dining and health-related questions specific to this data collection effort. Kantar, a company which hosts a large opt-in panel database (Kantar, 2020), was used to obtain the survey respondents, who were required to be 18 years of age or older to participate. Quotas set within Qualtrics, an online survey tool (Qualtrics, 2020), were used to target the proportion of respondents to match the U.S. census proportions for gender, age, education, income, and region of residence (U.S. Census Bureau, 2016). The test of proportions was used to evaluate whether there were statistical differences between the subsamples employed in this study, as well as between each of the subsamples and the U.S. census. The one- and two-tailed test of a population proportion, assuming a normal distribution, is calculated as
$$z = \frac{{\widehat P - p_0}}{{\sqrt {\frac{{p_0\left( {1 - p_0} \right)}}{n}} }},$$
(1)
where p0 is the hypothesized proportion (for example the census percentage), $$\widehat P$$ is the sample proportion, and n is the sample size (Acock, 2018). Equation (1) was used to compare each subsample to the U.S. population. A test of the difference of two proportions $$\widehat p_1$$ and $$\widehat p_2$$, for example comparing the demographics of the two subsamples, can be calculated as
$$z = \frac{{\widehat p_1 - \widehat p_2}}{{\sqrt {\widehat p_{\rm {p}}\left( {1 - \widehat p_{\rm {p}}} \right)\left( {\frac{1}{{n_1}} + \frac{1}{{n_2}}} \right)} }}$$
(2)
given
$$\widehat p_{\rm {p}} = \frac{{x_1 + x_2}}{{n_1 + n_2}},$$
(3)
where x1 and x2 are the total number of successes in the two populations (Acock, 2018). All calculations were conducted using STATA/SE16 (StataCorp, 2019).
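As a sketch of the calculations in Eqs. (1)–(3) (the helper names are our own; the paper's analysis was conducted in STATA):

```python
import math

def one_sample_z(p_hat, p0, n):
    """Eq. (1): z-statistic for a sample proportion against a hypothesized proportion p0."""
    return (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)

def two_sample_z(x1, n1, x2, n2):
    """Eqs. (2)-(3): z-statistic for the difference of two sample proportions,
    pooling the successes as p_p = (x1 + x2) / (n1 + n2)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se
```

For example, comparing a subsample proportion of 0.60 (n = 100) against a census proportion of 0.50 gives z = 0.1/0.05 = 2.0.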
Respondent demographics, holiday eating and shopping behavior, and responses regarding their level of agreement with holiday healthfulness statements designed to allow measurement of SDB were collected using a survey instrument deployed in Qualtrics. Data were checked for nonsensical answers and clear outliers, with special attention paid to write-in answers. One respondent was not included in the analysis of the question regarding holiday spending: for this write-in question, the respondent reported spending $3,000 on holiday meals (3 times the next highest amount), $199,000 on holiday gifts (25 times the next highest amount), and $13,000 on holiday travel (2 times the next highest amount), and thus was considered an outlier. The statements presented to respondents were previously employed in evaluations of SDB by Widmar et al. (2016) and Bir et al. (2020). Both Widmar et al. (2016) and Bir et al. (2020) conducted their national-scale U.S. data collection using online surveys surrounding the American November holiday of Thanksgiving. Specifically, the statements investigated were: “I anticipate gaining weight during the holiday season”, “I will gain more weight during the holiday season than during other times of the year”, “I make it a New Year’s resolution to lose weight”, “I will maintain my workout schedule during the holiday season”, “I will be vigilant about my weight during the holiday season”, “I watch what I eat during the holiday season”, “I will consume more desserts during the holiday season than at other times of the year”, and “I will consume more alcohol during the holiday season than at other times of the year”. Respondents were asked to indicate on a scale from 1 (it describes you very well) to 5 (this statement does not describe you at all) how well the statements described them.
Then, following indirect questioning (Fisher, 1993), respondents were asked to indicate on a scale from 1 (it describes the average American very well) to 5 (this statement does not describe the average American at all) how well the statements describe the average American. The order of the statements as seen by respondents was randomized for both rating self and the average American. Two additional datasets are used in this analysis to facilitate robustness checks across samples, time, and holidays (Thanksgiving versus Christmas holiday time periods). Widmar et al. (2016) first used the set of holiday health statements employed in this work. Their sample of 620 U.S. respondents, targeted to be nationally representative, was collected November 17–19, 2014 (Widmar et al., 2016). Bir et al. (2020) used the same set of holiday health statements as part of a larger exploration of holiday eating with a focus on turkey consumption. Bir et al.’s (2020) nationally representative sample of 565 U.S. respondents was collected November 12–19, 2018.

### Developing cheap talk for SDB

Given the propensity for respondents to under-report bad behaviors and over-report good behaviors, this study sought to measure and assess the impact of a cheap talk statement on incidences of SDB. Respondents were randomly assigned to two subsamples, each of which saw and responded with their level of agreement to identical statements about holiday healthfulness. One subsample was shown a cheap talk statement about SDB prior to providing their level of agreement to the statements for themselves and the average American, and one group was not shown any information prior to reporting their level of agreement to the same eight statements for themselves and the average American. Since its introduction to the contingent valuation literature by Cummings and Taylor (1999), the use of cheap talk statements has expanded.
Instructions to respondents, and specifically cheap talk statements, have been employed by Lusk (2003) in WTP experiments to minimize hypothetical bias and by Blumenschein et al. (2007) as a comparison in field experiments, among others. The cheap talk statement provided to half of the randomly assigned respondents was developed for this study and read, “Human inclination may be to answer questions in a way that deviates from your true behavior in an effort to improve the impression you make on others. This desire to give what is believed to be the socially “correct” or acceptable answer is often referred to as social desirability bias. Please keep this inclination in mind, and try to reflect on your true behavior when answering questions”. Statistical comparisons between those who saw the cheap talk statement and those who did not were conducted in two ways.

### Evaluating incidences of SDB, and comparing the results of indirect questioning across studies

First, the mean results for the responses regarding self and the holiday healthfulness statements, as well as the average American, were calculated for both the cheap talk and control subsamples. The mean scores for self and average American were statistically compared for each of the two subsamples using a paired t-test. The equation for paired observations is given by

$$t = \frac{{\overline {x_i - y_i} \sqrt n }}{{s_{\rm {d}}}},$$

(4)

where xi is observation i from the self score, yi is observation i from the average American score, n is the number of observations, and sd is the standard deviation of the differences (Dixon and Massey, 1983). All analysis was conducted in STATA (StataCorp, 2019). It was hypothesized that there would be fewer statistical differences for the subsample who viewed the cheap talk statement, as their self-assessment would be closer to that of their idea of the average American. This theory is supported by the construct and purpose of indirect questioning (Fisher, 1993).
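Equation (4) can be illustrated with a short sketch (hypothetical scores; the function name is ours, not from the paper):

```python
import math

def paired_t(self_scores, avg_american_scores):
    """Eq. (4): paired t-statistic comparing self scores with average-American scores."""
    diffs = [x - y for x, y in zip(self_scores, avg_american_scores)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # s_d: sample standard deviation of the paired differences (n - 1 denominator)
    s_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return mean_d * math.sqrt(n) / s_d
```

A negative t-statistic here means respondents rated themselves lower (closer to "describes me well") than they rated the average American.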
Next, an index was created of the difference between the score respondents gave themselves and the score they gave the average American on the scale from 1 (describes well) to 5 (does not describe well), following Bir et al. (2020) and Widmar et al. (2016). For example, if a person indicated a score of 2 when asked about themselves, and a score of 4 when asked about the average American, the difference would be −2. Depending on the specific statement, either a positive or negative difference between the score chosen for self and the average American indicated the presence of SDB. A positive difference would indicate the presence of SDB for the statements “I anticipate gaining weight during the holidays”, “I will gain more weight during the holiday season than during other times of the year”, “I make it a New Year’s resolution to lose weight”, “I will consume more desserts during the holiday season than at other times of the year”, and “I will consume more alcohol during the holiday season than at other times of the year”. A negative score would indicate the presence of SDB for the statements “I will maintain my workout schedule during the holiday season”, “I will be vigilant about my weight during the holiday season”, and “I watch what I eat during the holiday season”. Each respondent was evaluated as exhibiting or not exhibiting SDB for each of the 8 statements studied. Exhibiting SDB was defined as having a difference between the score for self and average American of 1 or greater, or a difference of −1 or less, depending on the statement. Using the test of proportions, Eqs. (2) and (3), the proportion of respondents who displayed SDB was statistically compared between the cheap talk and control samples. Additionally, using Eq. (1), the incidences of respondents who displayed SDB were statistically compared between the two subsamples from this study, Widmar et al. (2016), and Bir et al. (2020).
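As a sketch of this index (our own function names; the boolean flag encodes which statements treat a positive versus a negative spread as SDB):

```python
def spread(self_score, avg_american_score):
    """Difference between a respondent's self score and their average-American score."""
    return self_score - avg_american_score

def exhibits_sdb(self_score, avg_american_score, positive_indicates_sdb):
    """SDB flag: a spread of 1 or greater, or of -1 or less, depending on the statement."""
    d = spread(self_score, avg_american_score)
    return d >= 1 if positive_indicates_sdb else d <= -1

# Example from the text: self = 2, average American = 4 gives a spread of -2,
# which counts as SDB for a statement where a negative spread indicates SDB
# (e.g. "I watch what I eat during the holiday season").
```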
Finally, for a broader evaluation of potential incidences of SDB, Widmar et al. (2016) grouped the difference between self and average American scores as −4 to −3, −2 to −1, 0, 1 to 2, and 3 to 4. In order to compare results between Widmar et al. (2016), Bir et al. (2020), and the results of this study, the same categories of differences between self and average American scores were determined for this study and Bir et al. (2020). These categories were then statistically compared between the different studies using Eqs. (1)–(3) to compare the individual differences between the categories. To better understand the distribution of the differences between self and average American scores for the holiday healthfulness questions, the Kolmogorov–Smirnov test for cumulative distributions was used (Kolmogorov, 1933). The cumulative distribution functions for the samples being compared were determined, and the largest difference between the CDFs was then compared to the critical value. The critical value (D) for 0.05 statistical significance was calculated as

$$D = \frac{{1.35810}}{{\sqrt n }}.$$

(5)

Given the sample size of 368, the critical value was determined as 0.070796.

## Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

## References

1. Acock AC (2018) A gentle introduction to Stata, 6th edn. Stata Press, College Station, TX
2. Adams SA, Matthews CE, Ebbeling CB et al. (2005) The effect of social desirability and social approval on self-reports of physical activity. Am J Epidemiol 161(4):389–398
3. Basker E (2005) ’Twas four weeks before Christmas: retail sales and the length of the Christmas shopping season. Econ Lett 89(3):317–322
4. Bergen N, Labonté R (2020) Everything is perfect, and we have no problems: detecting and limiting social desirability bias in qualitative research. Qual Health Res 30(5):783–792
5. Bir C, Davis MK, Widmar NO et al. (2020) Holiday health consciousness and consumer demand for whole turkey attributes. Poult Sci 99(5):2798–2810
6. Bir C, Widmar NO, Croney C (2018) Exploring social desirability bias in perceptions of dog adoption: all’s well that ends well? Or does the method of adoption matter? Animals 8(9):154
7. Blumenschein K, Blomquist GC, Johannesson M et al. (2007) Eliciting willingness to pay without bias: evidence from a field experiment. Econ J 118(525):114–137
8. Byrnes H (2019) Holiday spending bill: most of us spend $1,000 or more on gifts, travel and goodies. Available via USA Today. https://www.usatoday.com/story/money/2019/11/29/this-is-how-much-the-average-person-spends-on-the-holidays-in-every-state/40688517/. Accessed 23 Jan 2020
9. Champ PA, Moore R, Bishop RC (2009) A comparison of approaches to mitigate hypothetical bias. Agric Resour Econ Rev 38(2):166–180
10. Crino MD, Svoboda M, Rubenfeld S et al. (1983) Data on the Marlowe–Crowne and Edwards social desirability scales. Psychol Rep 53(3):963–968
11. Crowne DP, Marlowe D (1960) A new scale of social desirability independent of psychopathology. J Consult Psychol 24(4):349–354
12. Cummings RG, Taylor LO (1999) Unbiased value estimates for environmental goods: a cheap talk design for the contingent valuation method. Am Econ Rev 89(3):649–665
13. Dixon WJ, Massey FJ (1983) Introduction to statistical analysis, 4th edn. McGraw-Hill, New York
14. Epperson S, Dickler J (2019) 4 ways to avoid the overspending trap this holiday. Available via CNBC. https://www.cnbc.com/2019/11/16/how-to-avoid-overspending-this-holiday.html. Accessed 11 Feb 2020
15. Farrell J, Gibbons R (1989) Cheap talk can matter in bargaining. J Econ Theory 48(1):221–237
16. Farrell J, Rabin M (1996) Cheap talk. J Econ Perspect 10(3):103–118
17. Fisher RJ (1993) Social desirability bias and the validity of indirect questioning. J Consum Res 20(2):303–315
18. Gao Z, Schroeder TC (2009) Effects of label information on consumer willingness-to-pay for food attributes. Am J Agric Econ 91(3):795–809
19. Galesic M, Bosnjak M (2009) Effects of questionnaire length on participation and indicators of response quality in a web survey. Public Opin Q 73(2):349–360
20. Hawkes N (2016) Sixty seconds on… New Year resolutions. BMJ 355:i6845
21. Hébert JR, Peterson KE, Hurley TG et al. (2001) The effect of social desirability trait on self-reported dietary measures among multi-ethnic female health center employees. Ann Epidemiol 11(6):417–427
22. Jampolis M (2018) Maintaining your weight through the holidays. Available via CNN. https://www.cnn.com/2018/12/10/health/holiday-weight-diet-exercise-jampolis/index.html. Accessed 16 Jan 2020
23. Johansson-Stenman O, Martinsson P (2006) Honestly, why are you driving a BMW? J Econ Behav Organ 60(2):129–146
24. Kantar (2020) About. Available via Kantar Website. https://www.kantar.com/about. Accessed 11 Feb 2020
25. Karp G (2010) Seasonal gift strategies for spending smart
26. Kaviani S, vanDellen M, Cooper JA (2019) Daily self‐weighing to prevent holiday‐associated weight gain in adults. Obesity 27(6):908–916
27. Klesges LM, Baranowski T, Beech B et al. (2004) Social desirability bias in self-reported dietary, physical activity and weight concerns measures in 8- to 10-year-old African-American girls: results from the Girls Health Enrichment Multisite Studies (GEMS). Prev Med 38:78–87
28. Kolmogorov AN (1933) Sulla determinazione empirica di una legge di distribuzione. Giornale dell’Istituto Italiano degli Attuari 4:83–91
29. List JA (2001) Do explicit warnings eliminate the hypothetical bias in elicitation procedures? Evidence from field auctions for sports cards. Am Econ Rev 91(5):1498–1507
30. Louviere JJ, Hensher DA, Swait JD (2000) Stated choice methods: analysis and applications. Cambridge University Press, Cambridge
31. Lusk JL (2003) Effects of cheap talk on consumer willingness-to-pay for golden rice. Am J Agric Econ 85(4):840–856
32. Lusk JL, Norwood FB (2009) An inferred valuation method. Land Econ 85(3):500–514
33. Ma Y, Olendzki BC, Li W et al. (2006) Seasonal variation in food intake, physical activity, and body weight in a predominantly overweight population. Eur J Clin Nutr 60:519–528
34. Maccoby EE, Maccoby N (1954) The interview: a tool of social science. Handb Soc Psychol 1(1):449–487
35. Mason F, Pallan M, Sitch A et al. (2018) Effectiveness of a brief behavioural intervention to prevent weight gain over the Christmas holiday period: randomised controlled trial. BMJ 363:k4867
36. Matthews S, Okuno-Fujiwara M, Postlewaite A (1991) Refining cheap-talk equilibria. J Econ Theory 55(2):247–273
37. Olynk NJ, Tonsor GT, Wolf CA (2010) Consumer willingness to pay for livestock credence attribute claim verification. J Agr Resour Econ 35(2):261–280
38. Rehkamp S (2016) A look at Calorie sources in the American diet. Available via USDA. https://www.ers.usda.gov/amber-waves/2016/december/a-look-at-calorie-sources-in-the-american-diet. Accessed 10 Feb 2020
39. Rotko T, Oglesby L, Künzli N et al. (2000) Population sampling in European air pollution exposure study. EXPOLIS: comparisons between the cities and representativeness of the samples. J Expo Sci Environ Epidemiol 10(4):355–364
40. Schoeller DA (2014) The effect of holiday weight gain on body weight. Physiol Behav 134:66–69
41. Simons M, Chinapaw MJ, Brug J et al. (2015) Associations between active video gaming and other energy-balance related behaviours in adolescents: a 24-hour recall diary study. Int J Behav Nutr Phys Act 12(1):192
42. Spector N (2018) Amid pressure to overspend on holidays, consumers embrace the gift of minimalism. Available via NBC News. https://www.nbcnews.com/business/consumer/amid-pressure-overspend-holidays-consumers-embrace-gift-minimalism-n938901. Accessed 21 Jul 2019
43. StataCorp (2019) Stata statistical software: release 16. StataCorp, College Station, TX
44. Stevenson JL, Krishnan S, Stoner MA, Goktas Z et al. (2013) Effects of exercise during the holiday season on changes in body weight, body composition and blood pressure. Eur J Clin Nutr 67(9):944–949
45. Szolnoki G, Hoffmann D (2013) Online, face-to-face and telephone surveys—comparing different sampling methods in wine consumer research. Win Econ Policy 2:57–66
46. U.S. Census Bureau (2016) Annual Estimates of the Resident Population for the United States, Regions, States, and Puerto Rico: April 1, 2010 to July 1, 2015 (NST-EST2015-01). Available via U.S. Census Bureau, Population Division. https://www.census.gov/data/tables/time-series/demo/popest/2010s-state-total.html. Accessed 16 Jan 2018
47. Qualtrics (2020) About. Available via Qualtrics Website. https://www.qualtrics.com/. Accessed 11 Feb 2020
48. Wheeler SA, Gregg D, Singh M (2019) Understanding the role of social desirability bias and environmental attitudes and behaviour on South Australians’ stated purchase of organic foods. Food Qual Prefer 74:125–134
49. Widmar NJ, Byrd ES, Dominick SR et al. (2016) Social desirability bias in reporting of holiday season healthfulness. Prev Med Rep 4:270–276
## Acknowledgements
This work was supported partially by the USDA National Institute of Food and Agriculture, Hatch project IND00044133 entitled Changing Preferences for Meat Proteins by US Residents.
## Author information
### Corresponding author
Correspondence to Courtney Bir.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Bir, C., Widmar, N.O. Consistently biased: documented consistency in self-reported holiday healthfulness behaviors and associated social desirability bias. Humanit Soc Sci Commun 7, 178 (2020). https://doi.org/10.1057/s41599-020-00665-x
https://www.ideals.illinois.edu/handle/2142/5131/browse?type=contributor&value=Haboush%2C+William
# Browse Graduate Dissertations and Theses at Illinois by Contributor "Haboush, William"
• (2017-07-10)
In this thesis we mainly consider supermanifolds and super Hilbert schemes. In the first part of this dissertation, we construct the Hilbert scheme of $0$-dimensional subspaces on dimension $1 | 1$ supermanifolds. By ...
https://artofproblemsolving.com/wiki/index.php?title=1985_AHSME_Problems/Problem_16&direction=prev&oldid=87679
# 1985 AHSME Problems/Problem 16
## Problem
If $A=20^\circ$ and $B=25^\circ$, then the value of $(1+\tan A)(1+\tan B)$ is
$\mathrm{(A)\ } \sqrt{3} \qquad \mathrm{(B) \ }2 \qquad \mathrm{(C) \ } 1+\sqrt{2} \qquad \mathrm{(D) \ } 2(\tan A+\tan B) \qquad \mathrm{(E) \ }\text{none of these}$
## Solution
### Solution 1
First, let's leave everything in variables and see if we can simplify $(1+\tan A)(1+\tan B)$.
We can write everything in terms of sine and cosine to get $\left(\frac{\cos A}{\cos A}+\frac{\sin A}{\cos A}\right)\left(\frac{\cos B}{\cos B}+\frac{\sin B}{\cos B}\right)=\frac{(\sin A+\cos A)(\sin B+\cos B)}{\cos A\cos B}$.
We can multiply out the numerator to get $\frac{\sin A\sin B+\cos A\cos B+\sin A\cos B+\sin B\cos A}{\cos A\cos B}$.
It may seem at first that we've made everything more complicated, however, we can recognize the numerator from the angle sum formulas:
$\cos(A-B)=\sin A\sin B+\cos A\cos B$
$\sin(A+B)=\sin A\cos B+\sin B\cos A$
Therefore, our fraction is equal to $\frac{\cos(A-B)+\sin(A+B)}{\cos A\cos B}$.
We can also use the product-to-sum formula
$\cos A\cos B=\frac{1}{2}(\cos(A-B)+\cos(A+B))$ to simplify the denominator:
$\frac{\cos(A-B)+\sin(A+B)}{\frac{1}{2}(\cos(A-B)+\cos(A+B))}$.
But now we seem stuck. However, we can note that since $A+B=45^\circ$, we have $\sin(A+B)=\cos(A+B)$, so we get
$\frac{\cos(A-B)+\sin(A+B)}{\frac{1}{2}(\cos(A-B)+\sin(A+B))}=\frac{1}{\frac{1}{2}}=2, \boxed{\text{B}}$
Note that we only used the fact that $\sin(A+B)=\cos(A+B)$, so we have in fact not just shown that $(1+\tan A)(1+\tan B)=2$ for $A=20^\circ$ and $B=25^\circ$, but for all $A, B$ such that $A+B=45^\circ+n180^\circ$, for integer $n$.
### Solution 2
We can see that $25^\circ+20^\circ=45^\circ$. We also know that $\tan 45^\circ=1$. First, let us expand $(1+\tan A)(1+\tan B)$.

We get $1+\tan A+\tan B+\tan A\tan B$.

Now, let us look at $\tan 45^\circ=\tan(20^\circ+25^\circ)$.

By the tangent sum formula, we know that $\tan 45^\circ=\dfrac{\tan A+\tan B}{1-\tan A\tan B}$.

Then, since $\tan 45^\circ=1$, we can see that $\tan A+\tan B=1-\tan A\tan B$.

Then $1=\tan A+\tan B+\tan A\tan B$.

Thus, the sum becomes $1+1=2$ and the answer is $\fbox{\text{(B)}}$.
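A quick numeric check of the result (not part of the original solutions; plain Python, angles in degrees):

```python
import math

def expansion_product(a_deg, b_deg):
    """(1 + tan A)(1 + tan B) for angles given in degrees."""
    a, b = math.radians(a_deg), math.radians(b_deg)
    return (1 + math.tan(a)) * (1 + math.tan(b))

# The problem's angles, and any other pair summing to 45 degrees,
# give 2 up to floating-point error, as the solutions show.
```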
http://monocerossoft.com/latex-error-usepackage-before-documentclass/
If so, remember that things that appear before \documentclass are also problematic: they typically produce errors such as `! LaTeX Error: Missing \begin{document}. l.1 \documentclass{article}`.
There are two common fixes. The first is to load the package right after the declaration of the document class: add \usepackage{graphicx} immediately after \documentclass. The second is to patch a template automatically, e.g. `perl -pi.bak -e 's/^(\\documentclass.*)/$1\n\\usepackage{graphicx}/' template.latex`. So in essence the error being reported is `! LaTeX Error: \usepackage before \documentclass.` It means that a \usepackage command appears either before \documentclass or after \begin{document}: \usepackage may only appear in the document preamble, i.e., between \documentclass and \begin{document}. The same rule is behind related symptoms, such as `LaTeX Error: xcolor.sty not found` when a package is requested before the document class is declared.
Dec 29, 2014. Generally packages are loaded with \usepackage after \documentclass, but fixltx2e is essentially a list of unrelated fixes and is a rare special case discussed in its own documentation. An example preamble is \documentclass[12pt]{article} followed by \usepackage{graphicx}, which allows images to be inserted into the document. With the preamble written, next comes the document body.
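Putting the rule above together, a minimal correctly ordered file looks like the sketch below (the image filename is a placeholder):

```latex
% Correct ordering: \documentclass first, packages in the preamble, then the body.
\documentclass[12pt]{article}
\usepackage{graphicx}  % preamble only: after \documentclass, before \begin{document}

\begin{document}
% \includegraphics requires graphicx to be loaded in the preamble
\includegraphics[width=\textwidth]{my-figure}
\end{document}
```

Moving any \usepackage line above \documentclass (or below \begin{document}) reproduces exactly the errors described above.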
https://people.maths.bris.ac.uk/~matyd/GroupNames/432/D4xC54.html
## G = D4×C54, order 432 = 2^4·3^3
### Direct product of C54 and D4
direct product, metabelian, nilpotent (class 2), monomial, 2-elementary
Series: Derived Chief Lower central Upper central
Derived series C1 — C2 — D4×C54
Chief series C1 — C3 — C9 — C18 — C54 — C2×C54 — D4×C27 — D4×C54
Lower central C1 — C2 — D4×C54
Upper central C1 — C2×C54 — D4×C54
Generators and relations for D4×C54
G = < a,b,c | a^54=b^4=c^2=1, ab=ba, ac=ca, cbc=b^-1 >
Subgroups: 140 in 108 conjugacy classes, 76 normal (20 characteristic)
C1, C2, C2, C2, C3, C4, C22, C22, C22, C6, C6, C6, C2×C4, D4, C23, C9, C12, C2×C6, C2×C6, C2×C6, C2×D4, C18, C18, C18, C2×C12, C3×D4, C22×C6, C27, C36, C2×C18, C2×C18, C2×C18, C6×D4, C54, C54, C54, C2×C36, D4×C9, C22×C18, C108, C2×C54, C2×C54, C2×C54, D4×C18, C2×C108, D4×C27, C22×C54, D4×C54
Quotients: C1, C2, C3, C22, C6, D4, C23, C9, C2×C6, C2×D4, C18, C3×D4, C22×C6, C27, C2×C18, C6×D4, C54, D4×C9, C22×C18, C2×C54, D4×C18, D4×C27, C22×C54, D4×C54
Smallest permutation representation of D4×C54
On 216 points
Generators in S216
(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54)(55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108)(109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162)(163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216)
(1 169 131 75)(2 170 132 76)(3 171 133 77)(4 172 134 78)(5 173 135 79)(6 174 136 80)(7 175 137 81)(8 176 138 82)(9 177 139 83)(10 178 140 84)(11 179 141 85)(12 180 142 86)(13 181 143 87)(14 182 144 88)(15 183 145 89)(16 184 146 90)(17 185 147 91)(18 186 148 92)(19 187 149 93)(20 188 150 94)(21 189 151 95)(22 190 152 96)(23 191 153 97)(24 192 154 98)(25 193 155 99)(26 194 156 100)(27 195 157 101)(28 196 158 102)(29 197 159 103)(30 198 160 104)(31 199 161 105)(32 200 162 106)(33 201 109 107)(34 202 110 108)(35 203 111 55)(36 204 112 56)(37 205 113 57)(38 206 114 58)(39 207 115 59)(40 208 116 60)(41 209 117 61)(42 210 118 62)(43 211 119 63)(44 212 120 64)(45 213 121 65)(46 214 122 66)(47 215 123 67)(48 216 124 68)(49 163 125 69)(50 164 126 70)(51 165 127 71)(52 166 128 72)(53 167 129 73)(54 168 130 74)
(1 158)(2 159)(3 160)(4 161)(5 162)(6 109)(7 110)(8 111)(9 112)(10 113)(11 114)(12 115)(13 116)(14 117)(15 118)(16 119)(17 120)(18 121)(19 122)(20 123)(21 124)(22 125)(23 126)(24 127)(25 128)(26 129)(27 130)(28 131)(29 132)(30 133)(31 134)(32 135)(33 136)(34 137)(35 138)(36 139)(37 140)(38 141)(39 142)(40 143)(41 144)(42 145)(43 146)(44 147)(45 148)(46 149)(47 150)(48 151)(49 152)(50 153)(51 154)(52 155)(53 156)(54 157)(55 82)(56 83)(57 84)(58 85)(59 86)(60 87)(61 88)(62 89)(63 90)(64 91)(65 92)(66 93)(67 94)(68 95)(69 96)(70 97)(71 98)(72 99)(73 100)(74 101)(75 102)(76 103)(77 104)(78 105)(79 106)(80 107)(81 108)(163 190)(164 191)(165 192)(166 193)(167 194)(168 195)(169 196)(170 197)(171 198)(172 199)(173 200)(174 201)(175 202)(176 203)(177 204)(178 205)(179 206)(180 207)(181 208)(182 209)(183 210)(184 211)(185 212)(186 213)(187 214)(188 215)(189 216)
G:=sub<Sym(216)| (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54)(55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108)(109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162)(163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216), (1,169,131,75)(2,170,132,76)(3,171,133,77)(4,172,134,78)(5,173,135,79)(6,174,136,80)(7,175,137,81)(8,176,138,82)(9,177,139,83)(10,178,140,84)(11,179,141,85)(12,180,142,86)(13,181,143,87)(14,182,144,88)(15,183,145,89)(16,184,146,90)(17,185,147,91)(18,186,148,92)(19,187,149,93)(20,188,150,94)(21,189,151,95)(22,190,152,96)(23,191,153,97)(24,192,154,98)(25,193,155,99)(26,194,156,100)(27,195,157,101)(28,196,158,102)(29,197,159,103)(30,198,160,104)(31,199,161,105)(32,200,162,106)(33,201,109,107)(34,202,110,108)(35,203,111,55)(36,204,112,56)(37,205,113,57)(38,206,114,58)(39,207,115,59)(40,208,116,60)(41,209,117,61)(42,210,118,62)(43,211,119,63)(44,212,120,64)(45,213,121,65)(46,214,122,66)(47,215,123,67)(48,216,124,68)(49,163,125,69)(50,164,126,70)(51,165,127,71)(52,166,128,72)(53,167,129,73)(54,168,130,74), 
(1,158)(2,159)(3,160)(4,161)(5,162)(6,109)(7,110)(8,111)(9,112)(10,113)(11,114)(12,115)(13,116)(14,117)(15,118)(16,119)(17,120)(18,121)(19,122)(20,123)(21,124)(22,125)(23,126)(24,127)(25,128)(26,129)(27,130)(28,131)(29,132)(30,133)(31,134)(32,135)(33,136)(34,137)(35,138)(36,139)(37,140)(38,141)(39,142)(40,143)(41,144)(42,145)(43,146)(44,147)(45,148)(46,149)(47,150)(48,151)(49,152)(50,153)(51,154)(52,155)(53,156)(54,157)(55,82)(56,83)(57,84)(58,85)(59,86)(60,87)(61,88)(62,89)(63,90)(64,91)(65,92)(66,93)(67,94)(68,95)(69,96)(70,97)(71,98)(72,99)(73,100)(74,101)(75,102)(76,103)(77,104)(78,105)(79,106)(80,107)(81,108)(163,190)(164,191)(165,192)(166,193)(167,194)(168,195)(169,196)(170,197)(171,198)(172,199)(173,200)(174,201)(175,202)(176,203)(177,204)(178,205)(179,206)(180,207)(181,208)(182,209)(183,210)(184,211)(185,212)(186,213)(187,214)(188,215)(189,216)>;
G:=Group( (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54)(55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108)(109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162)(163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216), (1,169,131,75)(2,170,132,76)(3,171,133,77)(4,172,134,78)(5,173,135,79)(6,174,136,80)(7,175,137,81)(8,176,138,82)(9,177,139,83)(10,178,140,84)(11,179,141,85)(12,180,142,86)(13,181,143,87)(14,182,144,88)(15,183,145,89)(16,184,146,90)(17,185,147,91)(18,186,148,92)(19,187,149,93)(20,188,150,94)(21,189,151,95)(22,190,152,96)(23,191,153,97)(24,192,154,98)(25,193,155,99)(26,194,156,100)(27,195,157,101)(28,196,158,102)(29,197,159,103)(30,198,160,104)(31,199,161,105)(32,200,162,106)(33,201,109,107)(34,202,110,108)(35,203,111,55)(36,204,112,56)(37,205,113,57)(38,206,114,58)(39,207,115,59)(40,208,116,60)(41,209,117,61)(42,210,118,62)(43,211,119,63)(44,212,120,64)(45,213,121,65)(46,214,122,66)(47,215,123,67)(48,216,124,68)(49,163,125,69)(50,164,126,70)(51,165,127,71)(52,166,128,72)(53,167,129,73)(54,168,130,74), 
(1,158)(2,159)(3,160)(4,161)(5,162)(6,109)(7,110)(8,111)(9,112)(10,113)(11,114)(12,115)(13,116)(14,117)(15,118)(16,119)(17,120)(18,121)(19,122)(20,123)(21,124)(22,125)(23,126)(24,127)(25,128)(26,129)(27,130)(28,131)(29,132)(30,133)(31,134)(32,135)(33,136)(34,137)(35,138)(36,139)(37,140)(38,141)(39,142)(40,143)(41,144)(42,145)(43,146)(44,147)(45,148)(46,149)(47,150)(48,151)(49,152)(50,153)(51,154)(52,155)(53,156)(54,157)(55,82)(56,83)(57,84)(58,85)(59,86)(60,87)(61,88)(62,89)(63,90)(64,91)(65,92)(66,93)(67,94)(68,95)(69,96)(70,97)(71,98)(72,99)(73,100)(74,101)(75,102)(76,103)(77,104)(78,105)(79,106)(80,107)(81,108)(163,190)(164,191)(165,192)(166,193)(167,194)(168,195)(169,196)(170,197)(171,198)(172,199)(173,200)(174,201)(175,202)(176,203)(177,204)(178,205)(179,206)(180,207)(181,208)(182,209)(183,210)(184,211)(185,212)(186,213)(187,214)(188,215)(189,216) );
G=PermutationGroup([[(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54),(55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108),(109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162),(163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216)], [(1,169,131,75),(2,170,132,76),(3,171,133,77),(4,172,134,78),(5,173,135,79),(6,174,136,80),(7,175,137,81),(8,176,138,82),(9,177,139,83),(10,178,140,84),(11,179,141,85),(12,180,142,86),(13,181,143,87),(14,182,144,88),(15,183,145,89),(16,184,146,90),(17,185,147,91),(18,186,148,92),(19,187,149,93),(20,188,150,94),(21,189,151,95),(22,190,152,96),(23,191,153,97),(24,192,154,98),(25,193,155,99),(26,194,156,100),(27,195,157,101),(28,196,158,102),(29,197,159,103),(30,198,160,104),(31,199,161,105),(32,200,162,106),(33,201,109,107),(34,202,110,108),(35,203,111,55),(36,204,112,56),(37,205,113,57),(38,206,114,58),(39,207,115,59),(40,208,116,60),(41,209,117,61),(42,210,118,62),(43,211,119,63),(44,212,120,64),(45,213,121,65),(46,214,122,66),(47,215,123,67),(48,216,124,68),(49,163,125,69),(50,164,126,70),(51,165,127,71),(52,166,128,72),(53,167,129,73),(54,168,130,74)], 
[(1,158),(2,159),(3,160),(4,161),(5,162),(6,109),(7,110),(8,111),(9,112),(10,113),(11,114),(12,115),(13,116),(14,117),(15,118),(16,119),(17,120),(18,121),(19,122),(20,123),(21,124),(22,125),(23,126),(24,127),(25,128),(26,129),(27,130),(28,131),(29,132),(30,133),(31,134),(32,135),(33,136),(34,137),(35,138),(36,139),(37,140),(38,141),(39,142),(40,143),(41,144),(42,145),(43,146),(44,147),(45,148),(46,149),(47,150),(48,151),(49,152),(50,153),(51,154),(52,155),(53,156),(54,157),(55,82),(56,83),(57,84),(58,85),(59,86),(60,87),(61,88),(62,89),(63,90),(64,91),(65,92),(66,93),(67,94),(68,95),(69,96),(70,97),(71,98),(72,99),(73,100),(74,101),(75,102),(76,103),(77,104),(78,105),(79,106),(80,107),(81,108),(163,190),(164,191),(165,192),(166,193),(167,194),(168,195),(169,196),(170,197),(171,198),(172,199),(173,200),(174,201),(175,202),(176,203),(177,204),(178,205),(179,206),(180,207),(181,208),(182,209),(183,210),(184,211),(185,212),(186,213),(187,214),(188,215),(189,216)]])
270 conjugacy classes
class 1 2A 2B 2C 2D 2E 2F 2G 3A 3B 4A 4B 6A ··· 6F 6G ··· 6N 9A ··· 9F 12A 12B 12C 12D 18A ··· 18R 18S ··· 18AP 27A ··· 27R 36A ··· 36L 54A ··· 54BB 54BC ··· 54DV 108A ··· 108AJ order 1 2 2 2 2 2 2 2 3 3 4 4 6 ··· 6 6 ··· 6 9 ··· 9 12 12 12 12 18 ··· 18 18 ··· 18 27 ··· 27 36 ··· 36 54 ··· 54 54 ··· 54 108 ··· 108 size 1 1 1 1 2 2 2 2 1 1 2 2 1 ··· 1 2 ··· 2 1 ··· 1 2 2 2 2 1 ··· 1 2 ··· 2 1 ··· 1 2 ··· 2 1 ··· 1 2 ··· 2 2 ··· 2
270 irreducible representations
dim 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 type + + + + + image C1 C2 C2 C2 C3 C6 C6 C6 C9 C18 C18 C18 C27 C54 C54 C54 D4 C3×D4 D4×C9 D4×C27 kernel D4×C54 C2×C108 D4×C27 C22×C54 D4×C18 C2×C36 D4×C9 C22×C18 C6×D4 C2×C12 C3×D4 C22×C6 C2×D4 C2×C4 D4 C23 C54 C18 C6 C2 # reps 1 1 4 2 2 2 8 4 6 6 24 12 18 18 72 36 2 4 12 36
Matrix representation of D4×C54 in GL3(𝔽109) generated by
108   0   0
  0  74   0
  0   0  74
,
1   0   0
0   0 108
0   1   0
,
1   0   0
0   1   0
0   0 108
G:=sub<GL(3,GF(109))| [108,0,0,0,74,0,0,0,74],[1,0,0,0,0,1,0,108,0],[1,0,0,0,1,0,0,0,108] >;
D4×C54 in GAP, Magma, Sage, TeX
D_4\times C_{54}
% in TeX
G:=Group("D4xC54");
// GroupNames label
G:=SmallGroup(432,54);
// by ID
G=gap.SmallGroup(432,54);
# by ID
G:=PCGroup([7,-2,-2,-2,-3,-2,-3,-3,365,192,166]);
// Polycyclic
G:=Group<a,b,c|a^54=b^4=c^2=1,a*b=b*a,a*c=c*a,c*b*c=b^-1>;
// generators/relations
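The basic structural facts on this page can be spot-checked outside of GAP/Magma/Sage as well. The sketch below (my own illustration, assuming SymPy is available) builds D4 × C54 as a permutation group and confirms the order and nilpotency:

```python
# Spot-check D4 x C54 with SymPy's permutation groups (illustrative sketch).
from sympy.combinatorics.named_groups import DihedralGroup, CyclicGroup
from sympy.combinatorics.group_constructs import DirectProduct

# DihedralGroup(4) has order 8; CyclicGroup(54) has order 54.
G = DirectProduct(DihedralGroup(4), CyclicGroup(54))
print(G.order())       # 432 = 2^4 * 3^3
print(G.is_abelian)    # False: the D4 factor is non-abelian
print(G.is_nilpotent)  # True, matching "nilpotent (class 2)" above
```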
https://math.stackexchange.com/questions/2436273/gilbarg-trudinger-why-does-this-theorem-imply-equicontinuity-of-first-and-sec
# Gilbarg & Trudinger: Why does this Theorem imply equicontinuity of first and second derivative?
I recently started to read Chapter 4 of "Elliptic Partial Differential Equations of Second Order" by Gilbarg and Trudinger. Below is what you need to know:
QUESTION:
Why does Theorem 4.8 imply the equicontinuity of solutions and of their first and second derivatives on compact subsets?
I understand why a function which satisfies a Hölder condition is equicontinuous. But I don't see why this Theorem implies $\;u\in C^{0,\alpha}\;,\;u\in C^{1,\alpha}\;,\;u\in C^{2,\alpha}\;$. I suppose my confusion about these norms is the important point here...
I've been stuck to this so any help would be valuable.
• I'm not sure exactly where your confusion is: the theorem tells you very directly that $u$ is bounded in the Hölder norm. Is your issue with the distinction between $|u|_{2,\alpha}$ and $|u|^*_{2,\alpha}$? – Anthony Carapetis Sep 20 '17 at 11:52
• @AnthonyCarapetis I understand neither of these norms completely, to be honest... As I see it, only the second derivative of $\;u\;$ satisfies a Holder condition by the Theorem... – kaithkolesidou Sep 23 '17 at 10:56
• The $C^2$ part of the norm already implies Holder control (Lipschitz even!) for $u$ and its first derivative. – Anthony Carapetis Sep 24 '17 at 3:53
$d_x=\text{dist}(x,\partial \Omega)$ and $d_{x,y}=\min\{d_x,d_y\}$. If you consider a set $U$ compactly contained in $\Omega$, then $\text{dist}(U,\partial \Omega)=c_0>0$. Hence, if $x,y\in U$, from the formula just above (4.20) and the one just below you get $$c_0|Du(x)|+c_0^2 |D^2 u(x)|\le RHS$$ and $$c_0^{2+\alpha}\frac{|D^2 u(x)-D^2 u(y)|}{|x-y|^\alpha}\le RHS,$$ so you get $C^{2,\alpha}$ bounds in $U$.
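The Hölder seminorm $\sup_{x\neq y}|u(x)-u(y)|/|x-y|^\alpha$ that drives the bound above can be illustrated numerically. The sketch below (my own illustration, not from the book) estimates it on a grid for $u(x)=\sqrt{x}$, which is Hölder continuous with exponent $\alpha=1/2$ but not Lipschitz near $0$:

```python
import math

def holder_seminorm(u, pts, alpha):
    """Discrete estimate of sup |u(x)-u(y)| / |x-y|^alpha over grid points."""
    return max(
        abs(u(x) - u(y)) / abs(x - y) ** alpha
        for i, x in enumerate(pts) for y in pts[:i]
    )

pts = [i / 100 for i in range(101)]        # grid on [0, 1]
est = holder_seminorm(math.sqrt, pts, 0.5)
print(est)  # ~1.0: |sqrt(x) - sqrt(y)| <= |x - y|^(1/2), sharp at y = 0
```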
http://koreascience.or.kr/article/JAKO201426636276328.page
# ON MATRIX POLYNOMIALS ASSOCIATED WITH HUMBERT POLYNOMIALS
• Pathan, M.A. (Centre for Mathematical and statistical Sciences (CMSS), KFRI) ;
• Accepted : 2014.07.29
• Published : 2014.08.31
#### Abstract
The principal object of this paper is to study a class of matrix polynomials associated with Humbert polynomials. These polynomials generalize the well known class of Gegenbauer, Legendre, Pincherl, Horadam, Horadam-Pethe and Kinney polynomials. We shall give some basic relations involving the Humbert matrix polynomials and then take up several generating functions, hypergeometric representations and expansions in series of matrix polynomials.
# 1. Introduction and Notations
Gould [6] (see also [11]) presented a systematic study of an interesting generalization of Humbert, Gegenbauer and several other polynomial systems defined by
where m is a positive integer, |t| < 1 and other parameters are unrestricted in general. For the table of main special cases of (1.1), including Gegenbauer, Legendre, Tchebycheff, Pincherle, Kinney and Humbert polynomials, see Gould [6]. In [10] Milovanovic and Dordevic considered the polynomials defined by the generating function
where m∈ ℕ := {1, 2, 3,...}, |t| < 1 and λ >
The explicit form of the polynomial (x) is
where the Pochhammer symbol is defined by $(\lambda)_n = \frac{\Gamma(\lambda+n)}{\Gamma(\lambda)} = \lambda(\lambda + 1)\cdots(\lambda + n - 1)$, $(\forall n \ge 1)$, and $(\lambda)_0 = 1$; $\Gamma(\cdot)$ is the familiar Gamma function.
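The equivalence of the product form and the Gamma-function form of the Pochhammer symbol can be checked numerically (an illustration, not part of the paper):

```python
from math import gamma, prod

def pochhammer(lam, n):
    """Rising factorial (lam)_n = lam (lam+1) ... (lam+n-1), with (lam)_0 = 1."""
    return prod(lam + k for k in range(n))

# Agrees with the Gamma-function form (lam)_n = Gamma(lam + n) / Gamma(lam)
print(pochhammer(3, 4))         # 3*4*5*6 = 360
print(gamma(3 + 4) / gamma(3))  # 360.0
```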
Note that
where (x) are Gegenbauer polynomials [12]. The set of polynomials denoted by (x) considered by Sinha [17]
is precisely a generalization of (x) defined and studied by Shreshtha [16]. In [14] the authors investigated Gegenbauer matrix polynomials defined by
where A is a positive stable matrix in the complex space ℂN×N, ℂ bing the set of complex numbers, of all square matrices of common order N. The explicit representation of the Gegenbauer matrix polynomials(x) has been given in [14, p. 104 (15)] in the form
In the last decade the study of matrix polynomials has been made more systematic with the consequence that many basic results of scalar orthogonality have been extended to the matrix case (see, for example [1]-[5] and [13]). We say that a matrix A in ℂN×N is a positive stable if Re(λ) > 0 for all λ ∈ 𝜎(A) where 𝜎(A) is the set of all eigenvalues of A. If A0, A1, ... , An ... , are elements of ℂN×N and An ≠ 0, then we call
In the last decade the study of matrix polynomials has been made more systematic, with the consequence that many basic results of scalar orthogonality have been extended to the matrix case (see, for example, [1]-[5] and [13]). We say that a matrix A in ℂN×N is positive stable if Re(λ) > 0 for all λ ∈ 𝜎(A), where 𝜎(A) is the set of all eigenvalues of A. If A0, A1, ... , An, ... , are elements of ℂN×N and An ≠ 0, then we call
P(x) = Anxn + An−1xn−1 + An−2xn−2 +...+ A1x + A0,
a matrix polynomial of degree n in x. If A+nI is invertible for every integer n ≥ 0 then
Thus we have
The hypergeometric matrix function
where A, B and C are matrices in ℂN×N such that C + nI is invertible for integer n≥ 0 and |z| < 1. The generalized hypergeometric matrix function (see (1.9)) is given in the form:
For the purpose of this work we recall the following relations [12]:
and
Also, we recall that if A(k, n) and B(k, n) are matrices in ℂN×N for n ≥ 0 and k ≥ 0 then it follows that [18]:
For m a positive integer, we can write
The primary goal of this work is to introduce and study a new class of matrix polynomials, namely the Humbert Matrix polynomials (x, y; a, b, c), which is general enough to account for many of polynomials involved in generalized potential problems (see [9]-[11]). This is interesting since, as will be shown, the matrix polynomials (x, y; a, b, c) is an extension to the matrix framework of the classical families of the polynomials mentioned above.
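Since the hypergeometric matrix function of (1.9)-(1.10) reduces to the classical Gauss series in the scalar (1×1) case, its partial sums can be sketched numerically; for instance ${}_2F_1(1,1;2;z)=-\ln(1-z)/z$. The code below is a scalar illustration under that assumption, not a matrix implementation:

```python
import math

def hyp2f1(a, b, c, z, terms=200):
    """Partial sum of the Gauss series: sum_n (a)_n (b)_n / (c)_n * z^n / n!."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        # Ratio of consecutive terms of the hypergeometric series
        term *= (a + n) * (b + n) / (c + n) * z / (n + 1)
    return total

z = 0.5
print(hyp2f1(1, 1, 2, z))     # ~1.386294, matching -ln(1-z)/z
print(-math.log(1 - z) / z)
```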
# 2. Humbert Matrix Polynomials
Let A be a positive stable matrix in ℂN×N. We define the Humbert matrix polynomials by means of the generating relation
where m is a positive integer and other parameters are unrestricted in general. Based on (1.11) and (1.12), formula (2.1) can be written in the form
which, in view of (1.15), gives us
By equating the coefficients of tn in (2.2), we obtain an explicit representation for the polynomials (x, y; a, b, c) in the form
Again, starting from (2.1), it is easily seen that
which, with the help of the results (1.11) and (1.12), gives
By equating the coefficients of tn in (2.4), we obtain another explicit representation for the polynomials (x, y; a, b, c) as follows:
According to the relation
Equation (2.5) can be written in the form
where A + + (n − k(m − 2)s)I and 2A + (n − (m − 2)s)I are invertible.
Now, we mention some interesting special cases of our results of this section. First, if in (2.3) and (2.5) we let y = 0, a = m and c = 1 = −b, we get
and
respectively, where is the matrix version of Humbert polynomials ( see [11]).
Next, for m = 3, Equations (2.8) and (2.9) further reduce to following explicit representations:
and
respectively, where (x) is the matrix version of Pincherle polynomials Pn (x) [11]. Moreover, in view of the relationship ( see Equations (1.5) and (2.1) )
equation (2.3) reduces to finite series representation for the matrix Gegenbauer polynomials (x) as follows:
Note that equation (2.12) is a known result (see [14, p. 109 (40)]).
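In the scalar case, the Gegenbauer generating function $(1-2xt+t^2)^{-\lambda}=\sum_n C_n^\lambda(x)\,t^n$ underlying the reduction (2.12) can be verified numerically via the standard three-term recurrence (an illustration, not part of the paper):

```python
def gegenbauer_coeffs(lam, x, nmax):
    """C_n^lambda(x) from the standard recurrence
    n C_n = 2x(n + lam - 1) C_{n-1} - (n + 2 lam - 2) C_{n-2}."""
    c = [1.0, 2 * lam * x]
    for n in range(2, nmax + 1):
        c.append((2 * x * (n + lam - 1) * c[n - 1]
                  - (n + 2 * lam - 2) * c[n - 2]) / n)
    return c

lam, x, t = 1.5, 0.3, 0.1
series = sum(cn * t**n for n, cn in enumerate(gegenbauer_coeffs(lam, x, 40)))
closed = (1 - 2 * x * t + t * t) ** (-lam)
print(series, closed)  # the two values agree to high precision
```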
# 3. Hypergeometric Matrix Representations
Starting from (2.3) and using the results
and
where
0≤ (m − 1)k ≤ n,
we get
which, in view of (1.16), gives us the following hypergeometric matrix representation:
where A+nI and are invertible. According to the relationship (2.12), Equation (3.4), yields the following known representation for the Gegenbauer matrix polynomials (see [14, p. 109 (39)]):
Next, if in (3.4) we put a = m, c = 1 = −b and y = 0, we get the following representation for the matrix Humbert polynomials (x):
# 4. More Generating Functions
By proceeding in a fashion similar to that in Section 2, in this section we aim at establishing the following additional generating functions for the Humbert matrix polynomials (x, y; a, b, c):
where A + nI, B + nI, 2A + (n + 2k)I + ((m − 2)s)I, B + (n + 2k)I , and are invertible matrices.
Derivation of the results (4.1) to (4.4). Starting from (2.3) and using the results (1.14) and (3.1), we get
which, on using the definition of the generalized matrix hypergeometric series (1.10), gives us the generating function (4.1). This completes the proof of (4.1).
If B is a positive stable matrix in the complex space ℂN×N of all square matrices of common order N, then, following the method of derivation of equation (4.1), we can easily establish relation (4.2).
Again, starting from (2.5), and employing the results (2.6) and (1.16), we can derive the result (4.3). The proof of Equation (4.4) is similar to that of (4.3). Therefore, we skip the details.
It is easy to observe that the main results (4.1) to (4.4) give a number of generating functions of matrix version polynomials, for example, the matrix polynomials (x) (see (1.2)), the matrix versions of Pincherle, Humbert, Sinha, Sheshtha, Kinney, Horadam and Horadam-Pethe polynomials (see [13] ).
# 5. Expansions
Expansion for the matrix polynomials (x, y; a, b, c) in series of Legendre, Hermite, Gegenbauer and Laguerre polynomials relevant to our present investigation are given as follows:
where 2A + (n + s)I and A + (n + s − k)I + are invertible matrices.
Derivation of the results (5.1) to (5.4). On inserting the result ( see [12, p. 181 (4)] )
in relation (2.7) , we get
which on using the result (1.16),and simplifying gives us (5.1). Similarly, the results (5.2), (5.3) and (5.4) are obtained by using the known results [12, p. 283 (36), p. 194 (4), p. 207 (2)]
and
#### References
1. R. Aktas: A Note on multivariable Humbert matrix Polynomials. Gazi University journal of science 27 (2014), no. 2, 747-754.
2. R. Aktas: A New multivariable extension of Humbert matrix Polynomials. AIP Conference Proceedings 1558 (2013), 1128-1131.
3. R. Aktas, B. Cekim & R. Sahin: The matrix version of the multivariable Humbert matrix Polynomials. Miskolc Mathematical Notes 13 (2012), no. 2, 197-208.
4. R.S. Batahan: A new extension of Hermite matrix polynomials and its applications. Linear Algebra Appl. 419 (2006), 82-92.
5. G. Dattoli, B. Ermand & P.E. Ricci: Matrix evolution equations and special functions. Computers and Mathematics with Applications 48 (2004), 1611-1617. https://doi.org/10.1016/j.camwa.2004.03.007
6. H.W. Gould: Inverse series relation and other expansions involving Humbert polynomials. Duke Math. J. 32 (1965), 697-711. https://doi.org/10.1215/S0012-7094-65-03275-8
7. A. Horadam: Polynomials associated with Gegenbauer Polynomials. Fibonacci Quart. 23 (1985), 295-399.
8. A. Horadam & S. Pethe: Gegenbauer polynomials revisited. Fibonacci Quart. 19 (1981), 393-398.
9. L. Jodar & E. Defez: On some properties of Humbert's polynomials II. Facta Universitatis (Niš), Ser. Math. 6 (1998), 13-17.
10. G.V. Milovanovic & G.B. Dordevic: A connection between Laguerre's and Hermite's matrix polynomials. Appl. Math. Lett. 11 (1991), 23-30.
11. M.A. Pathan & M.A. Khan: On polynomials associated with Humbert's polynomials. Publications de l'Institut Mathématique, Nouvelle série 62 (1997), 53-62.
12. E.D. Rainville: Special Functions. Macmillan, New York, 1960.
13. J. Sastre, E. Defez & L. Jodar: Laguerre matrix polynomials. series expansion: theory and computer applications, Mathematical and computer Modelling 44 (2006), 1025-1043. https://doi.org/10.1016/j.mcm.2006.03.006
14. K.A. Sayyed, M.S. Metwally & R.S. Batahan: Gegenbauer matrix polynomials and second order matrix differential equations. Div. Math. 12 (2004), 101-115.
15. K.A. Sayyed, M.S. Metwally & R.S. Batahan: On generalized Hermite matrix polynomials. Electronic Journal of Linear Algebra 10 (2003), 272-279.
16. N.B. Shrestha: Polynomial associated with Legendre polynomials. Nepali Math. Sci. Rep. 2 (1977), no. 1, 1-7.
17. S.K. Sinha: On a polynomial associated with Gegenbauer polynomial. Proc. Nat. Acad. Sci. India 54 (1989), 439-455.
18. H.M. Srivastava & H.L. Manocha: A treatise on Generating functions. Halsted press, John Wiley and Sons., New York, 1984.
https://codereview.stackexchange.com/questions/84431/unity-simple-map-generation
|
# Unity simple map generation
I am trying to generate a simple map like this. Everything works as I want, but I am new to programming. Is this a good way to do something like this? And if not, could someone explain to me where and why it is bad? And how bad is it for performance?
public GameObject cube;
public GameObject cube1;
public GameObject cube2;
Vector3 here;

void Start () {
    for (int y = 0; y < 50; y++) {
        for (int x = 0; x < 10; x++) {
            here = new Vector3(x, y, 0);
            int change = (int)Random.Range(0, 19);
            switch (change) {
                case 0: case 1: case 2: case 3: case 4: case 5: case 6: case 7: case 8: case 9: case 10:
                    Instantiate(cube, here, Quaternion.identity);
                    break;
                case 11: case 12: case 13: case 14: case 15: case 16:
                    Instantiate(cube1, here, Quaternion.identity);
                    break;
                case 17: case 18:
                    Instantiate(cube2, here, Quaternion.identity);
                    break;
            }
        }
    }
}
Running this code generates something random like this:
Instantiate takes 3 things (the GameObject you want to make, the coordinates where to make it, and a rotation), and it makes a copy of the GameObject (a prefab) at those coordinates.
• Cube - Red
• Cube1 - Green
• Cube2 - Blue
The reason for all of this is that I want to learn to make something random, like a map area of a game.
• Although you asked this in the wrong site, this is really bad for performance, and very badly written indeed. – Fatih BAKIR Mar 18 '15 at 23:03
• What kind of map is it that you want to instantiate? What does the Instantiate method do? What is the end result of this? Some more context about what kind of game we're dealing with here would make it a lot easier for us to come up with better answers. – Simon Forsberg Mar 18 '15 at 23:07
• edited my post. – user2351722 Mar 18 '15 at 23:14
• Could you post the GameObject code and the code for the Instantiate method as well? (don't worry about them being too long) – Simon Forsberg Mar 18 '15 at 23:22
• GameObject is a PREFAB; there is no code. Just make any prefab in Unity and assign it to the field in the script. Instantiate is a Unity method. – user2351722 Mar 18 '15 at 23:25
As a first step, I think this is cleaner than the switch
if (change <= 10)
{
Instantiate(cube, here, Quaternion.identity);
}
else if (change <= 16)
{
Instantiate(cube1, here, Quaternion.identity);
}
else
{
Instantiate(cube2, here, Quaternion.identity);
}
But there's still repetition with the calls to Instantiate. I'd suggest creating a new method like this
private GameObject GetRandomCube()
{
var change = (int)Random.Range(0, 19);
if (change <= 10)
{
return cube;
}
if (change <= 16)
{
return cube1;
}
return cube2;
}
And then use it like this in your loop
Instantiate(GetRandomCube(), here, Quaternion.identity);
If here isn't being used elsewhere in the class, move its declaration inside the inner loop, or just use it in the function call. Now your method looks like this
void Start()
{
for (var y = 0; y < 50; y++)
{
for (var x = 0; x < 10; x++)
{
Instantiate(GetRandomCube(), new Vector3(x, y, 0), Quaternion.identity);
}
}
}
• Isn't this just as inefficient as the original code? Calling Instantiate in a loop like this for a map is most definitely a bad idea ... Why do people keep upvoting this? The performance of this must be horrific if you try to scale it. – War Apr 18 '15 at 20:48
Is it really necessary to create an object for each tile?
In many 2d map-based games, the tiles are not objects, but each tile is only an int in a 2d array.
I don't know Unity, but I feel that it should be possible to render a 2d map in a better way than creating a GameObject for each tile.
If possible, I would make your code something more like this:
int[,] map = new int[50, 10];
for (int y = 0; y < map.GetLength(0); y++) {
    for (int x = 0; x < map.GetLength(1); x++) {
        int change = (int)Random.Range(0, 19);
        int type = 0;
        if (change <= 10)
        {
            type = 1;
        }
        else if (change <= 16)
        {
            type = 2;
        }
        else
        {
            type = 3;
        }
        map[y, x] = type;
    }
}
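The same weighted-selection idea can be sketched in runnable form. Python is used here since the snippet above is language-agnostic pseudocode; the 11:6:2 weights mirror the original 19-way switch.

```python
import random

def generate_map(width, height, weights=(11, 6, 2)):
    """Build a height x width grid of tile types 1..3, chosen with the
    same 11:6:2 weighting as the original switch over Random.Range(0,19)."""
    types = [1, 2, 3]
    return [[random.choices(types, weights=weights)[0] for _ in range(width)]
            for _ in range(height)]

tile_map = generate_map(10, 50)
```

Rendering then becomes a separate concern: the grid can be handed to whatever code draws the tiles.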
• thanks, trying this method as well. My question would be: how much better for performance is using if instead of switch? – user2351722 Mar 19 '15 at 13:48
• Logically when the compiler gets hold of this it'll basically turn out the same when compiled .. this offers literally no performance improvement whatsoever. all it does is compact the switch conditions in to 3 if / else conditions, ok I suppose if you only ever want 3 tile types in your map ... I guessed that this would likely not be the case in my answer. – War Apr 18 '15 at 20:53
Warning: this code sample may not work; it's all from my head and is only meant to give a general idea of how to generate a basic 2d tile map.
You will want to take this further into the 3d side for cubes, but the process is the same, just with the extra dimension.
God no please don't keep this up ...
It looks like you're creating a tile-based map using cubes, but let's start with just flat tiles so you can get your head round this. So let's see how we can do this much faster ...
Firstly, start by generating the map as data; this should be as simple as generating an int[x,y] where the values are between 0 and some max.
You can do it the way you are doing above or use something like perlin noise (appears to be the most commonly used method these days) ...
int tileTypes = 20;
float tileStep = 1f / (float)tileTypes;
var map = new int[10,50];
for (int y = 0; y < 50; y++) {
for (int x = 0; x < 10; x++) {
map[x,y] = Random.Range(0, tileTypes - 1);
}
}
Once you have that data you can generate a mesh with 1 quad per tile ...
// generate the vertex info for your map mesh
var verts = new List<Vector3>();
// also generate uv's
var uvs = new List<Vector2>();
for (int y = 0; y < 50; y++) {
    for (int x = 0; x < 10; x++) {
        verts.AddRange(new [] {
            new Vector3(x, y, 0),
            new Vector3(x + 1, y, 0),
            new Vector3(x, y + 1, 0),
            new Vector3(x + 1, y + 1, 0)
        });
        // depends on atlas layout but lets assume a single line of textures
        // but essentially this is where your map comes in ...
        var type = map[x, y];
        uvs.AddRange(new [] {
            new Vector2(tileStep * type, 0),
            new Vector2(tileStep * (type + 1), 0),
            new Vector2(tileStep * type, 1),
            new Vector2(tileStep * (type + 1), 1)
        });
    }
}
// index the verts
// numbers of verts / verts per quad * indexes needed per quad
var indexes = new int[(verts.Count / 4) * 6];
var index = 0;
// setup index info for each face
for (int vert = 0; vert < verts.Count; vert += 4)
{
// tri 1
indexes[index++] = vert;
indexes[index++] = vert + 1;
indexes[index++] = vert + 2;
// tri 2
indexes[index++] = vert;
indexes[index++] = vert + 2;
indexes[index++] = vert + 3;
}
// create the mesh
var mesh = new Mesh {
    vertices = verts.ToArray(),
    uv = uvs.ToArray()
};
mesh.SetTriangles(indexes, 0);
/// and finally ... draw!
GetComponent<Renderer> ().material.mainTexture = someTileAtlas;
GetComponent<MeshFilter>().sharedMesh = mesh;
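The index-buffer loop near the end is the part most worth internalizing: every quad contributes 4 vertices and 6 indices. A minimal sketch of just that step (Python, for easy checking; the C# loop above does the same thing):

```python
def quad_indices(n_quads):
    """Index buffer for quads stored as 4 consecutive vertices each:
    two triangles (v, v+1, v+2) and (v, v+2, v+3) per quad."""
    out = []
    for q in range(n_quads):
        v = 4 * q
        out += [v, v + 1, v + 2, v, v + 2, v + 3]
    return out
```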
• i will be reading your code and learning what it does for next week :DD – user2351722 Mar 18 '15 at 23:36
• yeh sorry about the bad ass brain dump ... essentially it generates a map which it puts in an array, then generates a "tile" (quad made up of 2 triangles) for each map entry. You will need to get your head round texture atlases but the idea is that the tiles will pick a different texture from your atlas based on the map data so if map[x,y] = 0 you get the first texture in the atlas rendered on that tile. it then repeats for all other tiles. There may be some logical errors in there its a raw brain dump. – War Mar 18 '15 at 23:42
https://codegolf.stackexchange.com/questions/61558/primes-numbers-with-prime-index/215371
|
# Primes numbers with prime index
Write a program or function that outputs/returns the first 10000 prime-indexed prime numbers.
If we call the nth prime p(n), this list is
3, 5, 11, 17, 31, 41, 59 ... 1366661
because
p(p(1)) = p(2) = 3
p(p(2)) = p(3) = 5
p(p(3)) = p(5) = 11
p(p(4)) = p(7) = 17
...
p(p(10000)) = p(104729) = 1366661
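A direct, unoptimized reading of this definition (Python sketch, trial division; fine for small n but far too slow for all 10000 terms):

```python
def nth_prime(n):
    """1-indexed nth prime by simple trial division."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

def p_of_p(n):
    """The nth prime-indexed prime: p(p(n))."""
    return nth_prime(nth_prime(n))
```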
Standard loopholes are forbidden, and standard output methods are allowed. You may answer with a full program, a named function, or an anonymous function.
• You should generally try to post challenges in the sandbox first (see the link on the right side) in order to work out the issues. – aditsu quit because SE is EVIL Oct 23 '15 at 17:43
• Optimizing for runtime is not what we do in a code-golf challenge; the shortest program always wins. – lirtosiast Oct 23 '15 at 18:53
• Primes with prime subscripts: A006450. – user12166 Oct 23 '15 at 21:55
• @bilbo Answers for code golf are usually accepted after a week, and should be accepted as the shortest successful code. If you wanted code speed, there is a tag for that. See this page about the tag code-golf. – Addison Crump Oct 24 '15 at 12:04
• All contests need an objective winning criterion; they are off topic otherwise. If you're going to judge answers by size and speed, you need to disclose a way to combine both. This should be done when the contest is posted, not 14 hours and 10 answers later. I have undone all speed-related edits, since the only other option would be to close this post for being off topic. – Dennis Oct 25 '15 at 14:10
# MATLAB/Octave, 25 bytes
p=primes(2e6)
p(p(1:1e4))
It doesn't get much more straightforward than this.
## Python, 72 bytes
P=p=1;l=[]
while p<82e5/6:l+=P%p*[p];P*=p*p;p+=1
for x in l:print l[x-1]
This terminates with a "list index out of range error" after printing the 10000 numbers, which is allowed by default.
Uses the Wilson's Theorem method to generate a list l of the primes up to the 10000th prime. Then prints the primes at the positions given by the list itself, shifted by 1 for zero-indexing, until we run out of bounds after the 10000th prime-indexed prime.
Conveniently, the upper bound of 1366661 can be estimated as 82e5/6 which is 1366666.6666666667, saving a char.
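Spelled out, the Wilson's-theorem trick above amounts to the following (a readable sketch, not golfed): with F = ((p-1)!)**2, the remainder F mod p is nonzero exactly when p is prime, and squaring the factorial also handles the edge case p = 4.

```python
def primes_up_to(n):
    """Primes below n, tested via the squared-factorial form of
    Wilson's theorem: ((p-1)!)**2 % p != 0 iff p is prime."""
    primes = []
    fact_sq = 1            # ((p-1)!)**2, grown incrementally
    p = 1
    while p < n:
        if fact_sq % p:    # nonzero remainder -> p is prime
            primes.append(p)
        fact_sq *= p * p
        p += 1
    return primes

# prime-indexed primes: take the q-th prime for every prime q found
ps = primes_up_to(200)
pip = [ps[q - 1] for q in ps if q - 1 < len(ps)]
```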
I'd like a single-loop method, printing prime-indexed primes as we add them, but it seems to be longer.
P=p=1;l=[]
while p<104730:
l+=P%p*[p]
if len(l)in P%p*l:print p
P*=p*p;p+=1
• This is way better than the garbage I was writing. +1 – user45941 Oct 23 '15 at 17:51
• This only prints 1229 numbers – aditsu quit because SE is EVIL Oct 23 '15 at 19:03
• @aditsu I think I see my mistake. Are you able to run this code with the bigger bound? – xnor Oct 23 '15 at 19:07
• It will probably take a long time :p – aditsu quit because SE is EVIL Oct 23 '15 at 19:16
• I think it finished \(@;◇;@)/ , it seems correct – aditsu quit because SE is EVIL Oct 24 '15 at 15:59
## J, 11 bytes
p:<:p:i.1e4
Outputs the primes in the format
3 5 11 17 31 41 59 67 83 109 127 ...
## Explanation
1e4 Fancy name for 10000
i. Integers from 0 to 9999
p: Index into primes: this gives 2 3 5 7 11 ...
<: Decrement each prime (J arrays are 0-based)
p: Index into primes again
# Mathematica, 26 25 23 bytes
Prime@Prime@Range@1*^4&
Pure function returning the list.
• Prime is Listable so a simple Prime@Prime@Range@1*^4& will do – user46060 Oct 23 '15 at 22:34
• I know the feeling ... In any case, I think this is the prettiest Mathematica solution I have seen on here! – user46060 Oct 23 '15 at 22:36
• Let me guess, the @ operator has higher precedence than ^ when writing Range@10^4? That's classic Mathematica messing up your game of golf. Good trick! – user46060 Oct 23 '15 at 23:14
p=[x|x<-[2..],all((>0).mod x)[2..x-1]]
f=take 10000$map((0:p)!!)p
Outputs: [3,5,11,17,31,41,59,67,83,109,127.....<five hours later>...,1366661]
Not very fast.
How it works: p is the infinite list of primes (naively checking mod x y > 0 for every y in [2..x-1]). Take the first 10000 elements of the list you get when (0:p)!! (get the nth element of p) is mapped over p. I have to adjust the list of primes that I take the elements from by prepending one number (-> 0:), because the index function (!!) is zero-based.
# AWK - 129 bytes
...oookay... too long to win points for compactness... but maybe it can gain some honor for the speed?
The x file:
BEGIN{n=2;i=0;while(n<1366662){if(n in L){p=L[n];del L[n]}else{P[p=n]=++i;if(i in P)print n}j=n+p;while(j in L)j=j+p;L[j]=p;n++}}
Running:
$ awk -f x | nl | tail
9991 1365913
9992 1365983
9993 1366019
9994 1366187
9995 1366327
9996 1366433
9997 1366483
9998 1366531
9999 1366609
10000 1366661
BEGIN {
n=2
i=0
while( n<1366662 ) {
if( n in L ) {
p=L[n]
del L[n]
} else {
P[p=n]=++i
if( i in P ) print n
}
j=n+p
while( j in L ) j=j+p
L[j]=p
n++
}
}
The program computes a stream of primes using L as a "tape of numbers": the primes already found jump around on L to flag upcoming numbers known to have a divisor. These jumping primes advance while the tape L is chopped off number by number from its beginning.
While chopping off the tape, the head L[n] being empty means n has no known (prime) divisor.
L[n] holding a value means that this value is a prime and known to divide n.
So either we have found a prime divisor or a new prime. This prime is then advanced to the next L[n+m*p] position on the tape found to be empty.
This is like the Sieve of Eratosthenes "pulled through a Klein bottle". You always act on the tape start. Instead of firing multiples of primes through the tape, you use the primes already found as cursors jumping away from the tape start by multiples of their own value until a free position is found.
While the outer loop generates one prime-or-not-prime decision per iteration, the primes found get counted and stored in P as keys; the value of each (key, value) pair is not relevant for the program flow.
If the current count i happens to be a key in P already (i in P), we have a prime of the p(p(i)) breed.
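The same incremental sieve reads more easily outside golfed AWK. A Python sketch (a dict stands in for the AWK array L, and popping the tape head replaces del):

```python
def prime_indexed_primes(limit):
    """Incremental sieve: L maps an upcoming composite to one of its
    prime factors; each prime 'jumps' ahead by multiples of itself.
    A prime whose running index is itself a known prime is collected."""
    L = {}                 # upcoming composite -> a prime dividing it
    primes_seen = set()    # primes found so far
    out = []
    count = 0
    for n in range(2, limit):
        if n in L:
            p = L.pop(n)   # n is composite; reuse its prime factor
        else:
            p = n          # n is prime
            count += 1
            primes_seen.add(n)
            if count in primes_seen:   # index is itself prime
                out.append(n)
        j = n + p          # advance the prime to a free tape position
        while j in L:
            j += p
        L[j] = p
    return out
```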
Running:
$ time awk -f x.awk | wc -l
10000
real 0m3.675s
user 0m3.612s
sys 0m0.052s
Take into account that this code does not use external precalculated prime tables. Time taken on my good old Thinkpad T60, so I think it deserves to be called fast. Tested with mawk and gawk on Debian8/AMD64
• good 129 bytes in gawk: now with Debian10/AMD64 on my corei7-i870@3.6Ghz: real 0m2,417s user 0m2,205s sys 0m0,042s – JeanClaudeDaudin Jan 25 '20 at 14:42
• you can save one byte with: BEGIN{n=2;i=0;while(n<1366662){if(n in L){p=L[n];del L[n]}else{P[p=n]=++i;if(i in P)print n}j=n+p;while(j in L)j+=p;L[j]=p;n++}} – JeanClaudeDaudin Jan 25 '20 at 15:08
# PARI/GP, 25 bytes
apply(prime,primes(10^4))
# CJam, 19
3D#{mp},_1e4<:(\f=p
You can try it online, but you'll need a little patience :p
For the record, the last number is 1366661.
# Husk, 11 10 bytes
(or 6 5 bytes to list the first 10,000 prime-indexed prime numbers, followed by [the infinite number of] all the rest of them)
Thanks to user and Razetime for suggesting the 6 5-byte version!
↑!4İ⁰Ṡm!İp
Try it online! (unfortunately times-out on TIO before printing all 10,000 prime-indexed primes, but at least outputs what it's found so far...)
↑!4İ⁰Ṡm!İp
↑  # print the first
!4 # 4th index of
İ⁰ # powers of 10 (so, the first 10,000), of
Ṡ  # hook: apply function to its own argument
m! # map index function to
İp # prime numbers
   # (implied by hook) to prime numbers
• Nice answer. The 10000 primes restriction is a shame - too many bytes are wasted on that. – user Nov 18 '20 at 15:22
• If you output an infinite list, it will still have the first 10000 items ;) – Razetime Nov 18 '20 at 15:27
• @user & Razetime - thanks for the suggestion from both of you - I've incorporated it (& credited you both)! – Dominic van Essen Nov 18 '20 at 15:45
• It didn't actually occur to me to print an infinite list - that's all Razetime, but thanks anyway. – user Nov 18 '20 at 16:51
# Perl, 55 bytes
use ntheory':all';forprimes{print nth_prime$_,\$/}104729
Uses @DanaJ's Math::Prime::Util module for perl (loaded with the pragma ntheory). Get it with:
cpan install Math::Prime::Util
cpan install Math::Prime::Util::GMP
# Japt, 11 bytes
Èj ©°Tj}jL²
Test it
# Python 3, 76 109 bytes - disqualified - requalified?
edit: I am sorry, I was going for primes but missed the point, and didn't reach the goal of "primes with prime index". You are free to grab and enhance it. I'll be glad to learn from you.
Based on Wilson's Theorem
def p(x):
c=n=2
for i in range(3,x):
c*=i-1
if c%i!=0:
print(n,i)
n+=1
## without index, 62 bytes
def p(x):
c=2
for i in range(3,x):
c*=i-1
if c%i!=0:print(i)
Can anyone give me an example or a good link on how to make a lambda function from this?
## edit: just to have it done right: 109 bytes
def p(x):
c=n=2
p=[2]
for i in range(2,x):
c*=i-1
if c%i!=0:
p.append(i)
if n in p: print(p[-1])
n+=1
• I think you've misunderstood the question. Your functions seem to output all prime numbers (and the first one precedes them with an index). The question defines 'prime-indexed prime numbers' as those in the list of all primes that have an index value that is also a prime. For instance, 7 - the 4th prime - isn't prime-indexed because 4 isn't a prime... – Dominic van Essen Nov 21 '20 at 16:25
• I am sorry. You are right, I misunderstood it :( Thank you very much for spending your time on clearing this up for me! – Bambeleme Nov 22 '20 at 0:59
# VyxalH, 8 7 6 5 bytes
Saved 1 + 1 bytes thanks to Aaron Miller
²ʀǎ‹ǎ
Try it Online!
H flag - Push 100 to the stack
²ʀǎ‹ǎ
² Squared (10,000)
ʀ Range [0, 10,000)
ǎ Map each a in that range to the ath prime
‹ Decrement each, because the challenge uses 1-indexing
ǎ Get the nth prime for each of those
• 7 bytes – Aaron Miller May 3 at 2:26
• @AaronMiller Thanks, forgot about auto-closing! – user May 3 at 2:27
• I completely forgot about the H tag, which presets the stack to 100, allowing for 5 bytes – Aaron Miller May 3 at 15:31
# 05AB1E, 7 bytes (non-competing)
Code:
4°L<Ø<Ø
Try it online! Note that I have changed the 4 into a 2. You can change the 2 back to a 4, but this will take a lot of time. I need to speed up the algorithm for this.
Explanation:
4° # Push 10000 (10 ^ 4)
L # Create the list [1 ... 10000]
< # Decrement on every element, [0 ... 9999]
Ø # Compute the nth prime
< # Decrement on every element
Ø # Compute the nth prime
# Jelly, 6 bytes
ȷ4RÆN⁺
Try it online!
Obviously, this times out on TIO. Here is a version that prints as it goes, which reaches 57493 (the 765th term) before timing out
## How it works
ȷ4RÆN⁺ - Main link. Takes no arguments
ȷ4 - 10000
R - [1, 2, 3, ..., 9999, 10000]
⁺ - Do twice:
ÆN - n'th prime of each
# 05AB1E, 6 bytes
тnÅp<Ø
Try it online!
т  # 100...
n  # ^ 2 (10000)...
Åp # list of the first 10000...
   # primes, with...
<  # 1-based (decremented)...
Ø  # indices mapped to the nth prime
   # implicit output
http://mathhelpforum.com/calculus/80657-complex-variables-print.html
|
Complex variables
• Mar 25th 2009, 03:02 PM
vincisonfire
Complex variables
Find all functions $f (z )$ satisfying the following two conditions:
(1) $f (z )$ is analytic in the disk $|z - 1| < 1$ .
(2) $f ( \frac{n}{n + 1} ) = 1 - \frac{1}{2n^2 + 2n + 1}$.
• Mar 26th 2009, 12:25 AM
Opalg
Quote:
Originally Posted by vincisonfire
Find all functions $f (z )$ satisfying the following two conditions:
(1) $f (z )$ is analytic in the disk $|z - 1| < 1$ .
(2) $f ( \frac{n}{n + 1} ) = 1 - \frac{1}{2n^2 + 2n + 1}$.
$\frac1{2n^2+2n+1} = \frac1{(n+1)^2+n^2} = \frac{\frac1{(n+1)^2}}{1+\bigl(\frac n{n+1}\bigr)^2} = \frac{\bigl(1-\frac n{n+1}\bigr)^2}{1+\bigl(\frac n{n+1}\bigr)^2}$, so you can take $f(z) = 1 - \frac{(1-z)^2}{1+z^2}$. (But could there be any other analytic functions taking those values at the points n/(n+1)?)
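The identity can be checked mechanically with exact rational arithmetic (a quick Python sketch):

```python
from fractions import Fraction

def f(z):
    """Candidate analytic function: f(z) = 1 - (1-z)^2 / (1+z^2)."""
    return 1 - (1 - z) ** 2 / (1 + z ** 2)

# f(n/(n+1)) should equal 1 - 1/(2n^2 + 2n + 1) exactly
checks = [f(Fraction(n, n + 1)) == 1 - Fraction(1, 2*n*n + 2*n + 1)
          for n in range(1, 50)]
```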
• Mar 26th 2009, 03:30 AM
chisigma
Quote:
Originally Posted by Opalg
... but could there be any other analytic functions taking those values at the points n/(n+1)?...
In a problem I'm working on, one important step is to demonstrate this lemma...
Let $f(*)$ be an analytic function whose values $f_{n}$ are known for $z=0,1,...,n, ...$. In this case, under certain conditions, there is only one analytic $f(*)$ for which $f(n)= f_{n}$.
Does Opalg think this is an interesting question to be discussed on MHF?... if yes, in which section?...
Kind regards
$\chi$ $\sigma$
• Mar 26th 2009, 01:06 PM
Opalg
Quote:
Originally Posted by chisigma
In a problem I'm working on, one important step is to demonstrate this lemma...
Let $f(*)$ be an analytic function whose values $f_{n}$ are known for $z=0,1,...,n, ...$. In this case, under certain conditions, there is only one analytic $f(*)$ for which $f(n)= f_{n}$.
Does Opalg think this is an interesting question to be discussed on MHF?... if yes, in which section?...
Yes, that's a nice question. You could ask it as a new thread in this section.
https://cs.stackexchange.com/questions/126027/z-specification-routes
|
# Z-Specification = Routes
I'm trying to write an invariant for this Z schema about routes.
1) The invariant should express that each route contains at least 20 different places. First of all, I thought of using a universal quantification such as:
∀x : ran routes • ( #x>20 ∧ x ≠ x')
However, I'm really not sure whether that correctly expresses 20 different places.
If you say
$$\forall x, x': ran\ routes . x \neq x'$$
You're only saying that the sequences are different. For example
⟨…⟩ would be different to ⟨…⟩
$$\forall x: ran\ routes . \#x>20$$
$$\forall p,p': Place • p \in ran\ (ran\ routes) \land p' \in ran\ (ran\ routes) \implies p \neq p'$$
One for the length as you had and one for the condition that all elements must be unique. Note the double $$ran$$ expression since a sequence is defined as a function between its indexes and its values (the range).
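Outside Z, the first conjunct can be paraphrased executably (a Python sketch, not Z: a route is modelled as a sequence of places, and ran r becomes set(r)):

```python
def routes_invariant(routes):
    """Every route must visit at least 20 *different* places:
    check the size of each route's range, i.e. its set of values."""
    return all(len(set(route)) >= 20 for route in routes)
```

Note that a route of length 25 that revisits places can still fail the check; that is exactly the difference between #r and #(ran r).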
http://tug.org/pipermail/texhax/2012-September/019686.html
|
# [texhax] Obsolete \centerline command used in amsbook class
Barbara Beeton bnb at ams.org
Fri Sep 14 16:46:17 CEST 2012
On Wed, 12 Sep 2012, Ari Meir Brodsky wrote:
The l2tabu document
(http://mirror.ctan.org/info/l2tabu/english/l2tabuen.pdf, section 2.1.3)
declares the \centerline command to be an obsolete TeX command that should
not be used in LaTeX. However, the \chapter command of amsbook.cls uses
\centerline. The following minimal example generates an error (where we
use the "nag" package to detect obsolete commands from l2tabu):
--------------------
\RequirePackage[l2tabu, abort]{nag}
\documentclass{amsbook}
\begin{document}
\chapter{Testing Nag with amsbook}
\end{document}
--------------------------
Is this a bug in amsbook? Should it be corrected?
the next day, ari sent the same message to
tech-support at ams.org, the listed address to
which bugs should be reported. we sent him
this response:
thank you for your report. i'm not sure that we consider the use of
centerline to be a bug, but it is certainly infelicitous.
i have put it on the list of things to be examined when next the ams
document class files are opened up for overhaul. unless some other
more urgent project comes up in the meantime, that should begin either
late this year or early next.
-- bb
More information about the texhax mailing list
https://astarmathsandphysics.com/a-level-physics-notes/principles-dimensions-units-and-error-analysis/2986-the-ampere.html
|
## The Ampere
The ampere (SI symbol A), named after André-Marie Ampère (1775–1836) is the SI unit of electric current and is one of the seven base units.
In practical terms, the ampere is a measure of the amount of electric charge passing a point in an electric circuit per unit time, with one coulomb per second constituting one ampere. In this way, amperes can be viewed as a flow rate, i.e. the number of (charged) particles transiting per unit time, and coulombs simply as the number of particles. We can write
I = Q/t
where Q is the charge passing a point in a time t (equal to the number of electrons multiplied by the charge on an electron) and I is a constant current.
Ampère's force law states that there is an attractive or repulsive force between two parallel wires carrying an electric current. This force is used in the formal definition of the ampere, which states that it is "the constant current that will produce an attractive force of 2 × 10^-7 newton per metre of length between two straight, parallel conductors of infinite length and negligible circular cross section placed one metre apart in a vacuum".
The standard ampere is accurately determined using a watt balance or using Ohm's Law, and is accurate to a few parts in 10^7.
Rather than a definition in terms of the force between two current-carrying wires, it has been proposed to define the ampere in terms of the rate of flow of elementary charges. Since a coulomb is approximately equal to 6.241 × 10^18 elementary charges, one ampere is approximately equivalent to 6.241 × 10^18 elementary charges per second. The proposed change would define 1 A as being the current in the direction of flow of a particular number of elementary charges per second.
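The charge-flow view is easy to sanity-check numerically (Python; the exact 2019 SI value of the elementary charge is assumed):

```python
e = 1.602176634e-19        # elementary charge in coulombs
I = 1.0                    # current in amperes (coulombs per second)
charges_per_second = I / e # ~6.24e18 elementary charges per second
```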
https://www.physicsforums.com/threads/what-is-the-mass-of-the-sphere.17914/
|
# Homework Help: What is the mass of the sphere?
1. Apr 5, 2004
### lollypop
hello everybody:
A hollow plastic sphere is held below the surface of a freshwater lake by a cord anchored to the bottom of the lake. The sphere has a volume of 0.700 m^3 and the tension in the cord is 930 N.
Calculate the buoyant force exerted by the water on the sphere. Take the density of water to be 1000 kg/m^3 and the free fall acceleration to be 9.80 m/s^2.
**for this I set up Buoyant = density × volume × gravity = 6860 N
What is the mass of the sphere? Take the density of water to be 1000kg/m^3 and the free fall acceleration to be 9.80m/s^2 .
**here I used Buoyant = mg-T and solved for m, so answer for m= 605 kg
The cord breaks and the sphere rises to the surface. When the sphere comes to rest, what fraction of its volume will be submerged? Express your answer as a percentage.
** here is my problem: so far all I have is that the sphere is at rest, so the net force on it is zero, i.e. F = B + T + (-mg) = 0, right?? I don't know what else to use for this part. Any clues??
2. Apr 5, 2004
### Chen
Correct.
Not so sure about that one. Shouldn't it be:
$$B = mg + T$$
Since both mg and T act in the same direction, downward? The cord isn't pushing the sphere up, it is pulling it down.
For the sphere to be at rest, mg must equal B. There is no tension anymore, since the cord was broken. You need to express B as a function of the volume of the sphere that is still submerged, and find it.
$$mg = B = \rho V' g$$
$$V' = \frac{m}{\rho }$$
Last edited: Apr 5, 2004
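Numerically, all three parts of the problem can be checked in a few lines (a sketch using the values from the thread):

```python
rho = 1000.0   # water density, kg/m^3
g = 9.80       # free-fall acceleration, m/s^2
V = 0.700      # sphere volume, m^3
T = 930.0      # cord tension, N

B = rho * V * g        # buoyant force while fully submerged
m = (B - T) / g        # from B = m*g + T (the cord pulls the sphere down)
V_sub = m / rho        # floating: m*g = rho * V_sub * g
fraction = V_sub / V   # fraction of the volume submerged

print(round(B), "N")                  # 6860 N
print(round(m, 1), "kg")              # 605.1 kg
print(round(100 * fraction, 1), "%")  # 86.4 %
```

So a little over 86% of the sphere stays underwater once the cord breaks.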
http://www.darwinproject.ac.uk/letter/?docId=letters/DCP-LETT-1252.xml
|
# To Charles Lyell [2 September 1849]1
Down Farnborough Kent
Sunday
My dear Lyell
It was very good of you to write me so long a letter which has interested me much; I shd. have answered it sooner, but I have not been very well for the few last days. Your letter has, also, flattered me much in many points.
I am very glad you have been thinking over the relation of subsidence & the accumulation of deposits:2 it has to me removed many great difficulties; please to observe that I have carefully abstained from saying that sediment is not deposited during periods of elevation, but only that it is not accumulated to sufficient thickness to withstand subsequent beach action: on both coasts of S. America, the amount of sediment deposited, worn away & redeposited oftentimes must have been enormous, but still there have been no wide formations produced: just read my discussion (p. 135 of my S. American Book) again with this in your mind.—3
I never thought of your difficulty (ie in relation to this discussion) of where was the land whence the 3 miles of S. Wales strata were derived?4 Do you not think that it may be explained, by a form of elevation, which I have always suspected to have been very common (& indeed had once intended getting all facts together). viz thus [DIAGRAM HERE] mountains & continent rising ocean bottom subsiding
The frequency of a deep ocean close to a rising continent, bordered with mountains, seems to indicate these opposite movements of rising & sinking close together: this wd. easily explain the S. Wales & Eocene cases.— I will only add that I shd think there wd be a little more sediment produced during subsidence than during elevation, from the resulting outline of coast after long period of rise.— There are many points in my vols. which I shd. have liked to have discussed with you, but I will not plague you: I shd like to hear whether you think there is anything in my conjecture on Craters of Elevation;5 I cannot possibly believe that St. Jago or Mauritius are the basal fragments of ordinary volcanos; I wd sooner even admit E. de Beaumont’s view than that; much as I wd sooner in my own mind in all cases follow you.— Just look at p. 232 in my S. America for trifling point,6 which however, I remember, to this day releived my mind of a considerable difficulty.—
I remember being struck with your discussion on the Missisippi beds7 in relation to Pampas, but I shd. wish to read them over again, I have, however, relent your work to Mrs Rich,8 who, like all whom I have met, have been much interested by it.— I will stop about my own geology.— But I see I must mention, that Scroope did suggest (& I have alluded to him, p. 118 but without distinct reference & I fear not sufficiently, though I utterly forget what he wrote9 ) the separation of basalt & trachyte, but he does not appear to have thought about the crystals which I believe to be the Keystone of the phenomenon: I cannot but think this separation of the molten elements has played a great part in the metamorphic rocks: how else cd the basaltic dykes come in great granitic districts such as those of Brazil?— What a wonderful book for labour is D’. Archiac!10
We are going on as usual: Emma desires her kind love to Lady Lyell: she boldly means to come to Birmingham with me & very glad she is that Lady Lyell will be there:11 two of our children have had a tedious slow fever.—12 I go on with my aqueous processes & very steadily but slowly gain health & strength. Against all rules13 I dined at Chevening with Ld. Mahon,14 who did me the grt. honour of calling on me, & how he heard of me, I can’t guess— I was charmed with Lady Mahon, & anyone might have been proud at the praises of agreeableness which came from her beautiful lips with respect to you.— I liked old Ld. Stanhope15 very much; though he abused geology & zoology heartily— “To suppose that the omnipotent God made a world, found it a failure, & broke it up & then made it again & again broke it up, as the geologists say, is all fiddle faddle”.— Describing species of birds & shells &c is all “fiddle faddle”.16 But yet I somehow liked him better than Ld Mahon.—
I am heartily glad we shall meet at Birmingham, as I trust we shall if my health will but keep up.— I work now every day at the Cirripedia for 2$\frac{1}{2}$ hours & so get on a little but very slowly.— I sometimes after being a whole week employed & having described, perhaps only 2 species agree mentally with Ld. Stanhope that it is all fiddle-faddle: however the other day I got the curious case of a unisexual, instead of hermaphrodite, cirripede, in which the female had the common cirripedial character, & in two of the valves of her shell had two little pockets, in each of which she kept a little husband;17 I do not know of any other case where a female invariably has two husbands.— I have one still odder fact, common to several species, namely that though they are hermaphrodite, they have small additional or as I shall call them Complemental males:18 one specimen itself hermaphrodite had no less than seven of these complemental males attached to it. Truly the schemes & wonders of nature are illimitable.— But I am running on as badly about my Cirripedia as about Geology: it makes me groan to think that probably, I shall never again have the exquisite pleasure of making out some new district,—of evoking geological light out of some troubled, dark region.— So I must make the best of my Cirripedia.—
Remember me most kindly to Mr & Mrs Bunbury.— I am sorry to hear how weak your Father is—19 | Yours most sincerely | C. Darwin
## Footnotes
The date is based on the endorsement and the dates of the Birmingham British Association meeting. CD left for Birmingham on 11 September 1849. In his health diary (Down House MS) CD recorded that he was well from 20 to 29 August, but that he was ‘Poorly’ and ‘exhausted’ from 30 August to 1 September.
Lyell was preparing a paper on ‘craters of denudation’ (C. Lyell 1850a), read to the Geological Society on 19 December 1849, in which the effects of elevation and subsidence on volcanic deposits were discussed.
CD’s explanation of the absence of conchiferous deposits as due to denudation and subsidence is in South America, pp. 135–9.
The denudation of South Wales had been the subject of much discussion between CD, Lyell, and Andrew Crombie Ramsay (see Correspondence vol. 3, letter to Charles Lyell, [3 October 1846], and letter to A. C. Ramsay, 10 October [1846]). Lyell did not, however, discuss South Wales in C. Lyell 1850a but rather in his anniversary address to the Geological Society on 15 February 1850 (C. Lyell 1850b, pp. liii, liv–vi).
This was the name given by Christian Leopold von Buch, and adopted by Jean Baptiste Armand Louis Léonce Élie de Beaumont, to the theory originally proposed by Alexander von Humboldt that volcanic cones were formed by upward pressure, rather than by the eruption of lava through vents. Such pressure raised originally horizontal layers into a dome that was easily broken through. Lyell had opposed this view as early as 1830 in the first volume of his Principles of geology (C. Lyell 1830–3). For a history of the controversy see Dean 1980. CD speculated that the mountains might still be considered ‘craters of elevation’ by slow elevation, in which the central hollows were formed ‘not by the arching of the surface, but simply by that part having been raised to a less height’ (Volcanic Islands, p. 96).
On p. 232 of South America, CD discussed the ‘Eruptive Sources of the Porphyritic Claystone and Greenstone Lavas’ and suspected that the difficulty of tracing the streams of porphyries to their ancient eruptive sources was because ‘the original points of eruption tend to become the points of injection’.
Lyell delivered a lecture about the delta of the Mississippi River at the Royal Institution on 8 June 1849 and published a detailed account in C. Lyell 1849, 2: 242–56. CD presumably refers to the latter, which is in the Darwin Library–CUL.
Mary Rich, née Mackintosh, Fanny Mackintosh Wedgwood’s half-sister.
Scrope 1825. CD’s copy (unannotated) is in the Darwin Library–Down. See Volcanic islands, p. 118.
Archiac 1847–60, volumes one and two, in which Étienne Jules Adolphe Desmier de Saint-Simon, Vicomte d’Archiac, summarised the progress of French geology from 1834–45. CD’s copy of volume one is in the Darwin Library–Down.
The British Association was due to meet in Birmingham, 12–19 September 1849. Lyell was president of section C (geology and physical geography); CD was a vice-president of the association. According to her diary, Emma followed CD to Birmingham on 12 September.
According to Emma Darwin’s diary, Henrietta Darwin developed a fever on 5 July and William Darwin fell ill on 11 July. William did not come ‘down stairs’ until 30 July, and he suffered a relapse on 16 August.
The ‘rules’ set out by James Manby Gully for CD’s water therapy at home.
Philip Henry Stanhope, Viscount Mahon, later 5th Earl Stanhope. Chevening, in Kent, was the family seat.
Philip Henry Stanhope, 4th Earl Stanhope, father of Viscount Mahon.
In the Autobiography CD mentioned that the Earl once said to him, ‘Why don’t you give up your fiddle-faddle of geology and zoology, and turn to the occult sciences?’ (p. 112).
Ibla cumingii (see Living Cirripedia (1851): 189–203).
The complemental males of Scalpellum, first mentioned in letter to J. L. R. Agassiz, 22 October 1848. See Living Cirripedia (1851): 231–43 and 281–93.
Charles Lyell Sr died on 8 November 1849.
## Bibliography
Archiac, Etienne Jules Adolphe Desmier de Saint-Simon, Vicomte d’. 1847–60. Histoire des progrès de la géologie de 1834 à 1845. 8 vols. Paris.
Autobiography: The autobiography of Charles Darwin 1809–1882. With original omissions restored. Edited with appendix and notes by Nora Barlow. London: Collins. 1958.
Correspondence: The correspondence of Charles Darwin. Edited by Frederick Burkhardt et al. 26 vols to date. Cambridge: Cambridge University Press. 1985–.
Dean, Dennis R. 1980. Graham Island, Charles Lyell, and the craters of elevation controversy. Isis 71: 571–88.
Living Cirripedia (1851): A monograph of the sub-class Cirripedia, with figures of all the species. The Lepadidæ; or, pedunculated cirripedes. By Charles Darwin. London: Ray Society. 1851.
Lyell, Charles. 1830–3. Principles of geology, being an attempt to explain the former changes of the earth’s surface, by reference to causes now in operation. 3 vols. London: John Murray.
Lyell, Charles. 1849. A second visit to the United States of North America. 2 vols. London. [Vols. 4,7]
Scrope, George Poulett. 1825. Considerations on volcanos, the probable causes of their phenomena, the laws which determine their march, the disposition of their products, and their connexion with the present state and past history of the globe; leading to the establishment of a new theory of the earth. London: W. Phillips.
South America: Geological observations on South America. Being the third part of the geology of the voyage of the Beagle, under the command of Capt. FitzRoy RN, during the years 1832 to 1836. By Charles Darwin. London: Smith, Elder & Co. 1846.
Volcanic islands: Geological observations on the volcanic islands, visited during the voyage of HMS Beagle, together with some brief notices on the geology of Australia and the Cape of Good Hope. Being the second part of the geology of the voyage of the Beagle, under the command of Capt. FitzRoy RN, during the years 1832 to 1836. By Charles Darwin. London: Smith, Elder & Co. 1844.
## Summary
Discusses effect of subsidence and elevation on deposits. Cites examples along coasts of South America and Wales. Proposes theory to explain thickness of deposits in south Wales.
Asks CL’s opinion of his theory of "craters of elevation" described in Volcanic islands.
Mentions CL’s comparison of Mississippi beds to the Pampas.
Comments on Poulett Scrope’s views on the separation of basalt and trachyte.
Describes his cirripede work.
## Letter details
Letter no.
DCP-LETT-1252
From
Charles Robert Darwin
To
Charles Lyell, 1st baronet
Sent from
Down
Source of text
American Philosophical Society (Mss.B.D25.80)
Physical description
8pp
## Please cite as
Darwin Correspondence Project, “Letter no. 1252,” accessed on 18 November 2019, https://www.darwinproject.ac.uk/letter/DCP-LETT-1252.xml
Also published in The Correspondence of Charles Darwin, vol. 4
letter
https://brilliant.org/discussions/thread/a-moderators-apology/
|
# A Moderator's Apology
I keep on disputing numerous problems of this type. I don't understand why the answer is so obvious to the author.
In my opinion, these questions are not really good.
I'll take Spiked Math's help to illustrate this:
Moral: there are infinitely many formulae that fit a finite number of elements. There is no single best-fit formula; it is just your perception.
6 years, 7 months ago
Fit this polynomial
- 6 years, 7 months ago
"much solution"
"very logic"
LMAO!!!
- 6 years, 7 months ago
Find the next number of the sequence
$1, 1, 1, 120, ?$
Ans: $25852016738884976640000$, because
$\Gamma (\Gamma (1))=1$
$\Gamma (\Gamma (2))=1$
$\Gamma (\Gamma (3))=1$
$\Gamma (\Gamma (4))=120$
$\Gamma (\Gamma (5))=25852016738884976640000$
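For positive integers, $\Gamma(n) = (n-1)!$, so these values can be checked in a couple of lines (a sketch):

```python
from math import factorial

def gamma_int(n):
    """Gamma function at a positive integer: Gamma(n) = (n-1)!."""
    return factorial(n - 1)

terms = [gamma_int(gamma_int(n)) for n in range(1, 6)]
print(terms)  # [1, 1, 1, 120, 25852016738884976640000]
```

The last term is $\Gamma(24) = 23!$, matching the "answer" above.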
- 6 years, 7 months ago
Oh man, I just saw this post, I should have used this for TKC.
- 6 years, 4 months ago
Hi; whenever I see those type questions I always just fit a polynomial through them. Admittedly, it was tough with Q2.
- 6 years, 7 months ago
:D I agree with you. I fit the polynomial too. But when I try to enter the answer, Brilliant says that only integer values are allowed.
- 6 years, 7 months ago
Well, I am against those "odd one out of the set" type questions too. We will always be able to choose a suitable prime p for which any 3 of the numbers are quadratic residues while the remaining one isn't.
- 6 years, 7 months ago
So, here's the ultimate way to cure this problem:
Let us come up with a way to put a polynomial through any set of inputs and outputs x and y.
Any clues?
For example, I defined one as follows:
Find the next number in the sequence: $0,0,0,0,0,0,0,0,0,\text{\_\_}$
Ans: 10!
Now if there was a way to, say, put a function through something like 1, 4, 9, 16, 25, __ - that'd be great.
- 6 years, 7 months ago
This is the standard W|A/Mathematica command for fitting a polynomial
InterpolatingPolynomial[{1,4,9,16},x]
replace {1,4,9,16} with a set of your own data
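The same trick works in plain Python with exact Lagrange interpolation — a sketch showing that a polynomial can be forced through the "obvious" squares and then any next term we like:

```python
from fractions import Fraction

def interpolate(xs, ys, x):
    """Evaluate the unique degree-(n-1) polynomial through (xs, ys) at x, exactly."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = Fraction(yi)
        for j, xj in enumerate(xs):
            if j != i:
                term *= Fraction(x - xj, xi - xj)  # Lagrange basis factor
        total += term
    return total

xs = [1, 2, 3, 4, 5]
ys = [1, 4, 9, 16, 42]  # squares... and then whatever we want next
print([int(interpolate(xs, ys, x)) for x in xs])  # [1, 4, 9, 16, 42]
```

Using `Fraction` keeps the arithmetic exact, so the fitted polynomial reproduces every given term — including the arbitrary 42.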
- 6 years, 7 months ago
Loved it .
- 6 years, 6 months ago
Agnishom bhaiya, what does being a moderator at brilliant.org mean?
- 3 years, 5 months ago
Moderators are responsible for helping improve the community experience, like curating nice problems, resolving reports, or seeding community discussions.
- 3 years, 5 months ago
how to be a moderator
- 3 years, 5 months ago
When Brilliant needs moderators, the Brilliant staff recruits moderators. There is no way to be one.
However, you can still help the community by actively participating in notes and solution discussions.
- 3 years, 5 months ago
Of course, after my boards I will actively participate in notes and discussions and also post questions. When were you and Rajdeep bhaiya made moderators?
- 3 years, 5 months ago
That is great. Members like you are what makes the community come alive.
- 3 years, 5 months ago
bhaiya is it worth to join a dummy school in 11 th and 12 th
- 3 years, 5 months ago
Sorry, I do not know what that means. What is a dummy school?
- 3 years, 5 months ago
1, 3, and guess what's next? Something incomputable: it's the TREE function. Next: 1, 1, 2, and guess what? It depends on the number of !'s: 0!!... = 1, 1!!... = 1, 2!!... = 2, 3!!... = ???
- 6 years, 7 months ago
https://eips.ethereum.org/EIPS/eip-4345
|
# EIP-4345: Difficulty Bomb Delay to June 2022
### Delays the difficulty bomb to be noticeable in June 2022.
Author: Tim Beiko, James Hancock, Thomas Jay Rush
Discussion: https://ethereum-magicians.org/t/eip-4345-difficulty-bomb-delay-to-may-2022/7209
Status: Final
Type: Standards Track (Core)
Created: 2021-10-05
## Abstract
Starting with FORK_BLOCK_NUMBER the client will calculate the difficulty based on a fake block number suggesting to the client that the difficulty bomb is adjusting 10,700,000 blocks later than the actual block number.
## Motivation
Targeting for The Merge to occur before June 2022. If it is not ready by then, the bomb can be delayed further.
## Specification
#### Relax Difficulty with Fake Block Number
For the purposes of calc_difficulty, simply replace the use of block.number, as used in the exponential ice age component, with the formula:
```python
fake_block_number = max(0, block.number - 10_700_000) if block.number >= FORK_BLOCK_NUMBER else block.number
```
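As a sketch, the replacement can be wrapped in a small helper. The numeric fork block below is an assumption for illustration (the Arrow Glacier activation block); the EIP itself leaves `FORK_BLOCK_NUMBER` to the activating hard fork:

```python
FORK_BLOCK_NUMBER = 13_773_000  # illustrative value only; set by the activating hard fork

def fake_block_number(block_number: int) -> int:
    """Block number fed into the exponential ice-age term after the fork."""
    if block_number >= FORK_BLOCK_NUMBER:
        return max(0, block_number - 10_700_000)
    return block_number

print(fake_block_number(14_000_000))  # 3300000 -- the bomb "sees" a much earlier block
```

Before the fork the real block number is used; after it, the ice age is pushed back by 10,700,000 blocks.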
## Rationale
The following script predicts a ~0.1 second delay to block time by June 2022 and a ~0.5 second delay by July 2022. This gives reason to act, because the effect will be seen, but not so much urgency that there is no room to work around it if needed.
```python
def predict_diff_bomb_effect(current_blknum, current_difficulty, block_adjustment, months):
    '''
    Predicts the effect on block time (as a ratio) in a specified amount of months in the future.
    Vars used for predictions:
    current_blknum = 13423376 # Oct 15, 2021
    current_difficulty = 9545154427582720
    months = 7.5 # June 2022
    months = 8.5 # July 2022
    '''
    blocks_per_month = (86400 * 30) // 13.3
    future_blknum = current_blknum + blocks_per_month * months
    # The tail of the script was lost in extraction; the lines below are a
    # reconstruction applying the standard ice-age term, 2**(period - 2) with
    # 100,000-block periods, and may differ from the published original.
    diff_adjustment = 2 ** ((future_blknum - block_adjustment) // 100000 - 2)
    return diff_adjustment / current_difficulty
```
## Backwards Compatibility
No known backward compatibility issues.
## Security Considerations
Misjudging the effects of the difficulty can mean longer blocktimes than anticipated until a hardfork is released. Wild shifts in difficulty can affect this number severely. Also, gradual changes in blocktimes due to longer-term adjustments in difficulty can affect the timing of difficulty bomb epochs. This affects the usability of the network but is unlikely to have security ramifications.
In this specific instance, it is possible that the network hashrate drops considerably before The Merge, which could accelerate the timeline by which the bomb is felt in block times. The offset value chosen aimed to take this into account.
Copyright and related rights waived via CC0.
https://socratic.org/questions/for-what-values-of-x-if-any-does-f-x-1-x-8-x-7-have-vertical-asymptotes
|
# For what values of x, if any, does f(x) = 1/((x+8)(x-7)) have vertical asymptotes?
Jul 30, 2018
A rational function has vertical asymptotes where its denominator is zero (and the numerator is not), so solve $(x+8)(x-7)=0$.
This gives $x=-8$ and $x=7$ as the vertical asymptotes.
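A quick numeric check (a sketch): the function's magnitude blows up as x approaches either root of the denominator.

```python
def f(x):
    return 1 / ((x + 8) * (x - 7))

for a in (-8, 7):
    # Evaluate just to the right of each candidate asymptote
    print(a, abs(f(a + 1e-9)))  # huge magnitudes near each asymptote
```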
https://www.fresheconomicthinking.com/2016/02/
## Sunday, February 14, 2016
### Reteaching economics in practice
Reforming economics teaching has been a heated topic of debate since the financial crisis. My personal views on this align closely with The University of Manchester’s Post Crash Economics Society (PCES), who have been calling for a pluralist approach that would rid economics teaching of its neoclassical core, and replace it with a critical study of economic questions and the diverse methods and approaches being taken to understand and answer those questions. The group Reteaching Economics has similar aims. Certainly a focus on empirical methods would be elevated to a core element, as the question of how we can know things is a primary concern of critical analysis. In all, such an approach is not an excuse to avoid mathematics, but an invitation to embrace the wide range of mathematical tools in use.
Change like this is a difficult task, and will ask a lot of students and teachers alike. But in my view it is also a difficult task to teach and learn the bland, abstract and often irrelevant economics that fills the undergraduate textbooks today. Even very complex concepts can be reduced into small components with memorable analogies that make them digestible. And if done right, a new approach should trigger more of a desire in students to learn and apply these new concepts and tools.
I have tried to shift my own teaching towards this approach. The purpose of this post is to share an example of taking an existing course, and the various constraints that came with it, and shifting it towards a more critical and pluralist teaching approach.
Last year I taught Managerial Economics at The University of Queensland. The students were not all economics majors, but were drawn from across the university’s disciplines.
One constraint was the prescribed textbook (which I could not change by the time I began teaching). It contained repeated lessons in applying optimal control problems willy-nilly to possible problems that might face a firm manager. Like how should I profit-maximise when I have two goods that are substitutes?
Solving an equation that relies on knowing the cross-price elasticity between your products is actually not very useful. The important questions are much deeper and more critical, like “How do I know the own-price and cross-price elasticities in real life?” I mean, are firms really just changing prices all the time to check on their elasticities? Or “What is the current price and why was it set at this level?”
So what did I change in the course?
I made sure that before they embarked on the “textbook” concepts that they were given a broader outline of the big ideas that are implicit in the textbook view, yet completely hidden. Like “What do prices do?” And, “If prices allocate resources, why do firms exist?” You can download my first week’s notes here (I make lecture notes a separate reference document and use lecture slides as a tool to build and explore example problems during class). To see my emphasis on providing a pluralist view, this is one of the exam questions I snuck in.
Which is the most accurate comment on the following statement - “Firms maximise profits”?
A. Always, because by definition they are profit maximisers.
B. Never, as profits do not always accurately reflect long run payoffs.
C. Often, and always in preference to potentially conflicting objectives.
D. Sometimes, but they may achieve this through intermediate objectives.
I also cut short many weeks of textbook regurgitation and replaced it with concepts and tools that don’t fit in the paradigm, but are nevertheless quite useful in practice. For example, I introduced a simple analysis of real options in investment decisions, which captures many of the realities of irreversible firm decisions (here). I also included a week on networks and evolution that introduced some simple models to capture some of the more interesting firm and market dynamics (here), and allowed us to have informed discussions on questions like performance pay for individuals or teams, and revisit the reasons firms exist at all. When I cover information problems, I discuss adverse selection, but also beneficial selection, and look at how the data shows the opposite of what the textbooks predict in Australian health insurance markets.
I also changed the assessment items from almost purely multiple choice and short answer exams, to a mix of some multiple choice, short and long answer questions that allow concepts to be applied to real life situations and data, and an assignment.
The assignment was about a real company facing real changing market conditions. Students were expected to determine the relevant data they needed to collect to complete the parts of the assignment, then find it, then use it (run appropriate regressions), then interpret it, and present it all in a professional way. Some of them told me it was the best assignment they had done in their whole course in terms of actually learning something interesting, and producing something they can be proud of. This was a relief. I had spent a lot of time preparing it.
Some of the economics majors said it was the only assignment they had ever been asked to do! This was quite worrying, but completely consistent with economics courses globally. The PCES survey showed that in a fifth of economics subjects, multiple choice questions make up more than 90% of the course grade, and in half of subjects they make up over 50% of the grade. As a general rule, economists are trained to solve equations and answer multiple choice questions.
So what is stopping these changes from happening across the board?
First, few academics maintain the connection with the wider business and public policy community needed to have a ready supply of topical recent examples where economics can be applied. They are so narrow in their research focus that, when teaching outside this area, they simply conform to the standard textbook.
Second, many academics are trained only as neoclassical economists. When my exam draft was reviewed by another academic for errors, they noted that they had never heard of real options, nor the evolutionary and network models in my course. This becomes an even more serious problem in courses where completely wrong models still fill the textbooks, as is the case with the money multiplier. I have had to stand up in class and say “Ignore chapter 19 of the textbook, it is simply wrong.” Some economists are too loyal and “profession proud” to say something like that.
Third, it is a lot of work for staff. For my teaching last year I spent about 18 hours preparing each lecture. This meant that I had new examples to use, new data, and updated readings that applied economic ideas to recent events. For the assignment I also had to research and write my own version of the assignment to make sure I was asking something that was possible to do, and that the data would reveal something that wasn't apparent on the surface (in this case that a firm which is often thought of as being "high end" actually sold inferior goods). When you want to have students research relevant recent real-life examples, this is what you have to do.
Fourth, some students prefer to just get the grade and don't like the ambiguity that a more pluralist approach entails. They have learnt how to study for multiple choice exams, how to solve equations, and so forth. Having to actually think, while being uncertain about how your thinking will go in the assessment, becomes a challenge.
I’d be interested to hear how others have gone about improving their economics teaching to be more critical and pluralist, and what constraints they faced.
## Sunday, February 7, 2016
### Land tax becomes respectable part of tax debate
After decades of political pressure that systematically clawed back state and local government’s ability to tax land, the debate has now swung back to this most efficient of taxes.
Land taxes are apparently a hot topic for debate at this year's NSW Labor annual conference.
New sweet-talking Prime Minister Malcolm Turnbull even said the formerly unspeakable words on national television over the weekend (at the 6:30 mark). Though he says that while it is a great policy economically, it is politically 11 out of 10 in terms of difficulty. No $hit. When your voter base is dominated by homeowners, and your party is made up of the country’s wealthiest landowners, you ain’t got a chance. Let’s do the hypothetical anyway. How much could be raised from state land taxes? In NSW just the exemptions to the current 1.6% (2% over $2.5 million in value) land tax amount to $700 million per year.
In Queensland the exemptions to the current land tax regime cost the state $1.3 billion in 2014-15, with the components of the costs of the exemptions summarised in the below image. Yes, you will see a $23 million per year land developers' concession - a tax break with exactly the opposite incentives to an efficient tax, since it reduces the cost of not developing land.
In Victoria land tax concessions are forecast to cost $2.9 billion in the current financial year, among a bunch of other concessions that amounted to a total of $4.9 billion. See the summary from the budget papers below.
So in these three east coast states alone we have about $5 billion per year just in land tax exemptions, plus many billions in other exemptions, including on gambling. If we remove the land tax concessions and double the land tax rate to around 4%, these three states could raise another $13 billion every year.
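As a quick sanity check of that arithmetic, a few lines of Python. Only the per-state exemption figures come from the post; the "implied current collections" line is my own back-of-envelope inference, not official data.

```python
# Figures quoted above: land tax exemptions forgone, AUD billions per year.
exemption_cost = {"NSW": 0.7, "QLD": 1.3, "VIC": 2.9}

forgone = sum(exemption_cost.values())
print(f"Exemptions across the three states: ~${forgone:.1f}bn/yr")

# Removing the exemptions and doubling the rate yields roughly
#   extra revenue = current_collections + 2 * forgone,
# so the ~$13bn figure implies current collections of about:
implied_current = 13.0 - 2 * forgone
print(f"Implied current collections: ~${implied_current:.1f}bn/yr")
```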
But that is just the start of the tax concessions for the nation’s wealthiest.
What about another tax loophole? The capital gains tax exemptions. Treasury estimates they cost the country $56 billion per year. Or discounted taxation on super contributions? There’s another $27 billion per year.
There are simply billions lying on the table in obvious tax loopholes for the rich, with land tax exemptions just one of many.
We have seen one big change - saying the words land tax has become acceptable for a politician. Now let’s hope this change starts snowballing, and that states can fight the propaganda of vested interests to use their tax powers more wisely and efficiently.
## Thursday, February 4, 2016
### Die solar roads. Just die.
Humans are quite smart for hairless apes. But sometimes I wonder whether we really do have the edge over our distant cousins.
Here’s an example. Solar roads. For some reason, the most idiotic idea ever still seems to gain popular attention and funding. France is now committing to investment in solar roads.
But, you are thinking, that actually does sound quite interesting and potentially wonderful. Oh, how innovative.
I know, right? The urge to jump on board this idea seems irresistible. But that’s our monkey brain doing the thinking. Because when you switch on your rational mind the whole thing looks like a big joke.
Our instincts are not good at thinking about a new idea in relation to a particular alternative. It takes conscious rational thinking to realise that for this to be a good idea, we need to think of alternatives to compare it to. Our default is to compare solar roads to no solar investment, hence the urge to think it sounds great.
So here’s an alternative. Solar panels anywhere but roads!
The solar roads concept also implies it is solving a problem that doesn’t exist: insufficient space for solar panels. But that is absolutely not a problem. Estimates suggest there are 400 sq km of residential roof space alone in Australia that could accommodate solar panels. That is more than enough to meet our total electricity needs, and it ignores the large industrial and commercial spaces available.
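That roof-space claim can be sanity-checked with round numbers. The panel output and capacity factor below are my own rough assumptions, not measured figures:

```python
# All inputs here are round-number assumptions, not measurements.
roof_area_m2 = 400e6         # ~400 sq km of residential roof space
panel_w_per_m2 = 200         # panel output under full sun (~20% efficient)
capacity_factor = 0.18       # averages out night, clouds, and panel angle

avg_power_gw = roof_area_m2 * panel_w_per_m2 * capacity_factor / 1e9
annual_twh = avg_power_gw * 8760 / 1000  # 8760 hours/year; GWh -> TWh
print(f"~{avg_power_gw:.0f} GW average, ~{annual_twh:.0f} TWh per year")
# Australia's total electricity use is on the order of 200 TWh per year,
# so residential roofs alone are in the right ballpark.
```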
Here’s my very brief list of why the idea is stupid.
1. Roads have things on them that block the sun: cars, people, and the shade of trees and buildings.
2. Roads cannot be angled to efficiently capture sunlight.
3. Roads need a superstructure above the solar panel that will reduce efficiency.
4. Building solar roads means expensive excavations and repairs that will block traffic flow.
5. The technology to do it is rubbish.
You may have heard that in Amsterdam there is a trial of a solar bike path. I think the title of this article sums up their result: “That Fancy New Solar Bike Path In Amsterdam Is Utter Bullshit”.
That article says it all.
What about a better alternative? How about building a roof over bikeways covered in solar? This alternative has a few things going for it:
1. No excavation
2. Cheaper
3. Keeps cyclists dry
4. Keeps snow off the bike path in cold climates
5. Can be angled to the sun
6. Is proven technology
7. Provides shade in hot climates
Plus more.
And, it has been done before quite successfully.
https://physics.stackexchange.com/questions/375572/gauge-transformation-of-trace-reversed-metric-perturbation
# Gauge transformation of trace-reversed metric perturbation
This question is in reference to Exercise 30.4.2 in Thomas Moore's A General Relativity Workbook, which asks you to show that a gauge transformation of the trace-reversed metric perturbation $H_{\mu\nu}$ transforms as follows (his notation):
\begin{align} H'_{\mu\nu} &= H_{\mu\nu} - \partial _\mu \xi_\nu - \partial _\nu \xi_\mu + \eta_{\mu\nu}\partial _\alpha \xi^\alpha \end{align}
He says to begin the proof by substituting the results of the previous exercise,
\begin{align} h'_{\mu\nu} &= h_{\mu\nu} - \partial _\mu \xi_\nu - \partial _\nu \xi_\mu \end{align}
into
\begin{align} H_{\mu\nu} &= h_{\mu\nu} - \frac{1}{2} \eta_{\mu\nu} h \end{align}
When I do this I get the following:
\begin{align} H'_{\mu\nu} &= \frac{\partial x^\alpha}{\partial x^{'\mu}} \frac{\partial x^\beta}{\partial x^{'\nu}} H_{\alpha\beta} \\ &= (\delta^\alpha_\mu - \frac{\partial \xi^\alpha}{\partial x^{'\mu}}) (\delta^\beta_\nu - \frac{\partial \xi^\beta}{\partial x^{'\nu}}) (h'_{\alpha\beta} + \partial _\alpha \xi_\beta + \partial _\beta \xi_\alpha - \frac{1}{2} \eta_{\alpha\beta} h)\\ &= (\delta^\alpha_\mu \delta^\beta_\nu - \delta^\beta_\nu \partial _{\mu} \xi^\alpha - \delta^\alpha_\mu \partial _{\nu} \xi^\beta + \partial _{\mu} \xi^\alpha \partial _{\nu} \xi^\beta) (h'_{\alpha\beta} + \partial _\alpha \xi_\beta + \partial _\beta \xi_\alpha - \frac{1}{2} \eta_{\alpha\beta} h)\\ &= h'_{\mu\nu} + \partial _\mu \xi_\nu + \partial _\nu \xi_\mu - \frac{1}{2} \eta_{\mu\nu} h - h'_{\alpha\nu} \partial _\mu \xi^\alpha - h'_{\beta\mu} \partial _\nu \xi^\beta + \frac{1}{2} h \eta_{\alpha\nu} \partial _\mu \xi^\alpha + \frac{1}{2} h \eta_{\mu\beta} \partial _\nu \xi^\beta \end{align}
after dropping terms of order $\mathcal{O}(|\partial _\mu \xi^\nu|^2)$. Simplifying,
\begin{align} H'_{\mu\nu} &= H_{\mu\nu} - h'_{\alpha\nu} \partial _\mu \xi^\alpha - h'_{\beta\mu} \partial _\nu \xi^\beta + \frac{1}{2} h (\eta_{\alpha\nu} \partial _\mu \xi^\alpha + \eta_{\mu\beta} \partial _\nu \xi^\beta)\\&= H_{\mu\nu} - h'_{\alpha\nu} \partial _\mu \xi^\alpha - h'_{\beta\mu} \partial _\nu \xi^\beta + \frac{1}{2} h (\partial _\mu \xi_\nu + \partial _\nu \xi_\mu) \end{align}
But I don't see how to get from here (assuming it's correct to this point) to the desired result
\begin{align} H'_{\mu\nu} &= H_{\mu\nu} - \partial _\mu \xi_\nu - \partial _\nu \xi_\mu + \eta_{\mu\nu}\partial _\alpha \xi^\alpha \end{align}
I've tried different combinations of raising/lowering indices with $\eta_{\mu\nu}$, renaming dummy indices, substituting for h and h', etc., but I just don't see it. Any help would be appreciated.
Following the approach suggested by Drake Marquis in his comment:
\begin{align} H'_{\mu\nu} &= h'_{\mu\nu} - \frac{1}{2} \eta_{\mu\nu} h'\\ &= h_{\mu\nu} - \partial _\mu \xi_\nu - \partial _\nu \xi_\mu - \frac{1}{2} \eta_{\mu\nu} h'\\ &= H_{\mu\nu} + \frac{1}{2} \eta_{\mu\nu} h - \partial _\mu \xi_\nu - \partial _\nu \xi_\mu - \frac{1}{2} \eta_{\mu\nu} h'\\ &= H_{\mu\nu} - \partial _\mu \xi_\nu - \partial _\nu \xi_\mu + \frac{1}{2} \eta_{\mu\nu} (h - h')\\ &= H_{\mu\nu} - \partial _\mu \xi_\nu - \partial _\nu \xi_\mu + \frac{1}{2} \eta_{\mu\nu} (\eta^{\alpha\beta} h_{\alpha\beta} - \eta^{\alpha\beta} h'_{\alpha\beta})\\ &= H_{\mu\nu} - \partial _\mu \xi_\nu - \partial _\nu \xi_\mu + \frac{1}{2} \eta_{\mu\nu} (\eta^{\alpha\beta} (h_{\alpha\beta} - h'_{\alpha\beta}))\\ &= H_{\mu\nu} - \partial _\mu \xi_\nu - \partial _\nu \xi_\mu + \frac{1}{2} \eta_{\mu\nu} (\eta^{\alpha\beta} (\partial _\alpha \xi_\beta + \partial _\beta \xi_\alpha))\\ &= H_{\mu\nu} - \partial _\mu \xi_\nu - \partial _\nu \xi_\mu + \frac{1}{2} \eta_{\mu\nu} (\partial _\alpha \xi^\alpha + \partial _\beta \xi^\beta)\\ &= H_{\mu\nu} - \partial _\mu \xi_\nu - \partial _\nu \xi_\mu + \frac{1}{2} \eta_{\mu\nu} (2 \partial _\alpha \xi^\alpha )\\ &= H_{\mu\nu} - \partial _\mu \xi_\nu - \partial _\nu \xi_\mu + \eta_{\mu\nu}\partial _\alpha \xi^\alpha \end{align} the desired result.
• You know, by definition, $H'_{\mu\nu}=h'_{\mu\nu}-h'\eta_{\mu\nu}/2$, then substitute the transformation of $h_{\mu\nu}$ to give the final result. This is the simplest method. – Drake Marquis Dec 21 '17 at 1:02
• Thanks very much for the suggestion, it's a more straightforward approach, that I hope I documented correctly in the main post. I still don't know why what I was originally doing led me into the weeds, but I'm satisfied with the proof. – P. Gallez Dec 22 '17 at 0:13
http://math.stackexchange.com/questions/696078/point-as-an-element-of-an-affine-space-vs-point-as-an-element-of-a-topological-s
# Point as an element of an affine space vs point as an element of a topological space?
I am searching for the "most natural" definition of a (geometrical/space) point as an element of "something" in mathematics (I am trying to design a small computational geometry library on strong mathematical basis). For example, for a vector, it's easy: a vector is an element of a vector space, end of story. Same for a tensor: a tensor is an element of the tensor product of vector spaces. But how to define a point in the same way?
1. Can a point be defined as an element of an affine space or as an element of a topological space?
2. If both are true, what are the difference between the two types of points, and what would be the most natural (it's subjective) approach to define a point in geometry?
3. Moreover, do other approaches exist (a point is an element of XXXX)?
EDIT: I know that a point is an axiom/primitive notion but to design the library I need to make a choice. And I want to know the best option...
A point is a primitive notion. In particular, a primitive notion is not defined in terms of previously defined concepts, but is only motivated informally, usually by an appeal to intuition and everyday experience. A vector or a tensor is not an undefined notion; in fact their existence relies on the existence (the definition) of a vector space (tensor product space). If you cancel the concept of vector space, what is a vector? But even if you forgot all your mathematical knowledge, you would still know what a point is. The only reason is that the "definition" of a point (better said: the concept of a point) doesn't lie in mathematics.
If you define a point as an element of a set X, you are not defining a "point"; you are just calling the elements of that set X "points".
In fact, defining a "point" as an element of a set $X$ automatically brings some extra properties to the "point": properties which are related to the starting set $X$ and so don't hold in general.
An example: if you define a "point" as an element of an affine space, you automatically have the property of subtracting one point from another to obtain a vector (a "point" of a vector space, which has other properties in relation to other "points" of the same set). Such a property is impossible to have if you define "points" as elements of a topological space. Assuming I understand what you mean, the only valid criterion for defining "points" in this way is: what properties do I want attached to them?
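For what it's worth, in library terms that affine-space distinction could look something like the following hypothetical Python sketch. The `Point`/`Vector` names and 2D coordinates are illustrative only, not a prescribed design: subtracting points yields a vector, and translating a point by a vector yields a point, while "point + point" is deliberately absent from the interface (a type checker would flag it).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vector:
    x: float
    y: float

    def __add__(self, other: "Vector") -> "Vector":
        return Vector(self.x + other.x, self.y + other.y)

@dataclass(frozen=True)
class Point:
    x: float
    y: float

    def __sub__(self, other: "Point") -> Vector:
        # The defining affine operation: the difference of two points is a vector.
        return Vector(self.x - other.x, self.y - other.y)

    def __add__(self, v: Vector) -> "Point":
        # Translating a point by a vector gives another point.
        return Point(self.x + v.x, self.y + v.y)

p, q = Point(3.0, 4.0), Point(1.0, 1.0)
d = p - q     # Vector(x=2.0, y=3.0)
print(q + d)  # prints Point(x=3.0, y=4.0), i.e. back to p
```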
See the edit... – Vincent Mar 2 '14 at 11:04
Mmh but I don't see how you can define a point as an element of something. No problem if you think I've missed the point :) – Riccardo Mar 2 '14 at 11:08
@Vincent According to me it is like trying to define a set as an element of the power set of something. It neither creates order nor explains the notation better – Riccardo Mar 2 '14 at 11:10
From the programming point of view, to declare an element (for runtime execution) one of the best options is to define the set it comes from at compile-time: when you declare a vector you can say "I take an element of this vector space". If you have the knowledge on affine space/topological space, could you elaborate on question 1./2. so I can upvote your explanation? – Vincent Mar 2 '14 at 11:15
@Vincent tried to elaborate further my opinion :) – Riccardo Mar 2 '14 at 11:28
A "space" in mathematics in its most primitive form consists of an underlying set, and elements of this set are usually referred to as "points". But what makes a set an actual "space" is some sort of additional structure imposed on the underlying set, the most simple geometric structure being a topology on the set.
http://mathhelpforum.com/latex-help/190068-only-test-using-mathtype-print.html
# Only A Test of Using MathType
• October 10th 2011, 11:37 PM
Mathlv
Only A Test of Using MathType
$P({M_2}) \cap P({F_3}) = \frac{2}{3} \times \frac{3}{4}$
$\sqrt {{a^2} + {b^2}}$
http://www.gapdays.de/gapdays2018-fall/program/
# Program
## Workshop
• Start: Monday September 17th 2018 at 09:30
• Finish: Friday September 21st 2018 at 15:30
https://bit.ly/gapdays2018
## Detailed Schedule
We will have a stand-up every morning at 9:30am and every afternoon at 4:30pm to lay out the plans for the day and summarise the achievements respectively. The talks will usually take place after lunch. The titles will be announced in September.
## Topics and projects
1. GAP - Julia integration (continue work from Oscar GAP Kickoff Coding Sprint (restricted access))
2. Towards GAP 4.10 (the stable-4.10 branch is planned for September 1st, and GAP 4.10.0 for October 1st; during the GAP days, we can finish the overview of changes in GAP 4.10, and if necessary fix any last minute regressions)
3. libGAP (access GAP as a library, for use in SageMath, OSCAR and elsewhere; for details, please read this email by Markus Pfeiffer)
4. Metapackages for GAP distribution (for details, read this pull request)
### Further topics
The following are some topics from past GAP days, and suggestion for new things. If you’d like to revisit one of these, or some other subject, please contact the organizers. However, we would like to avoid opening up too many different topics, and instead focus on a few selected ones.
1. Data structures in GAP (providing stacks, queues, hash sets, hash maps, …; see also https://github.com/gap-packages/datastructures)
2. MatrixObj (continue work from the previous GAP Days)
3. Documentation, Tutorials, Accessibility (talk about how to improve them, and ideally also actually do that)
4. Website (we probably need a completely new one; can we hire somebody to do it? what is needed? etc.)
5. Package manager for GAP (Dima, …)
6. Implement make install (Max, …)
7. … your idea here ???
http://conceptmap.cfapps.io/wikipage?lang=en&name=Force_field_(physics)
# Force field (physics)
Plot of a two-dimensional slice of the gravitational potential in and around a uniform spherical body. The inflection points of the cross-section are at the surface of the body.
In physics a force field is a vector field that describes a non-contact force acting on a particle at various positions in space. Specifically, a force field is a vector field ${\displaystyle {\vec {F}}}$, where ${\displaystyle {\vec {F}}({\vec {x}})}$ is the force that a particle would feel if it were at the point ${\displaystyle {\vec {x}}}$.[1]
## Examples of force fields
• In Newtonian gravity, a particle of mass M creates a gravitational field ${\displaystyle {\vec {g}}={\frac {-GM}{r^{2}}}{\hat {r}}}$ , where the radial unit vector ${\displaystyle {\hat {r}}}$ points away from the particle. The gravitational force experienced by a particle of mass m is given by ${\displaystyle {\vec {F}}=m{\vec {g}}}$ .[2][3]
• An electric field ${\displaystyle {\vec {E}}}$ is a vector field. It exerts a force on a point charge q given by ${\displaystyle {\vec {F}}=q{\vec {E}}}$ .[4]
• A gravitational force field is a model used to explain the influence that a massive body extends into the space around itself, producing a force on another massive body.[5]
## Work done by a force field
As a particle moves through a force field along a path C, the work done by the force is a line integral
${\displaystyle W=\int _{C}{\vec {F}}\cdot d{\vec {r}}}$
This value is independent of the velocity/momentum that the particle travels along the path. For a conservative force field, it is also independent of the path itself, depending only on the starting and ending points. Therefore, if the starting and ending points are the same, the work is zero for a conservative field:
${\displaystyle \oint _{C}{\vec {F}}\cdot d{\vec {r}}=0}$
If the field is conservative, the work done can be more easily evaluated by realizing that a conservative vector field can be written as the gradient of some scalar potential function:
${\displaystyle {\vec {F}}=\nabla \phi }$
The work done is then simply the difference in the value of this potential in the starting and end points of the path. If these points are given by x = a and x = b, respectively:
${\displaystyle W=\phi (b)-\phi (a)}$
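The relation W = φ(b) − φ(a) is easy to check numerically. The sketch below uses an arbitrary example potential φ(x, y) = x²y and a deliberately curved path, following the article's sign convention F = ∇φ; the midpoint-rule integrator is a minimal illustration, not a production quadrature routine.

```python
import math

def phi(x, y):
    # An arbitrary example potential.
    return x**2 * y

def F(x, y):
    # F = grad(phi) = (2xy, x^2), matching the article's convention F = grad(phi).
    return (2 * x * y, x**2)

def work(path, n=20000):
    """Approximate the line integral of F along t -> path(t), t in [0, 1]."""
    total, (px, py) = 0.0, path(0.0)
    for i in range(1, n + 1):
        x, y = path(i / n)
        fx, fy = F((x + px) / 2, (y + py) / 2)  # evaluate F at the chord midpoint
        total += fx * (x - px) + fy * (y - py)
        px, py = x, y
    return total

# A curved path from a = (1, 1) to b = (-1, 2).
path = lambda t: (math.cos(math.pi * t), 1 + t**2)
W = work(path)
print(W, phi(-1, 2) - phi(1, 1))  # both ~ 1.0
```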
http://mathhelpforum.com/statistics/3004-combinatorics.html
1. ## combinatorics
A class has 10 boys and 12 girls. In how many ways can a committee of four be selected if the committee can have at most two girls?
Thanks,
2. Originally Posted by gogo08
A class has 10 boys and 12 girls. In how many ways can a committee of four be selected if the committee can have at most two girls?
Thanks,
Possibilities
0 Girls
1 Girl
2 Girls
In each case respectively we have,
$_{10}C_4 \cdot _{12}C_0=210$
$_{10}C_3 \cdot _{12}C_1=1440$
$_{10}C_2 \cdot _{12}C_2=2970$
In total,
4620
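The three case counts can be verified by brute force in Python; note that they sum to 210 + 1440 + 2970 = 4620.

```python
from itertools import combinations
from math import comb

# Label the 22 students; indices 0-9 are boys, 10-21 are girls.
people = ["b"] * 10 + ["g"] * 12

# Brute force: enumerate all C(22, 4) = 7315 committees, keep those with <= 2 girls.
count = sum(1 for c in combinations(range(22), 4)
            if sum(people[i] == "g" for i in c) <= 2)

# Case-by-case formula from the post: 0, 1, or 2 girls.
formula = comb(10, 4) + comb(10, 3) * comb(12, 1) + comb(10, 2) * comb(12, 2)
print(count, formula)  # 4620 4620
```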
http://sites.millersville.edu/bikenaga/math-proof/functions/functions.html
# Functions
Definition. A function f from a set X to a set Y is a subset f of the product $X \times Y$ such that if $(x, y_1), (x, y_2) \in f$, then $y_1 = y_2$.
Instead of writing $(x, y) \in f$, you usually write $f(x) = y$. In ordinary terms, to say that an ordered pair is in f means that x is the "input" to f and y is the corresponding "output".
The requirement that $(x, y_1), (x, y_2) \in f$ implies $y_1 = y_2$ means that there is a unique output for each input. (It's what is referred to as the "vertical line test" for a graph to be a function graph.)
(Why not say, as in precalculus or calculus classes, that a function is a rule that assigns a unique element of Y to each element of X? The problem is that the word "rule" is ambiguous. The definition above avoids this by identifying a function with its graph.)
The notation $f: X \to Y$ means that f is a function from X to Y.
X is called the domain, and Y is called the codomain. The image (or range) of f is the set of all outputs of the function: $f(X) = \{ f(x) \mid x \in X \}$.
Note that the domain and codomain are part of the definition of a function. For example, consider the following functions:
These are different functions; they're defined by the same rule, but they have different domains or codomains.
Example. Suppose $f: \mathbb{R} \to \mathbb{Z}$ is given by $f(x) =$ an integer bigger than $x$.
Is this a function?
This does not define a function. For example, $f(2.6)$ could be 3, since 3 is an integer bigger than 2.6. But it could also be 4, or 67, or 101, or .... This "rule" does not produce a unique output for each input.
Mathematicians say that such a function --- or such an attempted function --- is not well-defined.
In basic algebra and calculus, functions are often given by rules, without mention of a domain and codomain. In this case, the natural domain ("domain" for short) is the largest subset of $\mathbb{R}$ consisting of numbers which can be "legally" plugged into the function.
Example. Find the natural domain of
(i) I must have in order for to be defined.
(ii) I must have in order to avoid division by zero.
Hence, the domain of f is .
Example. Define by
Prove that .
In words, the claim is that the outputs of y consist of all numbers other than 3. To see why 3 might be omitted, note that
That is, $y = 3$ is a horizontal asymptote for the graph. Now this isn't a proof, because a graph can cross a horizontal asymptote; it just provides us with a "guess".
To prove , I'll show each set is contained in the other.
Suppose . On scratch paper, I solve for x in terms of y and get . (This is defined, since .) Now I prove that this input produces y as an output:
This proves that , so .
Conversely, suppose , so for some x. I must show that . I'll use proof by contradiction: Suppose . Then
This contradiction proves . Thus, .
Therefore, .
Definition. Let X and Y be sets. A function is:
(a) Injective if for all $x_1, x_2 \in X$, $f(x_1) = f(x_2)$ implies $x_1 = x_2$.
(b) Surjective if for all $y \in Y$, there is an $x \in X$ such that $f(x) = y$.
(c) Bijective if it is injective and surjective.
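For functions between finite sets, these definitions can be checked directly. A minimal Python sketch (representing a function as a dict from domain elements to outputs is my own choice of encoding, not part of the text):

```python
def is_injective(f: dict) -> bool:
    # Injective: no two inputs share an output.
    return len(set(f.values())) == len(f)

def is_surjective(f: dict, codomain: set) -> bool:
    # Surjective: every element of the codomain is an output.
    return set(f.values()) == codomain

def is_bijective(f: dict, codomain: set) -> bool:
    return is_injective(f) and is_surjective(f, codomain)

f = {1: "a", 2: "b", 3: "c"}
print(is_bijective(f, {"a", "b", "c"}))  # True
print(is_injective({1: "a", 2: "a"}))    # False: inputs 1 and 2 share output "a"
```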
Definition. Let A, B, and C be sets, and let $f: A \to B$ and $g: B \to C$ be functions. The composite of f and g is the function $g \circ f: A \to C$ defined by $(g \circ f)(x) = g(f(x))$.
In my opinion, the notation "$g \circ f$" looks a lot like multiplication, so (at least when elements are involved) I prefer to write "$g(f(x))$" instead. However, the composite notation is used often enough that you should be familiar with it.
Example. Define by and by . Find and .
Example. Define and by
Find:
(a) .
(b) .
(a)
(b)
If you get confused doing this, keep in mind two things:
(i) The variables used in defining a function are "dummy variables" --- just placeholders. For example, defines the same function f as above.
(ii) The variables are "positional", so in " " the "x" stands for "the first input to f" and the "y" stands for "the second input to f". In fact, you might find it helpful to rewrite the definition of f this way:
Definition. Let S and T be sets, and let $f: S \to T$ be a function from S to T. A function $g: T \to S$ is called the inverse of f if $g(f(s)) = s$ for all $s \in S$ and $f(g(t)) = t$ for all $t \in T$.
Not all functions have inverses; if the inverse of f exists, it's denoted $f^{-1}$. (Despite the crummy notation, "$f^{-1}$" does not mean "$1/f$".)
You've undoubtedly seen inverses of functions in other courses; for example, the inverse of is . However, the functions I'm discussing may not have anything to do with numbers, and may not be defined using formulas.
Example. Define by . Find the inverse of f.
To find the inverse of f (if there is one), set . Swap the x's and y's, then solve for y in terms of x:
Thus, . To prove that this works using the definition of an inverse function, do this:
Recall that the graphs of f and $f^{-1}$ are mirror images across the line $y = x$:
I'm mentioning this to connect this discussion to things you've already learned. However, you should not make the mistake of equating this special case with the definition. The inverse of a function is not defined by "swapping x's and y's and solving" or "reflecting the graph about $y = x$". A function might not involve numbers or formulas, and a function might not have a graph. The inverse of a function is what the definition says it is --- nothing more or less.
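For a function given as a finite table of input-output pairs --- whether or not numbers are involved --- the inverse is literally the swapped table. This sketch (the naming is my own) also shows why an inverse fails to exist when two inputs share an output:

```python
def inverse(f, domain):
    # Build the swapped input-output table; fail if f repeats an output,
    # since then no single function can undo f.
    table = {}
    for x in domain:
        y = f(x)
        if y in table:
            raise ValueError(
                f"not injective: {table[y]!r} and {x!r} both map to {y!r}")
        table[y] = x
    return lambda y: table[y]
```

For instance, inverting $f(x) = 2x + 1$ on a finite domain sends 5 back to 2, while trying to invert squaring on $\{-1, 0, 1\}$ raises an error.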
Lemma. Let $f: A \to B$ and $g: B \to C$ be invertible functions. Then $g \circ f$ is invertible, and its inverse is $(g \circ f)^{-1} = f^{-1} \circ g^{-1}$.
Proof. Let and let . Then
This proves that $(g \circ f)^{-1} = f^{-1} \circ g^{-1}$.
The next result is very important, and I'll often use it in showing that a given function is bijective.
Theorem. Let S and T be sets, and let be a function. f is invertible if and only if f is both injective and surjective.
Proof. ($\Leftarrow$) Suppose that f is both injective and surjective. I'll construct the inverse function $f^{-1}: T \to S$.
Take $t \in T$. Since f is surjective, there is an element $s \in S$ such that $f(s) = t$. Moreover, s is unique: If $f(s_1) = t$ and $f(s_2) = t$, then $f(s_1) = f(s_2)$. But f is injective, so $s_1 = s_2$.
Define $f^{-1}(t) = s$, where s is the unique element of S such that $f(s) = t$.
I have defined a function $f^{-1}: T \to S$. I must show that it is the inverse of f.
Let $s \in S$. By definition of $f^{-1}$, to compute $f^{-1}(f(s))$ I must find an element $s' \in S$ such that $f(s') = f(s)$. But this is easy --- just take $s' = s$. Thus, $f^{-1}(f(s)) = s$.
Going the other way, let $t \in T$. By definition of $f^{-1}$, to compute $f^{-1}(t)$ I find an element $s \in S$ such that $f(s) = t$. Then $s = f^{-1}(t)$, so
$f(f^{-1}(t)) = f(s) = t.$
Therefore, $f^{-1}$ really is the inverse of f.
($\Rightarrow$) Suppose f has an inverse $f^{-1}: T \to S$. I must show f is injective and surjective.
To show that f is surjective, take $t \in T$. Then $f(f^{-1}(t)) = t$, so I've found an element of S --- namely $f^{-1}(t)$ --- which f maps to t. Therefore, f is surjective.
To show that f is injective, suppose $s_1, s_2 \in S$ and $f(s_1) = f(s_2)$. Then
$s_1 = f^{-1}(f(s_1)) = f^{-1}(f(s_2)) = s_2.$
Therefore, f is injective.
Corollary. The composite of bijective functions is bijective.
Proof. Since a function is bijective if and only if it has an inverse, the corollary follows from the fact that the composite of invertible functions is invertible.
Example. Define by
Prove that f is bijective by constructing an inverse for f and proving that it works.
First, I do some scratch work to guess the inverse. I need , where u and v are the unknown outputs of in terms of a and b. Now
Equating corresponding components, I get
The second equation gives . Plugging this into the first equation, I get
Based on this scratch work, I'll define by
Note that I got this by working backwards; I still need to verify that f and are inverses.
This proves that f and are inverses, so f is bijective.
Example. (a) Prove that $f: \mathbb{R} \to \mathbb{R}$ given by $f(x) = x^2$ is neither injective nor surjective.
(b) Prove that $f: \mathbb{R} \to [0, \infty)$ given by $f(x) = x^2$ is not injective, but it is surjective.
(c) Prove that $f: [0, \infty) \to [0, \infty)$ given by $f(x) = x^2$ is injective and surjective.
(a) It is not injective, since $f(1) = 1$ and $f(-1) = 1$: Different inputs may give the same output.
It is not surjective, since there is no $x \in \mathbb{R}$ such that $f(x) = -1$.
(b) It is not injective, since $f(1) = 1$ and $f(-1) = 1$: Different inputs may give the same output.
It is surjective, since if $y \in [0, \infty)$, $\sqrt{y}$ is defined, and
$f(\sqrt{y}) = (\sqrt{y})^2 = y.$
(c) It is injective, since if $f(a) = f(b)$, then $a^2 = b^2$. But in this case $a, b \ge 0$, so $a = b$ by taking square roots.
It is surjective, since if $y \in [0, \infty)$, then $\sqrt{y} \in [0, \infty)$ is defined, and
$f(\sqrt{y}) = (\sqrt{y})^2 = y.$
Notice that in this example, the same "rule" --- $f(x) = x^2$ --- was used, but whether the function was injective or surjective changed. The domain and codomain are part of the definition of a function.
Example. Let be given by
Prove that f is injective.
Suppose and . I must prove that .
means that . Clearing denominators and doing some algebra, I get
Therefore, f is injective.
Example. Let be given by
Prove that f is injective.
It would probably be difficult to prove this directly. Instead, I'll use the following fact:
Suppose $f: \mathbb{R} \to \mathbb{R}$ is differentiable, and that $f'(x) > 0$ for all x or $f'(x) < 0$ for all x. Then f is injective.
In this case, note that, since even powers are nonnegative,
Since the derivative is always positive, f is always increasing, and hence f is injective.
Here's a proof of the result I used in the last example.
Proposition. Suppose $f: \mathbb{R} \to \mathbb{R}$ is differentiable, and that $f'(x) > 0$ for all x or $f'(x) < 0$ for all x. Then f is injective.
Proof. Suppose that f is differentiable and $f'(x) > 0$ for all x. Suppose that $f(a) = f(b)$. I want to prove that $a = b$.
Suppose on the contrary that $a \ne b$. There's no harm in assuming $a < b$ (otherwise, switch them). By the Mean Value Theorem, there is a number c such that $a < c < b$ and
$f'(c) = \dfrac{f(b) - f(a)}{b - a}.$
Since $f(a) = f(b)$ and $f'(x) > 0$ for all x,
$0 = \dfrac{f(b) - f(a)}{b - a} = f'(c) > 0.$
This contradiction proves that $a = b$. Therefore, f is injective.
The same proof works with minor changes if $f'(x) < 0$ for all x.
Example. Define by .
(a) Prove directly that f is injective and surjective.
(b) Prove that f is injective and surjective by showing that f has an inverse .
(a) Suppose . Then , so , and hence . Therefore, f is injective.
Suppose . I must find x such that . I want . Working backwards, I find that . Verify that it works:
This proves that f is surjective.
(b) Define . I'll prove that this is the inverse of f:
Therefore, is the inverse of f. Since f is invertible, it's injective and surjective.
Example. Define by . Show that f is not injective, but that f is surjective. f is not injective, since and .
The graph suggests that f is surjective. To say that every is an output of f means graphically that every horizontal line crosses the graph at least once (whereas injectivity means that every horizontal line crosses that graph at most once).
To prove that f is surjective, take . I must find such that , i.e. such that .
The problem is that finding x in terms of y involves solving a cubic equation. This is possible, but it's easy to change the example to produce a function where solving algebraically is impossible in principle.
It follows from the definition of these infinite limits that there are numbers a and b such that
But f is continuous --- it's a polynomial --- so by the Intermediate Value Theorem, there is a point c such that and . This proves that f is surjective.
Note, however, that I haven't found c; I've merely shown that such a value c must exist.
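A bisection search makes the Intermediate Value Theorem argument concrete: it closes in on a c with $f(c) = y$ without ever solving the cubic algebraically. The cubic below is a hypothetical stand-in, since the example's own formula is not shown:

```python
def find_preimage(f, y, a, b, tol=1e-12):
    # Assumes f is continuous with f(a) <= y <= f(b). Each halving step
    # preserves that bracket, so the interval always contains a preimage.
    assert f(a) <= y <= f(b)
    while b - a > tol:
        c = (a + b) / 2
        if f(c) < y:
            a = c
        else:
            b = c
    return (a + b) / 2

cubic = lambda x: x ** 3 - x   # hypothetical stand-in for the example's f
c = find_preimage(cubic, 5.0, 0.0, 10.0)
```

This mirrors the proof exactly: the numbers a and b with $f(a) \le y \le f(b)$ come from the infinite limits, and continuity guarantees the bracket never becomes empty.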
Example. Define
Prove that f is surjective, but not injective.
Let . If , then , so
If , then is defined and , so
This proves that f is surjective.
However,
Hence, f is not injective.
Example. Consider the function defined by
(a) Show that f is injective and surjective directly, using the definitions.
(b) Show that f is injective and surjective by constructing an inverse .
(a) First, I'll show that f is injective. Suppose . I want to show that .
means
Equate corresponding components:
Rewrite the equations:
The second of these equations gives . Substitute this into the first equation:
Plugging this into gives , so . Therefore, , and f is injective.
To show f is surjective, I take a point , the codomain. I must find such that .
I want
I'll work backwards from this equation. Equating corresponding components gives
The second equation gives , so plugging this into the first equation yields
Plugging this back into gives
Now check that this works:
Therefore, f is surjective.
(b) I actually did the work of constructing the inverse in showing that f was surjective: I showed that if , that
But the second equation implies that if exists, it should be defined by
Now I showed above that
For the other direction,
This proves that , as defined above, really is the inverse of f. Hence, f is injective and surjective.
In linear algebra, you learn more efficient ways to show that functions like the one above are bijective.
Example. Consider the function defined by
Prove that f is neither injective nor surjective.
Therefore, f is not injective.
To prove f is not surjective, I must find a point which is not an output of f. I'll show that is not an output of f. Suppose on the contrary that . Then
This gives two equations:
Multiply the second equation by -2 to obtain . Now I have and , so , a contradiction.
Therefore, there is no such , and f is not surjective.
Example. Let be defined by
Is f injective? Is f surjective?
First, I'll show that f is injective. Suppose . I have to show that .
Equating the second components, I get . By taking cube roots, I get . Equating the first components, I get . But , so subtracting I get . Now taking the log of both sides gives . Thus, , and f is injective.
I'll show that f is not surjective by showing that there is no input which gives as an output. Suppose on the contrary that . Then
Equating the second components gives , so . Equating the first components gives . But , so I get . This is impossible, since is always positive. Therefore, f is not surjective.
http://research-archive.liv.ac.uk/488/
## The University of Liverpool Research Archive
# One-loop renormalisation of quark bilinears for overlap fermions with improved gauge actions
Horsley, R.; Perlt, H.; Rakow, P.E.L.; Schierholz, G. and Schiller, A. (2005) One-loop renormalisation of quark bilinears for overlap fermions with improved gauge actions. Nuclear Physics B, 713 (1-3). pp. 601-606. ISSN 0550-3213
## Abstract
We compute lattice renormalisation constants of local bilinear quark operators for overlap fermions and improved gauge actions. Among the actions we consider are the Symanzik, L\"uscher-Weisz, Iwasaki and DBW2 gauge actions. The results are given for a variety of $\rho$ parameters. We show how to apply mean field (tadpole) improvement to overlap fermions. The question, what is a good gauge action, is discussed from the perturbative point of view. Finally, we show analytically that the gauge dependent part of the self-energy and the amputated Green functions are independent of the lattice fermion representation, using either Wilson or overlap fermions.
Item Type: Article
Additional Information: LTH 618. arXiv Number: arXiv:hep-lat/0404007v2. Available online 10 February 2005. Issue: 2 May 2005.
Keywords: FERMION LATTICE FIELD THEORY; GAUGE FIELD THEORY; FERMION OVERLAP; LATTICE FIELD THEORY ACTION; RENORMALIZATION; MEAN FIELD APPROXIMATION; NUMERICAL CALCULATIONS
Subjects: Q Science > QC Physics
Divisions: Academic Faculties, Institutes and Research Centres > Faculty of Science > Department of Mathematical Sciences
DOI: 10.1016/j.nuclphysb.2005.01.044
Related URL: http://arxiv.org/abs/hep-lat/0404007
Refereed: No
Status: Published
ID Code: 488
Deposited: 19 Aug 2008 10:17
Last Modified: 20 May 2011 13:26
http://math.stackexchange.com/questions/233570/sigma-algebra-from-a-finite-class-of-sets/233600
# Sigma-algebra from a finite class of sets
This is problem 1.7 from "Measure Theory and Probability Theory" by K. Athreya and S. Lahiri.
Let $\Omega$ be a nonempty set and let $\mathcal{B} \equiv \{ B_i: 1 \leq i \leq k < \infty\} \subset \mathcal{P}(\Omega)$, not necessarily a partition. Find $\sigma \langle \mathcal{B} \rangle$.
(Hint: For each $\delta = (\delta_1, \delta_2, ..., \delta_k), \delta_i \in \{0,1\}$ let $B_{\delta} = \bigcap_{i=1}^k B_i{(\delta_i)}$, where $B_i(0) = B_i^c$ and $B_i(1) = B_i$, $i \geq 1$. Show that $\sigma( \mathcal{B} ) = \{E : E = \bigcup_{\delta \in J} B_{\delta}, J \subset \{1,2, ..., k\}\}.$)
Their hint baffles me. If I take $\Omega = \mathbb{N}$ and $\mathcal{B} = \{ B_1, B_2 \}$, $B_1 = \{1,2\}$, $B_2 = \{2,3\}$, then for
$\delta = (0,0) \qquad B_{\delta} = \mathbb{N} \setminus (B_1 \cup B_2)$;
$\delta = (1,0) \qquad B_{\delta} = \{1\}$;
$\delta = (0,1) \qquad B_{\delta} = \{3\}$;
$\delta = (1,1) \qquad B_{\delta} = \{2\}$;
If I want to show that $\mathbb{N} \in \sigma( \mathcal{B} )$, do I have to take the union of all four $B_{\delta}$ as $E$? But the index set is $J \subset \{1, 2\}$, does it mean I can choose at most two of them? I can't get $\mathbb{N}$ with only two of them or can I?
You want to write $\Bbb N$ as a set function of $B_1$ and $B_2$, right? We have that $\{2\}=B_1\cap B_2$ and $\{1\}=B_1\setminus \{2\}$. This gives that $\emptyset\in\sigma(\cal B)$. – Davide Giraudo Nov 9 '12 at 14:43
• First, we can assume without loss of generality that $\emptyset$ is one of the $B_i$, otherwise add it and work with $k+1$.
• Show that the collection $\{B_{\delta},\delta\in\{0,1\}^k\}$ consists of pairwise disjoint sets.
• Show that if $\{S_1,\dots,S_N\}$ is a partition of $\Omega$, then the $\sigma$-algebra generated by this collection is $\{\bigcup_{j\in J}S_j,J\subset \{1,\dots,N\}\}$.
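For a finite stand-in for $\Omega$, the hint can be verified by brute force: build the atoms $B_\delta$, then take all unions of atoms (the nonempty $B_\delta$ form a partition, so by the third bullet these unions are exactly $\sigma\langle\mathcal{B}\rangle$). A Python sketch with my own naming:

```python
from itertools import product, combinations, chain

def sigma_algebra(omega, sets):
    # The atoms B_delta: intersect each B_i or its complement in omega.
    atoms = []
    for delta in product([0, 1], repeat=len(sets)):
        atom = set(omega)
        for d, B in zip(delta, sets):
            atom &= B if d else (omega - B)
        if atom:
            atoms.append(frozenset(atom))
    # sigma(B) consists of all unions of atoms (including the empty union).
    result = set()
    for r in range(len(atoms) + 1):
        for J in combinations(atoms, r):
            result.add(frozenset(chain.from_iterable(J)))
    return result
```

With $\Omega = \{1,2,3,4\}$, $B_1 = \{1,2\}$, $B_2 = \{2,3\}$ (a finite version of the question's example), the atoms are $\{1\}, \{2\}, \{3\}, \{4\}$ and the generated $\sigma$-algebra has $2^4 = 16$ members, including $\emptyset$ and $\Omega$ --- which resolves the index-set confusion: J ranges over subsets of the set of atoms, not over $\{1, 2\}$.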
https://asmedigitalcollection.asme.org/mechanismsrobotics/article/10/2/025003/369799/Design-of-a-Flapping-Wing-Mechanism-to-Coordinate
This paper presents a design procedure to achieve a flapping wing mechanism for a micro-air vehicle that coordinates both the wing swing and wing pitch with one actuator. The mechanism combines a planar four-bar linkage with a spatial four-bar linkage attached to the input and output links forming a six-bar linkage. The planar four-bar linkage was designed to control the wing swing trajectory profile and the spatial four-bar linkage was designed to coordinate the pitch of the wing to the swing movement. Tolerance zones were specified around the accuracy points, which were then sampled to generate a number of design candidates. The result was 29 designs that achieve the desired coordination of wing swing and pitch, and a prototype was constructed.
## Introduction
Flapping wing micro-air vehicles are winged aircraft generally smaller than 15 cm that mimic insect flight. The wings of these aircraft generally swing in a planar path without control of the wing pitch angle. This pitch angle is set by air drag forces that shape a membrane that forms the wing. See, for example, the AeroVironment Nano Hummingbird [1].
Recent research by Taha et al. [2,3] shows that coordinating the wing pitch angle with the wing swing movement improves flight aerodynamics. This paper presents the synthesis procedure for a one degree-of-freedom flapping wing mechanism that coordinates the wing swing and pitch angles to achieve a specified performance.
This new flapping wing mechanism uses a spatial four-bar linkage to control the wing pitch that is built on a planar four-bar linkage that controls the swing movement. The result is a six-bar mechanism, Fig. 1, that uses a single input to coordinate both the wing swing and pitch angles.
In what follows, we present the design methodology, solve for an example design, and present a prototype for this flapping wing mechanism.
## Literature Review
The AeroVironment Nano Hummingbird [1] and the Harvard RoboBee [4] are flying flapping wing micro-air vehicles whose wings pitch due to the aerodynamic drag induced by the wing's planar swinging motion. Our design provides a linkage that makes both the wing's swing and pitch follow specific functions. Yan et al. [5], Conn et al. [6], and Balta et al. [7] use parallel four-bar linkages and a spatial linkage to coordinate the swing and pitch of the wing, while we achieve this using a single six-bar mechanism. Recently, the Robo Raven [8] and the Bat Bot [9] have utilized a different flying mode in which extreme deformation of large wings is used to achieve flight.
A four-bar linkage that coordinates input and output angles is known as a function generator; see Svoboda [10] and Freudenstein [11]. In our case, this linkage transforms a constant rotation of an input crank to an oscillating swing movement of the wing. The synthesis theory for function generators of this type is given by Brodell and Soni [12].
The control of the wing pitch is achieved by an RSSR function generator, see Denavit and Hartenberg [13] and Suh and Radcliffe [14]—R denotes a revolute or hinged joint and S denotes a spherical or ball joint. Our design approach uses the specified angle data to position the input and output cranks of the spatial four-bar linkage, and then solves the constraint equations for the SS link that connects the two, as described by Innocenti [15] and McCarthy and Soh [16].
The connection of the input and output links of a planar four-bar linkage with a spatial RSSR linkage creates a spatial six-bar linkage. The synthesis of spatial six-bar linkages has focused on the control of end-effector position and orientation rather than function generation. See, for example, Sandor et al. [17], who designed an RSSR-SS spatial six-bar and Chiang et al. [18], who designed an RSCC-RRS spatial six-bar, where C denotes a cylindric joint. Recent research includes the design of a spatial six-bar linkage to guide the end effector to trace a spatial path, see Chung [19].
This paper is the first to explore the use of a spatial six-bar linkage to coordinate two outputs to a given input.
## Wing Swing and Wing Pitch Requirements
A study of flapping wing designs for micro-air vehicles [7] shows that planar crank-rocker linkages with a passive position of the wing pitch provide effective wing performance. However, Yan et al. [20] show that coordinated control of the wing pitch and wing swing movement improves the aerodynamics of a micro-air vehicle.
They demonstrate effective aerodynamic performance with wing swing and wing pitch functions, $q(θ)$ and $ϕ(θ)$, given by
$q(\theta) = \dfrac{\pi}{2} - \dfrac{\pi}{3}\cos\theta, \qquad \phi(\theta) = \dfrac{\pi}{2} - \dfrac{\pi}{3}\sin\theta$
(1)
where the driving angle is $θ=ωt$, and ω is the flapping frequency, Fig. 2.
## Synthesis of the Wing Swing Mechanism
In order to design the wing swing mechanism, we follow Brodell and Soni [12] to obtain a four-bar linkage that has a time ratio of one. This ensures the speed of the flapping movement is the same in both the forward and back strokes.
Brodell and Soni provide formulas for the link lengths parameterized by the desired wing swing angle σ and transmission angle λ
$\dfrac{r_3}{r_1} = \sqrt{\dfrac{1 - \cos\sigma}{2\cos^2\lambda}}, \qquad \dfrac{r_4}{r_1} = \sqrt{\dfrac{1 - (r_3/r_1)^2}{1 - (r_3/r_1)^2\cos^2\lambda}}, \qquad \dfrac{r_2}{r_1} = \sqrt{\left(\dfrac{r_3}{r_1}\right)^2 + \left(\dfrac{r_4}{r_1}\right)^2 - 1}$
(2)
where $r_1$, $r_2$, $r_3$, and $r_4$ are the lengths of the ground link, the input crank, the coupler link, and the follower link, respectively.
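Assuming the first two ratios in Eq. (2) are square roots (the radicals do not survive in the text above), the link lengths are straightforward to compute, and the values for $\sigma = \pi/3$, $\lambda = 0.521$ rad, $r_1 = 5$ used later in the paper can be compared with Table 2:

```python
import math

def crank_rocker_lengths(sigma, lam, r1=1.0):
    # Brodell-Soni crank-rocker with unit time ratio: sigma is the total
    # rocker swing, lam the transmission angle. Square roots are assumed.
    r3 = r1 * math.sqrt((1 - math.cos(sigma)) / (2 * math.cos(lam) ** 2))
    r4 = r1 * math.sqrt((1 - (r3 / r1) ** 2) /
                        (1 - (r3 / r1) ** 2 * math.cos(lam) ** 2))
    r2 = math.sqrt(r3 ** 2 + r4 ** 2 - r1 ** 2)
    return r2, r3, r4

r2, r3, r4 = crank_rocker_lengths(math.pi / 3, 0.521, 5.0)
```

For these inputs the sketch gives $r_2 \approx 2.36$, $r_3 \approx 2.88$, $r_4 \approx 4.72$; the crank $r_2$ comes out shortest, as a crank-rocker requires.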
Given the link lengths of the crank rocker, the wing swing function $\gamma(\Delta\theta)$, where $\Delta\theta$ is measured from the line NG of the linkage, satisfies [16]
$A(\Delta\theta)\cos\gamma + B(\Delta\theta)\sin\gamma = C(\Delta\theta)$
(3)
where
$A(\Delta\theta) = 2r_2r_4\cos\Delta\theta + 2r_1r_4, \qquad B(\Delta\theta) = 2r_2r_4\sin\Delta\theta, \qquad C(\Delta\theta) = r_1^2 + r_2^2 + r_4^2 - r_3^2 + 2r_1r_2\cos\Delta\theta$
(4)
This has the solution
$\gamma(\Delta\theta) = \arctan\left(\dfrac{B}{A}\right) \pm \arccos\left(\dfrac{C}{\sqrt{A^2 + B^2}}\right)$
(5)
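Equation (3) has the standard form $A\cos\gamma + B\sin\gamma = C$, and Eq. (5) is its closed-form solution; a small helper (the naming is my own) makes the two branches explicit:

```python
import math

def solve_cos_sin(A, B, C, branch=+1):
    # Solve A cos(g) + B sin(g) = C via g = atan2(B, A) +/- acos(C / r),
    # where r = sqrt(A^2 + B^2); both sign choices satisfy the equation.
    r = math.hypot(A, B)
    if abs(C) > r:
        raise ValueError("no real solution: |C| > sqrt(A^2 + B^2)")
    return math.atan2(B, A) + branch * math.acos(C / r)
```

The two branches correspond to the two assembly configurations of the four-bar; the branching analysis later in the paper is essentially the requirement that one sign choice works for the whole motion.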
Our goal is to achieve the four-bar linkage output $γ¯(θ)=γ(θ0+Δθ)+k0$ that matches the wing swing function $q(θ)$ at θ = 0 and where $q(0)=π/6$ and the minimum of γ is at $γ(0.045)=0.568$. Thus, $0=θ0+0.045$ and $π/6=0.568+k0$. The result is $θ0$ = −0.045 or −2.6 deg and k0 = −0.045 or −2.6 deg, as seen in Fig. 3.
Thus, we obtain
$\bar{\gamma}(\Delta\theta) = \gamma(\Delta\theta - 0.045) - 0.045$
(6)
## Synthesis of the Pitch Mechanism
In order to control the pitch of the flapping wing, we introduce an RSSR linkage connecting the input and output links of the swing mechanism that will orient the pitch of the wing during the flapping movement.
The planar wing swing mechanism NDEG, shown in Fig. 1, is positioned in the ground frame W such that the fixed pivot G of the link EGC is on the z-axis of W. The axes of the four joints of the planar four-bar NDEG are also directed along the z-axis of W. The rotation γ of EGC provides the swing of the wing. The output crank CB of the RSSR linkage NABC controls the pitch $ϕ$ of the wing around the axis of C. The output of the wing swing mechanism NDEG is an input to the wing pitch mechanism NABC. The dimension g is selected by the designer.
The location of the fixed pivot $N=(0,t,0)$ of input crank DNA is selected by the designer so that the ground link GN has the length r1. The input rotation of DNA about the axis of N drives both the planar wing swing mechanism and spatial wing pitch mechanism.
Our goal is to determine the dimensions of the linkage by solving the SS constraint equations for seven values of the wing pitch function, $ϕi=ϕ(θi), i=1,…,7$, see Ref. [16]. The values, $ϕ(θi)$, are chosen using Chebyshev spacing along the wing pitch function [21].
The synthesis equations coordinate the rotation of the link DNA, EGC, and BC links so that they simultaneously satisfy, $γ¯i=γ¯(θi)$ and $ϕi=ϕ(θi), i=1,…,7$.
The homogeneous transforms Z and X that define coordinate screw displacement about the x- and z-axes, given by
$Z(\theta, d) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & d \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad X(\alpha, a) = \begin{bmatrix} 1 & 0 & 0 & a \\ 0 & \cos\alpha & -\sin\alpha & 0 \\ 0 & \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$
(7)
are used to locate the axis of rotation of N for the input to the RSSR linkage and the axis of C for the output, Fig. 1.
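The two screw displacements of Eq. (7) are simple to assemble with NumPy; this sketch also builds $H(\theta)$ from Eq. (8), with the design parameter t passed explicitly since the paper leaves it to the designer:

```python
import numpy as np

def Z(theta, d):
    # Screw displacement about z: rotate by theta, translate d along z.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, d],
                     [0,  0, 0, 1.0]])

def X(alpha, a):
    # Screw displacement about x: rotate by alpha, translate a along x.
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[1, 0,  0, a],
                     [0, c, -s, 0],
                     [0, s,  c, 0],
                     [0, 0,  0, 1.0]])

def H(theta, t):
    # Eq. (8): axis of the input crank DNA, H(theta) = X(0, t) Z(theta, 0).
    return X(0.0, t) @ Z(theta, 0.0)
```

Since $X(0, t)$ is a pure translation, $H(\theta)$ keeps the z-rotation block of $Z(\theta, 0)$ and shifts the axis by t along x, which is exactly how the fixed pivot $N = (0, t, 0)$ is placed.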
The rotation axis of N of the input crank DNA is defined by the sequence of transformations
$H(\theta) = X(0, t)\,Z(\theta, 0)$
(8)
where the parameter θ defines the rotation of the link DNA and $(0,t)$ defines its position relative to the ground frame, see Fig. 1. Let $Hi1$ be the transformation relative to the initial configuration of the RSSR chain evaluated at seven task angles θi, $i=1,…,7$, so we have
$H_{i1} = H_i H_1^{-1}, \quad i = 1, \ldots, 7$
(9)
The rotation axis of C of the output crank CB is defined by the sequence of transformations
$J(\phi, \bar{\gamma}) = Z(\bar{\gamma}, 0)\,X(\pi/2, 0)\,Z(\phi, g)$
(10)
where $γ¯$ and g define the position relative to the ground frame, and $ϕ$ defines the rotation of the output crank. Let $Ji1$ denote the transformation relative to the initial configuration of the RSSR chain evaluated at seven task angles $ϕi, i=1,…,7$
$J_{i1} = J_i J_1^{-1}, \quad i = 1, \ldots, 7$
(11)
The design equations for wing pitch mechanism are obtained as the constraint that the length of the SS link be constant in each of the task positions specified by $ϕi=ϕ(θi), i=1,…,7$. Let the coordinates of the centers A and B of the S-joints in the first task position be given by
$A_1 = (u, v, w), \qquad B_1 = (x, y, z)$
(12)
Then the constraint that the coupler link AB is of constant length $b=|AB|$ in each of the task positions yields the equations
$|A_i - B_i|^2 = \left|[H_{i1}]A_1 - [J_{i1}]B_1\right|^2 = b^2, \quad i = 1, \ldots, 7$
(13)
These equations can be reduced to a degree 20 polynomial in terms of one of the following u, v, w, x, y, or z, [15,16], and thus the system of equations has a maximum of 20 unique solutions. This set of equations was solved using Mathematica, which yielded values for the points $A1=(u,v,w)$ and $B1=(x,y,z)$. The solutions found are all possible solutions for the set of specified dimensions and angles.
## Analysis of the Wing Pitch Mechanism
In order to evaluate the linkage obtained from the synthesis routine, we analyze RSSR Wing pitch mechanism at each input crank position. The goal is to verify that the mechanism moves smoothly through the specified wing swing and wing pitch movements. Every solution from the synthesis is analyzed.
The constraint equation of the RSSR chain defining the length of the link AB in terms of $Δθ, γ¯$, and $ϕ$ is given by
$\left|[H(\Delta\theta)]\mathbf{a} - [J(\phi, \bar{\gamma})]\mathbf{b}\right|^2 = b^2$
(14)
where H and J are given in Eqs. (8) and (10) and a and b are the coordinates of $A1$ and $B1$ in the frame of the first task position given by
$\mathbf{a} = [H_1]^{-1}A_1 = (\bar{u}, \bar{v}, \bar{w}), \qquad \mathbf{b} = [J_1]^{-1}B_1 = (\bar{x}, \bar{y}, \bar{z})$
(15)
For each value of $Δθ$, we obtain $γ¯$, and obtain an equation of the form
$A(\Delta\theta)\cos\phi + B(\Delta\theta)\sin\phi = C(\Delta\theta)$
(16)
where
$A(\Delta\theta) = -2t\bar{x}\cos\gamma - 2\bar{u}\bar{x}\cos(\gamma - \Delta\theta) - 2\bar{v}\bar{x}\sin(\gamma - \Delta\theta) - 2\bar{w}\bar{y}$
$B(\Delta\theta) = +2t\bar{y}\cos\gamma + 2\bar{u}\bar{y}\cos(\gamma - \Delta\theta) + 2\bar{v}\bar{y}\sin(\gamma - \Delta\theta) - 2\bar{w}\bar{x}$
$C(\Delta\theta) = -b^2 + g^2 + t^2 + \bar{x}^2 + \bar{y}^2 + \bar{z}^2 + \bar{u}^2 + \bar{v}^2 + \bar{w}^2 + 2g\bar{z} - 2gt\sin\gamma - 2g\bar{u}\sin(\gamma - \Delta\theta) + 2g\bar{v}\cos(\gamma - \Delta\theta) + 2t\bar{u}\cos\Delta\theta - 2t\bar{v}\sin\Delta\theta - 2t\bar{z}\sin\gamma - 2\bar{u}\bar{z}\sin(\gamma - \Delta\theta) + 2\bar{v}\bar{z}\cos(\gamma - \Delta\theta)$
(17)
where g and t are determined by the designer.
Solve for $\phi_i(\Delta\theta_i)$, $i = 1, \ldots, 7$, using Eq. (5).
The solutions that pass through all precision points on a single branch pass the branching analysis. The passing solutions are then checked for continuity. The range of the input crank angles is divided into 200 intermediate angles, and the links are checked to ensure that they do not change length at the intermediate positions. The solutions are plotted for the entire range of motion of the input crank. An example of a solution that fails in continuity but passes branching is shown in Fig. 4.
## Design Methodology
Our design methodology consists of three steps: (i) the required wing swing movement is used to design a planar four-bar function generator; (ii) the coordinate wing pitch movement is used to formulate and solve the SS chain constraints for inverted task positions within specified tolerance zones; and (iii) each solution is analyzed to evaluate its continuous movement through the task positions. This process is iterated with randomly selected task positions within the tolerance zones. Successful design candidates are collected, ranked, and evaluated by the designer.
## Example Flapping Wing Mechanism
The precision points of θ and $ϕ$ were chosen by applying Chebyshev Spacing on Eq. (1). The calculated input crank angles(θ) are then substituted into the planar linkage equation, see Eq. (6), to obtain the wing swing angle $γ¯$. The values of the tolerance zones for each precision point $±δθi$ and $±δϕi$ are selected by the designer. The values are seen in Table 1.
The planar mechanism was chosen to have a total wing swing angle $\sigma = \pi/3$ from $q(\theta)$ in Eq. (1), a base length $r_1 = 5$, and a transmission angle of 0.521 rad or 29.9 deg. The corresponding values of $r_2$, $r_3$, and $r_4$ are seen in Table 2. These values are substituted into Eq. (5), which gives $\gamma(\theta)$ a minimum at $\gamma(0.045) = 0.568$. Thus, $\theta_0 = -0.045$ or $-2.6$ deg and $k_0 = -0.045$ or $-2.6$ deg, and the wing swing equation is
$\bar{\gamma}(\Delta\theta) = \tan^{-1}\left(\dfrac{-0.17\sin(\Delta\theta + 0.04)}{2.24 - 0.17\cos(\Delta\theta + 0.04)}\right) + \cos^{-1}\left(\dfrac{4.33\cos(\Delta\theta + 0.04) - 0.45}{\sqrt{0.17\sin\Delta\theta - 3.88\cos\Delta\theta + 25.15}}\right) - 0.04$
(18)
To align the planar and spatial linkages, the input crank axis of the spatial linkage was placed such that t = −5 in and the output crank axis such that g = 0. Knowing $t,g,θi,ϕi$ and $γ¯i$ for $i=1,…,7$ and solving Eq. (13) for $(u,v,w,x,y,z)$ gives the coordinates of $A1$ and $B1$ in the global frame W. The θi values were selected in the range of $±δθi$ and the $ϕi$ values were selected in the range of $±δϕi$. The $γ¯i$ values were recalculated using Eq. (18) and the values of θi.
Five hundred variations produced 3582 solutions, of which 29 met the design requirements. The 29 design candidates were sorted in ascending order by the link ratio $\kappa = (\text{longest link length})/(\text{shortest link length})$. The RMS error $\varepsilon$ between each of the top six results and the desired wing pitch function $\phi(\theta_k)$ given in Eq. (1) was calculated using
$\varepsilon = \sqrt{\dfrac{\sum_{k=1}^{n}\left(\bar{\phi}(\theta_k) - \phi(\theta_k)\right)^2}{n}}$
(19)
where $ϕ¯(θk)$ is the wing pitch function from the solution, and n is the number of samples.
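Assuming Eq. (19) carries the usual square and square root of an RMS error (neither survives in the text above), the computation is short:

```python
import math

def rms_error(phi_solution, phi_desired, thetas):
    # Root-mean-square deviation of a candidate's wing pitch function
    # from the desired pitch, sampled at the driving angles in thetas.
    n = len(thetas)
    return math.sqrt(sum((phi_solution(t) - phi_desired(t)) ** 2
                         for t in thetas) / n)
```

For example, a candidate whose pitch is a constant 0.1 rad above the desired curve at every sample has an RMS error of exactly 0.1.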
Design number 1 was selected because it has the lowest RMS error of the designs with a link ratio less than 10, see Table 3. Results with link ratios less than 10 improved the ease of manufacturing. Its associated inputs are in Table 4 and the output is plotted in Fig. 5.
The mechanism was modeled in SolidWorks, see Figs. 6-8. Figure 9 compares the SolidWorks output, the generated configuration output, and the required wing pitch angle. The SolidWorks model shows some discrepancy due to minor adjustments to the structure that allow it to be physically constructed. The dimensions used are shown in Table 5.
The physical model of the linkage was built, shown in Figs. 10 and 11. The wing span of the flapping mechanism is 34 in. The straight links and end caps were machined from aluminum or brass tubing. The remaining links and base were made from polycarbonate using additive manufacturing. The revolute joints used brass tubes as bushings while the spherical joints used a compliant joint that used a wire rope to connect the links. The right and left wings are driven by a gear train that connects a motor to their input cranks. The mechanism was driven at a rate of approximately 2–3 Hz, but was not driven any faster due to its size. The focus of this model is to confirm the movement of the proposed linkage. Future research will pursue developing methods to miniaturize and construct a model with a wing span of 5 in.
## Conclusion
This paper presents a design methodology for an RSSR mechanism built on the planar four-bar mechanism to coordinate the wing pitch angle with the swing movement of a flapping wing micro-air vehicle. The resulting mechanism is a spatial six-bar linkage that transforms a rotational input to coordinated wing pitch angle and swing angles. The design procedure is demonstrated for wing swing and wing pitch profiles recommended for micro-air vehicles. It results in 29 different designs that meet the requirements, one of which is presented in detail together with a prototype.
## Acknowledgment
Benjamin Liu's preparation of the geometric models is gratefully acknowledged.
## Funding Data
• Division of Civil, Mechanical and Manufacturing Innovation (Grant No. 1636017).
http://lists.w3.org/Archives/Public/public-whatwg-archive/2010Dec/0150.html
# [whatwg] element "img" with HTTP POST method
From: Martin Janecke <whatwg.org@kaor.in>
Date: Thu, 9 Dec 2010 19:59:02 +0100
Message-ID: <CFCFE8AF-966C-4E90-908F-FB92C3C2590D@kaor.in>
Hi all,
What is your opinion on enabling the HTTP POST method for the img element? The motivation behind this is that there are services which generate images automatically based on parameters given -- nowadays provided as query string in a GET request -- for inclusion in web pages. I've listed examples below. However, these query strings can get really long and it seems to me that the HTTP POST method would be more appropriate than the GET method for such services. Not only because it would better match the intended use of these methods (see http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9), but also for practical reasons: URLs in GET requests are (inconsistently) limited in length by browsers and server software, which limits these services (inconsistently). And long URLs bloat server logs.
This could be implemented with an optional attribute, e.g. "post-data". The client would request the source using the POST method if the attribute is present and send the attribute's value as the request's message body.
Martin
PS: This doesn't necessarily need to be restricted to img elements. If you encountered use cases for other elements that embed content, feel free to add them.
=== Example/Use Cases ===
(1) MimeTeX and MathTeX are used to display mathematical formulae in web pages. This is often used in science forums. Info:
http://www.forkosh.com/mathtex.html
http://www.forkosh.com/mimetex.html
Current use in web pages:
<img src="http://www.forkosh.dreamhost.com/mathtex.cgi?\begin{align}%0D%0A\Gamma(z)%20%26%3D%20\lim_{n\to\infty}\frac{n!\;n^z}{z\;(z%2B1)\cdots(z%2Bn)}%20%3D%20\frac{1}{z}\prod_{n%3D1}^\infty\frac{\left(1%2B\frac{1}{n}\right)^z}{1%2B\frac{z}{n}}%20\\%0D%0A\Gamma(z)%20%26%3D%20\frac{e^{-\gamma%20z}}{z}\prod_{n%3D1}^\infty\left(1%2B\frac{z}{n}\right)^{-1}e^{z/n}%0D%0A\end{align}" alt="gamma function definition">
Possible future use:
<img src="http://www.forkosh.dreamhost.com/mathtex.cgi" post-data="\begin{align}%0D%0A\Gamma(z)%20%26%3D%20\lim_{n\to\infty}\frac{n!\;n^z}{z\;(z%2B1)\cdots(z%2Bn)}%20%3D%20\frac{1}{z}\prod_{n%3D1}^\infty\frac{\left(1%2B\frac{1}{n}\right)^z}{1%2B\frac{z}{n}}%20\\%0D%0A\Gamma(z)%20%26%3D%20\frac{e^{-\gamma%20z}}{z}\prod_{n%3D1}^\infty\left(1%2B\frac{z}{n}\right)^{-1}e^{z/n}%0D%0A\end{align}" alt="gamma function definition">
(2) QR-Code generators encode texts or URLs in a way that can be easily read by devices such as cell phones with built-in cameras. Info and generator:
http://qrcode.kaywa.com/
Current use in web pages:
<img src="http://qrcode.kaywa.com/img.php?s=8&d=Ah%2C%20distinctly%20I%20remember%20it%20was%20in%20the%20bleak%20December%2C%0D%0AAnd%20each%20separate%20dying%20ember%20wrought%20its%20ghost%20upon%20the%20floor." alt="qrcode">
Possible future use:
<img src="http://qrcode.kaywa.com/img.php" post-data="s=8&d=Ah%2C%20distinctly%20I%20remember%20it%20was%20in%20the%20bleak%20December%2C%0D%0AAnd%20each%20separate%20dying%20ember%20wrought%20its%20ghost%20upon%20the%20floor." alt="qrcode">
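The URL-length concern motivating the proposal can be illustrated numerically; a sketch in Python, assuming a hypothetical 2048-character URL cap (real browser and server limits vary) and a synthetic payload:

```python
from urllib.parse import urlencode

URL_LENGTH_CAP = 2048  # assumed limit for illustration; real limits vary widely

# Synthetic stand-in for a long generated formula or QR-code payload.
payload = {"s": "8", "d": "x" * 4000}

get_url = "http://example.org/img.php?" + urlencode(payload)
print(len(get_url) > URL_LENGTH_CAP)  # True: the GET form exceeds the cap

# With a POST request, the URL itself stays short and the payload
# travels in the message body, so no such cap applies to the data.
print(len("http://example.org/img.php"))
```

The same data that bloats a GET URL (and the server log line recording it) would ride invisibly in a POST body.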
Received on Thursday, 9 December 2010 10:59:02 UTC
https://www.gamedev.net/forums/topic/641735-whats-the-funniest-thing-youve-read-on-a-programming-forum/
# What's the funniest thing you've read on a programming forum?
## Recommended Posts
http://www.gamedev.net/topic/104648-why-does-diarreha-burn/ < this caught me off guard, I think I broke a blood vessel in my left eye from laughing.
Read this entire post. It is a poll of sorts on Stack Overflow about funny comments in source files. LINK Then your eyes' blood vessels can be symmetrical.
Khaiy has been making me chuckle lately.
Fongerchat, etc
Well, there is always the single player campaign.
I'm sick, I know.
I can't find the thread, but it is on here. The OP, who is a troll (or an idiot), wants to create a game in HTML ONLY but refuses to take any advice, saying it must be HTML ONLY due to speed etc. I did a brief search but couldn't find it, but judging by the amount of responses, I would say someone knows what I am on about and has it marked.
I'm pretty sure he wanted to use HTML and variables to make his amazing MMO hockey game.
From this forum:
MP3-Beating Compression
The MMO with HTML and variables thread that I too cannot find
Ah, good old nostalgia.
And there goes my productivity for the day =-)
From this forum:
MP3-Beating Compression
This one is amazing. So many people actually believing in OP.
I can't believe I forgot about the space tubes!
From this forum:
MP3-Beating Compression
This one is amazing. So many people actually believing in OP.
Original File: 100mb
CAR File: 100mb
ZIPped Original: 99mb
ZIPped CAR: 3 bytes
LMFAO
Original File: 100mb
CAR File: 100mb
ZIPped Original: 99mb
ZIPped CAR: 3 bytes
A 3-byte file only has enough entropy to encode 256^3 (about 16.8 million) different combinations. Even if we assume that every one of those combinations decompresses to a 100MB file, those three bytes can only represent a very tiny percentage of all possible 100MB files. How tiny? My calculator can represent numbers up to 1e999, and it overflowed when calculating the number of possible 100MB files in existence. Since my calculator won't cut it, I started a calculation in GNU bc, which is an arbitrary-precision calculator. It's been running for about 5 minutes now, and it still hasn't given me the answer. This result would be a very special case of the algorithm.
On top of that, I'm pretty sure that the header for a ZIP file alone is well more than 3 bytes.
Edit:
After over 45 minutes, I just killed the job. Calculating it in a more sane manner, it's about 10^(-2,523,430), which is reeeealy tiny.
Edited by BLM768
Original File: 100mb
CAR File: 100mb
ZIPped Original: 99mb
ZIPped CAR: 3 bytes
LMFAO
That's okay, in Steins;Gate they compress a person's entire consciousness (2.5 petabytes, according to the lore) down to something like 50 bytes by using black holes generated by the Large Hadron Collider.
Original File: 100mb
CAR File: 100mb
ZIPped Original: 99mb
ZIPped CAR: 3 bytes
A 3-byte file only has enough entropy to encode 256^3 (about 16.8 million) different combinations. Even if we assume that every one of those combinations decompresses to a 100MB file, those three bytes can only represent a very tiny percentage of all possible 100MB files. How tiny? My calculator can represent numbers up to 1e999, and it overflowed when calculating the number of possible 100MB files in existence. Since my calculator won't cut it, I started a calculation in GNU bc, which is an arbitrary-precision calculator. It's been running for about 5 minutes now, and it still hasn't given me the answer. This result would be a very special case of the algorithm.
On top of that, I'm pretty sure that the header for a ZIP file alone is well more than 3 bytes.
Edit:
After over 45 minutes, I just killed the job. Calculating it in a more sane manner, it's about 10^(-2,523,430), which is reeeealy tiny.
Or you could, you know, just use logarithms.
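As that reply suggests, logarithms sidestep the overflow entirely; a sketch taking 100 MB as 100 × 2^20 bytes:

```python
import math

BYTES_SMALL = 3             # the "compressed" file
BYTES_BIG = 100 * 2**20     # 100 MB

# log10 of (number of 3-byte files / number of 100 MB files),
# i.e. the fraction of all possible 100 MB files that three bytes
# could even index. No big-integer arithmetic needed.
log10_fraction = (BYTES_SMALL - BYTES_BIG) * math.log10(256)

print(log10_fraction)  # roughly -2.5e8, i.e. the fraction is 10^(-250 million)
```

The exponent scales linearly with the byte count, so the conclusion is the same for any file size: three bytes cannot address more than a vanishing sliver of the possible inputs.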
From this forum:
MP3-Beating Compression
This one is amazing. So many people actually believing in OP.
Back then I did implement something based on the ramblings of the OP with regards to his technique - ironically, the data the program ultimately produced had the opposite effect: it caused zip compression to fail to reduce the size of any file it was run on, including small text files and mp3 files.
From this forum:
MP3-Beating Compression
This one is amazing. So many people actually believing in OP.
Back then I did implement something based on the ramblings of the OP with regards to his technique - ironically, the data the program ultimately produced had the opposite effect: it caused zip compression to fail to reduce the size of any file it was run on, including small text files and mp3 files.
Same here, well, I wasn't around Gamedev at the time (hell I wasn't even in my teens back then) but a few years ago I figured out a groundbreaking compression algorithm. I was really excited but a few hours later I eventually discovered my algorithm wasn't reversible. My hopes were swiftly crushed. Then I read up on compression and laughed at myself in shame. So this thread does hit a bit close to home, but at least I didn't make a fool of myself in front of the world ;)
My *girlfriends* X drove me over
Well, that one was funny
https://ttlc.intuit.com/community/taxes/discussion/my-1099misc-is-corrected-but-i-don-t-see-anywhere-to-put-that-what-should-i-do/00/587273
Level 1
## My 1099misc is corrected but I don't see anywhere to put that. What should I do?
Accepted Solutions
Level 7
You can enter the corrected 1099 in the same place you would the original one. If you haven't filed, you just need to report the corrected 1099. The IRS will get the same copy and will know that the original 1099 is incorrect.
https://www.physicsforums.com/threads/buoyancy-problem.961544/
# Buoyancy Problem
## Homework Statement
A cube of ice whose edge is 17.0 mm is floating in a glass of ice-cold water with one of its faces parallel to the water surface.
Ice-cold ethyl alcohol is gently poured onto the water surface to form a layer 5.00 mm thick above the water. When the ice cube attains hydrostatic equilibrium again, what will be the distance from the top of the water to the bottom face of the block?
F_B = ρVg
W = mg
V = lwh
## The Attempt at a Solution
Here is my work. I have checked and rechecked it, but for some reason it's still not correct! Please help!
mg = F_B,alcohol + F_B,water
m = ρ_alcohol V_alcohol + ρ_water V_water
ρ_ice V = ρ_alcohol V_alcohol + ρ_water V_water
934 x 17^3 = 789 x 17^2 x 5 + 1000 x 17^2 x h
934 x 17 = 789 x 5 + 1000h
h = 11.933 mm
Homework Helper
Gold Member
The h in your 3rd to the last equation needs to be L = 17 - 5 - h to compute the volume of water that is displaced. You basically computed L = 11.933... which is how much of the cube is immersed in the water portion. Edit: I read it too quickly: I was computing the h from the surface of the liquid to the top of the block. Anyway, L = 11.93 mm is immersed in water, and add 5 mm to that to get their h. Edit: Nope, I read it too quickly a second time: It says the surface of the water, and not the surface of the liquid. I think you may have it right, but the person who wrote out the problem didn't read his own words carefully enough. Another idea: The problem could also be the sig figs in your answer. Try putting in 11.9.
Last edited:
gneill
Mentor
Where did you get your ρ_ice value? What I've seen in the literature for ice at 0°C is more like 0.9150 g/cm^3.
Where did you get your ρ_ice value? What I've seen in the literature for ice at 0°C is more like 0.9150 g/cm^3.
Google. I typed in "ice density" and the first value that popped up was that one. Unfortunately, I've just figured it out that this density is wrong and is the reason why my answers have been thrown off. Now I've finally gotten it...glad to see it wasn't an issue with my understanding, though!
gneill
Mentor
Google. I typed in "ice density" and the first value that popped up was that one. Unfortunately, I've just figured it out that this density is wrong and is the reason why my answers have been thrown off. Now I've finally gotten it...glad to see it wasn't an issue with my understanding, though!
Yeah, Never believe the first entry to pop up from Google. Always check the source to see if it's reputable. Too many non-academic searchers bubble disreputable sources to the top of the hit list. It's a sad thing, but its just the way it is. Glad you worked out your issue!
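With the corrected density, the original poster's algebra gives the accepted answer directly; a sketch assuming ρ_ice = 915 kg/m³ (the 0.9150 g/cm³ literature value mentioned above):

```python
# Hydrostatic balance for the floating ice cube: weight equals buoyancy
# from the 5 mm alcohol layer plus buoyancy from the water below it.
RHO_ICE, RHO_ALCOHOL, RHO_WATER = 915.0, 789.0, 1000.0  # kg/m^3
EDGE = 17.0       # mm, cube edge
T_ALCOHOL = 5.0   # mm, alcohol layer thickness

# rho_ice * EDGE^3 = rho_alcohol * EDGE^2 * T_ALCOHOL + rho_water * EDGE^2 * h
# The EDGE^2 factors cancel, leaving a linear equation in h:
h = (RHO_ICE * EDGE - RHO_ALCOHOL * T_ALCOHOL) / RHO_WATER
print(round(h, 2))  # 11.61 mm from the top of the water to the cube's bottom face
```

Swapping RHO_ICE back to 934 reproduces the thread's 11.933 mm, confirming the density value was the only issue.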
http://harvard.voxcharta.org/tag/numerical-simulation/
# Posts Tagged numerical simulation
## Recent Postings from numerical simulation
### Numerical Simulation of Star Formation by the Bow Shock of the Centaurus A Jet
Recent Hubble Space Telescope (HST) observations of the extragalactic radio source Centaurus A (Cen A) display a young stellar population around the southwest tip of the inner filament 8.5 kpc from the Cen A galactic center, with ages in the range of 1-3 Myr. Crockett et al. (2012) argue that the transverse bow shock of the Cen A jet triggered this star formation as it impacted dense molecular cores of clouds in the filament. To test this hypothesis, we perform three-dimensional numerical simulations of induced star formation by the jet bow shock in the inner filament of Cen A, using a positivity preserving WENO method to solve the equations of gas dynamics with radiative cooling. We find that star clusters form inside a bow-shocked molecular cloud when the maximum initial density of the cloud is > 40 H2 molecules/cm^3. In a typical molecular cloud of mass 10^6 M_sun and diameter 200 pc, approximately 20 star clusters of mass 10^3 M_sun are formed, matching the HST images.
### Qualitative Analysis and Numerical Simulation of Equations of the Standard Cosmological Model: $\Lambda\not=0$
On the basis of a qualitative analysis of the system of differential equations of the standard cosmological model, it is shown that in the case of zero cosmological constant this system has a stable center corresponding to zero values of the potential and its derivative at infinity. Thus, the cosmological model based on a single massive classical scalar field would in the infinite future give a flat Universe. Numerical simulation of the dynamic system corresponding to the system of Einstein - Klein - Gordon equations showed that at late times of the evolution the invariant cosmological acceleration has an oscillating character, changing from $-2$ (braking) to $+1$ (acceleration). The average value of the cosmological acceleration is negative and equal to $-1/2$. The oscillations of the cosmological acceleration occur against the background of a rapidly falling Hubble constant. In the case of a nonzero cosmological constant, depending on its value, three qualitatively different behavior types of the dynamic system are possible on the 2-dimensional plane $(\Phi,\dot{\Phi})$, corresponding either to a zero attractive focus or to a stable attractive node with zero values of the potential and its derivative. In this regime the system asymptotically enters a secondary inflation. Numerical simulation showed that at cosmological constant $\Lambda< 3\cdot10^{-8}\,m^2$ the macroscopic value of the cosmological acceleration behaves similarly to the case $\Lambda=0$, i.e., in the course of the cosmological evolution there appears a lasting stage when this value is close to $-1/2$, which corresponds to a non-relativistic equation of state. In this article, the results of the qualitative and numerical analysis obtained in Yu. Ignat'ev, arXiv:1609.00745 [gr-qc], are generalized to the case of a non-zero cosmological term.
### Pickup Ion Effect of the Solar Wind Interaction with the Local Interstellar Medium
Pickup ions are created when interstellar neutral atoms resonantly exchange charge with the solar wind (SW) ions, especially in the supersonic part of the wind, where they carry most of the plasma pressure. Here we present numerical simulation results of the 3D heliospheric interface treating pickup ions as a separate proton fluid. To satisfy the fundamental conservation laws, we solve the system of equations describing the flow of the mixture of electrons, thermal protons, and pickup ions. To find the density and pressure of pickup ions behind the termination shock, we employ simple boundary conditions that take into account the Voyager observations that showed that the decrease in the kinetic energy of the mixture at the termination shock predominantly contributed to the increase in the pressure of pickup ions. We show that this model adequately describes the flow of the plasma mixture and results in a noticeable decrease in the heliosheath width.
### Monte Carlo calculations of the finite density Thirring model [Replacement]
We present results of the numerical simulation of the two-dimensional Thirring model at finite density and temperature. The severe sign problem is dealt with by deforming the domain of integration into complex field space. This is the first example where a fermionic sign problem is solved in a quantum field theory by using the holomorphic gradient flow approach, a generalization of the Lefschetz thimble method.
### Qualitative Analysis and Numerical Simulation of Equations of the Standard Cosmological Model
On the basis of qualitative theory of differential equations it is shown that dynamic system based on the system of Einstein - Klein - Gordon equations with regard to Friedman Universe has a stable center corresponding to zero values of scalar potential and its derivative at infinity. Thus, the cosmological model based on single massive classical scalar field in infinite future would give a flat Universe. The carried out numerical simulation of the dynamic system corresponding to the system of Einstein - Klein - Gordon equations showed that at great times of the evolution the invariant cosmological acceleration has a microscopic oscillating character ($T\sim 2\pi mt$), while macroscopic value of the cosmological acceleration varies from $+1$ at inflation stage after which if decreases fast to $-1/2$ (non-relativistic stage), and then slowly tends to $-1$ (ultrarelativistic stage).
### A New Code for Numerical Simulation of MHD Astrophysical Flows With Chemistry
The new code for numerical simulation of magnetic hydrodynamical astrophysical flows with consideration of chemical reactions is given in the paper. At the heart of the code - the new original low-dissipation numerical method based on a combination of operator splitting approach and piecewise-parabolic method on the local stencil. The details of the numerical method are described; the main tests and the scheme of parallel implementation are shown. The chemodynamics of the hydrogen while the turbulent formation of molecular clouds is modeled.
### Magnetorotational Turbulence and Dynamo in a Collisionless Plasma [Replacement]
We present results from the first 3D kinetic numerical simulation of magnetorotational turbulence and dynamo, using the local shearing-box model of a collisionless accretion disc. The kinetic magnetorotational instability grows from a subthermal magnetic field having zero net flux over the computational domain to generate self-sustained turbulence and outward angular-momentum transport. Significant Maxwell and Reynolds stresses are accompanied by comparable viscous stresses produced by field-aligned ion pressure anisotropy, which is regulated primarily by the mirror and ion-cyclotron instabilities through particle trapping and pitch-angle scattering. The latter endow the plasma with an effective viscosity that is biased with respect to the magnetic-field direction and spatio-temporally variable. Energy spectra suggest an Alfvén-wave cascade at large scales and a kinetic-Alfvén-wave cascade at small scales, with strong small-scale density fluctuations and weak non-axisymmetric density waves. Ions undergo non-thermal particle acceleration, their distribution accurately described by a kappa distribution. These results have implications for the properties of low-collisionality accretion flows, such as that near the black hole at the Galactic center.
### Numerical Simulation of Vertical Oscillations in an Axisymmetric Thick Accretion Flow around a Black Hole
We study the time evolution of rotating, axisymmetric, two-dimensional inviscid accretion flows around black holes using a grid-based finite difference method. We do not use reflection symmetry on the equatorial plane, in order to inspect whether the disk along with the centrifugal barrier oscillates vertically. In the inviscid limit, we find that the CENtrifugal pressure supported BOundary Layer (CENBOL) oscillates vertically, more so when the specific angular momentum is higher. As a result, the rate of outflow produced from the CENBOL also oscillates. Indeed, the outflow rates in the upper half and the lower half are found to be anti-correlated. We repeat the exercise for a series of specific angular momenta λ of the flow in order to demonstrate effects of the centrifugal force on this interesting behaviour. We find that, as predicted in theoretical models of disks in vertical equilibrium, the CENBOL is produced only when the centrifugal force is significant and, more specifically, when λ > 1.5. The outflow rate itself is found to increase with λ as well, and so is the oscillation amplitude. The cause of oscillation appears to be the interaction among the back flow from the centrifugal barrier, the outflowing winds, and the inflow. For low angular momentum, the back flow as well as the oscillation are missing. To our knowledge, this is the first time that such an oscillating solution is found with a well-tested grid-based finite difference code, and such a solution could be yet another reason why Quasi-Periodic Oscillations should be observed in black hole candidates which are accreting low angular momentum transonic flows.
### Extraction of Gravitational Waves in Numerical Relativity
A numerical-relativity calculation yields in general a solution of the Einstein equations including also a radiative part, which is in practice computed in a region of finite extent. Since gravitational radiation is properly defined only at null infinity and in an appropriate coordinate system, the accurate estimation of the emitted gravitational waves represents an old and non-trivial problem in numerical relativity. A number of methods have been developed over the years to "extract" the radiative part of the solution from a numerical simulation and these include: quadrupole formulas, gauge-invariant metric perturbations, Weyl scalars, and characteristic extraction. We review and discuss each method, in terms of both its theoretical background as well as its implementation. Finally, we provide a brief comparison of the various methods in terms of their inherent advantages and disadvantages.
### A supernova feedback implementation for the astrophysical simulation software Arepo
Supernova (SN) explosions play an important role in the development of galactic structures. The energy and momentum imparted to the interstellar medium (ISM) in so-called "supernova feedback" drive turbulence, heat the gas, enrich it with heavy elements, and can lead to the formation of new stars or even suppress star formation by disrupting stellar nurseries. In numerical simulations at the sub-galactic level, omitting the energy and momentum of supernovae from the physical description can also lead to several problems that might partially be resolved by including a description of supernovae. In this thesis such an implementation is attempted for the combined numerical hydrodynamics and N-body simulation software Arepo (Springel, 2010). In a stochastic process, a large amount of thermal energy is deposited into a number of neighbouring cells, mimicking the effect of a supernova explosion. We test this approach by modelling the explosion of a single supernova in a uniform-density medium and comparing the evolution of the resulting supernova remnant to the theoretically predicted behaviour. We also run a simulation with our feedback code and a fixed supernova rate derived from the Kennicutt-Schmidt relation (Kennicutt, 1998) for a duration of about 20 Myr. We describe our method in detail in this text and discuss the properties of our implementation.
### A supernova feedback implementation for the astrophysical simulation software Arepo [Replacement]
Supernova (SN) explosions play an important role in the development of galactic structures. The energy and momentum imparted to the interstellar medium (ISM) in so-called "supernova feedback" drive turbulence, heat the gas, enrich it with heavy elements, and can lead to the formation of new stars or even suppress star formation by disrupting stellar nurseries. In numerical simulations at the sub-galactic level, omitting the energy and momentum of supernovae from the physical description can also lead to several problems that might partially be resolved by including a description of supernovae. In this thesis such an implementation is attempted for the combined numerical hydrodynamics and N-body simulation software Arepo (Springel, 2010), for the high-density gas in the ISM only. This allows supernova-driven turbulence to be studied in boxes 400 pc on a side. In a stochastic process, a large amount of thermal energy is deposited into a number of neighbouring cells, mimicking the effect of a supernova explosion. We test this approach by modelling the explosion of a single supernova in a uniform-density medium and comparing the evolution of the resulting supernova remnant to the theoretically predicted behaviour. We also run a simulation with our feedback code and a fixed supernova rate derived from the Kennicutt-Schmidt relation (Kennicutt, 1998) for a duration of about 20 Myr. We describe our method in detail in this text and discuss the properties of our implementation.
### Magneto-hydrodynamical Numerical simulation of wind production from black hole hot accretion flows at very large radii
Numerical simulations of black hole hot accretion flows have shown the existence of strong winds. Those works focus only on the region close to the black hole, so it is unknown whether, or where, wind production stops at large radii. To address this question, Bu et al. (2016) performed hydrodynamic (HD) simulations taking into account the gravitational potential of both the black hole and the nuclear star cluster. The latter is assumed to be $\propto \sigma^2 \ln(r)$, with $\sigma$ being the velocity dispersion of stars and $r$ the distance from the center of the galaxy. It was found that when the gravity is dominated by nuclear stars, i.e., outside of the radius $R_A\equiv GM_{\rm BH}/\sigma^2$, winds can no longer be produced. That work, however, neglects the magnetic field, which is believed to play a crucial dynamical role in accretion and thus must be taken into account. In this paper, we revisit this problem by performing magneto-hydrodynamical (MHD) simulations. We confirm the result of Bu et al. (2016), namely that winds cannot be produced in the region $R>R_A$. Our result, combined with the results of Yuan et al. (2015), indicates that the formula describing the mass flux of wind, $\dot{M}_{\rm wind}=\dot{M}_{\rm BH}(r/20r_s)$, can only be applied to the region where the black hole potential is dominant. Here $\dot{M}_{\rm BH}$ is the mass accretion rate at the black hole horizon, and the value of $R_A$ is similar to the Bondi radius.
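The quoted wind mass-flux formula and its stated range of validity fit in a few lines; the numbers below are illustrative assumptions, not values from the paper.

```python
# Radial profile of the wind mass flux from the formula quoted above,
# M_wind(r) = M_BH * (r / 20 r_s), valid only where the black hole
# potential dominates (r < R_A). All numbers are illustrative
# assumptions, not values from the paper.
def mdot_wind(r_over_rs, mdot_bh, r_A_over_rs):
    """Wind mass-loss rate in units of mdot_bh; returns None beyond R_A,
    where the simulations show no wind is produced."""
    if r_over_rs > r_A_over_rs:
        return None
    return mdot_bh * (r_over_rs / 20.0)

mdot_bh = 1.0     # accretion rate at the horizon (arbitrary units)
r_A = 1.0e5       # assumed R_A ~ Bondi radius, in units of r_s
for r in (20.0, 1e3, 1e4, 1e6):
    print(r, mdot_wind(r, mdot_bh, r_A))
```

The cutoff beyond `r_A` encodes the paper's main conclusion: the linear growth of the wind mass flux with radius cannot be extrapolated past the radius where the stellar potential takes over.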
### Integration of Particle-Gas Systems with Stiff Mutual Drag Interaction
Numerical simulation of numerous mm/cm-sized particles embedded in a gaseous disk has become an important tool in the study of planet formation and in understanding the dust distribution in observed protoplanetary disks. However, the mutual drag force between the gas and the particles can become so stiff, particularly because of small particles and/or strong local solid concentration, that an explicit integration of this system is computationally formidable. In this work, we consider the integration of the mutual drag force in a system of Eulerian gas and Lagrangian solid particles. Despite the entanglement between the gas and the particles under the particle-mesh construct, we are able to devise a numerical algorithm that effectively decomposes the globally coupled system of equations for the mutual drag force and makes it possible to integrate this system on a cell-by-cell basis, which considerably reduces the computational task required. We use an analytical solution for the temporal evolution of each cell to relieve the time-step constraint posed by the mutual drag force as well as to achieve the highest degree of accuracy. To validate our algorithm, we use an extensive suite of benchmarks with known solutions in one, two, and three dimensions, including the linear growth and the nonlinear saturation of the streaming instability. We demonstrate numerical convergence and satisfactory consistency in all cases. Our algorithm can for example be applied to model the evolution of the streaming instability with mm/cm-sized pebbles at high mass loading, which has important consequences for the formation scenarios of planetesimals.
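The core idea, replacing explicit integration of the stiff drag coupling with a closed-form relaxation toward the cell's centre-of-momentum velocity, can be sketched for the simplest reduction of one gas parcel and one particle species. This is a hypothetical simplification for illustration; the published algorithm handles arbitrarily many Lagrangian particles per cell and the full particle-mesh coupling.

```python
import math

# One-cell, one-particle sketch of stiff mutual drag integration.
# Equations: du/dt = eps*(v - u)/t_s,  dv/dt = -(v - u)/t_s, so the
# velocity difference decays as exp(-(1+eps)*t/t_s) while the
# centre-of-momentum velocity is exactly conserved. Using this analytic
# solution removes the time-step limit of an explicit scheme when the
# stopping time t_s is much shorter than dt.
def drag_step(u, v, eps, t_s, dt):
    """Advance gas velocity u and particle velocity v over dt.
    eps = particle-to-gas mass loading in the cell."""
    v_com = (u + eps * v) / (1.0 + eps)          # conserved COM velocity
    decay = math.exp(-(1.0 + eps) * dt / t_s)    # analytic relaxation
    return v_com + (u - v_com) * decay, v_com + (v - v_com) * decay

u, v = 0.0, 1.0
for _ in range(5):
    u, v = drag_step(u, v, eps=0.5, t_s=1e-6, dt=0.1)  # stiff: dt/t_s = 1e5
print(u, v)   # both relax to v_com = 1/3, stable despite the stiffness
```

An explicit update with the same `dt` would be violently unstable here; the analytic step is unconditionally stable and exact for constant coefficients, which is what allows the cell-by-cell decomposition to pay off.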
### Numerical simulation and experimental study of PbWO4/EPDM and Bi2WO6/EPDM for the shielding of {\gamma}-rays [Cross-Listing]
The MCNP5 code was employed to simulate the {\gamma}-ray shielding capacity of tungstate composites. The experimental results were applied to verify the applicability of the Monte Carlo program. PbWO4 and Bi2WO6 were prepared and added into ethylene propylene diene monomer (EPDM) to obtain the composites, which were tested for {\gamma}-ray shielding. Both the theoretical simulations and experiments were carefully chosen and well designed. The results of the two methods were found to be highly consistent. In addition, the conditions during the numerical simulation were optimized and double-layer {\gamma}-ray shielding systems were studied. It was found that the {\gamma}-ray shielding performance can be influenced not only by the material thickness ratio but also by the arrangement of the composites.
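For orientation, the narrow-beam part of such a double-layer calculation reduces to the Beer-Lambert law; in that idealisation the layer ordering cancels out, so the arrangement dependence reported above must come from scattering and build-up effects that only a transport code like MCNP5 captures. The attenuation coefficients below are assumed illustrative values, not the measured ones.

```python
import math

# Single-energy Beer-Lambert sketch of a two-layer shield:
# transmitted fraction I/I0 = exp(-(mu1*x1 + mu2*x2)).
# mu = linear attenuation coefficient [cm^-1], x = thickness [cm].
def transmitted_fraction(mu1, x1, mu2, x2):
    return math.exp(-(mu1 * x1 + mu2 * x2))

f = transmitted_fraction(mu1=0.7, x1=1.0, mu2=0.5, x2=2.0)  # assumed values
print(f"I/I0 = {f:.3f}")
```

Because the exponent is a plain sum, swapping the layers leaves this toy estimate unchanged, which is exactly why the observed ordering effect points to physics beyond simple exponential attenuation.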
### Key Issues Review: Numerical studies of turbulence in stars
The numerical simulation of turbulence in stars has led to a rich set of possibilities regarding stellar pulsations, asteroseismology, thermonuclear yields, and the formation of neutron stars and black holes. The breaking of symmetry by turbulent flow grows in amplitude as collapse is approached, which ensures that the conditions at the onset of collapse are not spherical. This lack of spherical symmetry has important implications for the mechanism of explosion and the ejected nucleosynthesis products. The numerical resolutions of several different types of three-dimensional (3D) stellar simulations are compared; it is suggested that core-collapse simulations may be under-resolved. New physical effects which appear in 3D are summarized. Connections between simulations of progenitor explosions and observations of supernova remnants (SNR) are discussed. The present treatment of boundaries for mixing regions during He-burning requires revision.
### Numerical Simulation of Tidal Evolution of a Viscoelastic Body Modelled with a Mass-Spring Network [Replacement]
We use a damped mass-spring model within an N-body code to simulate the tidal evolution of the spin and orbit of a self-gravitating viscoelastic spherical body moving around a point-mass perturber. The damped mass-spring model represents a Kelvin-Voigt viscoelastic solid. We measure the tidal quality function (the dynamical Love number $\,k_2\,$ divided by the tidal quality factor $\,Q\,$) from the numerically computed tidal drift of the semimajor axis of the binary. The shape of $\,k_2/Q\,$, as a function of the principal tidal frequency, reproduces the kink shape predicted by Efroimsky (2012a; CeMDA 112$\,:\,$283) for the tidal response of near-spherical homogeneous viscoelastic rotators. We demonstrate that we can directly simulate the tidal evolution of spinning viscoelastic objects. In the future, the mass-spring N-body model can be generalised to inhomogeneous and/or non-spherical bodies.
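The constitutive element of the damped mass-spring network described above is a Kelvin-Voigt unit: a spring in parallel with a dashpot, so the force depends on both the strain and the strain rate. A minimal sketch of that force law, with made-up constants rather than the authors' calibration:

```python
# Kelvin-Voigt element: spring (stiffness k) in parallel with a dashpot
# (damping c). The scalar force along the spring is
#   F = -k * (length - rest_length) - c * d(length)/dt,
# negative meaning restoring. Constants here are illustrative only.
def kelvin_voigt_force(length, rest_length, rate, k, c):
    """Scalar force along one edge of the mass-spring network."""
    return -k * (length - rest_length) - c * rate

# A stretched, currently expanding spring resists both effects:
f = kelvin_voigt_force(length=1.2, rest_length=1.0, rate=0.5, k=10.0, c=2.0)
print(f)   # -k*0.2 - c*0.5 = -3.0
```

The dashpot term is what dissipates energy during each tidal cycle, and hence what sets the effective $k_2/Q$ that the simulation measures from the semimajor-axis drift.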
### Numerical Simulation of Tidal Evolution of a Viscoelastic Body Modeled with a Mass-Spring Network
We use a damped mass-spring model within an N-body code, to simulate the tidal evolution of the spin and orbit of a viscoelastic spherical body moving around a point-mass perturber. The damped mass-spring model represents a Kelvin-Voigt viscoelastic solid. We derive the tidal quality function (the dynamical Love number $\,k_2\,$ divided by the tidal quality factor $\,Q\,$) from the numerically computed tidal drift of the semimajor axis of the binary. The obtained shape of $\,k_2/Q\,$, as a function of the principal tidal frequency, reproduces the typical kink shape predicted by Efroimsky (2012a; CeMDA 112$\,:\,$283) for the tidal response of near-spherical homogeneous viscoelastic rotators. Our model demonstrates that we can directly simulate the tidal evolution of viscoelastic objects. This opens the possibility for investigating more complex situations, since the employed mass-spring N-body model can be generalised to inhomogeneous and/or non-spherical bodies.
### Simulating the Environment Around Planet-Hosting Stars - I. Coronal Structure
We present the results of a detailed numerical simulation of the circumstellar environment around three exoplanet-hosting stars. A state-of-the-art global magnetohydrodynamic (MHD) model is considered, including Alfv\'en wave dissipation as a self-consistent coronal heating mechanism. This paper contains the description of the numerical set-up, evaluation procedure, and the simulated coronal structure of each system (HD 1237, HD 22049 and HD 147513). The simulations are driven by surface magnetic field maps, recovered with the observational technique of Zeeman Doppler Imaging (ZDI). A detailed comparison of the simulations is performed, where two different implementations of this mapping routine are used to generate the surface field distributions. Quantitative and qualitative descriptions of the coronae of these systems are presented, including synthetic high-energy emission maps in the Extreme Ultra-Violet (EUV) and Soft X-rays (SXR) ranges. Using the simulation results, we are able to recover similar trends as in previous observational studies, including the relation between the magnetic flux and the coronal X-ray emission. Furthermore, for HD 1237 we estimate the rotational modulation of the high-energy emission due to the various coronal features developed in the simulation. We obtain variations, during a single stellar rotation cycle, up to 15\% for the EUV and SXR ranges. The results presented here will be used, in a follow-up paper, to self-consistently simulate the stellar winds and inner astrospheres of these systems.
### The numerical approach to quantum field theory in a non-commutative space [Cross-Listing]
Numerical simulation is an important non-perturbative tool to study quantum field theories defined in non-commutative spaces. In this contribution, a selection of results from Monte Carlo calculations for non-commutative models is presented, and their implications are reviewed. In addition, we also discuss how related numerical techniques have been recently applied in computer simulations of dimensionally reduced supersymmetric theories.
### Numerical experiments on the detailed energy conversion and spectrum studies in a corona current sheet
In this paper, we study the energy conversion and spectra in a corona current sheet by 2.5-dimensional MHD numerical simulations. Numerical results show that many Petschek-like fine structures with slow-mode shocks mediated by plasmoid instabilities develop during the magnetic reconnection process. Termination shocks can also form above the primary magnetic island and at the head of secondary islands. These shocks play important roles in generating thermal energy in a corona current sheet. For a numerical simulation with initial conditions close to the solar corona environment, the ratio of the generated thermal energy to the total dissipated magnetic energy is around $1/5$ before secondary islands appear. After secondary islands appear, the generated thermal energy starts to increase sharply and this ratio can reach a value of about $3/5$. In an environment with a relatively lower plasma density and plasma $\beta$, the plasma can be heated to a much higher temperature. After secondary islands appear, the one-dimensional energy spectra along the current sheet do not behave as a simple power law, and the spectrum index increases with the wave number. The average spectrum index for the magnetic energy spectrum along the current sheet is about $1.8$. The two-dimensional spectra intuitively show that part of the high energy is cascaded to large $k_x$ and $k_y$ space after secondary islands appear. The plasmoid distribution function calculated from numerical simulations behaves as a power law close to $f(\psi) \sim \psi^{-1}$ in the intermediate $\psi$ regime. By using $\eta_{eff} = v_{inflow}\cdot L$, the effective magnetic diffusivity is estimated to be about $10^{11}\sim10^{12}$~m$^2$\,s$^{-1}$.
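The closing estimate $\eta_{eff} = v_{inflow}\cdot L$ is a one-line calculation; with assumed illustrative values for the reconnection inflow speed and current-sheet length (not the simulation's actual numbers) it lands inside the quoted range:

```python
# Order-of-magnitude effective magnetic diffusivity, eta_eff = v_inflow * L.
# Both inputs are assumed illustrative coronal values, not taken from the
# simulations in the paper.
v_inflow = 1.0e4   # m/s, assumed reconnection inflow speed
L = 1.0e7          # m, assumed current-sheet length scale
eta_eff = v_inflow * L
print(f"eta_eff ~ {eta_eff:.1e} m^2/s")   # within the quoted 1e11-1e12 range
```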
### On improving analytical models of cosmic reionization for matching numerical simulation [Replacement]
The methods for studying the epoch of cosmic reionization vary from full radiative transfer simulations to purely analytical models. While numerical approaches are computationally expensive and are not suitable for generating many mock catalogs, analytical methods are based on assumptions and approximations. We explore the interconnection between both methods. First, we ask how the analytical framework of excursion set formalism can be used for statistical analysis of numerical simulations and visual representation of the morphology of ionization fronts. Second, we explore the methods of training the analytical model on a given numerical simulation. We present a new code which emerged from this study. Its main application is to match the analytical model with a numerical simulation. Then, it allows one to generate mock reionization catalogs with volumes exceeding the original simulation quickly and computationally inexpensively, meanwhile reproducing large scale statistical properties. These mock catalogs are particularly useful for CMB polarization and 21cm experiments, where large volumes are required to simulate the observed signal.
### Protostellar accretion traced with chemistry: Comparing synthetic C18O maps of embedded protostars to real observations [Replacement]
Context: Understanding how protostars accrete their mass is a central question of star formation. One aspect of this is trying to understand whether the time evolution of accretion rates in deeply embedded objects is best characterised by a smooth decline from early to late stages or by intermittent bursts of high accretion. Aims: We create synthetic observations of deeply embedded protostars in a large numerical simulation of a molecular cloud, which are compared directly to real observations. The goal is to compare episodic accretion events in the simulation to observations and to test the methodology used for analysing the observations. Methods: Simple freeze-out and sublimation chemistry is added to the simulation, and synthetic C$^{18}$O line cubes are created for a large number of simulated protostars. The spatial extent of C$^{18}$O is measured for the simulated protostars and compared directly to a sample of 16 deeply embedded protostars observed with the Submillimeter Array. If CO is distributed over a larger area than predicted based on the protostellar luminosity, it may indicate that the luminosity has been higher in the past and that CO is still in the process of refreezing. Results: Approximately 1% of the protostars in the simulation show extended C$^{18}$O emission, as opposed to approximately 50% in the observations, indicating that the magnitude and frequency of episodic accretion events in the simulation is too low relative to observations. The protostellar accretion rates in the simulation are primarily modulated by infall from the larger scales of the molecular cloud, and do not include any disk physics. The discrepancy between simulation and observations is taken as support for the necessity of disks, even in deeply embedded objects, to produce episodic accretion events of sufficient frequency and amplitude.
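The test described in the Methods, comparing the observed C$^{18}$O extent with the extent predicted from the current luminosity, can be sketched with a commonly used power-law dust temperature profile. The profile, the sublimation temperature, and the normalisation below are assumptions for illustration, not the values used in the paper.

```python
# CO sublimation radius from an assumed envelope temperature profile
#   T(r) = T0 * (L / Lsun)^0.25 * (r / 100 au)^-0.5,
# with CO frozen out below T_sub. C18O emission extending well beyond
# this radius hints that the luminosity was higher in the past and the
# CO has not yet refrozen.
def co_sublimation_radius_au(L_lsun, T_sub=25.0, T0=30.0):
    """Radius (au) where T(r) drops to the CO sublimation temperature."""
    return 100.0 * (T0 * L_lsun**0.25 / T_sub) ** 2

for L in (1.0, 10.0):
    print(f"L = {L:4.1f} Lsun : r_sub ~ {co_sublimation_radius_au(L):6.0f} au")
```

Note the steep scaling: because $r_{sub} \propto L^{1/2}$, a factor-of-ten accretion burst pushes the sublimation front out by only about a factor of three, so large observed CO extents require substantial luminosity excursions.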
### The sparkling Universe: a scenario for cosmic void motions
We perform a statistical study of the global motion of cosmic voids using both a numerical simulation and observational data. We analyse their relation to large-scale mass flows and the physical effects that drive those motions. We analyse the bulk motions of voids, defined by the mean velocity of haloes in the surrounding shells in the numerical simulation, and by galaxies in the Sloan Digital Sky Survey Data Release 7. We find void mean bulk velocities close to 400 km/s, comparable to those of haloes (~ 500-600 km/s), depending on void size and the large-scale environment. Statistically, small voids move faster than large ones, and voids in relatively higher density environments have higher bulk velocities than those placed in large underdense regions. We also analyse the mean mass density around voids, finding, as expected, large-scale overdensities (underdensities) along (opposite to) the direction of void motion, suggesting that void motions respond to a pull-push mechanism. This contrasts with massive cluster motions, which are mainly governed by the pull of large-scale overdense regions. Our analysis of void pairwise velocities shows how their relative motions are generated by large-scale density fluctuations. In agreement with linear theory, voids embedded in low (high) density regions mutually recede from (attract) each other, providing the general mechanism to understand the bimodal behaviour of void motions. In order to compare the theoretical results with the observations, we have inferred void motions in the SDSS using linear theory, finding that the estimated observational void motions are in statistical agreement with the results of the simulation. Regarding large-scale flows, our results suggest a scenario of galaxies and galaxy systems flowing away from void centers, with the additional, and more relevant, contribution of the void bulk motion to the total velocity.
### Estimating SI violation in CMB due to non-circular beam and complex scan in minutes
Mild, unavoidable deviations from the circular symmetry of instrumental beams, together with the scan strategy, can give rise to measurable Statistical Isotropy (SI) violation in Cosmic Microwave Background (CMB) experiments. If not properly accounted for, this spurious signal can complicate the extraction of other SI violation signals (if any) in the data. However, estimating this effect through exact numerical simulation is computationally intensive and time consuming. A generalized analytical formalism not only provides a quick way of estimating this signal, but also gives a detailed understanding connecting the leading beam anisotropy components to a measurable BipoSH characterisation of SI violation. In this paper, we provide an approximate, generic analytical method for estimating the SI violation generated by a non-circular (NC) beam and an arbitrary scan strategy, in terms of the Bipolar Spherical Harmonic (BipoSH) spectra. Our analytical method can predict almost all the features introduced by a NC beam in a complex scan, and thus reduces the need for extensive numerical simulations worth tens of thousands of CPU hours to minutes-long calculations. As an illustrative example, we use the WMAP beams and scanning strategy to demonstrate the ease of use and efficiency of our method. We test all our analytical results against those from exact numerical simulations.
### Large-scale numerical simulations of star formation put to the test: Comparing synthetic images and actual observations for statistical samples of protostars [Replacement]
(abridged) Context: Both observations and simulations of embedded protostars have progressed rapidly in recent years. Bringing them together is an important step in advancing our knowledge about the earliest phases of star formation. Aims: To compare synthetic continuum images and SEDs, calculated from large-scale numerical simulations, to observational studies, thereby aiding both the interpretation of the observations and the testing of the fidelity of the simulations. Methods: The radiative transfer code RADMC-3D is used to create synthetic continuum images and SEDs of protostellar systems in a large numerical simulation of a molecular cloud. More than 13000 unique radiative transfer models are produced for a variety of different protostellar systems. Results: Over the course of 0.76 Myr the simulation forms more than 500 protostars, primarily within two sub-clusters. Synthetic SEDs are used to calculate the evolutionary tracers Tbol and Lsmm/Lbol. It is shown that, while the observed distributions of the tracers are well matched by the simulation, they generally do a poor job of tracking the protostellar ages. Disks form early in the simulation, with 40% of the Class 0 protostars being encircled by one. The flux emission from the simulated disks is found to be, on average, a factor of 6 too low relative to real observations. The distribution of protostellar luminosities spans more than three orders of magnitude, similar to the observed distribution. Cores and protostars are found to be closely associated with one another, with the distance distribution between them in excellent agreement with observations. Conclusions: The analysis and statistical comparison of synthetic observations to real ones is established as a powerful tool in the interpretation of observational results.
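The tracer Tbol used in the Results is the temperature of a blackbody whose spectrum has the same flux-weighted mean frequency as the source SED (Myers & Ladd 1993). A minimal sketch for a tabulated SED follows; the sample frequencies and fluxes are placeholders, and the plain-sum average is a crude stand-in for the proper integral over the full synthetic SED.

```python
# Bolometric temperature from a tabulated SED (Myers & Ladd 1993):
#   T_bol = 1.25e-11 K Hz^-1 * <nu>,
# where <nu> is the flux-weighted mean frequency. The simple sum below
# approximates the integral for illustration only.
def t_bol(nu_hz, f_nu):
    """Bolometric temperature [K] from frequencies [Hz] and flux densities."""
    mean_nu = sum(n * f for n, f in zip(nu_hz, f_nu)) / sum(f_nu)
    return 1.25e-11 * mean_nu

# A colder, more embedded source has its flux peaking at lower frequency,
# hence a lower T_bol:
print(t_bol([3e11, 3e12], [1.0, 0.1]))   # K
```

Low Tbol and high Lsmm/Lbol both flag deeply embedded (Class 0) sources, which is why the two tracers are computed side by side from the same synthetic SEDs.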
### Light Bridge in a Developing Active Region. II. Numerical Simulation of Flux Emergence and Light Bridge Formation
Light bridges, the bright structures dividing umbrae in sunspot regions, show various activity events. In Paper I, we reported on the analysis of multi-wavelength observations of a light bridge in a developing active region (AR) and concluded that the activity events are caused by magnetic reconnection driven by magnetoconvective evolution. The aim of this second paper is to investigate the detailed magnetic and velocity structures and the formation mechanism of light bridges. For this purpose, we analyze numerical simulation data from a radiative magnetohydrodynamics model of an emerging AR. We find that a weakly-magnetized plasma upflow in the near-surface layers of the convection zone is entrained between the emerging magnetic bundles that appear as pores at the solar surface. This convective upflow continuously transports horizontal fields to the surface layer and creates a light bridge structure. Due to the magnetic shear between the horizontal fields of the bridge and the vertical fields of the ambient pores, an elongated cusp-shaped current layer is formed above the bridge, which may be favorable for magnetic reconnection. The striking correspondence between the observational results of Paper I and the numerical results of this paper provides a consistent physical picture of light bridges. The dynamic activity phenomena occur as a natural result of the bridge formation and its convective nature, which has much in common with that of umbral dots and penumbral filaments.
### Formation of a condensate during charged collapse [Cross-Listing]
We observe a condensate forming in the interior of a black hole (BH) during numerical simulations of gravitational collapse of a massless charged (complex) scalar field. The magnitude of the scalar field in the interior tends to a non-zero constant; spontaneous breaking of gauge symmetry occurs and a condensate forms. This phenomenon occurs in the presence of a BH without the standard symmetry breaking quartic potential; the breaking occurs via the dynamics of the system itself. We also observe that the scalar field in the interior rotates in the complex plane and show that it matches the electric potential numerically to within $1\%$. That a charged scalar condensate can form near the horizon of a black hole in the Abelian Higgs model without the standard symmetry breaking potential had previously been shown analytically in an explicit model involving a massive scalar field in an $AdS_4$ background. Our numerical simulation lends strong support to this finding, although in our case the scalar field is massless and the spacetime is asymptotically flat.
### Spin flips in generic black hole binaries [Cross-Listing]
We study the spin dynamics of individual black holes in a binary system. In particular we focus on the polar precession of spins and the possibility of a complete flip of spins with respect to the orbital plane. We perform a full numerical simulation that displays these characteristics. We evolve equal-mass spinning binary black holes for $t=20,000M$ from an initial proper separation of $d=25M$ down to merger after 48.5 orbits. We compute the gravitational radiation from this system and compare it to 3.5 post-Newtonian generated waveforms, finding close agreement. We then further use 3.5 post-Newtonian evolutions to show the extension of this spin flip-flop phenomenon to unequal mass binaries. We also provide analytic expressions to approximate the maximum flip-flop angle and frequency in terms of the binary spins and mass ratio parameters at a given orbital radius. Finally we discuss the effect this spin flip-flop would have on accreting matter and other potential observational effects.
### Visibility moments and power spectrum of turbulence velocity
Here we introduce moments of the visibility function and discuss how they can be used to estimate the power spectrum of the turbulent velocity of external spiral galaxies. We perform numerical simulations to confirm the credibility of this method and find that it works well for galaxies with lower inclination angles. This is the only method to estimate the power spectrum of the turbulent velocity fluctuations in the ISM of external galaxies.
### Solar wind turbulence from MHD to sub-ion scales: high-resolution hybrid simulations [Replacement]
We present results from a high-resolution and large-scale hybrid (fluid electrons and particle-in-cell protons) two-dimensional numerical simulation of decaying turbulence. Two distinct spectral regions (separated by a smooth break at proton scales) develop with clear power-law scaling, each one occupying about a decade in wave numbers. The simulation results exhibit simultaneously several properties of the observed solar wind fluctuations: spectral indices of the magnetic, kinetic, and residual energy spectra in the magneto-hydrodynamic (MHD) inertial range along with a flattening of the electric field spectrum, an increase in magnetic compressibility, and a strong coupling of the cascade with the density and the parallel component of the magnetic fluctuations at sub-proton scales. Our findings support the interpretation that in the solar wind large-scale MHD fluctuations naturally evolve beyond proton scales into a turbulent regime that is governed by the generalized Ohm's law.
### Vacuum high harmonic generation in the shock regime [Cross-Listing]
Electrodynamics becomes nonlinear and permits the self-interaction of fields when the quantised nature of vacuum states is taken into account. The effect on a plane probe pulse propagating through a stronger constant crossed background is calculated using numerical simulation and by analytically solving the corresponding wave equation. The electromagnetic shock resulting from vacuum high harmonic generation is investigated and a nonlinear shock parameter identified.
### Transport by meridional circulations in solar-type stars
Transport by meridional flows has significant consequences for stellar evolution, but is difficult to capture in global-scale numerical simulations because of the wide range of timescales involved. Stellar evolution models therefore usually adopt parameterizations for such transport based on idealized laminar or mean-field models. Unfortunately, recent attempts to model this transport in global simulations have produced results that are not consistent with any of these idealized models. In an effort to explain the discrepancies between global simulations and idealized models, we here use three-dimensional local Cartesian simulations of compressible convection to study the efficiency of transport by meridional flows below a convection zone in several parameter regimes of relevance to the Sun and solar-type stars. In these local simulations we are able to establish the correct ordering of dynamical timescales, although the separation of the timescales remains unrealistic. We find that, even though the generation of internal waves by convective overshoot produces a high degree of time dependence in the meridional flow field, the mean flow has the qualitative behavior predicted by laminar, "balanced" models. In particular, we observe a progressive deepening, or "burrowing", of the mean circulation if the local Eddington-Sweet timescale is shorter than the viscous diffusion timescale. Such burrowing is a robust prediction of laminar models in this parameter regime, but has never been observed in any previous numerical simulation. We argue that previous simulations therefore underestimate the transport by meridional flows.
### On the correction of conserved variables for numerical RMHD with staggered constrained transport
Despite the success of the combination of conservative schemes and staggered constrained transport algorithms over the last fifteen years, the accurate description of highly magnetized, relativistic flows with strong shocks still represents a challenge in numerical RMHD. The present paper focuses on the accuracy and robustness of several correction algorithms for the conserved variables, which have become a crucial ingredient in the numerical simulation of problems where the magnetic pressure dominates over the thermal pressure by more than two orders of magnitude. Two versions of non-relativistic and fully relativistic corrections have been tested and compared using a magnetized cylindrical explosion with high magnetization ($\ge 10^4$) as a test. In the non-relativistic corrections, the total energy is corrected for the difference in the classical magnetic energy term between the average of the staggered fields and the conservative ones, before (CA1) and after (CA1') recovering the primitive variables. These corrections are unable to pass the test at any numerical resolution. The two relativistic approaches (CA2 and CA2'), which also correct the magnetic terms depending on the flow speed in both the momentum and the total energy, prove much more robust. These algorithms pass the test successfully, with very small deviations from energy conservation ($\le 10^{-4}$) and very low values of the total momentum ($\le 10^{-8}$). In particular, the algorithm CA2' (which corrects the conserved variables after recovering the primitive variables) passes the test at all resolutions. The numerical code used to run all the test cases is briefly described.
### Effects of Turbulent Viscosity on A Rotating Gas Ring Around A Black Hole: The Density Profile of Numerical Simulation
In this paper, we present the time evolution of a rotationally axisymmetric gas ring around a non-rotating black hole using a two-dimensional grid-based hydrodynamic simulation. We show the way in which angular momentum transport is included in simulations of non-self-gravitating accretion of matter towards a black hole. We use the Shakura-Sunyaev $\alpha$ viscosity prescription to estimate the turbulent viscosity. We investigate how a gas ring, initially assumed to rotate with Keplerian angular velocity, is accreted onto a black hole and hence forms an accretion disc in the presence of turbulent viscosity. Furthermore, we show that increasing the $\alpha$ coefficient increases the rate of advection of matter towards the black hole. The density profile we obtain is in good quantitative agreement with that obtained from analytical results. The dynamics of the resulting angular momentum depends strongly on $\alpha$.
# Coding problem
## Recommended Posts
So, I'm hooking autostance up, and for some reason it won't save the autostance. The players are also saying they aren't ever actually dropping into the stance, whether attacking or being attacked. Can anyone see why?
```c
void do_autostance(CHAR_DATA *ch, char *argument)
{
    char arg[MAX_INPUT_LENGTH];

    one_argument(argument, arg, MAX_INPUT_LENGTH);

    if (IS_NPC(ch)) return;

    if (!str_cmp(arg, "none"))
    {
        send_to_char("Autostance set to None.\n\r", ch);
        ch->stance[AUTODROP] = STANCE_NONE;
    }
    else if (!str_cmp(arg, "crane"))
    {
        send_to_char("Autostance set to Crane.\n\r", ch);
        ch->stance[AUTODROP] = STANCE_CRANE;
    }
    else if (!str_cmp(arg, "bull"))
    {
        send_to_char("Autostance set to Bull.\n\r", ch);
        ch->stance[AUTODROP] = STANCE_BULL;
    }
    else if (!str_cmp(arg, "viper"))
    {
        send_to_char("Autostance set to Viper.\n\r", ch);
        ch->stance[AUTODROP] = STANCE_VIPER;
    }
    else if (!str_cmp(arg, "mongoose"))
    {
        send_to_char("Autostance set to Mongoose.\n\r", ch);
        ch->stance[AUTODROP] = STANCE_MONGOOSE;
    }
    else if (!str_cmp(arg, "swallow") && ch->stance[STANCE_CRANE] >= 200 && ch->stance[STANCE_MONGOOSE] >= 200)
    {
        send_to_char("Autostance set to Swallow.\n\r", ch);
        ch->stance[AUTODROP] = STANCE_SWALLOW;
    }
    else if (!str_cmp(arg, "lion") && ch->stance[STANCE_BULL] >= 200 && ch->stance[STANCE_VIPER] >= 200)
    {
        send_to_char("Autostance set to Lion.\n\r", ch);
        ch->stance[AUTODROP] = STANCE_LION;
    }
    else if (!str_cmp(arg, "falcon") && ch->stance[STANCE_MONGOOSE] >= 200 && ch->stance[STANCE_BULL] >= 200)
    {
        send_to_char("Autostance set to Falcon.\n\r", ch);
        ch->stance[AUTODROP] = STANCE_FALCON;
    }
    else if (!str_cmp(arg, "cobra") && ch->stance[STANCE_VIPER] >= 200 && ch->stance[STANCE_CRANE] >= 200)
    {
        send_to_char("Autostance set to Cobra.\n\r", ch);
        ch->stance[AUTODROP] = STANCE_COBRA;
    }
    else if (!str_cmp(arg, "panther") && ch->stance[STANCE_VIPER] >= 200 && ch->stance[STANCE_MONGOOSE] >= 200)
    {
        send_to_char("Autostance set to Panther.\n\r", ch);
        ch->stance[AUTODROP] = STANCE_PANTHER;
    }
    else if (!str_cmp(arg, "grizzlie") && ch->stance[STANCE_BULL] >= 200 && ch->stance[STANCE_CRANE] >= 200)
    {
        send_to_char("Autostance set to Grizzlie.\n\r", ch);
        ch->stance[AUTODROP] = STANCE_GRIZZLIE;
    }
    else send_to_char("No such Stance.\n\r", ch);
}

void autodrop(CHAR_DATA *ch)
{
    char buf[MAX_INPUT_LENGTH];
    char buf2[MAX_INPUT_LENGTH];
    char stancename[10];

    if (IS_NPC(ch)) return;

    if (ch->stance[AUTODROP] == STANCE_NONE) return;
    else if (ch->stance[AUTODROP] == STANCE_VIPER)    sprintf(stancename, "viper");
    else if (ch->stance[AUTODROP] == STANCE_CRANE)    sprintf(stancename, "crane");
    else if (ch->stance[AUTODROP] == STANCE_MONGOOSE) sprintf(stancename, "mongoose");
    else if (ch->stance[AUTODROP] == STANCE_BULL)     sprintf(stancename, "bull");
    else if (ch->stance[AUTODROP] == STANCE_SWALLOW)  sprintf(stancename, "swallow");
    else if (ch->stance[AUTODROP] == STANCE_LION)     sprintf(stancename, "lion");
    else if (ch->stance[AUTODROP] == STANCE_FALCON)   sprintf(stancename, "falcon");
    else if (ch->stance[AUTODROP] == STANCE_PANTHER)  sprintf(stancename, "panther");
    else if (ch->stance[AUTODROP] == STANCE_COBRA)    sprintf(stancename, "cobra");
    else if (ch->stance[AUTODROP] == STANCE_GRIZZLIE) sprintf(stancename, "grizzlie");
    else return;

    if (ch->stance[0] < 1)
    {
        ch->stance[0] = ch->stance[AUTODROP];
        sprintf(buf, "You fall into the %s stance.", stancename);
        act(buf, ch, NULL, NULL, TO_CHAR);
        sprintf(buf2, "$n falls into the %s stance.", stancename);
        act(buf2, ch, NULL, NULL, TO_ROOM);
    }
}
```
Thanks:) [Edited by - Sandman on May 30, 2008 5:18:21 AM]
##### Share on other sites
Game design is for discussion of game mechanics, rather than coding related stuff.
It might help to be a bit clearer about what the code is supposed to do, and what it actually does. If you aren't sure exactly what it is doing, then step through it in a debugger and see for yourself what the behaviour is. Also, comment how it is doing it a bit more, because there are a few things in that code that look a bit odd.
You'll probably find that the above steps will find your problem fairly quickly, and if it doesn't, we'll be better equipped to find the problem for you.
EDIT: also, [ source ] tags are best for posting code.
##### Share on other sites
I answered your other post about saving it.
Regarding dropping into the stance, is the autodrop() function ever called? (It probably needs to go somewhere in fight.c - maybe multi_hit(...) ?)
##### Share on other sites
Alright, in the MUD there are 10 stances: viper, mongoose, etc., as read above. This code is meant to set up the command do_autostance, so that you type autostance viper, and when you start a fight, as long as you are not already in a stance and have an autostance set, it will automatically drop you into the stance you chose instead of you manually typing stance viper every time before a fight. I fixed the autostance so that it will actually make the person drop into the stance automatically, by making a call to autodrop in set_fighting in fight.c. What it does right now is say "You autodrop into a fighting stance." I want it to say you autodrop into the x fighting stance. It also needs to save to the player file, which I totally have no clue how to do. That is, it should save the stance you have autostance set to in the playerfile, so when you log back in you do not have to set your autostance again.
##### Share on other sites
Quote:
Original post by Gutlan: What it does right now is say You autodrop into a fighting stance. I wish it to say you autodrop into the x fighting stance.
This is probably because you're using sprintf() wrong in your big else if block. Specifically, you've missed out the format specifiers.
You currently have:
```c
else if (ch->stance[AUTODROP]==STANCE_VIPER) sprintf(stancename,"viper");
else if (ch->stance[AUTODROP]==STANCE_CRANE) sprintf(stancename,"crane");
else if (ch->stance[AUTODROP]==STANCE_MONGOOSE) sprintf(stancename,"mongoose");
else if (ch->stance[AUTODROP]==STANCE_BULL) sprintf(stancename,"bull");
// ...etc
```
You should have something like:
```c
else if (ch->stance[AUTODROP]==STANCE_VIPER)
{
    // FORMAT SPECIFIER!
    //       |||||
    //       VVVVV
    sprintf(stancename, "%s", "viper");
}
else if (ch->stance[AUTODROP]==STANCE_CRANE)
{
    sprintf(stancename, "%s", "crane");
}
else if (ch->stance[AUTODROP]==STANCE_MONGOOSE)
{
    sprintf(stancename, "%s", "mongoose");
}
else if (ch->stance[AUTODROP]==STANCE_BULL)
{
    sprintf(stancename, "%s", "bull");
}
// ...etc
```
Alternatively, you could use strncpy(), which is a bit simpler, a lot safer than sprintf(), and more appropriate for a case where you're just copying a string from one buffer to another without any additional formatting.
Quote:
It also needs to save to the player file. Which I totally have no clue how to do. In which it will save the stance you have autostance set to in the playerfile, so when you log back in you do not have to set your autostance again.
You'll probably want to look into functions such as fopen(), fclose(), fread(), fwrite(), fprintf(), fscanf() and fflush() for handling file IO.
##### Share on other sites
Quote:
Original post by Sandman
Quote:
Original post by Gutlan: What it does right now is say You autodrop into a fighting stance. I wish it to say you autodrop into the x fighting stance.
This is probably because you're using sprintf() wrong in your big else if block. Specifically, you've missed out the format specifiers.
I was under the impression that that would work, its just a really bad idea?
Quote:
Original post by Sandman
Quote:
It also needs to save to the player file. Which I totally have no clue how to do. In which it will save the stance you have autostance set to in the playerfile, so when you log back in you do not have to set your autostance again.
You'll probably want to look into functions such as fopen(), fclose(), fread(), fwrite(), fprintf(), fscanf() and fflush() for handling file IO.
I covered what he'd need to do to save and load the field in one of his other threads here. No response as to if he actually tried it though.
##### Share on other sites
Quote:
Original post by xanin: I was under the impression that that would work, its just a really bad idea?
Actually no you're right, it's fine, I'm having a stupid day. Although arguably anything with the letters 'printf' in it could be considered a really bad idea. I'm assuming the project is pure C rather than C++ though, so he's probably limited in his choices.
I'm not entirely sure what the problem is; I don't quite understand how the stance array is being used, which doesn't really help. What the hell is the significance of stance[0]?
My advice to the OP would be to step through with a debugger.
##### Share on other sites
Quote:
Original post by Sandman
Quote:
Original post by xanin: I was under the impression that that would work, its just a really bad idea?
Actually no you're right, it's fine, I'm having a stupid day. Although arguably anything with the letters 'printf' in it could be considered a really bad idea. I'm assuming the project is pure C rather than C++ though, so he's probably limited in his choices.
I'm not entirely sure what the problem is; I don't quite understand how the stance array is being used, which doesn't really help. What the hell is the significance of stance[0]?
My advice to the OP would be to step through with a debugger.
It's a Godwars derivative mud, so it is (if I recall rightly) a pure C environment when stock.
I'm also not sure what the problem is, because he hasn't given us any information about what doesn't work. As far as I can tell, what he posted looks like it should work - or generate compile errors. Since he's not posting about compiler errors and he's not providing any of the output from when the function is called, it's sort of hard to tell. Without more information, sending him to the debugger is going to be the best bet, I imagine...
### Chapter 6 Irrigation System Design - USDA
Table NJ 6.10 Performance Data of Typical Sprinklers
Table NJ 6.11 Friction Loss for Various Pipe Materials
Table NJ 6.12 Correction Factor for Multiple Outlets
Table NJ 6.13 Design of Laterals for Cranberry Bog Frost Protection
Table NJ 6.14 PVC Pipe Capacities for Velocity of 5 FPS
Table NJ 6.15 Irrigation Water Requirements for Small Fruit

### Chapter 7 FLOW THROUGH PIPES
The Darcy-Weisbach equation relates the head loss (or pressure loss) due to friction along a given length of a pipe to the average velocity of the fluid flow for an incompressible fluid. The friction coefficient f (or λ = 4 f) is not a constant and depends on the parameters of the pipe and the velocity of the fluid flow, but it is known to
### Fluids: calculate pressure loss due to turns in pipe flow
Nov 17, 2011 · The following is based on loss coefficients for a 90° bend but can calculate any bend angle: $$K_B = (n-1)\left(0.25\,\pi f_T \frac{r}{d} + 0.5 K\right) + K$$ Where: $K_B$ = resistance coefficient for the overall pipe bend, $n$ = number of 90° bends.
### How to Figure PSI in Sprinkler Systems - Home Guides SF Gate
For example, given a flow rate of 2 gallons per minute, pressure would drop 0.75 psi along a 20-foot length of 1/2-inch type L copper tubing and 1.27 psi along a 20-foot section of 1/2-inch PEX.
### PRACTICAL DESIGN OF WATER DISTRIBUTION SYSTEMS
A working pressure of 150 psi for a 24" pipe requires a wall thickness of 0.30 inches and the use of Pressure Class 200 pipe. Table 13 of the ANSI/AWWA C150/A21.50 standard lists nominal pipe sizes from 3 to 64-inch for working pressures from 150 psi to 350 psi. The table below provides the designer with ANSI/AWWA trench and cover criteria.
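The bend-loss correlation quoted above can be wrapped in a small helper. This is a sketch; the parameter names (friction factor f_T, bend radius ratio r/d, single-bend coefficient K) follow the formula as quoted, and the sample values are assumptions:

```python
import math

def bend_resistance(n, f_T, r_over_d, K):
    """Overall resistance coefficient K_B for n successive 90-degree bends:
    K_B = (n - 1) * (0.25 * pi * f_T * (r/d) + 0.5 * K) + K
    """
    return (n - 1) * (0.25 * math.pi * f_T * r_over_d + 0.5 * K) + K

# A single bend (n = 1) reduces to its own coefficient K
print(bend_resistance(1, 0.02, 2.0, 0.3))  # 0.3
```

For n > 1 the formula charges full K once and a reduced increment per extra bend, reflecting that closely spaced bends interact rather than adding losses independently.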
### Pipe Water Velocity and Minimum Pipe Diameter Calculator
Pipe Water Velocity and Minimum Pipe Diameter. Used to calculate the velocity of water in a pipe. Bigger pipe is more expensive, but keeping the water velocity low is important to limit pressure losses due to friction, water hammer, and pipe movement due to water momentum changes inside the pipe. Use the second form to calculate the inside diameter of a pipe at a water velocity of 5 ft/sec.
### Rain Bird Landscape Irrigation Design Manual
To convert feet of water to psi, multiply the feet by .433 (one foot of water = .433 psi). For example, 200 ft of water height x .433 produces 86.6 psi at its base. (To convert meters of head to pressure in
### Sprinkler Irrigation Calculating Your Zones
1/2" and 3/4" Drip Distribution Tubing For Cannabis Irrigation Systems (7) Flex Table Hook Up Tubing (3) Drip Irrigation System Fittings. Perma-Loc Drip Irrigation Fittings. Perma-Loc 600 Series For 1/2" 700600 Tubing (13) Perma-Loc 800 Series For 3/4" 940820 Tubing (9) Perma-Loc 1000 Series For 1" Solid Drip Tubing 1.2 OD x 1.06 ID (3)
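The velocity, minimum-diameter, and head-to-psi conversions described above follow from unit arithmetic (1 US gallon = 231 in³). The following sketch is an illustration, not the calculator from the source pages:

```python
import math

def velocity_fps(gpm, inner_diameter_in):
    """Mean water velocity (ft/s) for a flow in US gpm through a pipe of the given inner diameter (inches)."""
    area_in2 = math.pi * (inner_diameter_in / 2) ** 2   # cross-section, in^2
    in3_per_s = gpm * 231 / 60                          # 1 US gallon = 231 in^3
    return in3_per_s / area_in2 / 12                    # in/s -> ft/s

def min_diameter_in(gpm, v_max_fps=5.0):
    """Smallest inner diameter (inches) keeping velocity at or below v_max_fps."""
    area_in2 = gpm * 231 / 60 / (v_max_fps * 12)
    return 2 * math.sqrt(area_in2 / math.pi)

def head_ft_to_psi(feet):
    """Static pressure of a water column: one foot of water ~= 0.433 psi."""
    return feet * 0.433

print(round(head_ft_to_psi(200), 1))  # 86.6, matching the manual's example
```

By construction, feeding the minimum diameter back into the velocity formula returns exactly the 5 ft/s design limit.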
### TUTORIAL CENTRIFUGAL PUMP SYSTEMS
The term pressure loss or pressure drop is often used; this refers to the decrease in pressure in the system due to friction. In a pipe or tube that is at the same level, your garden hose for example, the pressure is high at the tap and zero at the hose outlet; this decrease in pressure is due to friction and is the pressure loss.
### Tools & Calculators - Irrigation
Make your job easier with the following tools and calculators: Comprehensive glossary of irrigation terms. Resources for calculating evapotranspiration and finding local data. Quick reference friction loss charts. Explore other technical resources available for irrigation professionals: Irrigation standards and best practices.
### Balancing the Pressure in an Irrigation System
Now it is time to make sure the sprinklers will have enough pressure to operate properly. Adjust Your Pressure Loss Data: Pull out your Design Data Form. One of the top 3 sections should be completed (City Slicker, Country Bumpkins, or Backwoods Water) as well as the Pressure Loss Table at the bottom of the
http://math.stackexchange.com/questions/673046/evaluate-int-0-frac12-frac-sin-1x-sqrt1-x2-dx
# Evaluate $\int_{0}^\frac{1}{2}\frac{\sin^{-1}x}{\sqrt{1-x^2}} dx$
$$\int_0^\frac{1}{2} \frac{\sin^{-1}x}{\sqrt{1-x^2}} dx$$
My question for this is can I take out the square root so it would be
$$\frac{1}{\sqrt{1-x^2}} \int_{0}^\frac{1}{2}{\sin^{-1}x} \,dx$$
So all I have to do is find the anti-derivative of $\sin^{-1}x$ and then multiply the $\dfrac{1}{\sqrt{1-x^2}}$ back in afterwards? Is there a simpler way?
You can only take out constant factors from integrals. Otherwise you'd get silly stuff like $$\int_0^1 x\, dx = x\int_0^1 1\, dx = x \cdot 1 = x$$ which is certainly wrong because definite integrals are constant! – Clive Newstead Feb 12 '14 at 0:12
got it thanks ! – Mark Feb 12 '14 at 0:14
Hint: let $u = \arcsin x$. Then $du = \frac{1}{\sqrt{1-x^2}} dx$. You're not allowed to move the $\frac{1}{\sqrt{1-x^2}}$ term out in front since it has an $x$ in it and the variable of integration is $x$--so the integral depends on that term too!
Hint: Use the fact that $\arcsin'x=\dfrac1{\sqrt{1-x^2}}$ .
I may suggest that this integral can be written as $\int \sin^{-1}x \, d(\sin^{-1}x)$ with $x$ from $0$ to $1/2$; thus the answer must be $\pi^2/72$.
Can you clarify what you mean? – user103828 Apr 1 at 6:41
This has already been (at least as well) explained more than one year ago. – Did Apr 1 at 14:14
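The closed-form value $\pi^2/72 \approx 0.13708$ from the answers above can be confirmed numerically. This quadrature sketch is not part of the original thread:

```python
import math

def f(x):
    # integrand: arcsin(x) / sqrt(1 - x^2), the derivative of arcsin(x)^2 / 2
    return math.asin(x) / math.sqrt(1 - x * x)

# composite Simpson's rule on [0, 1/2]
n, a, b = 1000, 0.0, 0.5
h = (b - a) / n
s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
approx = s * h / 3

print(abs(approx - math.pi ** 2 / 72) < 1e-10)  # True
```

The agreement to machine precision reflects that the substitution $u = \arcsin x$ gives the antiderivative $u^2/2$ exactly.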
https://documen.tv/question/ken-bought-concert-tickets-for-a-group-of-people-the-tickets-cost-8-50-for-adults-and-5-00-for-s-24156797-67/
## Ken bought concert tickets for a group of people. The tickets cost $8.50 for adults and $5.00 for students. He bought 8 tickets for a total
Question
Ken bought concert tickets for a group of people. The tickets cost $8.50 for adults and $5.00 for students. He bought 8 tickets for a total of $61. Let x = the number of adult tickets and y = the number of student tickets. How many adult and student tickets did Ken purchase?
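The resulting system x + y = 8 and 8.5x + 5y = 61 can be solved by substitution. This worked sketch is not part of the original page:

```python
# Substitute y = 8 - x into 8.5x + 5y = 61:
#   8.5x + 5(8 - x) = 61  ->  3.5x = 21  ->  x = 6
adult = (61 - 5 * 8) / (8.5 - 5)
student = 8 - adult
print(adult, student)  # 6.0 2.0
```

Check: 6 adult tickets at $8.50 plus 2 student tickets at $5.00 gives $51 + $10 = $61.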
http://yutsumura.com/a-diagonalizable-matrix-which-is-not-diagonalized-by-a-real-nonsingular-matrix/
# A Diagonalizable Matrix which is Not Diagonalized by a Real Nonsingular Matrix
## Problem 584
Prove that the matrix
$A=\begin{bmatrix} 0 & 1\\ -1& 0 \end{bmatrix}$ is diagonalizable.
Prove, however, that $A$ cannot be diagonalized by a real nonsingular matrix.
That is, there is no real nonsingular matrix $S$ such that $S^{-1}AS$ is a diagonal matrix.
## Proof.
We first find the eigenvalues of $A$ by computing its characteristic polynomial $p(t)$.
We have
\begin{align*}
p(t)=\det(A-tI)=\begin{vmatrix}
-t & 1\\
-1& -t
\end{vmatrix}=t^2+1.
\end{align*}
Solving $p(t)=t^2+1=0$, we obtain two distinct eigenvalues $\pm i$ of $A$.
Hence the matrix $A$ is diagonalizable.
To prove the second statement, assume, on the contrary, that $A$ is diagonalizable by a real nonsingular matrix $S$.
Then we have
$S^{-1}AS=\begin{bmatrix} i & 0\\ 0& -i \end{bmatrix}$ by diagonalization.
As the matrices $A, S$ are real, the left-hand side is a real matrix.
Taking the complex conjugate of both sides, we obtain
$\begin{bmatrix} -i & 0\\ 0& i \end{bmatrix}=\overline{\begin{bmatrix} i & 0\\ 0& -i \end{bmatrix}}=\overline{S^{-1}AS}=S^{-1}AS=\begin{bmatrix} i & 0\\ 0& -i \end{bmatrix}.$ This equality is clearly impossible.
Hence the matrix $A$ cannot be diagonalized by a real nonsingular matrix.
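The diagonalization over $\mathbb{C}$ can also be verified directly. The sketch below, which is not part of the original proof, builds $S$ from the eigenvectors $(1, i)$ and $(1, -i)$ and checks that $S^{-1}AS$ is diagonal with entries $\pm i$:

```python
# A has eigenvalues ±i with eigenvectors (1, i) and (1, -i)
A = [[0, 1], [-1, 0]]
S = [[1, 1], [1j, -1j]]                       # eigenvector columns

det = S[0][0] * S[1][1] - S[0][1] * S[1][0]   # = -2i, nonzero so S is nonsingular
Sinv = [[ S[1][1] / det, -S[0][1] / det],
        [-S[1][0] / det,  S[0][0] / det]]     # 2x2 inverse via adjugate

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

D = matmul(Sinv, matmul(A, S))
print(D[0][0] == 1j and D[1][1] == -1j and D[0][1] == D[1][0] == 0)  # True
```

The diagonalizing matrix $S$ is necessarily non-real, in line with the proof: any real $S$ would force the real matrix $S^{-1}AS$ to equal the non-real diagonal of eigenvalues.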
https://www.physicsforums.com/threads/private-solution-to-a-polynomial-differential-equation.887595/
# B Private solution to a polynomial differential equation
1. Oct 2, 2016
### Karol
The polynomial equation and its particular solution:
$$(1)~~ay''+by'+cy=f(x)=kx^n,~~y=A_0x^n+A_1x^{n-1}+...+A$$
If I, for example, take $f(x)=kx^3$ I get, after substituting into (1), an expression like $Ax^3+Bx^2+Cx+D$, but that doesn't equal $kx^3$
3. Oct 2, 2016
### pasmith
The general solution of $$ay'' + by' + cy = ky$$ is $Ae^{r_1x} + Be^{r_2x}$ where $r_1 \neq r_2$ are the roots of $ar^2 + br + (c - k) = 0$. If the roots are equal then the general solution is instead $Ae^{r_1x} + Bxe^{r_1x}$.
You can therefore obtain $y(x) = x$ as a solution by taking $b = 0$ and $k = c$.
4. Oct 2, 2016
### Simon Bridge
But is that the DE under consideration in post #1?
Compare:
$ay'' + by' + cy = ky$ with
$ay'' + by' + cy = kx^3$ ...
5. Oct 2, 2016
### Karol
$$y=A_0x^3+A_1x^2+A_2x+A,~~y'=3A_0x^2+2A_1x+A_2,~~y''=6A_0x+2A_1$$
$$ay''+by'+cy=a(6A_0x+2A_1)+b(3A_0x^2+2A_1x+A_2)+c(A_0x^3+A_1x^2+A_2x+A)=(cA_0)x^3+(3bA_0+cA_1)x^2+(6aA_0+2bA_1+cA_2)x+(2aA_1+bA_2+cA)$$
Is there a rule about when a polynomial with many terms can equal one containing only the highest power?
6. Oct 2, 2016
### Simon Bridge
OK ... is that as far as you got?
You need an $=kx^3$ in there someplace?
Check the algebra on the others too ... then solve for the unknown coefficients.
(Probably easier to write the coefficients as A,B,C,D like you did in post #1)
7. Oct 2, 2016
### Karol
$$(cA_0)x^3+(3bA_0+cA_1)x^2+(6aA_0+2bA_1+cA_2)x+(2aA_1+bA_2+cA)=kx^3$$
I get the system
$$\left\{ \begin{array}{l} cA_0=k \\ 3bA_0+cA_1=0 \\ 6aA_0+2bA_1+cA_2=0 \\ 2aA_1+bA_2+cA=0 \end{array}\right.$$
$$\rightarrow~~A_0=\frac{k}{c},~~A_1=-\frac{3bk}{c^2},~~A_2=\frac{6k(b^2-ac)}{c^3},~~A=\frac{6bk(2ac-b^2)}{c^4}$$
Is that what you meant?
Last edited: Oct 2, 2016
8. Oct 2, 2016
### Simon Bridge
That's the idea... test for a=b=c=k=1 (say).
9. Oct 3, 2016
### Karol
Thank you pasmith and Simon
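The general coefficients $A_0=k/c$, $A_1=-3bk/c^2$, $A_2=6k(b^2-ac)/c^3$, $A=6bk(2ac-b^2)/c^4$ can be sanity-checked by substituting the cubic ansatz back into $ay''+by'+cy=kx^3$. This exact-arithmetic sketch is not part of the original thread:

```python
from fractions import Fraction as F

def particular_coeffs(a, b, c, k):
    """Coefficients of y = A0 x^3 + A1 x^2 + A2 x + A3 solving a y'' + b y' + c y = k x^3 (c != 0)."""
    A0 = F(k, c)
    A1 = F(-3 * b * k, c ** 2)
    A2 = F(6 * k * (b * b - a * c), c ** 3)
    A3 = F(6 * b * k * (2 * a * c - b * b), c ** 4)
    return A0, A1, A2, A3

def check(a, b, c, k):
    A0, A1, A2, A3 = particular_coeffs(a, b, c, k)
    # coefficients of a y'' + b y' + c y, highest degree first
    lhs = [c * A0,
           3 * b * A0 + c * A1,
           6 * a * A0 + 2 * b * A1 + c * A2,
           2 * a * A1 + b * A2 + c * A3]
    return lhs == [k, 0, 0, 0]

print(check(1, 2, 3, 6) and check(2, -1, 5, 7))  # True
```

Using `Fraction` keeps the comparison exact, so the check confirms the left-hand side collapses to $kx^3$ with no rounding ambiguity.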
https://www.sarthaks.com/31890/write-equation-decomposition-reactions-where-energy-supplied-form-heat-light-electricity
# Write one equation each for decomposition reactions where energy is supplied in the form of heat, light or electricity:
Write one equation each for decomposition reactions where energy is supplied in the form of heat, light or electricity.
i. Decomposition reaction that requires energy in the form of heat (thermal decomposition): CaCO3 → CaO + CO2
ii. Decomposition reaction that requires energy in the form of light (photolysis): 2AgCl → 2Ag + Cl2
iii. Decomposition reaction that requires energy in the form of electricity (electrolysis): 2H2O → 2H2 + O2
https://handwiki.org/wiki/Academic_journal_publishing_reform
Academic journal publishing reform is the advocacy for changes in the way academic journals are created and distributed in the age of the Internet and the advent of electronic publishing. Since the rise of the Internet, people have organized campaigns to change the relationships among and between academic authors, their traditional distributors and their readership. Most of the discussion has centered on taking advantage of benefits offered by the Internet's capacity for widespread distribution of reading material.
## History
Before the advent of the Internet it was difficult for scholars to distribute articles giving their research results.[1] Historically publishers performed services including proofreading, typesetting, copy editing, printing, and worldwide distribution.[1] In modern times all researchers became expected to give the publishers digital copies of their work which needed no further processing.[1] For digital distribution printing was unnecessary, copying was free, and worldwide distribution happened online instantly.[1] In science journal publishing, Internet technology enabled the Big Five major scientific publishers—Elsevier, Springer, John Wiley & Sons, Taylor and Francis and American Chemical Society—to cut their expenditures such that they could consistently generate profits of over 35% per year.[1] In 2017 these five published 56% of all journal articles.[2] The remaining 44% were published by over 200 small publishers.
The Internet made it easier for researchers to do work which had previously been done by publishers, and some people began to feel that they did not need to pay for the services of publishers. This perception was a problem for publishers, who stated that their services were still necessary at the rates they asked.[1] Critics began to describe publishers' practices with terms such as "corporate scam" and "racket".[3] Scholars sometimes obtain articles from fellow scholars through unofficial channels, such as posting requests on Twitter using the hashtag "#icanhazpdf" (a play on the I Can Has Cheezburger? meme), to avoid paying publishers' access charges.[4][5] In 2004, there were reports in British media of a "revolution in academic publishing" which would make research freely available online but many scientists continued to publish their work in the traditional big name journals like Nature.
For a short time in 2012, the name Academic Spring, inspired by the Arab Spring, was used to indicate movements by academics, researchers, and scholars opposing the restrictive copyright and circulation of traditional academic journals and promoting free access online instead.[6][7][8] The barriers to free access for recent scientific research became a hot topic in 2012, after a blog post by mathematician Timothy Gowers went viral in January.[9][10] According to the Financial Times, the movement was named by Dennis Johnson of Melville House Publishing,[11] though scientist Mike Taylor has suggested the name came from The Economist.[12]
Mike Taylor argued that the Academic Spring may have some unexpected results beyond the obvious benefits. Referring to work by the biophysicist Cameron Neylon, he says that, because modern science is now more dependent on well-functioning networks than individuals, making information freely available may help computer-based analyses to provide opportunities for major scientific breakthroughs.[13] Government and university officials have welcomed the prospect of saving on subscriptions which have been rising in cost, while universities' budgets have been shrinking. Mark Walport, the director of Wellcome Trust, has indicated that science sponsors do not mind having to fund publication in addition to the research. Not everyone has been supportive of the movement, with scientific publisher Kent Anderson calling it "shallow rhetoric aimed at the wrong target."[14]
## Motivations for reform
Although it has some historical precedent, open access became desired in response to the advent of electronic publishing as part of a broader desire for academic journal publishing reform. Electronic publishing created new benefits as compared to paper publishing but beyond that, it contributed to causing problems in traditional publishing models.
The premises behind open access are that there are viable funding models to maintain traditional academic publishing standards of quality while also making the following changes to the field:
1. Rather than making journals available through a subscription business model, all academic publications should be free to read and published with some other funding model. Publications should be gratis or "free to read".[15]
2. Rather than applying traditional notions of copyright to academic publications, readers should be free to build upon the research of others. Publications should be libre or "free to build upon".[15]
3. Everyone should have greater awareness of the serious social problems caused by restricting access to academic research.[15]
4. Everyone should recognize that there are serious economic challenges for the future of academic publishing. Even though open access models are problematic, traditional publishing models definitely are not sustainable and something radical needs to change immediately.[15]
Open access also has ambitions beyond merely granting access to academic publications, as access to research is only a tool for helping people achieve other goals. Open access advances scholarly pursuits in the fields of open data, open government, open educational resources, free and open-source software, and open science, among others.[16]
The motivations for academic journal publishing reform include the ability of computers to store large amounts of information, the advantages of giving more researchers access to preprints, and the potential for interactivity between researchers.[17]
Various studies showed that the demand for open access research was such that freely available articles consistently had impact factors which were higher than articles published under restricted access.[18][19]
Some universities reported that modern "package deal" subscriptions were too costly for them to maintain, and that they would prefer to subscribe to journals individually to save money.[20]
The problems which led to discussion about academic publishing reform have been considered in the context of what provision of open access might provide. Here are some of the problems in academic publishing which open access advocates purport that open access would address:
1. A pricing crisis called the serials crisis has been growing in the decades before open access and remains today. The academic publishing industry has increased prices of academic journals faster than inflation and beyond the library budgets.[15]
2. The pricing crisis does not only mean strain to budgets, but also that researchers actually are losing access to journals.[15]
3. Not even the wealthiest libraries in the world are able to afford all the journals that their users are demanding, and less rich libraries are severely harmed by lack of access to journals.[15]
4. Publishers are using "bundling" strategies to sell journals, and this marketing strategy is criticized by many libraries as forcing them to pay for unpopular journals which their users are not demanding, while squeezing out of library budgets smaller publishers, who cannot offer bundled subscriptions.[15]
5. Libraries are cutting their book budgets to pay for academic journals.[15]
6. Libraries do not own electronic journals in permanent archival form as they do paper copies, so if they have to cancel a subscription then they lose all subscribed journals. This did not happen with paper journals, and yet costs historically have been higher for electronic versions.[15]
7. Academic publishers get essential assets from their subscribers in a way that other publishers do not.[15] Authors donate the texts of academic journals to the publishers and grant rights to publish them, and editors and referees donate peer-review to validate the articles. The people writing the journals are questioning the increased pressure put upon them to pay higher prices for the journal produced by their community.[15]
8. Conventional publishers are using a business model which requires access barriers and creates artificial scarcity.[15] All publishers need revenue, but open access promises models in which scarcity is fundamental to raising revenue.[15]
9. Scholarly publishing depends heavily on government policy, public subsidies, gift economy, and anti-competitive practices, yet all of these things are in conflict with the conventional academic publishing model of restricting access to works.[15]
10. Toll access journals compete more for authors to donate content to them than they compete for subscribers to pay for the work. This is because every scholarly journal has a natural monopoly over the information of its field. Because of this, the market for pricing journals does not have feedback because it is outside of traditional market forces, and the prices have no control to drive it to serve the needs of the market.[15]
11. Besides the natural monopoly, there is supporting evidence that prices are artificially inflated to benefit publishers while harming the market. Evidence includes the trend of large publishers toward accelerating price increases greater than those of small publishers, when in traditional markets high volume and high sales enable cost savings and lower prices.
12. Conventional publishers fund "content protection" actions which restrict and police content sharing.[15]
13. For-profit publishers have economic incentives to decrease rates of rejected articles so that they publish more content to sell. No such market force exists if selling content for money is not a motivating factor.[15]
14. Many researchers are unaware that it might be possible for them to have all the research articles they need, and just accept it as fate that they will always be without some of the articles they would like to read.[15]
15. Access to toll-access journals is not scaling with increases in research and publishing, and the academic publishers are under market forces to restrict increases in publishing and indirectly because of that they are restricting the growth of research.[15]
## Motivations against reform
Publishers state that if profit were not a consideration in the pricing of journals, the cost of accessing those journals would not substantially change.[21] Publishers also state that they add value to publications in many ways, and that without academic publishing as an institution, the readership would miss these services and fewer people would have access to articles.[21]
Critics of open access have suggested that, by itself, it is not a solution to scientific publishing's most serious problem – it simply changes the paths through which ever-increasing sums of money flow.[22] Evidence for this exists; for example, Yale University ended its financial support of BioMed Central's Open Access Membership program effective July 27, 2007. In their announcement, they stated,
The libraries’ BioMedCentral membership represented an opportunity to test the technical feasibility and the business model of this open access publisher. While the technology proved acceptable, the business model failed to provide a viable long-term revenue base built upon logical and scalable options. Instead, BioMedCentral has asked libraries for larger and larger contributions to subsidize their activities. Starting with 2005, BioMed Central article charges cost the libraries $4,658, comparable to a single biomedicine journal subscription. The cost of article charges for 2006 then jumped to $31,625. The article charges have continued to soar in 2007, with the libraries charged $29,635 through June 2007, with $34,965 in potential additional article charges in submission.[23]
A similar situation is reported from the University of Maryland, and Phil Davis commented that,
The assumptions that open access publishing is both cheaper and more sustainable than the traditional subscription model are featured in many of these mandates. But they remain just that — assumptions. In reality, the data from Cornell[24] show just the opposite. Institutions like the University of Maryland would pay much more under an author-pays model, as would most research-intensive universities, and the rise in author processing charges (APCs) rivals the inflation felt at any time under the subscription model.[25]
Opponents of the open access model see publishers as a part of the scholarly information chain and view a pay-for-access model as being necessary in ensuring that publishers are adequately compensated for their work. "In fact, most STM [Scientific, Technical and Medical] publishers are not profit-seeking corporations from outside the scholarly community, but rather learned societies and other non-profit entities, many of which rely on income from journal subscriptions to support their conferences, member services, and scholarly endeavors".[26] Scholarly journal publishers that support pay-for-access claim that the "gatekeeper" role they play, maintaining a scholarly reputation, arranging for peer review, and editing and indexing articles, require economic resources that are not supplied under an open access model. Conventional journal publishers may also lose customers to open access publishers who compete with them. The Partnership for Research Integrity in Science and Medicine (PRISM), a lobbying organization formed by the Association of American Publishers (AAP), is opposed to the open access movement.[27] PRISM and AAP have lobbied against the increasing trend amongst funding organizations to require open publication, describing it as "government interference" and a threat to peer review.[28]
For researchers, publishing an article in a reputable scientific journal is perceived as beneficial to one's reputation among scientific peers and in advancing one's academic career. There is a concern that open access journals are perceived as lacking the same reputation, which may lead to less publishing in them.[29] Park and Qin discuss the perceptions that academics have with regard to open access journals. One concern academics raise is that there "are growing concerns about how to promote [Open Access] publishing." Park and Qin also state, "The general perception is that [Open Access] journals are new, and therefore many uncertainties, such as quality and sustainability, exist."
Journal article authors are generally not directly financially compensated for their work beyond their institutional salaries and the indirect benefits that an enhanced reputation provides in terms of institutional funding, job offers, and peer collaboration.[30]
There are those, for example PRISM, who think that open access is unnecessary or even harmful. David Goodman argued that there is no need for those outside major academic institutions to have access to primary publications, at least in some fields.[31]
The argument that publicly funded research should be made openly available has been countered with the assertion that "taxes are generally not paid so that taxpayers can access research results, but rather so that society can benefit from the results of that research; in the form of new medical treatments, for example. Publishers claim that 90% of potential readers can access 90% of all available content through national or research libraries, and while this may not be as easy as accessing an article online directly it is certainly possible."[32] The taxpayer-funded argument is also only applicable in certain countries. For instance, in Australia, 80% of research funding comes through taxes, whereas in Japan and Switzerland, only approximately 10% is from the public coffers.[32]
For various reasons open access journals have been established by predatory publishers who seek to use the model to make money without regard to producing a quality journal. The causes of predatory open access publishing include the low barrier to creating the appearance of a legitimate digital journal and funding models which may include author publishing costs rather than subscription sales. University librarian Jeffrey Beall publishes a "List of Predatory Publishers" and an accompanying methodology for identifying publishers who have editorial and financial practices which are contrary to the ideal of good research publishing practices.[33][34]
## Reform initiatives
### Public Library of Science
The Public Library of Science is a nonprofit open-access scientific publishing project aimed at creating a library of open access journals and other scientific literature under an open content license. The founding of the organization had its origins in a 2001 online petition calling for all scientists to pledge that from September 2001 they would discontinue submission of papers to journals which did not make the full-text of their papers available to all, free and unfettered, either immediately or after a delay of several months.[35] The petition collected 34,000 signatures, but the publishers took no significant action in response to the demands. Shortly thereafter, the Public Library of Science was founded as an alternative to traditional publishing.[35]
### HINARI
Main page: Organization:HINARI
HINARI is a 2002 project of the World Health Organization and major publishers to enable developing countries to access collections of biomedical and health literature online at reduced subscription costs.[36]
### Research Works Act
Main page: Research Works Act
The Research Works Act was a bill of the United States Congress which would have prohibited all laws requiring an open access mandate when US-government-funded researchers published their work. The proposers of the law stated that it would "ensure the continued publication and integrity of peer-reviewed research works by the private sector".[37] This followed other similar proposed measures such as the Fair Copyright in Research Works Act. These attempts to limit free access to such material are controversial and have provoked lobbying for and against by numerous interested parties such as the Association of American Publishers and the American Library Association.[38] Critics of the law stated that it was the moment that "academic publishers gave up all pretence of being on the side of scientists."[39] In February 2012, Elsevier withdrew its support for the bill. Following this statement, the sponsors of the bill announced they would also withdraw their support.[40]
### The Cost of Knowledge
Main page: The Cost of Knowledge
In January 2012, the Cambridge mathematician Timothy Gowers started a boycott of journals published by Elsevier, in part a reaction to their support for the Research Works Act. In response to an angry blog post by Gowers, the website The Cost of Knowledge was launched by a sympathetic reader. An online petition called The Cost of Knowledge was set up by fellow mathematician Tyler Neylon to gather support for the boycott. By early April 2012, it had been signed by over eight thousand academics.[41][42][43] As of mid-June 2012, the number of signatories exceeded 12,000.
### Access2Research
Main page: Access2Research
In May 2012, a group of open-access activists formed the Access2Research initiative that went on to launch a petition to the White House to "require free access over the Internet to journal articles arising from taxpayer-funded research".[44] The petition was signed by over 25,000 people within two weeks, which entitled it to an official response from the White House.[45][46]
### PeerJ
Main page: Biology:PeerJ
PeerJ is an open-access journal launched in 2012 that charges publication fees per researcher, not per article, resulting in what has been called "a flat fee for 'all you can publish'".[47]
### Public Knowledge Project
Main page: Organization:Public Knowledge Project
Since 1998, PKP has been developing free open source software platforms for managing and publishing peer-reviewed open access journals and monographs, with Open Journal Systems used by more than 7,000 active journals in 2013.
### Schekman boycott
2013 Nobel Prize winner Randy Schekman called for a boycott of traditional academic journals including Nature, Cell, and Science.[48] Instead he promoted the open access journal eLife.[48]
### Initiative for Open Citations
Initiative for Open Citations is a CrossRef initiative for improved citation analysis. It has been supported by a majority of publishers since April 2017.
### Diamond Open Access journals
Diamond open access journals have been promoted as the ultimate solution to the serials crisis. Under this model, neither the authors nor the readers pay for access or publication; the resources required to run the journal are provided by scientists on a voluntary basis, by governments, or by philanthropic grants. Although the Diamond OA model has been successful in creating a large number of journals, the percentage of publications in such journals remains low, possibly due to the low prestige of these new journals and concerns about their long-term viability.
## References
1. Taylor, Mike (21 February 2012). "It's Not Academic: How Publishers Are Squelching Science Communication". Discover. Retrieved 22 February 2012.
2. Monbiot, George (29 August 2012). "Academic publishers make Murdoch look like a socialist". The Guardian (London: GMG). ISSN 0261-3077. OCLC 60623878.
3. "How #icanhazpdf can hurt our academic libraries". The Lab and Field. 5 October 2013.
4. Barbara Fister (4 April 2012), "An Academic Spring?", American Libraries
5. Mike Taylor (9 February 2012), "The future of academic publishing", The Independent, retrieved 23 September 2020
6.
7. Salamander Davoudi (2012-04-16). "Reed chief hits back at critics of division" (). Financial Times.
8. Mike Taylor (9 February 2012), "The future of academic publishing", The Independent, retrieved 23 September 2020
9. Mike Taylor (9 February 2012), "The future of academic publishing", The Independent, retrieved 23 September 2020
10. Suber 2012, pp. 29–43
11. Suber 2012, pp. xi
12. Odlyzko, Andrew M. (January 1995). "Tragic Loss or Good Riddance? The Impending Demise of Traditional Scholarly Journals". Notices of the American Mathematical Society: 49–53. Retrieved 27 February 2012.
13. Antelman, Kristin (September 2004). "Do Open-Access Articles Have a Greater Research Impact?". College & Research Libraries 65 (5): 372–382. doi:10.5860/crl.65.5.372. Retrieved 27 February 2012.
14. Lawrence, Steve (31 May 2001). "Free online availability substantially increases a paper's impact". Nature (Nature Publishing Group) 411 (6837): 521. doi:10.1038/35079151. PMID 11385534. Bibcode2001Natur.411..521L. Retrieved 27 February 2012.
15. Mayor, S. (2004). "US universities review subscriptions to journal "package deals" as costs rise". BMJ 328 (7431): 68–0. doi:10.1136/bmj.328.7431.68. PMID 14715586.
16. Beschler, Edwin F. (November 1998). "Pricing of Scientific Publications: A Commercial Publisher's Point of View". Notices of the American Mathematical Society: 1333–1343. Retrieved 27 February 2012.
17. Davis, Philip M. (22 December 2004). "Calculating the Cost per Article in the Current Subscription Model".
18. "Open Access Voted Down at Maryland". The Scholarly Kitchen. 28 April 2009.
19. Anderson, Rick. "Open access – clear benefits, hidden costs". The Association of Learned and Professional Society Publishers.
20. Leonard, Andrew (August 28, 2007). "Science publishers get even stupider". Salon.
21. Rachel Deahl AAP Tries to Keep Government Out of Science Publishing. Publishers Weekly. 23 August 2007
22. Park, Ji-Hong; Jian Qin (2007). "Exploring the Willingness of Scholars to Accept Open Access: A Grounded Theory Approach". Journal of Scholarly Publishing 38 (2): 30. doi:10.1353/scp.2007.0009.
23. Nicholas, D.; Rowlands, I. (2005). "Open Access publishing: The evidence from the authors". The Journal of Academic Librarianship 31 (3): 179–181. doi:10.1016/j.acalib.2005.02.005.
24. DLIST – Goodman, David (2005) Open Access: What Comes Next. Arizona.openrepository.com (2005-11-27). Retrieved on 2011-12-03.
25. Worlock, Kate (2004). "The Pros and Cons of Open Access". Nature Focus. Nature Publishing Group.
26. Beall, Jeffrey (1 December 2012). "Criteria for Determining Predatory Open-Access Publishers (2nd edition)". Scholarly Open Access.
27. Butler, D. (2013). "Investigating journals: The dark side of publishing". Nature 495 (7442): 433–435. doi:10.1038/495433a. PMID 23538810. Bibcode2013Natur.495..433B.
28.
29. Long, Maurice (December 2003). "Bridging the knowledge gap The HINARI programme". The Biochemist (Biochemical Society) 25 (6): 27–29. doi:10.1042/BIO02506027. Retrieved 27 February 2012.
30. 112th Congress (2011) (Dec 16, 2011). "H.R. 3699". Legislation. GovTrack.us. "Research Works Act"
31. Barbara Fister (4 April 2012), "An Academic Spring?", American Libraries
32. Taylor, Mike (16 January 2012). "Academic publishers have become the enemies of science". The Guardian (London: GMG). ISSN 0261-3077. OCLC 60623878.
33. Howard, Jennifer. "Legislation to Bar Public-Access Requirement on Federal Research Is Dead". The Chronicle.
34. Barbara Fister (4 April 2012), "An Academic Spring?", American Libraries
35. US open-access petition hits 25,000 signatures in two weeks - Research Information, accessed 7 June 2012 (WebCite archive)
36. Van Noorden, R. (2012). "Journal offers flat fee for 'all you can publish'". Nature 486 (7402): 166. doi:10.1038/486166a. PMID 22699586. Bibcode2012Natur.486..166V.
37. Sample, Ian (9 December 2013). "Nobel winner declares boycott of top science journals". theguardian.com.
## Annals of Statistics
### Estimation of the $k$th Derivative of a Distribution Function
Carl Maltz
#### Abstract
Estimation of the $k$th derivative of a df by means of the $k$th-order difference quotients of the empiric df is investigated. In particular, consistency conditions are given, the asymptotic bias, variance, and mean-squared error of the estimator are computed, and means of minimizing the latter are discussed.
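The estimator described in the abstract can be sketched numerically. Below is an illustrative implementation of a $k$th-order central difference quotient of the empirical df; the particular difference stencil and step size are standard choices for illustration, not details taken from the paper:

```python
import numpy as np
from math import comb

def ecdf(sorted_sample, x):
    """Empirical distribution function F_n evaluated at x."""
    return np.searchsorted(sorted_sample, x, side="right") / len(sorted_sample)

def kth_derivative_estimate(sample, x, k, h):
    """Estimate the k-th derivative of the df at x via the k-th-order
    central difference quotient of the empirical df."""
    s = np.sort(np.asarray(sample))
    terms = ((-1) ** j * comb(k, j) * ecdf(s, x + (k / 2 - j) * h)
             for j in range(k + 1))
    return sum(terms) / h ** k

# For k = 1 this is a density estimate: with N(0,1) data, the value at 0
# should be near 1/sqrt(2*pi), about 0.399, for moderate h and large n.
```

As the paper discusses, the choice of $h$ trades off bias (large $h$) against variance (small $h$), and higher-order derivatives require correspondingly larger samples.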
#### Article information
Source
Ann. Statist., Volume 2, Number 2 (1974), 359-361.
Dates
First available in Project Euclid: 12 April 2007
Permanent link to this document
https://projecteuclid.org/euclid.aos/1176342670
Digital Object Identifier
doi:10.1214/aos/1176342670
Mathematical Reviews number (MathSciNet)
MR359160
Zentralblatt MATH identifier
0277.62033
JSTOR
Maltz, Carl. Estimation of the $k$th Derivative of a Distribution Function. Ann. Statist. 2 (1974), no. 2, 359--361. doi:10.1214/aos/1176342670. https://projecteuclid.org/euclid.aos/1176342670
# 11.6: Mixtures of Gases and Partial Pressures
In our use of the ideal gas law thus far, we have focused entirely on the properties of pure gases with only a single chemical species. But what happens when two or more gases are mixed? In this section, we describe how to determine the contribution of each gas present to the total pressure of the mixture. The Learning Objective of this module is to determine the contribution of each component gas to the total pressure of a mixture of gases.
## Partial Pressures
The ideal gas law assumes that all gases behave identically and that their behavior is independent of attractive and repulsive forces. If volume and temperature are held constant, the ideal gas equation can be rearranged to show that the pressure of a sample of gas is directly proportional to the number of moles of gas present:
$P=n \bigg(\dfrac{RT}{V}\bigg) = n \times \rm const. \tag{10.6.1}$
Nothing in the equation depends on the nature of the gas—only the amount.
With this assumption, let’s suppose we have a mixture of two ideal gases that are present in equal amounts. What is the total pressure of the mixture? Because the pressure depends on only the total number of particles of gas present, the total pressure of the mixture will simply be twice the pressure of either component. More generally, the total pressure exerted by a mixture of gases at a given temperature and volume is the sum of the pressures exerted by each gas alone. Furthermore, if we know the volume, the temperature, and the number of moles of each gas in a mixture, then we can calculate the pressure exerted by each gas individually, which is its partial pressure, the pressure the gas would exert if it were the only one present (at the same temperature and volume).
To summarize, the total pressure exerted by a mixture of gases is the sum of the partial pressures of component gases. This law was first discovered by John Dalton, the father of the atomic theory of matter. It is now known as Dalton’s law of partial pressures. We can write it mathematically as
$P_{tot}= P_1+P_2+P_3+P_4 \; ... = \sum_{i=1}^n{P_i} \tag{10.6.2}$
where $$P_{tot}$$ is the total pressure and the other terms are the partial pressures of the individual gases (up to $$n$$ component gases).
Figure: Dalton’s Law. The total pressure of a mixture of gases is the sum of the partial pressures of the individual gases.
For a mixture of two ideal gases, $$A$$ and $$B$$, we can write an expression for the total pressure:
$P_{tot}=P_A+P_B=n_A\bigg(\dfrac{RT}{V}\bigg) + n_B\bigg(\dfrac{RT}{V}\bigg)=(n_A+n_B)\bigg(\dfrac{RT}{V}\bigg) \tag{10.6.3}$
More generally, for a mixture of $$n$$ component gases, the total pressure is given by
$P_{tot}=(n_1+n_2+n_3+ \; \cdots +n_n)\bigg(\dfrac{RT}{V}\bigg)\tag{10.6.4a}$
$P_{tot}=\sum_{i=1}^n{n_i}\bigg(\dfrac{RT}{V}\bigg)\tag{10.6.4b}$
Equation 10.6.4 restates Equation 10.6.3 in a more general form and makes it explicitly clear that, at constant temperature and volume, the pressure exerted by a gas depends on only the total number of moles of gas present, whether the gas is a single chemical species or a mixture of dozens or even hundreds of gaseous species. For Equation 10.6.4 to be valid, the identity of the particles present cannot have an effect. Thus an ideal gas must be one whose properties are not affected by either the size of the particles or their intermolecular interactions because both will vary from one gas to another. The calculation of total and partial pressures for mixtures of gases is illustrated in the example below:
Example
Deep-sea divers must use special gas mixtures in their tanks, rather than compressed air, to avoid serious problems, most notably a condition called “the bends.” At depths of about 350 ft, divers are subject to a pressure of approximately 10 atm. A typical gas cylinder used for such depths contains 51.2 g of $$O_2$$ and 326.4 g of He and has a volume of 10.0 L. What is the partial pressure of each gas at 20.00°C, and what is the total pressure in the cylinder at this temperature?
Given: masses of components, total volume, and temperature
Asked for: partial pressures and total pressure
Strategy:
1. Calculate the number of moles of $$He$$ and $$O_2$$ present.
2. Use the ideal gas law to calculate the partial pressure of each gas. Then add together the partial pressures to obtain the total pressure of the gaseous mixture.
Solution:
A The number of moles of $$He$$ is
$n_{\rm He}=\rm\dfrac{326.4\;g}{4.003\;g/mol}=81.54\;mol$
The number of moles of $$O_2$$ is
$n_{\rm O_2}=\rm \dfrac{51.2\;g}{32.00\;g/mol}=1.60\;mol$
B We can now use the ideal gas law to calculate the partial pressure of each:
$P_{\rm He}=\dfrac{n_{\rm He}RT}{V}=\rm\dfrac{81.54\;mol\times0.08206\;\dfrac{atm\cdot L}{mol\cdot K}\times293.15\;K}{10.0\;L}=196.2\;atm$
$P_{\rm O_2}=\dfrac{n_{\rm O_2}RT}{V}=\rm\dfrac{1.60\;mol\times0.08206\;\dfrac{atm\cdot L}{mol\cdot K}\times293.15\;K}{10.0\;L}=3.85\;atm$
The total pressure is the sum of the two partial pressures:
$P_{\rm tot}=P_{\rm He}+P_{\rm O_2}=\rm(196.2+3.85)\;atm=200.1\;atm$
Exercise 10.6.1
A cylinder of compressed natural gas has a volume of 20.0 L and contains 1813 g of methane and 336 g of ethane. Calculate the partial pressure of each gas at 22.0°C and the total pressure in the cylinder.
Answer: $$P_{CH_4}=137 \; atm$$; $$P_{C_2H_6}=13.4\; atm$$; $$P_{tot}=151\; atm$$
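The arithmetic of the worked example can be verified with a short script. This is a sketch using the molar masses, temperature, and gas constant quoted in the solution above:

```python
R = 0.08206  # gas constant in L*atm/(mol*K), as used in the solution above

def partial_pressure(mass_g, molar_mass_g_mol, temp_K, volume_L):
    """Ideal-gas partial pressure of one component: P = nRT/V."""
    n = mass_g / molar_mass_g_mol        # moles of this component
    return n * R * temp_K / volume_L

T, V = 293.15, 10.0                           # 20.00 deg C, 10.0 L cylinder
p_he = partial_pressure(326.4, 4.003, T, V)   # about 196.2 atm
p_o2 = partial_pressure(51.2, 32.00, T, V)    # about 3.85 atm
p_tot = p_he + p_o2                           # about 200 atm (Dalton's law)
```

Because the ideal gas law is linear in the number of moles, each component can be computed independently and the results simply summed.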
## Mole Fractions of Gas Mixtures
The composition of a gas mixture can be described by the mole fractions of the gases present. The mole fraction ($$X$$) of any component of a mixture is the ratio of the number of moles of that component to the total number of moles of all the species present in the mixture ($$n_{tot}$$):
$x_A=\dfrac{\text{moles of A}}{\text{total moles}}= \dfrac{n_A}{n_{tot}} =\dfrac{n_A}{n_A+n_B+\cdots}\tag{10.6.5}$
The mole fraction is a dimensionless quantity between 0 and 1. If $$x_A = 1.0$$, then the sample is pure $$A$$, not a mixture. If $$x_A = 0$$, then no $$A$$ is present in the mixture. The sum of the mole fractions of all the components present must equal 1.
To see how mole fractions can help us understand the properties of gas mixtures, let’s evaluate the ratio of the pressure of a gas $$A$$ to the total pressure of a gas mixture that contains $$A$$. We can use the ideal gas law to describe the pressures of both gas $$A$$ and the mixture: $$P_A = n_ART/V$$ and $$P_{tot} = n_tRT/V$$. The ratio of the two is thus
$\dfrac{P_A}{P_{tot}}=\dfrac{n_ART/V}{n_{tot}RT/V} = \dfrac{n_A}{n_{tot}}=x_A \tag{10.6.6}$
Rearranging this equation gives
$P_A = x_AP_{tot} \tag{10.6.7}$
That is, the partial pressure of any gas in a mixture is the total pressure multiplied by the mole fraction of that gas. This conclusion is a direct result of the ideal gas law, which assumes that all gas particles behave ideally. Consequently, the pressure of a gas in a mixture depends on only the percentage of particles in the mixture that are of that type, not their specific physical or chemical properties. By volume, Earth’s atmosphere is about 78% $$N_2$$, 21% $$O_2$$, and 0.9% $$Ar$$, with trace amounts of gases such as $$CO_2$$, $$H_2O$$, and others. This means that 78% of the particles present in the atmosphere are $$N_2$$; hence the mole fraction of $$N_2$$ is 78%/100% = 0.78. Similarly, the mole fractions of $$O_2$$ and $$Ar$$ are 0.21 and 0.009, respectively. Using Equation 10.6.7, we therefore know that the partial pressure of N2 is 0.78 atm (assuming an atmospheric pressure of exactly 760 mmHg) and, similarly, the partial pressures of $$O_2$$ and $$Ar$$ are 0.21 and 0.009 atm, respectively.
Example
We have just calculated the partial pressures of the major gases in the air we inhale. Experiments that measure the composition of the air we exhale yield different results, however. The following table gives the measured pressures of the major gases in both inhaled and exhaled air. Calculate the mole fractions of the gases in exhaled air.
| | Inhaled Air / mmHg | Exhaled Air / mmHg |
| --- | --- | --- |
| $$P_{\rm N_2}$$ | 597 | 568 |
| $$P_{\rm O_2}$$ | 158 | 116 |
| $$P_{\rm H_2O}$$ | 0.3 | 28 |
| $$P_{\rm CO_2}$$ | 5 | 48 |
| $$P_{\rm Ar}$$ | 8 | 8 |
| $$P_{tot}$$ | 767 | 767 |
Given: pressures of gases in inhaled and exhaled air
Asked for: mole fractions of gases in exhaled air
Strategy:
Calculate the mole fraction of each gas using Equation 10.6.6.
Solution:
The mole fraction of any gas $$A$$ is given by
$x_A=\dfrac{P_A}{P_{tot}}$
where $$P_A$$ is the partial pressure of $$A$$ and $$P_{tot}$$ is the total pressure. For example, the mole fraction of $$CO_2$$ is given as:
$x_{\rm CO_2}=\rm\dfrac{48\;mmHg}{767\;mmHg}=0.063$
The following table gives the values of $$x_A$$ for the gases in the exhaled air.
| Gas | Mole Fraction |
| --- | --- |
| $${\rm N_2}$$ | 0.741 |
| $${\rm O_2}$$ | 0.151 |
| $${\rm H_2O}$$ | 0.037 |
| $${\rm CO_2}$$ | 0.063 |
| $${\rm Ar}$$ | 0.010 |
Exercise 10.6.2
Venus is an inhospitable place, with a surface temperature of 560°C and a surface pressure of 90 atm. The atmosphere consists of about 96% CO2 and 3% N2, with trace amounts of other gases, including water, sulfur dioxide, and sulfuric acid. Calculate the partial pressures of CO2 and N2.
Answer: $$P_{\rm CO_2}=\rm86\; atm$$, $$P_{\rm N_2}=\rm2.7\;atm$$
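Both directions of the mole-fraction relationship can be checked in a few lines, using the exhaled-air pressures and the Venus composition from this section (an illustrative sketch):

```python
def mole_fractions(partial_pressures):
    """Mole fraction of each gas, x_A = P_A / P_tot (Equation 10.6.6)."""
    p_tot = sum(partial_pressures.values())
    return {gas: p / p_tot for gas, p in partial_pressures.items()}

# Exhaled-air partial pressures in mmHg (from the table above)
exhaled = {"N2": 568, "O2": 116, "H2O": 28, "CO2": 48, "Ar": 8}
x = mole_fractions(exhaled)        # x["CO2"] is about 0.063

# Inverse direction (Equation 10.6.7): partial pressures on Venus
p_total_venus = 90                 # atm
p_co2 = 0.96 * p_total_venus       # about 86 atm
p_n2 = 0.03 * p_total_venus        # 2.7 atm
```

The same two-line pattern, divide by the total in one direction, multiply by the total in the other, covers every mole-fraction problem in this section.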
# Assuming that the velocity of flow of a liquid (v) through a narrow tube depends on the radius of the tube(r) , density of the liquid(D) and coefficient of viscosity(n),find the expression for velocity.
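The posted answer did not survive extraction; the result can be reconstructed by standard dimensional analysis (a sketch, not the original posted solution). Writing $v = k\,r^{a} D^{b} \eta^{c}$ and equating dimensions:

```latex
[v] = [r]^{a}\,[D]^{b}\,[\eta]^{c}
\quad\Longrightarrow\quad
L\,T^{-1} = L^{a}\,(M L^{-3})^{b}\,(M L^{-1} T^{-1})^{c}
```

Matching the exponents of $M$, $L$, and $T$ gives $b + c = 0$, $a - 3b - c = 1$, and $-c = -1$, hence $c = 1$, $b = -1$, $a = -1$, so

```latex
v = \frac{k\,\eta}{r\,D}
```

where $k$ is a dimensionless constant that dimensional analysis cannot determine.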
# Team:Heidelberg/Modeling/rtsms
### Studying determinants of polymerase efficiency based on an aptamer sensor
Our subproject on small-molecule sensing enables quantitative study of in vitro transcription (IVT) via ATP-Spinach and malachite green RNA aptamers. Here, we apply mathematical modeling to understand mechanistic details of this process and demonstrate that our approach can serve as a tool for basic research.
After an RNA polymerase is added to DNA templates, it binds to the template and starts consuming ATP by incorporating it into transcripts containing the malachite green aptamer. The concentration of ATP was monitored via fluorescence of the Spinach2 ATP aptamer, while the transcript yield was monitored via malachite green fluorescence. This enabled us to follow IVT quantitatively and in a time-resolved manner. In particular, we could study the inaccuracy of polymerases, reflected in an excess of consumed ATP molecules over the number of ATP molecules in synthesized malachite green aptamers.
To this end, we implemented a mathematical model that describes the formation of "active templates" $T^*$ from unbound DNA-templates $T$ and polymerases $P$, and the consumption of ATP $A$ for the synthesis of malachite green aptamers $M$ (Figure 1A). Because malachite green aptamers contain $n_{A,M}=10$ adenine nucleotides, the rate, at which malachite green is produced, is at least by this factor lower than the rate, at which ATP is consumed. The production of premature abortion products that result from the detachment of the polymerase from the template before completing the transcript, however, leads to an even larger number $n_A>n_{A,M}$. By calibrating the model with experimental data, we estimated this number to characterize this polymerase inaccuracy. For this purpose, we used datasets that were recorded with the T7 RNA polymerase. First, as depicted in Figure 1A, we tried to explain this inaccuracy by a constant number $n_A$ that was independent from DNA-template, ATP or polymerase concentrations. Then, we extended the model step-wise until the experimental data could be explained by the model. The step-wise extensions are listed in Table 1 while Table 2 contains the model equations for each variant.
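A minimal numerical sketch of the basic reaction scheme (the constant-$n_A$ variant 1) is shown below. All rate constants and initial concentrations are illustrative placeholders, not the fitted values from this study:

```python
def simulate_ivt(T0=1.0, P0=2.0, A0=100.0, n_A=25.0,
                 k_on=0.5, k_off=0.05, k_syn=2.0, dt=0.001, steps=20000):
    """Euler integration of the basic IVT model:
    T + P <-> T*  (reversible active-template formation),
    T* consumes ATP A at rate k_syn*T*; every n_A consumed ATP
    molecules yield one malachite green aptamer M."""
    T, P, Ts, A, M = T0, P0, 0.0, A0, 0.0
    for _ in range(steps):
        bind = k_on * T * P - k_off * Ts       # net active-template formation
        cons = k_syn * Ts if A > 0 else 0.0    # ATP consumption rate
        T += -bind * dt
        P += -bind * dt
        Ts += bind * dt
        A += -cons * dt
        M += (cons / n_A) * dt
    return T, P, Ts, A, M
```

Even this toy version reproduces the qualitative picture described above: ATP decays while malachite green accumulates, with the consumed-to-incorporated ratio fixed by $n_A$; the fitted model variants replace the constant $n_A$ with a function of $A$ and $T^*$.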
Figure 1. IVT model reactions and fits to experimental data. (A) Model reactions describing reversible assembly of templates $T$ and polymerase $P$ to active templates $T^*$ that incorporate ATP $A$ into malachite green RNA-aptamers $M$ but also into abortion products, leading to a higher number $n_A$ of consumed than ATP molecules $n_{A,M}$ incorporated in malachite green aptamers. (B) Model fits to data at two different polymerase concentrations.
Counter-intuitively, malachite green fluorescence showed a linear increase while the ATP-spinach fluorescence intensity was exponentially decreasing. Furthermore, it was surprising that doubling the amount of polymerase increased the production of malachite green by even more than two-fold. Because of these two unexpected findings, our basic model with constant values for $n_A$ could not explain the data. However, both phenomena could be explained by an optimal model variant (Figure 1B, 1A, Table 1), in which the polymerase inaccuracy increased with increasing ratios between ATP and active templates. Figure 2A visualizes the improvement in fit quality from the basic model to the optimal model variant (variant 4) in values of the Akaike information criterion (AIC) that accounts for the distance between the model and the experimental data and additionally penalizes for the number of model parameters to favor parsimonious model topologies.
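The AIC values in Figure 2A penalize parameter count as well as misfit. For a Gaussian least-squares fit the criterion can be computed as follows (a generic sketch with made-up numbers, not the team's fitting code):

```python
import math

def aic_least_squares(rss, n_points, n_params):
    """Akaike information criterion for a Gaussian least-squares fit:
    AIC = n*ln(RSS/n) + 2*k. Lower values indicate a better
    parsimony/performance trade-off."""
    return n_points * math.log(rss / n_points) + 2 * n_params

# A more complex model must reduce RSS enough to offset its extra parameters:
aic_simple = aic_least_squares(rss=10.0, n_points=50, n_params=3)
aic_complex = aic_least_squares(rss=9.8, n_points=50, n_params=6)
```

Here the marginal RSS improvement does not justify three extra parameters, so the simpler model keeps the lower AIC, the same logic used to select variant 4 above.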
Next, we tested whether the optimal model, variant 4, could be simplified without losing fit quality. Leaving out degradation reactions for the polymerase $P$ strongly decreased fit quality (Figure 2B). Furthermore, assuming fast binding of the polymerase to its template, which can be reflected in the model by a steady state of active template formation, resulted in a large AIC value increase. Leaving out ATP degradation, however, resulted in only a slight decrease in fit quality, indicated by a small increase in the corresponding AIC value. We applied the rank-based Kruskal-Wallis test and found that this small AIC value increase was nevertheless significant ($p = 1.57\cdot10^{-4}$). This indicated that the optimal model could not be further reduced without losing fit quality. Essentially, in the optimal model variant, the rate of malachite green synthesis was dependent on a consumed number of ATP molecules $n_A=n_{A,0} A /T^{*l}$ for each malachite green aptamer molecule. In Figure 2C, the number $n_A$ is shown for different ratios between ATP and active template concentrations using the best-fit parameters of the optimal model variant. The model thus predicts a high sensitivity of $n_A$ to changes of the $A /T^{*}$ ratio at values below $A /T^{*}\approx10$ and a low sensitivity of $n_A$ at higher ratios, in the range above $A /T^{*}\approx30$ to $50$.
Figure 2. IVT inaccuracy depends on the ATP to active template ratio. (A) A basic model with constant $n_A$ and synthesis parameter $k_{syn,M}$ was extended to variants with $n_A$ and $k_{syn,M}$ depending on the polymerase concentration (variant 2), with $A$- and $T^*$-dependent $n_A$ with exponents $k$ and $l$ (variant 3), or with only an exponent for $T^*$ (variant 4). Fitting improvement is indicated by decreasing Akaike information criterion (AIC) values. (B) Reducing the optimal variant 4 by assuming a steady state for $T^*$, no degradation of $P$, or no degradation of $A$ strongly worsened model fits. (C) Model variant 4 can explain the increasing inefficiency (higher $n_A$) at decreasing $A/T^*$ ratios.
Taken together, our setup was suitable for studying the phenomenon of polymerase inaccuracy based on a mathematical model. We learned that the inaccuracy of an RNA polymerase increases non-linearly with an increasing ratio between ATP and active templates. Furthermore, we learned that the kinetics of polymerase binding to the DNA template are relevant for the transcription dynamics. In the future, our approach might facilitate quantitative studies of the interaction between polymerases and promoters, as well as of the impact of DNA modifications on transcription dynamics.
Table 1. Stepwise changes from the basic model variant 1 to the optimal variant 4 and from variant 4 to variants 4a to 4c
Model variant | Subsequent modifications relative to the previous variant | Changes in fitting quality
1 | $k_{syn}$ and $n_A$ independent of polymerase concentration | —
2 | Individual $k_{syn}$ and $n_A$ values for different polymerase concentrations | improvement
3 | $n_A$ depends on $T^*$ and $A$: $n_A=n_{A,0} A^{k}/T^{*l}$ | improvement, $k\approx1$
4, best model | Setting $k=1$ | improvement
4a | No degradation of $P$ in variant 4 | decrease
4b | No degradation of $A$ in variant 4 | decrease
4c | Binding of $P$ to $T$ in steady state in variant 4 | decrease
Table 2. Model equations for the basic model and variants 1 to 4c
Model species | Variant | Equation
$P$ | 1 to 4, 4b | $\frac{d[P]}{dt}=-k_{on}[T][P]+k_{off}[T^*]-k_{deg,P}[P]$
$P$ | 4a | $\frac{d[P]}{dt}=-k_{on}[T][P]+k_{off}[T^*]$
$P$ | 4c | $[P](t)=[P](t_{0})\exp\left(-k_{deg,P}t\right)$
$T$ | 1 to 4, 4a, 4b | $\frac{d[T]}{dt}=-k_{on}[T][P]+k_{off}[T^*]$
$T$ | 4c | $[T]=[T_{tot}]-[T^*]$
$T^*$ | 1 to 4, 4a, 4b | $\frac{d[T^*]}{dt}=k_{on}[T][P]-k_{off}[T^*]$
$T^*$ | 4c | $[T^*]=\frac{[T_{tot}][P]}{K_{d,P}}$
$A$ | 1 | $\frac{d[A]}{dt}=-k_{syn}\frac{[A][T^*]}{K_{m,T}+[T^*]}-k_{deg,A}[A]$
$A$ | 2 to 4, 4a, 4c | $\frac{d[A]}{dt}=-k_{syn}[A][T^*]-k_{deg,A}[A]$
$A$ | 4b | $\frac{d[A]}{dt}=-k_{syn}[A][T^*]$
$M$ | 1 | $\frac{d[M]}{dt}=\frac{k_{syn}}{n_{A}}\frac{[A][T^*]}{K_{m,T}+[T^*]}$
$M$ | 2 | $\frac{d[M]}{dt}=\frac{k_{syn}}{n_{A}}[A][T^*]$
$M$ | 3 | $\frac{d[M]}{dt}=\frac{k_{syn}}{n_{A,0}[A]^{k}/[T^*]^{l}}[A][T^*]=\frac{k_{syn}}{n_{A,0}}[A]^{1-k}[T^*]^{1+l}$
$M$ | 4, 4a, 4b, 4c | $\frac{d[M]}{dt}=\frac{k_{syn}}{n_{A,0}[A]/[T^*]^{l}}[A][T^*]=\frac{k_{syn}}{n_{A,0}}[T^*]^{1+l}$
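The coupled rate equations of the best-fitting variant 4 can be integrated numerically. The sketch below uses a simple forward-Euler scheme; all parameter values are illustrative placeholders, since the paper's fitted constants are not reproduced in this excerpt.

```python
# Forward-Euler integration of model variant 4 (Table 2).
# All parameter values below are illustrative placeholders,
# not the fitted constants from the paper.

def simulate(t_end=100.0, dt=0.01):
    k_on, k_off, k_deg_P = 0.5, 0.1, 0.01      # binding / polymerase decay
    k_syn, k_deg_A, n_A0, l = 0.2, 0.001, 50.0, 1.0
    P, T, Ts, A, M = 1.0, 1.0, 0.0, 10.0, 0.0  # Ts is the active template T*
    for _ in range(int(t_end / dt)):
        bind = k_on * T * P - k_off * Ts       # net template-binding flux
        dP  = -bind - k_deg_P * P
        dT  = -bind
        dTs =  bind
        dA  = -k_syn * A * Ts - k_deg_A * A
        dM  = (k_syn / n_A0) * Ts ** (1.0 + l)
        P, T, Ts, A, M = (P + dP * dt, T + dT * dt, Ts + dTs * dt,
                          A + dA * dt, M + dM * dt)
    return P, T, Ts, A, M

P, T, Ts, A, M = simulate()
```

With these placeholder rates, ATP ($A$) is consumed while malachite green ($M$) accumulates, and total template ($T + T^*$) is conserved, as the equations require.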
https://www.zigya.com/study/book?class=11&board=ahsec&subject=Biology&book=Biology&chapter=Mineral+Nutrition&q_type=&q_topic=Metabolism+Of+Nitrogen&q_category=&question_id=BIEN11006659
Nitrogen is an essential element for plants and is found in abundance as atmospheric nitrogen. But most plants are unable to use it. Why is it so and in what form do plants utilise nitrogen?
Most plants cannot utilise atmospheric nitrogen because it is almost inert. Nitrogen combines with oxygen through atmospheric activity and is brought down to the soil by rain. Highly specialised organisms called nitrogen fixers occur in the soil and convert nitrogen into nitrates, nitrites, or a reduced cationic form such as ammonium. These compounds enter the plants as nutrients, in the form of dissolved nitrites and nitrates taken up through the roots, and are assimilated as organic nitrogen.
What type of condition is created by leghaemoglobin in the root nodules of legumes?
Anaerobic.
Name the best-known symbiotic nitrogen-fixing bacterium.
Rhizobium.
Which pigment is present in the root nodules of legumes?
Leghaemoglobin.
How is the nitrogenase enzyme protected?
The nitrogenase enzyme is very sensitive to molecular oxygen and requires anaerobic conditions to function. The leghaemoglobin in the nodule acts as an oxygen scavenger, maintaining the anaerobic conditions needed by the enzyme.
What is hidden hunger in the case of plants?
Hidden hunger refers to a situation in which a crop needs more of a given nutrient yet shows no deficiency symptoms. The nutrient content is above the deficiency-symptom zone but still below the level needed for optimum crop production.
https://nemeth.aphtech.org/lesson1.5
# Lesson 1.5: Equals Sign
## Symbol
= (equals)
⠨⠅
## Explanation
The equals sign is a sign of comparison. It is a two-cell symbol with dots four six in the first cell and dots one three in the second.
The equals sign, and all other signs of comparison, have a blank space before and after the symbol. If a numeral follows the equals sign, it must be preceded by the numeric indicator because there is a space between the equals sign and the numeral.
### Example 1
$3+7=10$
⠼⠒⠬⠶⠀⠨⠅⠀⠼⠂⠴
### Example 2
$7-2=5$
⠼⠶⠤⠆⠀⠨⠅⠀⠼⠢
### Example 3
$3+17-5=15$
⠼⠒⠬⠂⠶⠤⠢⠀⠨⠅⠀⠼⠂⠢
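The spacing and numeric-indicator rules of this lesson can be sketched in code. The digit and sign cells below are taken from the examples above; the cells for 4, 6, 8, and 9, which do not appear in any example, are assumed from the standard Nemeth dropped-digit pattern, and only this lesson's symbols (+, -, =) are covered.

```python
# Sketch of this lesson's rules for +, -, and = only. Cells for the
# digits 4, 6, 8, and 9 do not appear in the examples above and are
# assumed from the standard Nemeth dropped-digit pattern.
DIGITS = {'0': '⠴', '1': '⠂', '2': '⠆', '3': '⠒', '4': '⠲',
          '5': '⠢', '6': '⠖', '7': '⠶', '8': '⠦', '9': '⠔'}
NUMERIC_INDICATOR = '⠼'        # required at the start of each numeral
PLUS, MINUS, EQUALS = '⠬', '⠤', '⠨⠅'

def to_nemeth(expr):
    """Transcribe e.g. '3+7=10', spacing the equals sign and re-issuing
    the numeric indicator for the numeral that follows the space."""
    left, right = expr.replace(' ', '').split('=')
    def side(s):
        out = NUMERIC_INDICATOR
        for ch in s:
            out += DIGITS[ch] if ch.isdigit() else (PLUS if ch == '+' else MINUS)
        return out
    return side(left) + ' ' + EQUALS + ' ' + side(right)
```

Checked against Example 1: `to_nemeth('3+7=10')` yields ⠼⠒⠬⠶ ⠨⠅ ⠼⠂⠴, with ordinary spaces standing in for the braille space.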
https://jasss.soc.surrey.ac.uk/24/1/6.html
# Finding Core Members of Cooperative Games Using Agent-Based Modeling
Old Dominion University, United States
Journal of Artificial Societies and Social Simulation 24 (1) 6
<http://jasss.soc.surrey.ac.uk/24/1/6.html>
DOI: 10.18564/jasss.4457
Received: 18-Dec-2019 Accepted: 14-Dec-2020 Published: 31-Jan-2021
### Abstract
Agent-based modeling (ABM) is a powerful paradigm to gain insight into social phenomena. One area in which ABM has rarely been applied is coalition formation. Traditionally, coalition formation is modelled using cooperative game theory. In this paper, a heuristic algorithm, which can be embedded into an ABM to allow the agents to find a coalition, is described. Our heuristic algorithm combines agent-based modeling and cooperative game theory to help find agent partitions that are members of a game's core solution (if one exists). The accuracy of our heuristic algorithm can be determined by comparing its outcomes to the actual core solutions. This comparison is achieved by developing an experiment that uses a specific example of a cooperative game called the glove game. The glove game is a type of market economy game. Finding the traditional cooperative game solutions is computationally intensive for large numbers of players because each possible partition must be compared to each possible coalition to determine the core set; hence our experiment only considers games of up to nine players. The results indicate that our heuristic approach achieves a core solution in over 90% of the games considered in our experiment.
Keywords: Agent-Based Modeling, Cooperative Game Theory, Modeling and Simulation, ABM, Cooperative Games
### Introduction
Agent-based modeling (ABM) is a frequently used paradigm for social simulation (Axtell 2000; Epstein & Axtell 1997; 2008); however, there is little evidence of its use in coalition formation, especially strategic coalition formation. There are few models that explore coalition formation (Collins & Frydenlund 2018; Sie, Sloep, & Bitter-Rijpkema 2014) and even fewer that validate their results against an expected outcome (Abdollahian, Yang, & Nelson 2013). Cooperative game theory is often used to study strategic coalition formation, but solving games involving a significant number of players (agents) is computationally intractable (Chalkiadakis, Elkind, & Wooldridge 2011a). The ABM paradigm provides a platform in which simple rules and interactions between agents can produce a macro-level effect without the large computational requirements. As such, it has the potential to be an effective means for approximating cooperative game solutions for large numbers of agents. In this paper, we present one possible approach to approximating strategic coalition formation in an agent-based model. The model is validated through a comparison of its outputs to those from cooperative game theory.
We present an agent-based model (ABM) for finding core stable coalition structures; that is, coalition structures that are members of the core. The core is a stability solution concept used in cooperative game theory. Our model's algorithm tests a coalition structure in a number of different ways by considering other coalitions that could form. It tests mergers of coalitions, adding an agent to a coalition, dividing a coalition, removing an agent from a coalition, agents breaking away to form a new coalition, and the individual rationality of being in the coalition. Agents examine each potential new coalition to determine if their utility is improved; if the utility is improved for all agents, the new coalition is accepted. Based on the cooperative game theory underpinnings, these procedures allow for the efficient estimation of a stable coalition structure and bring a strategic component to the ABM.
To determine the effectiveness of our ABM's algorithm, an experiment was designed. The experiment uses a Monte Carlo approach that involves comparing the output of the algorithm to the actual core solutions of randomly generated games. The type of game considered is the glove game. The glove game is a simple type of market economy game that is commonly used to exemplify phenomena in cooperative game theory (Hart 1985). There are a number of computational issues that had to be overcome with this experimental approach, including finding the actual core of each generated game. Our results focus on how often our algorithm converges to the core solution.
The remainder of this paper is structured in the following manner. The background section provides an introduction to hedonic games, a discussion on the background of our ABM and its algorithm, and an introduction to the case study. The methods section describes our experimental approach, the "brute-force" method of solving cooperative games, and the experimental design. Finally, the results and conclusions are given with a summary of our work.
### Background
The research presented in this paper involves concepts from cooperative game theory, specifically the hedonic-game version of the glove game, a simple market economy game. In this section, we provide a brief overview of cooperative game theory, hedonic games, and the glove game. We then discuss attempts to approximate cooperative game theory concepts, i.e., core stability, in ABM. This background provides some of the theoretical foundations for the model used in this research, which is discussed in the methods section of this paper.
#### Cooperative game theory
Game theory is "the study of mathematical models of conflict and cooperation between intelligent, rational decision-makers" (Myerson 2013, p. 1). Game theory can be split into non-cooperative and cooperative games. Non-cooperative games usually involve only two rational players, and they are well studied (Eatwell, Milgate, & Newman 1987; Von Neumann & Morgenstern 1947). Prisoner's dilemma, named by Albert Tucker in 1950, is possibly the best known non-cooperative game (Flood 1952). Games involving more than two players are usually studied using n-person game theory, which is also known as cooperative games (Chakravarty, Mitra, & Sarkar 2015). In non-cooperative games, both parties know the potential payoff based on the choices made, but they do not have the ability to create a binding agreement before a course of action is selected. Cooperative games, on the other hand, are games in which players can enter into contracts or binding agreements that allow them to form strategic coalitions (Peleg & Sudhölter 2007).
Cooperative games are generally divided into two types: games with transferrable utilities (TU) (also known as characteristic function games) and games with non-transferrable utilities (NTU) (Thomas 2003). Characteristic function games allow a coalition to subdivide the payoff obtained by the coalition, in any possible way (Peleg & Sudhölter 2007); this introduces a two-stage problem for the game; namely, you need to determine which coalitions will form first and how those coalitions will subdivide the payoff amongst the coalition's members. In contrast, NTU games have fixed distributions of the payoff dependent upon the coalition; for example, a member of coalitions payoff might be their preference for being in one coalition and, as such, this payoff cannot be transferred to another player (e.g., I cannot give my joy of being admitted to an exclusive club to another member of that club). Hedonic games are a specific type of NTU game where players have a preference relation over all possible coalitions (Aziz & Savani 2016).
#### Hedonic games
Cooperative game theory provides an analytic framework for the study of strategic coalition formation (Chalkiadakis, Elkind, & Wooldridge 2011b). Given a situation where a decision-maker must evaluate which coalitions they wish to join, cooperative game theory can aid in this evaluation. Games that are expressed in terms of the decision-makers' (or players') preferences for different coalitions are called hedonic games. In game theory, the term "players" refers to agents; as such, we use these terms interchangeably throughout the paper. A common example of a hedonic game is the roommate problem (Aziz 2013; Gale & Shapley 1962). Here is a simple example of a classic roommate problem: Campus housing is expensive, so students might seek a roommate rather than live alone; a group of three students (A, B, and C) has found a two-bedroom apartment. Student A prefers to room with student B over student C. Student B prefers student C over student A. Student C prefers student A over student B. No student wants to share the apartment with two roommates as one would have to sleep in the living room. Each student demonstrates a preference for a different coalition leading to a cycle of dissatisfaction. Hedonic coalitions, the term originally credited to Dreze & Greenberg (1980), describes how and why various groups, clubs, and communities form and persist (Iehlé 2007). Technically, a coalition in an NTU game could have multiple choices of action to determine their payoff; a hedonic game is an NTU game when there is only one course of action for each coalition (Chalkiadakis et al. 2011b). Since payoffs are fixed for a hedonic game, solving is only a one-stage problem, i.e., you just need to determine which coalitions will form as a player's payoff is fixed for a given coalition.
The outcome of a cooperative game, including hedonic games, is a coalition structure and a payoff vector. A coalition structure is a partition of the players in the game, which is equivalent to a collection of disjoint coalitions that covers all players; the payoff vector is the utility received by the players. In a hedonic game, the utility, for a given player, is a function of the identity of the members in the coalition (Iehlé 2007). There are several ways the solution set of cooperative games can be defined. We will use core stability as our solution concept (Bogomolnaia & Jackson 2002). "A coalition structure is called stable if there is no group of individuals who can all be better off by forming a new deviating coalition. The core of a hedonic game is the collection of all stable coalition structures" (Sung & Dimitrov 2007). Determining the core of a hedonic game is NP-complete (Ballester 2004); for all but the smallest number of agents, it requires an unreasonable amount of computational time to solve. Additionally, if the core exists, it may contain any number of coalition structures. Therefore, a complementary problem for determining the core is to test whether a coalition structure is a member of the core. Testing core membership requires selection of a candidate coalition structure and determining whether the coalition structure fits the criteria to be a part of the core. Core membership testing is co-NP complete (Faigle et al. 1997; Sung & Dimitrov 2007); that is, it is equally as challenging to determine core membership as it is to find the core.
#### Mathematical notation of hedonic games
The formal structure of a hedonic game is defined as $$G = (N, \succeq_{1}, \dots, \succeq_{n})$$ where $$N = \{1, \dots, n\}$$ is the non-empty set of players in the game, and $$\succeq_{i}$$ is the preference relationship over possible coalitions containing $$i \in N$$. Chalkiadakis et al. (2011a) define the preference relation as a binary function with the properties of completeness, reflexivity, and transitivity. Completeness denotes that for every pair of coalitions containing $$i$$, $$c_{j}^{i}, c_{k}^{i} \subseteq N$$, there is a relationship of $$c_{j}^{i} \succeq_{i} c_{k}^{i}$$ or $$c_{k}^{i} \succeq_{i} c_{j}^{i}$$. Reflexivity represents the fact that $$c_{j}^{i} \succeq_{i} c_{j}^{i}$$. The transitivity property states that for $$c_{l}^{i} \subseteq N$$, if $$c_{j}^{i} \succeq_{i} c_{k}^{i}$$ and $$c_{k}^{i} \succeq_{i} c_{l}^{i}$$, then $$c_{j}^{i} \succeq_{i} c_{l}^{i}$$. The outcome of a hedonic game is a coalition structure. A coalition structure is a partition of all players of the game; that is, it is a set of coalitions that contains every player in exactly one coalition; the union of the coalitions in the coalition structure is $$N$$, and the intersection of any pair is the null set. The number of possible coalition structures is combinatorial in the number of players $$n$$. For example, a game with 15 players results in over 1.3 billion possible coalition structures (Rahwan et al. 2009). Since a coalition structure is equivalent to a partition, the number of coalition structures is a Bell number (Bell 1938).
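Since the number of coalition structures is a Bell number, the 15-player figure quoted above can be checked with a few lines of code, here a sketch using the Bell-triangle recurrence:

```python
# Bell-triangle recurrence: B(n), the number of partitions of n players,
# is the last entry of the n-th row of the triangle.
def bell(n):
    row = [1]
    for _ in range(n - 1):
        nxt = [row[-1]]     # each row starts with the previous row's last entry
        for v in row:
            nxt.append(nxt[-1] + v)
        row = nxt
    return row[-1]
```

`bell(15)` evaluates to 1,382,958,545, matching the "over 1.3 billion" coalition structures cited for a 15-player game.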
Not all coalition structures are equally desirable. Game theorists have developed solution concepts for cooperative games that identify sets of coalition structures with distinct properties. Bogomolnaia & Jackson (2002) define solution concepts for hedonic games: core stability, individual stability, Nash stability, and contractually individual stability.
We focus on core stability; a coalition structure that is core stable is called a core partition (Banerjee, Konishi, & Sönmez 2001). The foundation for the core stability concepts lies in Gillies (1959) definition of the core. The core partition is the set of coalition structures that are not dominated by any other coalitions. A coalition $$C \subseteq N$$ dominates (or blocks) a coalition structure $$CS$$ if, for all $$i \in C$$, $$C \succ_{i} CS_{i}$$, where $$CS_{i}$$ is the coalition in $$CS$$ that contains player $$i$$ (Chalkiadakis et al. 2011a). The core partition, therefore, represents the set of coalition structures in which there is no incentive for any subset of players to create a new coalition. The core partition "has the same requirement as the core with coalition structure studied in standard cooperative game" (Iehlé 2007); hence we use the terms interchangeably within this paper. A game is core stable if it has a non-empty core.
There are a number of other solution concepts that could have been used, namely individual stability, Nash stability, and contractual individual stability (Bogomolnaia & Jackson 2002). A coalition structure is individually stable if there is no coalition in which all members, including the prospective new player, would be better off by having that player join. A coalition structure is Nash stable if no player would want to join an alternative coalition in the current coalition structure or to form a coalition on their own. Contractual individual stability exists if there is no coalition change that a player can make that is preferable with respect to both the coalition the player leaves and the coalition the player joins (Bogomolnaia & Jackson 2002; Chalkiadakis et al. 2011a).
The solution concepts provide an approach to evaluating coalition formation. Foundationally, they provide the analytic framework for determining which coalition structures are stable. However, a challenge remains. There is a finite number of coalitions, $$2^{n}-1$$, but the number of coalition structures, and hence the number of comparisons required, is computationally infeasible for a large number of agents. From our example above, a game of 15 players has about 33,000 possible coalitions but 1.3 billion potential coalition structures; determining the core of such a game requires 43 trillion comparisons. This would take 3.5 hours on a single 3.50 GHz core, assuming each check can be completed in a single computational step (which is unlikely). If there were 20 agents, this would take 500 years. Determining the core of a hedonic game with arbitrary preferences, i.e., preferences that allow ties, is NP-complete (Ballester 2004). Hence, the analytical solution of the core is computationally intensive, and stochastically guessing whether a coalition structure is or is not a member of the core is akin to finding a needle in a haystack; hence the desire for heuristic methods. This computational intractability of solving cooperative games is one of the driving forces behind our agent-based model and its embedded coalition formation algorithm. Our agent-based simulation runs in less than an hour.
However, our approach is a heuristic, and there is no guarantee that it will find a core partition, if one exists, for a given game. The purpose of this paper is to outline an experiment to understand how effective our algorithm is at finding a core partition. To this end, we focus on a use case that we analyze to determine the effectiveness of our ABM. Glove games are the underlying game type of our use case.
### Use Case – The Glove Game
Our use case is a modified version of a simple market economy called the glove game (Hart 1985). This game was chosen because of its simplicity, the core is known to exist, and it can be represented as a hedonic game; however, for our purposes, we represent the glove games using players' payoff allocations as opposed to preferences for ease of understanding. In the glove game, players attempt to create pairs of gloves. Each player is endowed with initial resources – a random number of left gloves and a random number of right gloves. Players form coalitions and determine the number of pairs of gloves that are created by their coalition, the coalition’s value. This value is then divided equally between the players to determine each player’s payoff or preference for the coalition. Each player attempts to maximize their individual payoff. Since the payoff for each player is fixed, this is an NTU game.
Figure 1 is an example of the glove game with three players: a, b, and c. The first table shows the initial resources of each of the players. The second table enumerates each possible coalition and the utility each player in the coalition receives if that coalition is formed; the third table lists all possible coalition structures and the individual values a player would receive in each. Examining each possibility shows that player b receives the highest value when not combining with any other player. Without the possibility of forming a coalition with b, players a and c both obtain the greatest value by forming a coalition. Thus, [(a, c) (b)] is a core partition, and it is the only coalition structure in the core.
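Figure 1's endowment tables are not reproduced in this extract, so the sketch below assumes an illustrative endowment (a holds one left glove, b one of each, c one right glove) that is consistent with the outcome described: a brute-force search over all partitions then recovers [(a, c) (b)] as the only core partition.

```python
from itertools import combinations

# Figure 1's endowments are not reproduced here; these are an
# illustrative choice consistent with the outcome described in the text.
gloves = {'a': (1, 0), 'b': (1, 1), 'c': (0, 1)}   # (left, right)

def payoff(coalition):
    left = sum(gloves[p][0] for p in coalition)
    right = sum(gloves[p][1] for p in coalition)
    return min(left, right) / len(coalition)       # pairs shared equally

def partitions(players):
    """Enumerate all coalition structures over the given players."""
    if not players:
        yield []
        return
    first, rest = players[0], players[1:]
    for sub in partitions(rest):
        for i, block in enumerate(sub):
            yield sub[:i] + [block + [first]] + sub[i + 1:]
        yield sub + [[first]]

def blocked(structure, coalition):
    # A coalition blocks a structure if EVERY member is strictly better off.
    current = {p: payoff(b) for b in structure for p in b}
    return all(payoff(coalition) > current[p] for p in coalition)

players = ['a', 'b', 'c']
coalitions = [list(c) for r in range(1, 4) for c in combinations(players, r)]
core = [s for s in partitions(players)
        if not any(blocked(s, c) for c in coalitions)]
```

For these endowments, `core` contains exactly one structure, {a, c} together and {b} alone, matching the text's conclusion.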
The glove game has had a number of applications over the years (Hart & Kurz 1983); it is also known as the shoe game (Murnighan & Roth 1977). Hart & Kurz (1983) and Hart (1985) used the glove game to exemplify the different solution mechanisms of cooperative game theory. It has also been used in human experiments in Murnighan & Roth (1977) and Murnighan & Roth (1980). In Murnighan and Roth's first experiment, they only considered three human players and were looking at the effects of communication openness; they concluded that if a player in a weak situation controls the flow of information, then they can increase the payoff they receive; this is an expected result. In their second experiment, they considered up to seven players in the "shoe" game; this time, they looked at the impacts of veto power; their experiment involved 250 participants. We believe that this paper represents the first time the glove game has been used in an agent-based model.
#### ABM heuristic coalition structure selection
Coalition formation represents the willingness of players to interact and/or cooperate with one another. Cooperative game theory provides a strategic approach to evaluating coalition formation, but it is computationally infeasible for large numbers of players. Agent-based modeling (ABM) provides a platform for evaluating interactions of many heterogeneous agents but is often found to be lacking the strategic component (Collins & Frydenlund 2018). Agents can be considered players and vice versa; throughout this paper, we use these terms interchangeably. Combining the strategic nature of cooperative games with the interaction paradigm of ABM will create a more computationally efficient way in which to explore coalition formation. Game-theoretic results can be enhanced by ABM (De Marchi & Page 2008).
ABM is a modeling paradigm particularly well suited to handle the interactive nature of coalition formation and to, stochastically, surmise a feasible coalition structure. It is a computational method in which researchers can build and analyze models comprised of agents (players) that interact in an environment (Gilbert 2008). While cooperative game theory mathematically evaluates the formation of every possible coalition, our ABM approach stochastically explores coalitions generated from agent interactions. The benefit of exploring stochastically-generated coalitions is the reduction of computational complexity. Generating every possible coalition structure for cooperative game theory analysis using dynamic programming has a computational complexity of O(3^n) (Rothkopf, Pekeč, & Harstad 1998; Yeh 1986), with the actual finding of the core set being NP-complete for hedonic games (Ballester 2004), while our ABM approach is linear per time-step, so the computational complexity is determined by the number of time-steps allowed. We implement six basic coalition formation behaviors, with each selecting one player as the focal point, thus reducing the computational complexity by not tying the algorithm run time to an exhaustive search. In this paper, we validate the success of our algorithm by solving the core of cooperative games using a 'brute-force' method, then comparing the results to those generated by the ABM algorithm for the same games. Our ABM algorithm produces core members over 95% of the time.
Both cooperative game theory and agent-based modeling involve modeling multiple agents that are interacting; as such, it might be expected that there would be an overlap of the approaches, i.e., hybrid modeling (Mustafee et al. 2015). However, there is very little in the literature showing a combination of these modeling methods. Abdollahian et al. (2013) incorporated cooperative game theory concepts into an agent-based model of location sites for sustainable energy; they employ the Shapley value (Shapley 1953) as their solution concept, whereas we use the core (Gillies 1959). Ohdaira & Terano (2009) discuss cooperation in an agent-based simulation of the classic Prisoner’s dilemma, but they are focused on non-cooperative game theory. Sie et al. (2014) discuss coalition formation in the context of social networks; they employ the Shapley value. Of the extant literature, the work presented in this paper relates to Collins & Frydenlund (2018), who created a hybrid model of cooperative game theory and agent-based modeling; their work will be discussed later in this paper.
Janovsky & DeLoach (2016) claim to have found a heuristic for solving cooperative games using agent-based modeling. Their approach focuses on a particular form of the characteristic function used in transferrable utility (TU) games. Our approach focuses on hedonic games, which are a type of non-transferrable utility (NTU) game.
The brute-force algorithm, described in the next section, allows us to generate the entire core set of a hedonic game from which we can ascertain if the ABM produced a coalition structure that is a member of that core. The drawback of the brute-force method is that it can only be done for a limited number of agents due to computational requirements. Hence our approach is limited to around nine agents.
### Methods
We conducted a Monte Carlo computational experiment to determine the effectiveness of our ABM approach to coalition formation. To determine effectiveness, we check whether the output (i.e., the final coalition structure) generated by our ABM is a member of the core partition set. Effectiveness is the percentage of trials for which this is true. Each trial represents a single simulation run of a glove game.
Each instance of the glove game was created by varying the number of players and randomly generating glove allocations. For each game, its core partition set was determined using a brute-force computational approach. The game was then input into our ABM, and multiple stochastic runs were completed. This section first describes the ABM and then the brute-force algorithm.
#### Agent-based model
An agent-based model was constructed to replicate coalition formation in a glove game. A full description of our model is given in Appendix A; this description is written following the guides of the Overview, Design Concepts, and Details (ODD) protocol, which is advocated as a standard way to describe ABM (Grimm et al. 2020). The full Netlogo models can be found at Collins (2020). This section briefly describes the major mechanism of our model, including how coalitions are suggested.
Interactions, or opportunities to form coalitions, can occur between individual players, between existing coalitions, between players and existing coalitions, or within coalitions. Rules of coalition formation within the interaction platform are defined by the hedonic game solution concept: for a new coalition to form, each player in the coalition must prefer the new coalition to its existing coalition. In this way, each coalition change should result in a more stable coalition structure than its predecessor. This, theoretically, should produce a coalition structure that is a member of the core, if the core is non-empty, without an exhaustive search of the solution space. The heuristic presented here advances the coalition formation algorithm developed by Collins & Frydenlund (2018).
Our ABM algorithm is strategic in its determination of whether to change coalitions, i.e., agents will only join a newly formed coalition if it increases their utility; however, it is also a heuristic as it stochastically selects which coalitions to test. Our algorithm consists of six routines for choosing a coalition to test: (1) Join coalitions; (2) Exit coalition; (3) Create pair coalition; (4) Defect coalition; (5) Split coalition; and (6) Return to an individual coalition. Each of these routines is discussed in turn.
1. Join coalition: Two agents from different coalitions are chosen randomly. The payoffs of the merged coalitions are calculated. If the payoffs improve for all members of both coalitions, a new coalition is formed, which is the merged coalition.
2. Exit coalition: An agent from a coalition whose size is greater than one, i.e., not a singleton coalition, is randomly selected. The payoff of the coalition minus the agent is calculated. If all agents in the remaining coalition improve their payoff by removing the selected agent, the agent is removed from the coalition and forms a singleton coalition.
3. Create a pair coalition: Two agents are randomly selected. The payoff for the coalition containing just both agents is calculated. If the payoff of both agents is improved in this new coalition, both agents exit the current coalition in favor of the new one.
4. Defect coalition: A randomly chosen agent selects a coalition to which he does not belong. If joining this coalition improves his payoff and the payoff of all members of that coalition, the agent defects from his current coalition and joins the new coalition.
5. Split coalition: A coalition is randomly chosen, and a random subset of agents from the coalition is selected to form a separate coalition. If the members of the new coalition improve their payoff, or the members of the remaining coalition improve theirs, the coalition splits into the two coalitions.
6. Return to an individual coalition: An agent is randomly chosen. If that agent would be better off on their own, i.e., they prefer the singleton coalition to their current coalition; then, they leave their current coalition to form the singleton coalition. This is known as the individual rationality concept (Thomas 2003).
Players in the game are assumed to be self-interested and individually rational (Chalkiadakis et al. 2011a); that is, they are driven by achieving their preferred coalition and will not join a coalition that does not at least provide satisfaction equal to what they receive being on their own. The final routine ensures that players will only remain in a coalition that is preferable to being alone. In our simulation runs, all agents start on their own.
The six routines are iterative; they are repeated until the coalition structure becomes stable. Normally, stable would be defined as no additional changes occurring to the coalitions for a certain number of time-steps; however, in our experiment, the number of time-steps run is fixed for consistency between trials. The repetition allows players to attempt to migrate into higher preferred coalitions. However, the algorithm arriving at a stable state with respect to coalition structure does not guarantee core membership. It should be noted that not every player and coalition is considered at each time-step, which is a departure from the algorithm from which ours is based because it considered every player at every time-step (Collins & Frydenlund 2018).
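To make the iterative scheme concrete, the following Python sketch implements two of the six routines (join and return-to-singleton) over a coalition structure held as a list of sets. The data layout and function names are our own illustration, not the paper's NetLogo implementation; note that because a coalition's value is split equally, comparing coalition-level payoffs is equivalent to checking every member's payoff.

```python
import random

def payoff(coal, left, right):
    # Per-member payoff in the glove game: pooled glove pairs, split equally.
    pairs = min(sum(left[a] for a in coal), sum(right[a] for a in coal))
    return pairs / len(coal)

def coalition_of(parts, agent):
    return next(c for c in parts if agent in c)

def try_join(parts, left, right):
    # Routine 1: merge the coalitions of two random agents if every
    # member of both coalitions strictly gains.
    a, b = random.sample(range(len(left)), 2)
    ca, cb = coalition_of(parts, a), coalition_of(parts, b)
    if ca is cb:
        return
    merged = ca | cb
    if all(payoff(merged, left, right) > payoff(c, left, right) for c in (ca, cb)):
        parts.remove(ca)
        parts.remove(cb)
        parts.append(merged)

def try_singleton(parts, left, right):
    # Routine 6: an agent leaves for a singleton coalition if it prefers one.
    a = random.randrange(len(left))
    ca = coalition_of(parts, a)
    if len(ca) > 1 and payoff({a}, left, right) > payoff(ca, left, right):
        ca.remove(a)
        parts.append({a})

def run(left, right, steps=1000, seed=0):
    random.seed(seed)
    parts = [{a} for a in range(len(left))]  # all agents start on their own
    for _ in range(steps):
        try_join(parts, left, right)
        try_singleton(parts, left, right)
    return parts
```

With two complementary agents (one holding a left glove, one a right), the singletons merge into a pair and stay merged, since the singleton payoff of zero never dominates the shared pair value.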
Collins and Frydenlund’s algorithm was never formally compared to the actual cooperative game theory core set, though a preliminary analysis shows that it is not proficient at achieving core partitions (in some cases, it reached the core solution only 4% of the time, vastly worse than the results presented for our new algorithm). It examines all agents at every time-step and only considers subgroup splits and supergroup formation. This original algorithm was applied to several areas, including economic minority games (Collins 2017, 2019), generic neighborhood alliances (Collins & Frydenlund 2016b), and refugee movement (Collins & Frydenlund 2016a). Our algorithm is designed with the intent of creating a coalition structure that is a member of the core, i.e., producing results comparable to those found by cooperative game theory methods. We, therefore, validate our algorithm by comparing it to a computational “brute force” mechanism for determining the core. Validation of agent-based models is generally challenging; several methods are discussed in Gore, Lynch, & Kavak (2016); Klügl (2008); Windrum et al. (2007); Xiang et al. (2005). For our purpose, black box validation, i.e., comparison of inputs and outputs between models, suffices.
#### Brute force algorithm
To determine the accuracy of our heuristic algorithm, we must first solve the games using another method so that the solutions of the two approaches can be compared. To do this, we use the brute-force method. A naïve or brute-force method for determining the core requires three primary steps: (1) generating every possible coalition structure; (2) generating and evaluating every possible coalition; and (3) comparing every possible coalition to each coalition structure to determine if the coalition blocks the coalition structure. The number of potential coalitions that can be formed is $$2^n - 1$$; the number of possible coalition structures for $$n = 20$$ agents exceeds 51 trillion, for example (Rahwan et al. 2009).
The brute-force algorithm is used for two reasons: (1) the complete core is guaranteed to be found; (2) the complete individual checks allow for simpler verification of the algorithm. The downside is that the number of agents for which the algorithm can be used is limited. By the nature of the glove game, the core is guaranteed to exist, i.e., the core is a non-empty set for glove games.
A description of the brute force algorithm is given below. We have chosen not to use the ODD protocol to describe it because we feel a simpler format is adequate. The algorithm has several steps to finding the core.
##### 1. Generate all possible coalition structures.
Djokić et al. (1989) present an iterative algorithm for generating all possible set partitions; its pseudocode is given in Figure 2. This algorithm was implemented in a Python program for coalition structure generation.
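For illustration, a compact recursive enumeration of all set partitions in Python; this is not the Djokić et al. iterative algorithm of Figure 2, just a sketch that produces the same set of coalition structures:

```python
def partitions(agents):
    """Yield every partition (coalition structure) of the agent list."""
    if not agents:
        yield []
        return
    first, rest = agents[0], agents[1:]
    for part in partitions(rest):
        # Place `first` into each existing block in turn...
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        # ...or into a new singleton block.
        yield [[first]] + part
```

The number of partitions yielded for n agents is the Bell number B(n), e.g., 15 for four agents and 52 for five, which grows super-exponentially and explains why the brute-force approach is limited to small games.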
##### 2. Generate all possible coalitions and determine the payoff for each.
There are $$2^n - 1$$ possible coalitions, which are generated iteratively using their binary representations. Agent membership in a given coalition is determined by a Boolean value: each “1” bit signifies that the corresponding agent is a member of coalition C. The payoffs are then calculated for each coalition.
In a hedonic game, an agent's marginal contribution is not a factor: the payoff for each agent is determined purely by which agents are members of its coalition. For the glove game, the payoffs are determined by the number of pairs of gloves the coalition holds. The payoff division amongst the agents is a function of the cardinality of agent i’s coalition (Ci) and agent i’s utility function. The payoff function, for a given agent i in coalition Ci, is defined as follows:
$$u_{i} (C_{i}) = \frac{min (\sum_{x \in C_{i}}L(x),\sum_{x \in C_{i}}R(x))}{|C_{i}|}$$
L(x) stands for the number of left gloves agent x has, and R(x) is the number of right gloves. A detailed discussion on the glove game is given below.
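The bitmask enumeration of step 2, combined with the payoff division above, can be sketched in Python as follows (variable names are ours):

```python
def glove_payoff(mask, L, R):
    # Per-member payoff u_i(C_i): pooled glove pairs divided by |C_i|.
    members = [i for i in range(len(L)) if mask >> i & 1]
    pairs = min(sum(L[i] for i in members), sum(R[i] for i in members))
    return pairs / len(members)

def all_coalition_payoffs(L, R):
    # Enumerate all 2^n - 1 non-empty coalitions; bit i of `mask` set
    # means agent i is a member, as in step 2.
    return {mask: glove_payoff(mask, L, R) for mask in range(1, 2 ** len(L))}
```

For a two-agent game with endowments L = [1, 0] and R = [0, 1], the two singleton coalitions (masks 1 and 2) earn nothing, while the pair coalition (mask 3) forms one glove pair worth 0.5 per member.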
##### 3. Test each coalition structure against every coalition payoff to determine if the coalition structure is blocked.
Determining if a coalition structure (CS) is blocked is a simple but computationally expensive process, depicted in Figure 3. The approach requires three nested loops. The outer loop is structured as in step 2, executing $$2^n - 1$$ times, with the number of each pass converted to a binary value representing the agents in the coalition (C). A second (nested) loop then iterates through every possible CS. If the CS is not yet blocked, a third (nested) loop iterates over the number of agents (the length of the binary string) to test whether the agent is a member of C and whether the payoff the agent receives in C is greater than the payoff the agent receives under CS. If all agents in the tested C can improve their position over the CS, then CS is marked as blocked.
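The blocking test reduces to a simple comparison once the payoffs are precomputed. The following sketch assumes steps 1-2 have already produced the coalition structures (each with a per-agent payoff map) and the coalition payoffs; the data shapes are our own illustration:

```python
def blocks(members, coal_pay, agent_pay):
    # Coalition C blocks structure CS if every member of C strictly
    # prefers its payoff in C to its payoff under CS.
    return all(coal_pay > agent_pay[a] for a in members)

def core_partitions(structures, coalitions):
    # Steps 3-4: keep every coalition structure not blocked by any coalition.
    # `structures`: list of (label, {agent: payoff under that structure}).
    # `coalitions`: {member tuple: per-member payoff in that coalition}.
    return [label for label, agent_pay in structures
            if not any(blocks(m, p, agent_pay) for m, p in coalitions.items())]
```

In the two-agent example with one left and one right glove, the all-singletons structure is blocked by the pair coalition (0.5 > 0 for both agents), so only the grand coalition survives into the core.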
##### 4. Store all coalition structures that are not blocked in the core file.
A final pass is made through the coalition structures: any CS that is not blocked is part of the core. The core of the game is static. Execution of the brute-force algorithm provides a complete listing of the coalition structures in the core. These outputs are used to determine the effectiveness of the ABM by providing the solution set against which the ABM results are compared.
#### Experimental design
The ABM was deployed in NetLogo, an agent-based simulation development platform (Wilensky 1999). For each game instance, each agent was randomly assigned between zero and nine left gloves and between zero and nine right gloves. Each game involved three to nine agents; for each game size, ten unique games were designed, and a complete list is given in Appendix B. Due to the stochastic nature of the algorithm, each game was run 50 times (so a total of 3,500 runs were completed). Each game is executed for 100,000 time-steps to ensure convergence, if possible. In every game, each agent starts in the grand coalition. Collins and Frydenlund's algorithm was converted to execute the glove game under the same circumstances to test whether the new algorithm improved upon the original. Both models were executed and compared to the results from the brute-force algorithm to see if the agents had found a core partition.
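The experimental grid described above can be sketched in Python as follows; `random_glove_game` and `trial_grid` are our own stand-ins for the NetLogo setup, and the actual 100,000-step simulation run is not shown:

```python
import random

def random_glove_game(n, rng):
    # One game instance: each of the n agents gets between zero and nine
    # left gloves and between zero and nine right gloves.
    return ([rng.randint(0, 9) for _ in range(n)],
            [rng.randint(0, 9) for _ in range(n)])

def trial_grid(sizes=range(3, 10), games_per_size=10, runs_per_game=50, seed=1):
    # 7 game sizes x 10 games x 50 stochastic runs = 3,500 trials; each
    # trial would then be handed to the ABM for 100,000 time-steps.
    rng = random.Random(seed)
    trials = []
    for n in sizes:
        for g in range(games_per_size):
            left, right = random_glove_game(n, rng)
            for r in range(runs_per_game):
                trials.append((n, g, r, left, right))
    return trials
```

Seeding a dedicated `random.Random` instance keeps the ten games per size fixed across the fifty repeated runs, so only the ABM's internal stochasticity varies between trials of the same game.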
The exact nature of the search space is not clear; however, the brute force algorithm considers all possible coalition structures, so optimality is ensured; that is, it guarantees that the core is found.
### Results and Analysis
This section discusses the results of the simulation runs. The results focus on the comparison of the effectiveness of the two algorithms (the original Collins-Frydenlund algorithm and the new one presented in this paper). Figure 4 shows the percentage of times the original Collins-Frydenlund algorithm reached a coalition structure that is a member of the core solution set (i.e., a core partition). Figure 5 shows the results of our algorithm. A summary of the key differences is given below:
| | Collins-Frydenlund Algorithm | New Algorithm |
|---|---|---|
| Overall average convergence to a core partition across simulation trials | 69% | 96% |
| Run-time speed, using Collins-Frydenlund as base | 1 | 0.5 |
The new algorithm significantly outperforms the old algorithm, approximately halving the run-time (run-time varies depending on the number of agents and the computer used). The new algorithm also has a high rate of finding a core member as its solution. Overall, 96.1% of the games resulted in finding a core member under the new algorithm; that is, out of the 3,500 games executed, the resulting coalition structure was a member of the core 3,363 times. Games with three, four, and seven agents always found a core member; 488 out of 500 five-agent games found a core member; 498 of the six-agent games resulted in a core coalition structure; and 499 of the nine-agent games achieved a core result.
The only concerning result from this experiment is the eight-agent games, which significantly underperformed: almost 25% of the time (122 out of 500), the eight-agent game failed to conclude with an undominated coalition structure. The details of these games are shown in Figure 6. An examination of the games that failed to reach a core solution showed that they had reached a local maximum. The algorithm used for the ABM belongs to a group known as “greedy algorithms”: these make locally optimal decisions in the hope of reaching a global solution, but the global solution is never explicitly represented or considered. It is therefore possible for a coalition to reach a value at which it is unwilling to reconsider, given the rules for a change; that is, the game stalls at a local maximum. The following table, and the resulting discussion, attempts to explain why this occurred with an example.
The first part (a) shows the initial resources for each of the eight agents. The core coalition section of the second part (b) lists the coalition structures in the core and their payoff values. Obviously, each player would like to obtain the highest payoff. Part (b) also contains an example of a “sub-optimal” coalition structure that was achieved by the algorithm. To understand why a core partition was never achieved, we need to consider the linchpin coalition. A linchpin coalition is a coalition that occurs in all core partitions; in the game under consideration, it consists of agents two, three, and four. Technically, the singleton coalition of agent seven is also a linchpin coalition, as is agent six’s singleton coalition. Thus, for a core partition to evolve under our algorithm from a given coalition structure, the linchpin coalition must be able to form. Unfortunately, in the example given above, this is not possible from the coalition structure achieved, because of a limitation of the algorithm.
The coalition consisting of agents two, three, and four represents the highest payoff value those agents could realistically achieve, and it would be expected that they would always end up together. However, during the simulation, a coalition consisting of agents one and four forms before the two-three-four combination can be achieved. Neither agent one nor agent four improves their position by isolating themselves, and neither improves their position by joining any coalition from a core coalition structure; therefore, neither is willing to change coalitions. Their pair coalition also does not benefit from merging with any other existing coalition. Hence, under our algorithm, there is no possibility that the coalition of players one and four will change. In fact, there is no incentive for any coalition in the coalition structure to change, given the rules of the algorithm. This leads to a local maximum being reached.
The local maximum is not a member of the core and is purely an artifact of the algorithm. This is due to the pair-wise nature of the algorithm; that is, at most two agents or coalitions are considered at any one time. For the linchpin coalition to form from the local maximum, the three agents two, three, and four would need to be considered at the same time, within a single algorithm step, which is not possible because they sit in three different coalitions. Our heuristic algorithm is, in this respect, limited. However, the results demonstrate that it reliably arrives at stable coalition structures, though these may or may not be in the core.
Why this phenomenon occurs regularly in games of eight players is not clear. We have chosen not to speculate here and leave this investigation to further research. This problem of being trapped in a local maximum could be overcome by using approaches like simulated annealing (Van Laarhoven & Aarts 1987) or swarm optimization (Shi 2001), which we also leave for further research as the exact nature of the search space is not clear. However, we suspect that the search space is highly non-linear.
These results demonstrate that the ABM is effective in achieving a stable coalition structure in which the players have no incentive to move to another coalition (at least under the algorithm's restrictions). However, it does not identify every member of the core set; that is, not every possible core partition was reached by the algorithm. The ABM results tended to be biased towards one core partition over all others in the core. Figure 7 shows that only 36% of the possible core partitions were achieved. Further, there is a significant decrease in the average percentage of possible coalition structures reached when the number of players increases from 6 to 7. This is most likely due to the size of the core increasing with little to no change in the number of unique coalition structures achieved. Specifically, for the ten games with six players, there are a total of 61 coalition structures in all their cores; this increases to 174 when there are seven players. However, the number of unique coalition structures reached in the simulation is 15 and 16 for 6 and 7 players, respectively.
We cannot conclude that certain coalition structures are unreachable by our algorithm, because only a limited number of runs were completed for each game, i.e., fifty per game. Further research could examine whether the percentage of covered core partitions increases significantly with the number of runs. If it does not, then another interesting direction of research is to investigate the properties of the reached core partitions; for example, do they hold the trembling-hand property required for a unique Nash Equilibrium to be selected in normal-form game theory (Harsanyi & Selten 1988)? The benefit of the ABM algorithm is the ability to determine, with computational ease, a coalition structure that is core stable. The predetermined run-length of the simulation ensures that resolution occurs within polynomial time, and the algorithm is shown to be effective over 96% of the time overall. The limitation of the algorithm is the narrowness of its solution set and its potential to become locked into a local maximum.
### Discussion
Beyond the reasons behind the non-convergence of the trial runs, there are a number of other concerns with our algorithm and our experimental approach. These concerns include the focus on only one game type, one solution mechanism, the hyperparameter impact, and the applicability of the algorithm. We will discuss each of these concerns in turn.
The experiment only considers one type of hedonic game, namely the glove game. This narrow focus limits the generalizability of our claims about the algorithm. Determining the performance of our algorithm for a generic hedonic game is the next step for the research. Our intent is to apply the algorithm to randomly-generated hedonic games with strict preferences. We have already begun by creating a generic hedonic game generator, a brute-force solver, and a succinct matrix-form representation for hedonic games (Collins, Etemadidavan, & Khallouli 2020). If this new research is fruitful, we will then apply the algorithm to several standard cooperative games for insight.
Since the principles of our algorithm design are based around the core, the algorithm is not expected to perform well at finding other solution concepts unless they coincide with the core, e.g., if the core exists, the nucleolus is a member (Schmeidler 1969). However, there are some interesting phenomena that could be explored in future work, e.g., if the core does not exist, does the algorithm reach the nucleolus? This research is left to future work.
There are not any explicit hyperparameters in the algorithm; however, there are several implicit hyperparameters. For example, the ordering of the coalition searches and the number of each search type each round. We believe that understanding the effects of these hyperparameters is the next step for improving the effectiveness of the algorithm.
Finding core members in a hedonic game is useful in several disciplines and domains, including political science, ecology, and economics. The algorithm is intended to aid researchers that would like to apply the principles of cooperative game theory without its computation complexity. Additionally, using an ABM algorithm provides a simple means to incorporate different variables into the players’ utility or preference structure without the consequence of trying to solve, analytically, a more complex game.
A possible example application of our algorithm can be found in the systems engineering domain. Grigoryan & Collins (2020) suggest that combining agent-based modeling and cooperative game theory could help in determining the performance of large design programs, which include multiple interacting teams. Agent-based modeling is emerging as a new approach to modeling design teams (Grogan & de Weck 2016; Lapp, Jablokow, & McComb 2019), and our algorithm has the potential to account for informal strategic group formation, especially when dealing with Multiple Team Membership (Mortensen, Woolley, & O’Leary 2007).
Another potential application is the advancement of social simulations. For example, Epstein’s famous Sugarscape model predetermines group membership (Epstein & Axtell 1996); if our algorithm was incorporated into Sugarscape, then maybe groups could be organically developed as in Collins & Frydenlund (2018).
### Conclusion
There are often comparisons made between the value of game theory and simulation in social sciences (Balzer, Brendel, & Hofmann 2001). However, agent-based simulation can be used as a complementary process to cooperative game theory. Using the game-theoretic structure and solution concepts, we designed an algorithm that produces a stable coalition structure in over 96% of the runs during our experiment, which involved the glove game with varying numbers of players. This was tested against a computational “brute-force” algorithm that determines the entire core, i.e., all stable coalition structures of a game.
While the brute-force method determines the complete core, it is computationally intractable for a large number of players; the ABM algorithm requires significantly less computational resources. However, there are limitations to the algorithm. The ABM solutions only accounted for 35% of the core partition solutions, with a sharp decrease in this number as the number of players increased. This is most likely due to the structure of the algorithm, which tends towards local maxima and only makes pair-wise comparisons. In computing terms, our solution is a “greedy algorithm”, which maximizes its value at each step, thus focusing purely on exploitation over exploration of the solution space. Our algorithm mimics traditional ABM structures in that it is a bottom-up design without centralized coordination. However, this can prevent the formation of a better coalition if the individuals do not immediately prosper (as demonstrated in the eight-player example given in the Results section). One further limitation is that the algorithm will not recognize when the core is empty. Determining whether the core of a hedonic game is empty is NP-complete (Ballester 2004). Our greedy algorithm will always create a coalition structure even when the core is empty.
Future work on this algorithm will examine ways to reach a greater number of coalition structures with core members. The limited coverage of core members creates a bias and limits its use. Expanding the scope of core members achievable by the algorithm will represent a significant advancement. However, understanding why the algorithm is limited to certain types of core partitions might give insight into the nature of those core partitions. Another piece of future work is to consider more general hedonic games than the glove game presented here.
### Model Documentation
The agent-based model was written and simulated in NetLogo (Wilensky 1999) and is able to be run in the latest release (6.1.1). A detailed description of the model is provided in Appendix A, using the ODD protocol (Grimm et al. 2020). The different games’ descriptions, used for input into the experimental runs, are given in Appendix B. The brute force algorithm was created in Python. The original random-number-generator (RNG) seeds were not recorded at the time of the experiment. Copies of the Netlogo models used in the paper can be found at https://www.comses.net/codebases/59dda8fe-b0c0-4703-ac95-c0fc57bd48d3/releases/1.0.0/.
### Appendix
#### A. ODD Protocol
This appendix gives an ODD (Overview, Design concepts, Details) description of our ABM, specifically, the coalition formation algorithm. The ODD protocol is used for describing agent-based models (Grimm et al. 2006), and it was updated by Grimm et al. (2020). Other descriptive approaches to representing ABM exist (Bersini 2012), but the authors felt that ODD was the most appropriate for this context (Collins, Petty, Vernon-Bido, & Sherfey 2015).
##### Purpose and patterns
The purpose of the model is to generate coalition structures of different glove games, using a specially designed algorithm. The coalition structures are later analyzed by comparing them to core partitions of the game used. Core partitions are coalition structures where no subset of players has an incentive to form a new coalition.
Patterns: The coalition structures generated by the simulation are expected to converge to a core partition if one exists for the given game. The simulation is the execution of the model over time.
##### Entities, state variables, and scales
The simulation model is a representation of a cooperative game theory game known as a glove game. The glove game involves players combining their endowment of gloves to create pairs, which they can sell. The focus of the game is which coalitions of players form. The main agents of this model are the players. The gloves are not considered agents since they are inert.
Agents: players
Environment: Abstract social environment where all agents are assumed to be able to communicate with each other with complete information.
State variables: All variables are associated with the players.
| Variable | Type, Range | Owner | Temporal |
|---|---|---|---|
| Coalition Membership | Integer, [0, # players] | Players | Dynamic |
| Left Gloves | Integer, [0, ∞) | Players | Static |
| Right Gloves | Integer, [0, ∞) | Players | Static |
Coalition Membership: This gives the index number of the coalition of which the player is a member. If a player is not a member of a coalition, it is assumed to be in a singleton coalition, and an index number is still assigned.
Left Gloves: This variable indicates the number of left gloves that a player has in its initial endowment. Gloves are used to work out a coalition’s value.
Right Gloves: This variable indicates the number of right gloves that a player has in its initial endowment.
All other variables are calculated from these variables.
###### Scales
The temporal scales within the model are arbitrary. Each round represents an opportunity for several coalitions to be suggested to the agents and, if necessary, the updating of the coalition structure.
###### Process overview and scheduling
The game is played over a number of rounds. Each round represents an opportunity for the players to propose new coalitions, and, if acceptable to all potential members of that coalition, they form a new coalition. The players’ proposed coalitions are created by the algorithm.
The main loop of the simulation is as follows:
1. Algorithm subloop: the algorithm suggests six types of coalition each turn (the coalition suggestions are discussed in the submodel section). At each step of this subloop, one of the randomly selected coalition types is suggested, and all the computerized agents in the proposed coalition are asked if they wish to join.
• If any agent rejects the proposed coalition, then nothing changes, and the algorithm moves on to suggest the next type of coalition.
• If all computerized agents agree to join the proposed coalition, then the proposed coalition forms, and computerized players update their membership. The algorithm moves on to suggest the next type of coalition.
This subloop repeats until all six suggested coalitions have been evaluated.
2. All agents’ internal values are updated to reflect the new coalition situation, if this has not already been done. This includes agents whose coalitions have lost members to other coalitions.
3. Loop to the next round.
The critical point is that the algorithm will suggest several coalitions that are proposed to the agents. This algorithm is discussed in the sub-model section below. A flow diagram of the main loop is given below:
##### Design concepts
###### Basic principles
The underlying game of the model is a variation of the glove game, a classic game in cooperative game theory (Hart & Kurz 1983). The computerized agents are assumed to be utility maximizers, which is consistent with game theory standards. Their utility is the sole driver for the computerized agent’s decision-making, and complete information is assumed. The utility value that a player gets is called its payoff.
###### Emergence
The main expected emergent behavior is that the final coalition structure, the collection of disjoint coalitions covering all players, is a core partition. A core partition is a covering set of disjoint coalitions where no subset of players has an incentive to form a new coalition (Banerjee et al. 2001; Bogomolnaia & Jackson 2002). It is related to the core concept, introduced by Gillies (1959), but, strictly speaking, is not precisely the same. The core partition is an appropriate solution mechanism because it focuses on coalition membership instead of coalition values and imputations. A core partition is not necessarily unique for a given instance of a game nor is it guaranteed to exist. We only consider games with a non-empty core.
The ability of an agent to accept suggested new coalitions to join is the adaptive part of the model. The agents are only able to change to a coalition that is suggested to them by the algorithm. Further, a new coalition only forms if all potential members of that suggested coalition choose to join that coalition. This means that every agent has the veto power to stop a new coalition, which includes them, from forming. The agents will choose to join a new coalition if it increases their utility.
Note that agents might find themselves in a new coalition because other members of their coalition have decided to leave that agent’s coalition. Agents cannot stop other members from leaving; they can only stop a new coalition forming that includes them. We do not assume a complete collapse of the remaining coalitions into singleton coalitions as others have (Hart & Kurz 1983).
###### Objectives
The objective of all the computerized agents is to join the coalition that maximizes their utility. The agent’s utility is quantified as a reward. In the glove game, the reward of a given agent $a$ in coalition $S$ is:
$$R(a,S)=\frac{min(\sum_{b \in S} L(b) ,\sum_{b \in S} R(b))}{|S|}$$
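As an illustrative sketch (not the model's actual code), this reward can be computed directly from a coalition's glove endowments; representing each agent as a `(left_gloves, right_gloves)` pair is an assumption of the example:

```python
# Sketch of the glove-game reward: the number of completed glove pairs a
# coalition can assemble, shared equally among its members.
def reward(coalition):
    """Payoff per member of a coalition of (left, right) endowment pairs."""
    left = sum(l for l, r in coalition)
    right = sum(r for l, r in coalition)
    return min(left, right) / len(coalition)

# Two complementary one-glove agents complete one pair, shared equally:
print(reward([(1, 0), (0, 1)]))  # 0.5
```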
###### Learning
There is no learning incorporated into this model.
###### Prediction
There is no prediction incorporated in this model.
###### Sensing
There is no agent sensing incorporated in this model for the computerized agents.
###### Interaction
All agent interactions are mediated. That is, the agents do not directly interact with each other, but their actions do affect each other. These effects are due to the decision that they make with regard to coalition membership. If an agent leaves or joins a coalition, then the value of that coalition might change, which, in turn, affects the utility of coalition members. The value of a coalition is the number of glove pairs it can generate, and the reward of its members is this value divided by the coalition size.
###### Stochasticity
The only stochastic element of the model is the random determination of which coalitions are suggested by the algorithm. The algorithm generates six different suggested coalitions during this step of the model; each of the six suggested coalitions is derived by a different coalition formation approach. An example of a coalition formation approach would be combining two randomly chosen coalitions to create a suggested coalition. The different coalition approaches are discussed in the submodel section below.
In all cases, the uniform distribution is used when selecting an agent or coalition. That is, all agents or coalitions will have the same probability of selection in a given coalition formation approach.
###### Collectives
The model has a focus on coalitions, which are a form of collective. The coalitions determine the payoff that each agent would get, and, in turn, this payoff drives the computerized agent's decision to join any proposed coalition. The coalitions are explicitly represented in the model as a number; each agent has a coalition number assigned to it. Note that a set containing only one agent is still a coalition; it is known in cooperative game theory as a singleton coalition.
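A minimal sketch of this representation (the names are illustrative, not the model's actual code): each agent id maps to a coalition number, and a singleton is simply a coalition with one member.

```python
# Explicit coalition membership: agent id -> coalition number.
# Agents 0 and 1 share coalition 1; agent 2 is a singleton coalition.
membership = {0: 1, 1: 1, 2: 2}

def coalition_members(membership, cid):
    """All agents currently assigned the given coalition number."""
    return [a for a, c in membership.items() if c == cid]

print(coalition_members(membership, 1))  # [0, 1]
print(coalition_members(membership, 2))  # [2]
```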
###### Observation
The final coalition structure is recorded after the game has completed 100,000 time-steps.
##### Initialization
All agents are assumed to start in a grand coalition, i.e., they start in a coalition of all agents. The number of players and their glove endowments is determined by which game is being considered. The different games being considered are shown in Appendix B. Note that a player's glove endowments do not change over the course of the game.
##### Input data
All variables are determined by the initial glove endowments and the algorithm mechanics. This includes the random number generator that is used for the stochastic processes, which is determined internally by the simulation programming platform.
##### Submodels
There are three submodels used within the model: coalition selection, coalition evaluation, and coalition updating. These three submodels control the changes to the coalition structure, which is the main output of the model.
###### Coalition Selection
There are six coalition suggestions (S) that are made at each round of the game. They are suggested in the order given below. A description in prose and mathematical notation is given for each. The six suggestion types are:
###### Join coalition
Two agents from different coalitions (U,V) in the current coalition structure (CS) are chosen randomly. The payoffs of the merged coalitions are calculated. If the payoffs improve for all members of both coalitions, a new coalition is formed, which is the merged coalition. If the grand coalition (N) has formed, this suggestion is ignored.
$$if \; N \not\in CS: \; S = U \cup V \; s.t. \; U \neq V, \; \{U,V\} \subseteq CS$$
###### Exit coalition
An agent from a coalition whose size is greater than one, i.e., not a singleton coalition, is randomly selected. The payoff of the coalition minus the agent is calculated. If all agents in the remaining coalition improve their payoff by removing the selected agent, the agent is removed from the coalition and forms a singleton coalition.
$$if \; \exists U\in CS \; s.t. \; |{U}| > 1: S = U\backslash\{i\}, i \in U$$
###### Create a pair coalition
Two agents are randomly selected. The payoff for the coalition containing just both agents is calculated. If the payoff of both agents is improved in this new coalition, both agents exit the current coalition in favor of the new one.
$$S = \{i\} \cup \{j\}, \; i \neq j, \; \{i,j\} \subseteq N$$
###### Defect coalition
A randomly chosen agent selects a coalition to which he does not belong. If joining this coalition improves his payoff and the payoff of all members of the coalition, the agent defects from his current coalition and joins the new coalition.
$$S = \{i\} \cup U, \; i \in N, \; U \in CS \cup \emptyset$$
###### Split coalition
A coalition is randomly chosen, and a random subset of agents from the coalition is selected to form a separate coalition. If the members of the new coalition improve their payoff or the members of the remaining coalition improve theirs, the coalition splits into the two coalitions.
$$S_{1} = X, \; S_{2} = Y, X \cap Y = \emptyset, \; X \cup Y = U, \; U \in CS$$
###### Singleton coalition
An agent is randomly chosen. If that agent would be better off on their own, i.e., they prefer the singleton coalition to their current coalition, then they leave their current coalition and form the singleton coalition. This is known as the individual rationality concept (Thomas 2003).
$$S = \{i\}, \; i \in N$$
###### Coalition Evaluation
Each of the six suggestions is evaluated to determine whether it is acceptable to the members of the coalition (or of either coalition, in the case of a split). For each affected agent, their current payoff (utility) is compared to the payoff (utility) of the suggested coalition. If all the affected agents would experience an increase in payoff, then the suggested coalition forms. That is:
$$if \; \forall i \in S, \; u_{i}(S) > u_{i}(C_{i}) \; then \; C_{i} := S, \; \forall i \in S$$
The payoff that each player gets is the number of glove pairs divided by the number of players in their coalition. That is:
$$u_{i} (C_{i}) = \frac{min (\sum_{x \in C_{i}}L(x),\sum_{x \in C_{i}}R(x))}{|{C_{i}}|}$$
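A sketch of this unanimous-acceptance rule (helper names and endowments are invented for the example): a suggested coalition forms only if every affected agent strictly improves its payoff.

```python
# Hedged sketch of coalition evaluation in the glove game.
def payoff(coalition, gloves):
    """Glove pairs completed by the coalition, divided by its size."""
    left = sum(gloves[a][0] for a in coalition)
    right = sum(gloves[a][1] for a in coalition)
    return min(left, right) / len(coalition)

def accept_suggestion(suggested, current, gloves):
    """current maps agent -> its present coalition (a frozenset)."""
    return all(payoff(suggested, gloves) > payoff(current[a], gloves)
               for a in suggested)

gloves = {0: (1, 0), 1: (0, 1)}
singletons = {0: frozenset({0}), 1: frozenset({1})}
# Merging two complementary singletons lifts each payoff from 0 to 0.5:
print(accept_suggestion(frozenset({0, 1}), singletons, gloves))  # True
```

With two left-glove-only agents instead, the merge yields no pairs and would be vetoed.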
###### Coalition Updating
If a new coalition forms, then the agents of that coalition simply change their Coalition Membership number to a unique identification number assigned to the new coalition. The forming of a new coalition will affect the payoffs of many of the agents, but this information is updated when needed.
#### B. Glove Games
| # Players (N) | Left Gloves | Right Gloves | # Players (N) | Left Gloves | Right Gloves |
| --- | --- | --- | --- | --- | --- |
| 3 | [3, 2, 1] | [3, 3, 2] | 7 | [1, 2, 9, 6, 1, 8, 2] | [8, 6, 6, 6, 8, 3, 0] |
| 3 | [2, 2, 2] | [1, 0, 0] | 7 | [6, 4, 8, 6, 7, 6, 5] | [4, 5, 5, 8, 8, 1, 1] |
| 3 | [4, 4, 3] | [3, 4, 4] | 7 | [1, 2, 0, 3, 5, 9, 0] | [8, 5, 6, 3, 3, 5, 8] |
| 3 | [4, 3, 1] | [3, 3, 3] | 7 | [6, 9, 9, 3, 6, 7, 4] | [5, 9, 3, 4, 9, 6, 4] |
| 3 | [3, 0, 2] | [0, 0, 3] | 7 | [2, 4, 1, 1, 7, 8, 6] | [6, 4, 0, 7, 7, 0, 0] |
| 3 | [0, 0, 3] | [3, 2, 1] | 7 | [4, 6, 2, 4, 4, 2, 0] | [0, 8, 8, 5, 8, 3, 8] |
| 3 | [0, 6, 1] | [9, 9, 0] | 7 | [3, 6, 4, 9, 7, 6, 3] | [5, 9, 3, 0, 0, 4, 8] |
| 3 | [2, 5, 8] | [4, 4, 1] | 7 | [0, 9, 8, 3, 2, 3, 3] | [6, 3, 0, 8, 2, 4, 5] |
| 3 | [7, 2, 0] | [3, 6, 4] | 7 | [8, 2, 9, 5, 4, 6, 9] | [4, 6, 3, 7, 0, 2, 3] |
| 3 | [1, 0, 4] | [0, 4, 6] | 7 | [6, 2, 6, 1, 7, 0, 8] | [5, 2, 7, 7, 3, 6, 5] |
| 4 | [0, 1, 9, 6] | [4, 5, 0, 6] | 8 | [0, 2, 9, 0, 6, 2, 3, 7] | [4, 4, 4, 8, 3, 1, 5, 9] |
| 4 | [9, 8, 8, 1] | [8, 5, 1, 0] | 8 | [7, 3, 8, 4, 0, 0, 1, 5] | [6, 7, 2, 1, 5, 2, 6, 2] |
| 4 | [3, 9, 6, 0] | [7, 3, 4, 4] | 8 | [9, 4, 6, 0, 6, 2, 4, 8] | [1, 8, 4, 0, 1, 9, 7, 8] |
| 4 | [4, 1, 6, 9] | [1, 4, 7, 2] | 8 | [9, 1, 5, 5, 5, 7, 6, 5] | [4, 6, 3, 1, 9, 5, 5, 0] |
| 4 | [2, 2, 1, 2] | [8, 0, 2, 9] | 8 | [3, 6, 1, 8, 2, 6, 6, 3] | [9, 0, 2, 7, 1, 6, 6, 7] |
| 4 | [1, 6, 2, 0] | [3, 5, 8, 8] | 8 | [0, 3, 6, 0, 1, 0, 7, 8] | [0, 0, 7, 5, 4, 1, 9, 0] |
| 4 | [4, 4, 7, 8] | [9, 0, 6, 9] | 8 | [8, 2, 3, 0, 6, 6, 2, 5] | [9, 5, 4, 1, 8, 5, 3, 5] |
| 4 | [0, 0, 7, 8] | [7, 7, 9, 5] | 8 | [7, 8, 5, 5, 4, 4, 9, 5] | [0, 8, 3, 5, 5, 6, 3, 4] |
| 4 | [4, 0, 2, 0] | [7, 8, 0, 1] | 8 | [4, 0, 8, 8, 6, 9, 5, 7] | [0, 4, 7, 7, 1, 5, 5, 9] |
| 4 | [4, 3, 1, 7] | [3, 6, 0, 1] | 8 | [8, 4, 6, 1, 1, 1, 7, 7] | [6, 3, 2, 1, 2, 6, 3, 6] |
| 5 | [7, 2, 2, 7, 0] | [6, 7, 1, 8, 6] | 9 | [4, 1, 1, 1, 6, 0, 5, 1, 4] | [0, 7, 7, 4, 2, 0, 6, 0, 0] |
| 5 | [7, 7, 1, 3, 8] | [3, 5, 1, 3, 1] | 9 | [1, 9, 8, 5, 4, 4, 7, 9, 9] | [1, 3, 9, 9, 4, 9, 9, 6, 5] |
| 5 | [7, 0, 3, 5, 3] | [1, 6, 6, 2, 7] | 9 | [5, 4, 9, 0, 7, 1, 8, 0, 8] | [4, 1, 8, 6, 6, 5, 8, 1, 7] |
| 5 | [1, 7, 1, 6, 3] | [0, 8, 0, 2, 9] | 9 | [5, 4, 6, 5, 0, 3, 4, 6, 7] | [1, 2, 2, 9, 5, 1, 3, 9, 7] |
| 5 | [4, 1, 8, 7, 2] | [6, 3, 6, 1, 1] | 9 | [1, 1, 2, 5, 2, 3, 4, 2, 8] | [9, 2, 6, 7, 4, 8, 4, 2, 0] |
| 5 | [9, 2, 5, 4, 0] | [3, 6, 6, 4, 1] | 9 | [3, 2, 5, 2, 7, 8, 1, 6, 4] | [4, 7, 6, 5, 1, 3, 4, 8, 2] |
| 5 | [2, 9, 5, 4, 5] | [8, 7, 0, 7, 4] | 9 | [2, 4, 3, 4, 5, 9, 2, 3, 2] | [7, 5, 5, 8, 1, 9, 6, 5, 3] |
| 5 | [2, 2, 2, 0, 7] | [0, 1, 8, 6, 0] | 9 | [0, 4, 5, 5, 5, 1, 7, 8, 9] | [3, 8, 2, 3, 0, 3, 1, 6, 6] |
| 5 | [5, 0, 1, 0, 6] | [4, 0, 4, 7, 2] | 9 | [8, 9, 8, 4, 7, 5, 2, 8, 2] | [9, 9, 9, 5, 0, 6, 5, 2, 3] |
| 5 | [8, 0, 2, 9, 1] | [2, 8, 2, 0, 3] | 9 | [1, 4, 1, 4, 8, 0, 5, 0, 9] | [5, 0, 8, 6, 4, 2, 8, 0, 0] |
| 6 | [9, 5, 4, 6, 1, 8] | [3, 8, 9, 3, 5, 7] | | | |
| 6 | [7, 9, 1, 0, 6, 5] | [3, 2, 3, 5, 1, 8] | | | |
| 6 | [9, 6, 7, 8, 5, 8] | [0, 9, 7, 2, 0, 1] | | | |
| 6 | [8, 9, 3, 5, 4, 0] | [4, 1, 8, 0, 5, 1] | | | |
| 6 | [3, 2, 4, 2, 3, 7] | [4, 0, 8, 7, 2, 7] | | | |
| 6 | [1, 0, 0, 3, 8, 7] | [6, 8, 1, 3, 3, 8] | | | |
| 6 | [4, 1, 1, 3, 8, 6] | [4, 8, 1, 1, 5, 9] | | | |
| 6 | [9, 2, 4, 4, 4, 7] | [8, 8, 1, 1, 3, 0] | | | |
| 6 | [8, 9, 5, 2, 1, 4] | [4, 3, 1, 3, 8, 1] | | | |
| 6 | [5, 2, 5, 0, 9, 0] | [6, 3, 2, 0, 3, 7] | | | |
### References
ABDOLLAHIAN, M., Yang, Z., & Nelson, H. (2013). Techno-social energy infrastructure siting: sustainable energy modeling programming (SEMPro). Journal of Artificial Societies and Social Simulation, 16(3), 6: http://jasss.soc.surrey.ac.uk/16/3/6.html. [doi:10.18564/jasss.2199]
AXTELL, R. (2000). Why agents?: on the varied motivations for agent computing in the social sciences. Report available at: https://www.brookings.edu/research/why-agents-on-the-varied-motivations-for-agent-computing-in-the-social-sciences.
AZIZ, H. (2013). Stable marriage and roommate problems with individual-based stability. Paper presented at the Proceedings of the 2013 international conference on Autonomous agents and multi-agent systems, St. Paul, MN, USA.
AZIZ, H., & Savani, R. (2016). 'Hedonic Games.' In F. Brandt, V. Conitzer, U. Endriss, J. Lang, & A. D. Procaccia (Eds.), Handbook of Computational Social Choice (pp. 356-376). New York, NY: Cambridge University Press. [doi:10.1017/cbo9781107446984.016]
BALLESTER, C. (2004). NP-completeness in hedonic games. Games and Economic Behavior, 49(1), 1-30. [doi:10.1016/j.geb.2003.10.003]
BALZER, W., Brendel, K. R., & Hofmann, S. (2001). Bad arguments in the comparison of game theory and simulation in social studies. Journal of Artificial Societies and Social Simulation, 4(2), 1: http://jasss.soc.surrey.ac.uk/4/2/1.html.
BANERJEE, S., Konishi, H., & Sönmez, T. (2001). Core in a simple coalition formation game. Social Choice and Welfare, 18(1), 135-153. [doi:10.1007/s003550000067]
BELL, E. T. (1938). The iterated exponential integers. Annals of Mathematics, 539-557.
BERSINI, H. (2012). UML for ABM. Journal of Artificial Societies and Social Simulation, 15(1), 9: http://jasss.soc.surrey.ac.uk/15/1/9.html. [doi:10.18564/jasss.1897]
BOGOMOLNAIA, A., & Jackson, M. O. (2002). The stability of hedonic coalition structures. Games and Economic Behavior, 38(2), 201-230. [doi:10.1006/game.2001.0877]
CHAKRAVARTY, S. R., Mitra, M., & Sarkar, P. (2015). A Course on Cooperative Game Theory: Cambridge, MA: Cambridge University Press.
CHALKIADAKIS, G., Elkind, E., & Wooldridge, M. (2011a). Computational aspects of cooperative game theory. Synthesis Lectures on Artificial Intelligence and Machine Learning, 5(6), 1-168. [doi:10.2200/s00355ed1v01y201107aim016]
CHALKIADAKIS, G., Elkind, E., & Wooldridge, M. (2011b). Computational Aspects of Cooperative Game Theory (Vol. 5). London: Morgan & Claypool.
COLLINS, A. J. (2017, October). Strategically Forming Groups in the El Farol Bar Problem. In Proceedings of the 2017 International Conference of The Computational Social Science Society of the Americas (pp. 1-6). [doi:10.1145/3145574.3145575]
COLLINS, A. J. (2019). 'Strategic group formation in the El Farol bar problem.' In T. Carmichael & A. J. Collins (Eds.), Complex Adaptive Systems: Views from the Physical, Natural, and Social Sciences (pp. 199-212). Cham, Switzerland: Springer. [doi:10.1007/978-3-030-20309-2_9]
COLLINS, A. J. (2020). Heuristic Algorithm for Generating Strategic Coalition Structures. Version 1.0.0. Retrieved from https://www.comses.net/codebases/59dda8fe-b0c0-4703-ac95-c0fc57bd48d3/releases/1.0.0/.
COLLINS, A. J., Etemadidavan, S., & Khallouli, W. (2020). Generating empirical core size distributions of hedonic games using a Monte Carlo Method. Retrieved from: https://arxiv.org/abs/2007.12127.
COLLINS, A. J., & Frydenlund, E. (2016a). Agent-Based Modeling and Strategic Group Formation: A Refugee Case Study. Paper presented at the Proceedings of the 2016 Winter Simulation Conference, Arlington, VA, USA. [doi:10.1109/wsc.2016.7822184]
COLLINS, A. J., & Frydenlund, E. (2016b). Strategic Group Formation in Agent-based Simulation. Paper presented at the 2016 Spring Simulation Multi-conference, Pasadena, CA.
COLLINS, A. J., & Frydenlund, E. (2018). Strategic Group Formation in Agent-based Simulation. Simulation, 94(3), 179-193. [doi:10.1177/0037549717732408]
COLLINS, A. J., Petty, M., Vernon-Bido, D., & Sherfey, S. (2015). Call to Arms: Standards for Agent-based Modeling and Simulation. Journal of Artificial Societies and Social Simulation, 18(3), 12: http://jasss.soc.surrey.ac.uk/18/3/12.html. [doi:10.18564/jasss.2838]
DE MARCHI, S., & Page, S. E. (2008). 'Agent‐Based Modeling.' In J. M. Box-Steffensmeier, H. E. Brady & D. Collier (Eds.), The Oxford Handbook of Political Methodology. Oxford: Oxford University Press. [doi:10.1093/oxfordhb/9780199286546.003.0004]
DJOKIĆ, B., Miyakawa, M., Sekiguchi, S., Semba, I., & Stojmenović, I. (1989). Short Note: A Fast Iterative Algorithm for Generating Set Partitions. The Computer Journal, 32(3), 281-282.
DREZE, J. H., & Greenberg, J. (1980). Hedonic coalitions: Optimality and stability. Econometrica: Journal of the Econometric Society, 48(4), 987-1003. [doi:10.2307/1912943]
EATWELL, J., Milgate, M., & Newman, P. (1987). The New Palgrave: Game Theory. London: Macmillan Press.
EPSTEIN, J. M., & Axtell, R. (1997). Artificial societies and generative social science. Artificial Life and Robotics, 1(1), 33-34. [doi:10.1007/bf02471109]
EPSTEIN, J. M., & Axtell, R. L. (1996). Growing Artificial Societies: Social Science from the Bottom Up (First Edition ed.). Cambridge, MA: The MIT Press. [doi:10.7551/mitpress/3374.001.0001]
FAIGLE, U., Kern, W., Fekete, S. P., & Hochstättler, W. (1997). On the complexity of testing membership in the core of min-cost spanning tree games. International Journal of Game Theory, 26(3), 361-366. [doi:10.1007/bf01263277]
FLOOD, M. M. (1952). Some Experimental Games (RM-789). Retrieved from Santa Monica, CA.: https://www.rand.org/pubs/research_memoranda/RM789-1.html.
GALE, D., & Shapley, L. S. (1962). College admissions and the stability of marriage. The American Mathematical Monthly, 69(1), 9-15. [doi:10.2307/2312726]
GILBERT, N. (2008). Agent-Based Models. Thousand Oaks, CA: Sage Publications, Inc.
GILLIES, D. B. (1959). Solutions to general non-zero-sum games. Contributions to the Theory of Games, 4(40), 47-85. [doi:10.1515/9781400882168-005]
GORE, R. J., Lynch, C. J., & Kavak, H. (2016). Applying statistical debugging for enhanced trace validation of agent-based models. Simulation, 0037549716659707. [doi:10.1177/0037549716659707]
GRIGORYAN, G., & Collins, A. J. (2020). Game Theory for Systems Engineering: A Survey. International Journal of System of Systems Engineering, forthcoming.
GRIMM, V., Berger, U., Bastiansen, F., Eliassen, S., Ginot, V., Giske, J., Goss-Custard, J., Grand, T., Heinz, S. K., Huse, G., Huth, A., Jepsen, J. U., Jørgensen, C., Mooij, W. M., Müller, B., Pe’er, G., Piou, C., Railsback, S. F., Robbins, A. M., Robbins, M. M., Rossmanith, E., Rüger, N., Strand, E., Souissi, S., Stillman, R. A., Vabø, R., Visser, U. & DeAngelis, D. L. (2006). A standard protocol for describing individual-based and agent-based models. Ecological Modelling, 198(1-2), 115-126. [doi:10.1016/j.ecolmodel.2006.04.023]
GRIMM, V., Railsback, S. F., Vincenot, C. E., Berger, U., Gallagher, C., DeAngelis, D. L., Groeneveld, J., Edmonds, B., Ge, J., Giske, J., Johnston, A. S. A., Milles, A., Nabe-Nielsen, A., Polhill, J. G., Radchuk, V., Rohwäder, M. S., Stillman, R. A., Thiele, J. C. & Ayllon, D. (2020). The ODD Protocol for Describing Agent-Based and Other Simulation Models: A Second Update to Improve Clarity, Replication, and Structural Realism. Journal of Artificial Societies and Social Simulation, 23(2), 7: http://jasss.soc.surrey.ac.uk/23/2/7.html. [doi:10.18564/jasss.4259]
GROGAN, P. T., & de Weck, O. L. (2016). Collaboration and complexity: an experiment on the effect of multi-actor coupled design. Research in Engineering Design, 27(3), 221-235. [doi:10.1007/s00163-016-0214-7]
HARSANYI, J. C., & Selten, R. (1988). A General Theory of Equilibrium Selection in Games. London: MIT Press.
HART, S. (1985). Nontransferable utility games and markets: some examples and the Harsanyi solution. Econometrica, 53(6), 1445-1450. [doi:10.2307/1913218]
HART, S., & Kurz, M. (1983). Endogenous formation of coalitions. Econometrica: Journal of the Econometric Society, 51(4), 1047-1064. [doi:10.2307/1912051]
IEHLÉ, V. (2007). The core-partition of a hedonic game. Mathematical Social Sciences, 54(2), 176-185. [doi:10.1016/j.mathsocsci.2007.05.007]
JANOVSKY, P., & DeLoach, S. A. (2016). Multi-agent simulation framework for large-scale coalition formation. Paper presented at the 2016 IEEE/WIC/ACM International Conference on Web Intelligence (WI). [doi:10.1109/wi.2016.0055]
KLÜGL, F. (2008). A validation methodology for agent-based simulations. Paper presented at the Proceedings of the 2008 ACM symposium on Applied computing. [doi:10.1145/1363686.1363696]
LAPP, S., Jablokow, K., & McComb, C. (2019). KABOOM: an agent-based model for simulating cognitive style in team problem solving. Design Science, 5(E13), 1-34. [doi:10.1017/dsj.2019.12]
MORTENSEN, M., Woolley, A. W., & O’Leary, M. (2007). Conditions enabling effective multiple team membership Virtuality and Virtualization (pp. 215-228). Berlin Heidelberg: Springer. [doi:10.1007/978-0-387-73025-7_16]
MURNIGHAN, J. K., & Roth, A. E. (1977). The effects of communication and information availability in an experimental study of a three-person game. Management Science, 23(12), 1336-1348. [doi:10.1287/mnsc.23.12.1336]
MURNIGHAN, J. K., & Roth, A. E. (1980). Effects of group size and communication availability on coalition bargaining in a veto game. Journal of Personality and Social Psychology, 39(1), 92. [doi:10.1037/0022-3514.39.1.92]
MUSTAFEE, N., Powell, J., Brailsford, S. C., Diallo, S., Padilla, J., & Tolk, A. (2015). Hybrid simulation studies and hybrid simulation systems: definitions, challenges, and benefits. Paper presented at the Proceedings of the 2015 Winter Simulation Conference, Huntington Beach, CA. [doi:10.1109/wsc.2015.7408287]
MYERSON, R. B. (2013). Game Theory. Cambridge, MA: Harvard University Press.
OHDAIRA, T., & Terano, T. (2009). Cooperation in the Prisoner's dilemma game based on the second-best decision. Journal of Artificial Societies and Social Simulation, 12(4), 7: http://jasss.soc.surrey.ac.uk/12/4/7.html.
PELEG, B., & Sudhölter, P. (2007). Introduction to the Theory of Cooperative Games (Vol. 34). Berlin Heidelberg: Springer Science & Business Media.
RAHWAN, T., Ramchurn, S. D., Jennings, N. R., & Giovannucci, A. (2009). An anytime algorithm for optimal coalition structure generation. Journal of Artificial Intelligence Research, 34, 521-567. [doi:10.1613/jair.2695]
ROTHKOPF, M. H., Pekeč, A., & Harstad, R. M. (1998). Computationally manageable combinational auctions. Management Science, 44(8), 1131-1147. [doi:10.1287/mnsc.44.8.1131]
SCHMEIDLER, D. (1969). The nucleolus of a characteristic function game. SIAM Journal on applied mathematics, 17(6), 1163-1170. [doi:10.1137/0117107]
SHAPLEY, L. (1953). 'A Value of n-person Games.' In H. W. Kuhn & A. W. Tucker (Eds.), Contributions to the Theory of Games (Vol. II, pp. 307-317). Princeton: Princeton University Press.
SHI, Y. (2001). Particle swarm optimization: developments, applications and resources. Paper presented at the evolutionary computation, 2001. Proceedings of the 2001 Congress on.
SIE, R., Sloep, P. B., & Bitter-Rijpkema, M. (2014). If We Work Together, I Will Have Greater Power: Coalitions in Networked Innovation. Journal of Artificial Societies and Social Simulation, 17(1), 3: http://jasss.soc.surrey.ac.uk/17/1/3.html. [doi:10.18564/jasss.2410]
SUNG, S. C., & Dimitrov, D. (2007). On core membership testing for hedonic coalition formation games. Operations Research Letters, 35(2), 155-158. [doi:10.1016/j.orl.2006.03.011]
THOMAS, L. C. (2003). Games, Theory and Applications. Mineola, NY: Dover Publications.
VAN LAARHOVEN, P. J., & Aarts, E. H. (1987). Simulated Annealing. Berlin Heidelberg: Springer.
VON NEUMANN, J., & Morgenstern, O. (1947). Theory of Games and Economic Behavior, 2nd rev. Princeton, NJ: Princeton University Press.
WILENSKY, U. (1999). NetLogo. Evanston, IL: center for connected learning and computer-based modeling, Northwestern University.
WINDRUM, P., Fagiolo, G., & Moneta, A. (2007). Empirical Validation of Agent-Based Models: Alternatives and Prospects. Journal of Artificial Societies and Social Simulation, 10(2), 8: http://jasss.soc.surrey.ac.uk/10/2/8.html.
XIANG, X., Kennedy, R., Madey, G., & Cabaniss, S. (2005). Verification and validation of agent-based scientific simulation models. Paper presented at the Agent-directed simulation conference.
YEH, D. Y. (1986). A dynamic programming approach to the complete set partitioning problem. BIT Numerical Mathematics, 26(4), 467-474.
http://math.stackexchange.com/questions/164574/the-limit-of-a-sequence-of-paths
# The Limit of a Sequence of Paths
Given a path-connected topological space $X$, consider a sequence $x_1, x_2, x_3, \dotsc$ of points in $X$ converging to some $x \in X$. For each $x_i$ and $x_{i+1}$, there exists some path $p_i$ in $X$ from $x_i$ to $x_{i+1}$. I define the following for all positive integers $i$ and $j$ such that $i + 1 < j$:
$$p_{i \to i+1} = p_i \quad\text{and}\quad p_{i \to j} = p_i \cdot p_{i+1 \to j},$$
where $\cdot$ denotes path composition; for any paths $f, g : [0, 1] \to X$ such that $f(1) = g(0)$, $(f \cdot g) : [0, 1] \to X$ is a path such that
$$(f \cdot g)(t) = \begin{cases} f(2t) & 0 \leq t \leq {\textstyle\frac{1}{2}}\\ g(2t - 1) & {\textstyle\frac{1}{2}} < t \leq 1. \end{cases}$$
It can be seen that $p_{i \to j}$ is a path from $x_i$ to $x_j$. Consider the sequence $p_{1 \to 2}, p_{1 \to 3}, p_{1 \to 4}, \dotsc$. Visually, the successive terms of the sequence show a path stretching a bit of its end to touch the next $x_k$. Under what conditions can I say that this sequence of paths converges to a path $p$ in $X$ with the following?
$$p(1 - 2^{-k}) = x_{k+1} \quad\text{and}\quad p(1) = x.$$
For instance, does it suffice to require that $X$ has a metric $d$, and $\lim\limits_{i \to \infty} L(p_i) = 0$, where $L(q)$ denotes the length of a path $q$? However, this would impose a metric on $X$, which is a magic bullet I'd rather not use.
In fact, does the notion of path length exist? Are there further conditions I must pose on $X$ so that the following limit exists for some path $q : [0, 1] \to X$?
$$L(q) = \lim_{n\to\infty}\sum_{m=1}^n d\left(\textstyle q(\frac{m-1}{n}),q(\frac{m}{n})\right).$$
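As a concrete sanity check of this definition (not part of the original question), here is a quick numerical sketch, with the Euclidean metric on $\mathbb{R}^2$ standing in for $d$; for a rectifiable continuous path, partitions with mesh tending to zero recover the usual arc length:

```python
import math

# Polygonal approximation of path length: sum the distances between
# the images of n+1 evenly spaced parameter values, as in the formula above.
def polygonal_length(q, n):
    total = 0.0
    for m in range(1, n + 1):
        (x0, y0), (x1, y1) = q((m - 1) / n), q(m / n)
        total += math.hypot(x1 - x0, y1 - y0)
    return total

line = lambda t: (t, t)              # straight segment from (0,0) to (1,1)
print(polygonal_length(line, 1000))  # ≈ sqrt(2)
```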
(The following "appendix" may not be fully relevant but it seems interesting)
Consider the function $P : [0, 1]^2 \to X$, where
$$P(t,s) = \begin{cases} p_k(2 - 2^k(1 - t)) & t \leq s < 1, k = \lceil-\log_2(1 - t)\rceil\\ p_k\left(2 - 2^k\left(1 - \frac{t + s}{2}\right)\right) & s \leq t < 1, k = \left\lceil-\log_2\left(1 - \frac{t + s}{2}\right)\right\rceil\\ x & t = s = 1 \end{cases}$$
$P$ can be visualized in the graph below:
Solid lines contain points in the $ts$-plane which are mapped by $P$ to the same point in $X$. If the sequence $p_{1 \to 2}, p_{1 \to 3}, p_{1 \to 4}, \dotsc$ indeed converges to some path $p$ in $X$, then we have the following:
1. $P : [0, 1] \times [0, 1] \to X$ is a homotopy from $p_1$ to $p$.
2. $P(1 - 2^{2 - k}, t) = p_{1 \to k}(t)$ for any integer $k \geq 2$ and $t \in [0, 1]$.
However, does the converse hold? If $P$ is continuous, then does the "limit path" $p$ exist?
As you indicate, this does not work in general. How do you define the length of a path in a topological space? – Rasmus Jun 29 '12 at 14:19
Oops, did I wrongly assume that path length was defined? If the metric on $X$ is $d$, then is it ok to define the length of a path $p : [0, 1] \to X$ as $\displaystyle\lim_{n \to \infty}\sum_{m = 1}^n d\left(\textstyle p(\frac{m - 1}{n}),p(\frac{m}{n})\right)$? – Herng Yi Jun 29 '12 at 14:22
@HerngYi Please edit your post to indicate that $X$ is a metric space. – user31373 Jun 29 '12 at 17:44
@Kovalev In the part of the question that used the notion of path length, I have added that $X$ had to be a metric space. However, is there a way to guarantee the existence of the limit of the sequence of paths without using the notion of path length? – Herng Yi Jun 30 '12 at 1:03
@Kovalev: If $X$ were a quasiconvex metric space with metric $d$, and we required accordingly that there exists some $C \in \mathbb{R}$ such that $L(p_i) \leq Cd(x_i, x_{i+1})$ for all positive integers $i$, then by the Squeeze Theorem, $0 \leq \lim\limits_{i\to\infty}L(p_i) \leq C\lim\limits_{i\to\infty}d(x_i, x_{i+1}) = 0$. Does this imply the existence of the limit, $p$, of the sequence of paths? – Herng Yi Jun 30 '12 at 1:36
Here is an example, even a metric example, where the path you want doesn't exist. Take a topologist's sine curve and connect $(0, 0)$ to, say, $\left( \frac{1}{\pi}, 0 \right)$ via a path going below the rest of the curve, so the resulting space is path connected. (I think this is sometimes called the "topologist's circle.") Let $x_k$ be, say, $\left( \frac{1}{\pi k}, 0 \right)$. Then $x_k \to (0, 0)$ but the path you want doesn't exist (the interval is compact but the image of the path you want can't be compact because it has limit points it doesn't contain).
https://mathhelpboards.com/threads/theodore-ks-question-at-yahoo-answers-radius-of-convergence.4363/
# Theodore K's question at Yahoo! Answers (Radius of convergence)
#### Fernando Revilla
Hello Theodore K,
The ratio test works well here: $$\lim_{n\to +\infty}\left|\frac{u_{n+1}}{u_n}\right|=\lim_{n \to +\infty}\left|\frac{(-1)^{n+1}x^{3n+3}}{(2n+2)!}\cdot\frac{(2n)!}{(-1)^nx^{3n}}\right|=\\\lim_{n\to +\infty}\left|\frac{x^{3}}{(2n+2)(2n+1)}\right|=0<1\; (\forall x\in\mathbb{R})$$ This implies that the radius of convergence is $R=+\infty$.
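As a quick numerical illustration of the limit computed above (names are invented), the successive-term ratio $|u_{n+1}/u_n| = |x|^3/((2n+2)(2n+1))$ can be spot-checked for a fixed $x$:

```python
from math import factorial

# Ratio of consecutive absolute terms of the series sum (-1)^n x^{3n}/(2n)!.
def term_ratio(x, n):
    u = lambda k: abs(x) ** (3 * k) / factorial(2 * k)
    return u(n + 1) / u(n)

x = 5.0
ratios = [term_ratio(x, n) for n in (1, 5, 20)]
print(ratios)  # strictly decreasing toward 0, regardless of x
```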
http://odin-osiris.usask.ca/ReST/skpython/code/solar_spectrum.html
# Solar Spectrum
utility.solar_spectrum.convolved_solar_spectrum(wavelengths, psf_wavelengths, psf, spectrum='SAO2010')
Parameters:
- **wavelengths** (numpy array): An array of wavelengths in nanometers at which the solar spectrum will be calculated.
- **psf_wavelengths** (numpy array): An array of wavelengths in nanometers at which the point spread function is specified. The output wavelengths are linearly interpolated from this array.
- **psf** (numpy array): An array of wavelengths in nanometers defining the spectral point spread function. The PSF is assumed to be Gaussian with a standard deviation given by psf.
- **spectrum** (string): Optional argument specifying which solar spectrum to use. Currently supported options are 'SAO2010' and ''. Default is 'SAO2010'.

Returns:
- **convolved_irradiance** (numpy.array): The solar spectrum in units of photons/m^2/sec/nm at the resolution specified.
Examples
>>> import utility.solar_spectrum as sol
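The example above is truncated and the module itself isn't shown. As an illustration only (not the library's actual implementation), here is a self-contained toy version of the operation the docstring describes: smoothing a spectrum with a Gaussian PSF whose width is linearly interpolated from `psf_wavelengths`/`psf`, normalized so that a flat input spectrum is preserved. The function name and normalization are assumptions.

```python
import numpy as np

def convolve_with_gaussian_psf(hires_wav, hires_spec, out_wav, psf_wav, psf):
    """Toy stand-in for convolved_solar_spectrum: smooth a high-resolution
    spectrum with a Gaussian PSF whose standard deviation (in nm) varies
    with wavelength."""
    # Interpolate the Gaussian width onto the output grid, as the
    # docstring describes for psf_wavelengths / psf.
    sigma = np.interp(out_wav, psf_wav, psf)
    out = np.empty_like(out_wav, dtype=float)
    for i, (w0, s) in enumerate(zip(out_wav, sigma)):
        weights = np.exp(-0.5 * ((hires_wav - w0) / s) ** 2)
        # Normalized smoothing: a flat spectrum stays flat.
        out[i] = np.sum(weights * hires_spec) / np.sum(weights)
    return out

# Demo on a flat "spectrum": the output should also be flat.
hires_wav = np.linspace(290.0, 410.0, 5000)   # high-resolution grid, nm
flat = np.ones_like(hires_wav)
out_wav = np.linspace(320.0, 380.0, 61)
out = convolve_with_gaussian_psf(hires_wav, flat, out_wav,
                                 np.array([300.0, 400.0]),   # psf_wavelengths
                                 np.array([0.3, 0.5]))       # Gaussian widths, nm
```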
|
http://mathhelpforum.com/calculus/79663-light-intensity-minimization.html
|
# Math Help - light intensity minimization
1. ## light intensity minimization
My only thought on this: I have to minimize the I(x) equation it gives... But I can't figure out how to do that :S We know the power and quotient rule... no chain rule
2. Originally Posted by mike_302
My only thought on this: I have to minimize the I(x) equation it gives... But I can't figure out how to do that :S We know the power and quotient rule... no chain rule
$I = ax^{-2} + b(s-x)^{-2}$
$\frac{dI}{dx} = -2ax^{-3} + 2b(s-x)^{-3}$
$\frac{dI}{dx} = -\frac{2a}{x^3} + \frac{2b}{(s-x)^3}$
set the derivative equal to 0 ...
$\frac{2a}{x^3} = \frac{2b}{(s-x)^3}$
$bx^3 = a(s-x)^3$
$\sqrt[3]{b} \cdot x = \sqrt[3]{a} \cdot (s-x)$
$x = \frac{\sqrt[3]{a} \cdot s}{\sqrt[3]{a} + \sqrt[3]{b}}$
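A quick numerical check of the closed form above, with arbitrary sample values for $a$, $b$, and $s$ (the original problem's numbers aren't shown in the thread):

```python
# Sanity check: I(x) = a/x^2 + b/(s-x)^2 should be smallest at
# x* = a^(1/3) * s / (a^(1/3) + b^(1/3)).
a, b, s = 2.0, 5.0, 10.0   # arbitrary positive sample values

def intensity(x):
    return a / x**2 + b / (s - x)**2

x_star = a**(1/3) * s / (a**(1/3) + b**(1/3))
```

Comparing `intensity(x_star)` against nearby points confirms it is the minimum on $(0, s)$.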
|
https://twodee.org/blog/17969
|
teaching machines
Capsule
May 26, 2021 by . Filed under public, slai-2021.
This post is part of a course on geometric modeling at the Summer Liberal Arts Institute for Computer Science held at Carleton College in 2021.
Suppose you grabbed the North Pole, somehow, and I grabbed the South Pole. And then we pulled on the poles. What shape would we get? Earth would look like a pill. Maybe Saturn would swallow it to calm its storms. A more professional term for a pill is capsule. In this exercise, you will make a capsule.
Draw
On your paper, draw an arc that forms the bottom-right quarter-arc of a circle, starting at the south pole and moving counter-clockwise to the east pole. Draw a straight line upward. Then draw another quarter-arc, this time stopping at the pole. You should see the radial cross section of a capsule.
Draw five or more points along the arcs only. Do not include the poles.
As with the sphere, the points you have drawn represent a set of seed points along the first line of longitude. To generate the other lines of longitude, you will rotate this first line around the y-axis.
Function
Write a function named generateCapsule. Have it accept these parameters:
• An integer nlatitudesPerHemisphere that specifies the number of lines of latitude that each hemisphere of the capsule has. The higher the number, the more spherical they will be.
• An integer nlongitudes that specifies the number of lines of longitude that the capsule has. The higher the number, the more spherical it will be.
• The radius of the capsule.
• The height of the capsule’s cylindrical body.
Copy your code from generateSphere into this function. Much of it will be the same.
Positions
The seed positions of the capsule are very similar to the seed positions of the sphere, but the first half of them are pushed down and the second half are pushed up. That’s it. You’ll accomplish this by breaking the loop into two, one for each half. Follow these steps to adapt your sphere code:
• Note how the first parameter has a different meaning than it did with the sphere. You still need to know the total number of lines of latitude for your later code. Compute the total number of lines of latitude and store it in nlatitudes.
• All told, each quarter-arc is $\frac{\pi}{2}$ radians, and you want to break each into nlatitudesPerHemisphere slices. Compute the number of radians per slice and store it in radiansPerSlice.
• Adapt the loop to visit each latitude index in just one hemisphere.
• Range-map the latitude index so that it generates the seed points in just the southern hemisphere, whose interval is $[-\frac{\pi}{2} + \mathrm{radiansPerSlice}, 0]$.
• Tweak the y-coordinate calculation so that the seed points are pushed downward by half the cylinder’s height.
• Duplicate the loop.
• Modify the range-mapping of the second loop so that it generates the seed points in the northern hemisphere.
• Tweak the y-coordinate calculation so that the seed points are pushed upward by half the cylinder’s height.
When you render your capsule after making these changes, you should see a capsule-like shape. However, the poles are probably not right. Adjust the pole positions just as you adjusted the seed positions.
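The two-loop seed-position step described above can be sketched as follows. This is a Python sketch (the course code may be in a different language), the function name `capsule_seed_positions` is made up for illustration, and the longitude rotation and pole handling are omitted:

```python
import math

def capsule_seed_positions(nlatitudes_per_hemisphere, radius, height):
    """Seed points (x, y) along the first line of longitude, poles excluded."""
    # Each quarter-arc spans pi/2 radians, broken into this many slices.
    radians_per_slice = (math.pi / 2) / nlatitudes_per_hemisphere
    seeds = []
    # Southern hemisphere: latitudes in [-pi/2 + radiansPerSlice, 0],
    # pushed down by half the cylinder's height.
    for i in range(nlatitudes_per_hemisphere):
        latitude = -math.pi / 2 + radians_per_slice * (i + 1)
        seeds.append((radius * math.cos(latitude),
                      radius * math.sin(latitude) - height / 2))
    # Northern hemisphere: latitudes in [0, pi/2 - radiansPerSlice],
    # pushed up by half the cylinder's height.
    for i in range(nlatitudes_per_hemisphere):
        latitude = radians_per_slice * i
        seeds.append((radius * math.cos(latitude),
                      radius * math.sin(latitude) + height / 2))
    return seeds
```

Note that latitude 0 appears once in each loop; those two equator rings are the ends of the cylindrical body, separated by the cylinder's height.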
|
http://math-mprf.org/journal/articles/id917/
|
Geometric Expansion of the Log-Partition Function for the Ginibre Gas Obeying Maxwell-Boltzmann Statistics
#### S. Poghosyan, H. Zessin
2001, v.7, №4, 581-593
ABSTRACT
The following problem is discussed for a system of interacting Brownian loops in a bounded domain of $\mathbb{R}^{\nu}$: given the energy ${\cal U}^{\phi}$ of a system, defined in a natural way by means of a stable potential $\phi$ with nice decay properties, the associated log-partition function, $\ln Z(\Lambda ,z)$, where $Z(\Lambda ,z)=\int \exp\{-{\cal U}^{\phi}(\mu)\}\, W_{z\rho _{\Lambda}}(d\mu )$, can be expanded as a function of the geometric characteristics of $\Lambda$, like volume, surface measure etc., if $z>0$ is small enough. ($W_{z\rho_{\Lambda}}$ denotes the natural reference measure on the configurations of finitely many loops living completely in $\Lambda$ and having "activity" $z$.) The constants appearing in this expansion are uniquely determined by and explicitly represented in terms of $\phi$. The first one can be interpreted as the pressure and the second as the surface tension.
Keywords: partition function, pressure, Ursell function, geometric expansion
|
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-9-quadratic-functions-and-equations-9-3-solving-quadratic-equations-practice-and-problem-solving-exercises-page-565/43
|
# Chapter 9 - Quadratic Functions and Equations - 9-3 Solving Quadratic Equations - Practice and Problem-Solving Exercises - Page 565: 43
No solution
#### Work Step by Step
$1.2z^2-7=-34$, so $1.2z^2=-27$ and $z^2=-22.5$. Since the square of a real number is never negative, there are no real solutions.
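The rearrangement can be verified directly:

```python
# 1.2 z^2 - 7 = -34  =>  1.2 z^2 = -27  =>  z^2 = -22.5
z_squared = (-34 + 7) / 1.2
# z_squared is negative, and a real number squared is never negative,
# so the equation has no real solution.
```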
|
https://math.stackexchange.com/questions/2385650/find-the-absolute-maximum-and-minimum-given-constraint
|
# Find the absolute maximum and minimum given constraint
Given the function $f\left(x,y\right)=9x^4+16x^3+6x^2y^2$,
defined for all points $(x,y)$ in the area R given by the inequality
$x^2+2x+y^2 \le 0.$
a) Find and classify critical points for $f$ inside of R.
Answer: $\left(-\frac{4}{3},\:0\right)$ Local minimum
I have managed to find this one using the gradient of the function and setting each component of the gradient equal to zero. Then I solve for $x, y$ to get the points and use the second derivative test to determine whether each is a minimum or maximum.
What I don't understand is, when I plot the point to see where it lies on the graph, it seems to just be floating in the middle of the air:
b) Decide the absolute maximum and minimum for $f$ on the area R.
Answer: $16, \frac{-256}{27}$
I am not sure what the question exactly asks me to do. Please keep in mind that sentence is translated from another Language to the English language so it might not be 100% correct in this context so feel free to use your imagination of what it could mean.
I realized I could find the absolute maximum by using the Lagrange multiplier technique with $x^2+2x+y^2 = 0$.
I tried doing the same to find the absolute minimum by setting the constraint to $x^2+2x+y^2 = -1$, but this seems to be wrong.
• For $a)$, note the constraint is the circle: $(x+1)^2+y^2\le 1$; the point $(-4/3,0,-256/27)$ floats in air because the graph of a two-variable function is a surface. – farruhota Aug 7 '17 at 15:56
Let $x=-\frac{4}{3}$ and $y=0$.
Hence, the condition is valid and $f\left(-\frac{4}{3},0\right)=-\frac{256}{27}$.
We'll prove that $-\frac{256}{27}$ it's a minimal value of our function.
Indeed, we need to prove that $$9x^4+16x^3+6x^2y^2\geq-\frac{256}{27},$$ for which it's enough to prove that $$9x^4+16x^3+\frac{256}{27}\geq0,$$ which follows from AM-GM: $$9x^4+16x^3+\frac{256}{27}=3\cdot3x^4+\frac{256}{27}+16x^3\geq$$ $$\geq4\sqrt[4]{(3x^4)^3\cdot\frac{256}{27}}+16x^3=16|x^3|+16x^3\geq0.$$
Now, $f(-2,0)=16$.
We'll prove that it's a maximal value of $f$.
Indeed, we need to prove that $$9x^4+16x^3+6x^2y^2\leq16$$ and since $y^2\leq-x^2-2x$, it's enough to prove that $$9x^4+16x^3+6x^2(-x^2-2x)\leq16$$ or $$(x+2)(3x^3-2x^2+4x-8)\leq0.$$
But the condition gives $x^2+2x\leq-y^2\leq0.$
Thus, $-2\leq x\leq0$ and $$(x+2)(3x^3-2x^2+4x-8)=(x+2)(3x^3-2(x^2-2x+4))\leq0$$ and we are done!
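Both answers can also be sanity-checked numerically by brute force over a grid covering the disk $(x+1)^2+y^2\le 1$ (a check, not a proof):

```python
# f and the constraint region R from the problem statement.
def f(x, y):
    return 9 * x**4 + 16 * x**3 + 6 * x**2 * y**2

n = 400
values = []
for i in range(n + 1):
    for j in range(n + 1):
        x = -2 + 2 * i / n          # the disk fits in [-2, 0] x [-1, 1]
        y = -1 + 2 * j / n
        if x * x + 2 * x + y * y <= 0:   # constraint x^2 + 2x + y^2 <= 0
            values.append(f(x, y))

grid_min, grid_max = min(values), max(values)
# grid_min is close to -256/27 and grid_max is 16, matching the proof.
```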
|
https://panafricansl.com/oovguh/aa2512-un-du-adhesive-remover-ingredients
|
Resistors can be made of several different materials and methods, each manufactured for the express purpose of creating a precise quantity of resistance. Here are a few types of resistors:

1. Carbon composition
2. Carbon film
3. Metal film
4. Thick and thin film
5. Foil resistor

Resistors are among the most robust of all electronic components, with high reliability and a long life. Some can withstand high temperatures, and some can withstand high power.

Why resistors heat up. A resistor is a two-terminal electrical component that implements electrical resistance as a circuit element. The crystal structure of metal atoms in a conductor hinders the flow of electrons through it: the inelastic collisions of conducting electrons with the lattice of metallic ions are the cause of resistance. At any given instant, electrons have a certain probability of scattering inelastically off of the metallic lattice, imparting some of their energy to the lattice as kinetic energy, i.e. heat. Computing the mean free time of electrons moving through the conductor shows that the electrons move past a large number of lattice sites before interacting significantly with the metal cations; the explanation for this comes from quantum mechanics and wave-particle duality. Due to the wave nature of the electron, electrons are able to propagate without scattering inelastically for a longer distance through the lattice than expected, and the scattering probability is much more sensitive to lattice defects than to the density of the lattice.

Derivation of the heat generated in a circuit (Joule's law of heating): consider a battery of voltage V driving a current I through a resistor R. According to Ohm's law, these three quantities are related by the equation V = IR. According to the definition of electric current, the total charge passing through the circuit in time t is Q = It. Since a potential difference V is defined as the work done to transfer a unit charge from a potential V1 to a potential V1 + V, QV joules of work must be done to transfer a charge Q across the same potential. Now compute the total work done by the battery in driving current around the circuit: W = QV = IVt. Because this circuit consists of only one resistor, the entire work done goes into energy lost through power dissipation by this resistor, by conservation of energy. The heat dissipation in a resistor is therefore simply the power dissipated across that resistor, since power represents energy per unit time put into a system: P = IV = I^2 R = V^2 / R. Note that at a fixed voltage the heat is proportional to 1/R, so if we reduce the resistance to half of its value, the heat generated will be doubled. (Real power only occurs when the magnitude of the voltage and current increase and decrease at exactly the same rate; this is called being in-phase, and it will only happen for a resistive load.)

Limiting current to an LED. One of the most common uses for resistors is to limit the current flowing through an electronic component. Some components, such as light-emitting diodes, are very sensitive to current: a few milliamps of current is enough to make an LED glow, while a few hundred milliamps is enough to destroy the LED. Do not connect an LED directly to a battery; if you do, the LED will flash brightly, and then it will be dead forever. Before getting into the construction of the circuit, here's a simple question: why a 120 Ω resistor? The answer is simple: Ohm's law, which can easily tell you what size resistor to use, but you must first know the voltage and current. In this case, the voltage is easy to figure out: you know that two AA batteries provide 3 V. To figure out the current, you just need to decide how much current is acceptable for your circuit; the technical specifications of the LED tell you how much current the LED can handle. To be safe and make sure that you don't destroy the LED with too much current, round the maximum current down to 25 mA. To calculate the desired resistance, divide the voltage (3 V) by the current (0.025 A). The result is 120 Ω.

Reducing resistor heat. Heat dissipation within a resistor is a critical factor during testing. Several approaches help:

- Heat sinks. A simple, practical heat sink is an aluminum or steel plate. When a resistor is attached directly to the heat sink, the heat generated by the resistor conducts to the metal plate, and then into the air as a result of convection.
- Longer leads. A longer lead provides more air between the resistor body and the PCB, which allows for greater cooling of the resistor as well as reducing the radiant heat transmitted to the PCB. The longer leads also provide a longer path for heat to travel down them, reducing the heat conducted to the solder joints and the PCB.
- Multiple resistors. Another approach is to use multiple resistors so they each get a little bit of the heat.
- Derating. Most manufacturers specify the power rating at 70 °C and free airflow conditions. To estimate the resistor temperature, multiply the resistor power by the thermal impedance of the traces to get the temperature rise, then add this rise to the temperature of the power plane.

Dropping voltage with a series resistor. The simplest and cheapest way to reduce a voltage is to use a resistor. For example, to drive a 6 V relay coil from a 12 V battery, size the series resistor as R1 = (Vin - Vload) / Iload, where Vin = 12 V, Vload = 6 V (the relay coil voltage), and Iload is the current through the relay coil. Diode D1 protects other parts from the high-voltage pulse that is generated in the relay coil when the relay is switched off. A series resistor will also reduce the heat output of a heating element: suppose we wanted to get 1500 W from the combination of a 2000 W, 240 V element and a resistor. We'd need the combined resistance to be 240^2/1500 = 38.4 ohms, and the 2000 W element contributes 28.8 of this, so another 9.6 ohms are needed. Any intermediate voltage can also be obtained with a resistor voltage divider circuit.

Tweeter attenuation is the reduction of voltage and power to a tweeter to decrease its volume output. A tweeter resistor network, sometimes called an L-pad, can be used to reduce the volume and power a tweeter receives. Power resistors (larger-size resistors that can handle a lot more power and heat than the small ones commonly used on electronic boards) are actually fairly inexpensive ($5 or so for 2 to 4 in a pack) and are commonly used for custom speaker projects.

Other uses: most resistor spark plugs use a monolithic resistor, generally made of graphite and glass materials, to filter the electrical voltage as it passes through the center electrode; these were developed in the 1960s to suppress some of the spark energy, thus lowering RFI to an acceptable level. Integrating the sense resistor into the package of a DS27XX series fuel gauge reduces board size and saves cost in an application; however, in high-current applications, heat generated by the sense resistor can affect real-time temperature readings made by the device.
|
2021-09-16 11:52:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4063721299171448, "perplexity": 1216.4215147784798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053493.41/warc/CC-MAIN-20210916094919-20210916124919-00093.warc.gz"}
|
https://blender.stackexchange.com/questions/115873/what-does-mark-sharp-actually-do
# What does 'Mark Sharp' *actually* do?
In the course of teaching myself Blender, I've always been led to believe that marking edges 'Sharp' had an effect only if used in concert with the Edge Split modifier. But then I saw a couple of tutorials which used the 'Sharp' Edge Data property without splitting edges.
The objects below are identical. Autosmooth is enabled for both at 60 degrees. Both have had a Bevel modifier applied to all edges but the ones in the curved section, controlled by weight. The only difference between them is that the one on the right has had its sharper edges marked 'Sharp'.
And after the Bevel modifier is applied, there are split vertex-per-face normals on the marked 'Sharp' version, again on the right. The vertex count in both versions is the same.
The Blender manual goes no further than to say the 'Sharp' property works with the Edge Split modifier, yet no Edge Split modifier has been assigned here. Can anyone explain what marking 'Sharp' actually does, under the bonnet, contrary to all the explanations I've heard?
According to the v2.92 manual (emphasis added):
The Sharp mark is used by the split normals and the Edge Split modifier, which are part of the smoothing or customized shading techniques.
So, anything that uses split normals can also use the sharp data. This applies to the Auto Smooth feature of meshes.
Per the manual:
By default in Blender . . . a sharp edge is always defined as an edge being either non-manifold, or having at least one of its faces defined as flat.
Enabling the Auto Smooth setting adds an extra parameter to define a sharp edge, the Angle threshold between two neighbor faces, above which the edge will always be considered as sharp.
So, for example, you could apply Auto Smooth, say at 30°. Then you could mark an edge as sharp, say an edge that was only 20°. That edge would stay sharp thus overriding the Auto Smooth.
https://www.physicsforums.com/threads/general-relativity-electrodynamics-2.130768/
# General Relativity + Electrodynamics #2
1. Sep 4, 2006
### JustinLevy
Some of the comparisons between gravity (in the weak field limit) and electromagnetism were really interesting (https://www.physicsforums.com/showthread.php?t=80710 ). Unfortunately, that got locked due to someone presenting their pet theory. The topic seems too good to end, so let's hope we can stay on topic this time.
Referring all the way back to:
https://www.physicsforums.com/showpost.php?p=667979&postcount=12
Can you give some suggestions how one would go about figuring out how g and H do transform? I don't really understand why it would be different, as shouldn't a "point charge" (rho = m * delta function) be the same in all reference frames? So if the "charge" is invariant, and the equations are already "equivalent" to Maxwell's equations (so they have Lorentz symmetry), what makes these any different at all? I would expect them to transform the same.
EDIT: Or are we supposed to use relativistic mass density instead of invariant mass density? I guess that would make the "charge" not invariant and then g,H wouldn't transform like E,B.
Last edited: Sep 4, 2006
2. Sep 4, 2006
### pervect
Staff Emeritus
g and H can be written in terms of the Christoffel symbols, so they transform like Christoffel symbols do.
From the Harris reference, "Analogy between General Relativity and Electromagnetism for slowly moving particles in weak fields", for time-independent fields one can write g and H in terms of the Christoffel symbols as
$$\Gamma^\mu{}_{0\beta} = \frac{1}{2 c^2} \left| \begin{array}{cccc} 0 &-2g_x & -2g_y & -2g_z\\ -2 g_x & 0 & -H_z & H_y \\ -2 g_y & H_z & 0 & -H_x \\ -2 g_z & -H_y & H_x & 0 \end{array} \right |$$
Christoffel symbols don't transform as tensors. I found an ugly expression for how they do transform on the Wikipedia
http://en.wikipedia.org/wiki/Christoffel_symbol#Change_of_variable
which, for convenience, and because of the mutability of Wiki, I'll quote below (though I haven't verified it personally)
$$\overline{\Gamma^k {}_{ij}} = \frac{\partial x^p}{\partial y^i}\, \frac{\partial x^q}{\partial y^j}\, \Gamma^r {}_{pq}\, \frac{\partial y^k}{\partial x^r} + \frac{\partial y^k}{\partial x^m}\, \frac{\partial^2 x^m}{\partial y^i \partial y^j}$$
So the short version is - they transform in a rather complicated and ugly way :-(. I presume the above non-linear transformation can be "linearized", but I haven't seen or worked out the details.
Last edited: Sep 4, 2006
3. Sep 4, 2006
### JustinLevy
Wow, I don't blame you. I'm not sure I'll work out the details either :)
Thanks for such an in-depth answer. I'll definitely have to read up on that, and then maybe I'll try "linearizing" the transformation in some approximation. Anyway, thanks again, very much appreciated.
4. Sep 4, 2006
### masudr
I'm sure you guys are smarter than me (and I haven't read all of the previous threads) but I thought I'd just add this.
Christoffel symbols don't transform as tensors for a very good reason. The derivative of a (non-zero rank) tensor does not transform like a tensor. However, when one sees the form of a derivative in a new co-ordinate system, there is a specific, ugly term that remains. These are precisely the Christoffel symbols, and so the covariant derivative definition includes the Christoffel symbol so that the covariant derivative transforms nicely like a tensor.
i.e. the punch line is that christoffel symbols exactly counter the extra ugly terms that appear in the transformation of a derivative; hence christoffel symbols are ugly.
5. Sep 4, 2006
Staff Emeritus
And then instead of sticking the ugly terms on the end of every derivation, you define them to be part of the derivative and give it a new name: covariant derivative, because with it, derivatives of tensors are now covariant. And then you learn about connections...
6. Sep 5, 2006
### JustinLevy
Ouch, I think I am misunderstanding something very basic here. I thought the point of Gravitoelectromagnetism was that in a weak field limit (and slow moving masses) we could consider instead the equivalent "Maxwell's equations" for g,H and the equivalent "lorentz force law" for "charges", all in flat space, instead of solving the full Einstein equations. Is that not the intent?
7. Sep 5, 2006
### masudr
Well, a flat space would have the Minkowski metric for g. So a non-Minkowski metric implies a non-flat space.
8. Sep 5, 2006
### JustinLevy
Here g is a vector, not the metric. Wikipedia instead uses E and B (for g,H) to prevent confusion (and make it "look" more like Maxwell's equations). http://en.wikipedia.org/wiki/Gravitomagnetism Maybe we should adopt that convention for this thread.
I hope I didn't completely misunderstand your comment.
Last edited: Sep 5, 2006
9. Sep 5, 2006
### masudr
I'm sorry, the fault is all mine.
10. Sep 5, 2006
### pervect
Staff Emeritus
Will, SR doesn't have any nonlinear differential equations, but OK
Huh? Assuming that the unclear PT is "parity time", this doesn't have anything to do with SR and GR, but could have something to do with quantum mechanics
Unless you framulate the blogistan, with slithy toves and onomotapoeia.
Sorry, this is just gibberish.
11. Sep 5, 2006
### JustinLevy
Okay, let's just ignore that post and move on.
...
So if we agree E,B (g,H) are fields we are supposed to use in flat space / inertial frames, and all inertial frames are equivalent, then these equations must be valid in all inertial frames (where the assumption of slow moving masses holds true at least). Thus the fields must have lorentz symmetry, and do transform just like the electromagnetic E,B.
Correct?
I think the problem in the beginning was that it was mentioned that the linearized approximation which gives us the gravitoelectomagnetic (GEM) field equations came from the Christoffel symbols. So we thought that the fields would transform in a complicated way as Christoffel symbols do instead of like a tensor does.
But look back at the Christoffel symbol transformation equation. If we are approximating space as flat, we just get:
$$\overline{\Gamma^k {}_{ij}} = \frac{\partial x^p}{\partial y^i}\, \frac{\partial x^q}{\partial y^j}\, \Gamma^r {}_{pq}\, \frac{\partial y^k}{\partial x^r} + \frac{\partial y^k}{\partial x^m}\, \frac{\partial^2 x^m}{\partial y^i \partial y^j} \approx \frac{\partial x^p}{\partial y^i}\, \frac{\partial x^q}{\partial y^j}\, \Gamma^r {}_{pq}\, \frac{\partial y^k}{\partial x^r}$$
So in this approximation the Christoffel symbol does transform just like a tensor, and so E,B (g,h) do transform like the electromagnetic fields.
Most of these concepts are new to me, so let me know if my argument is flawed. But it does look to me like my original intuition worked out on this problem.
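The claim that the inhomogeneous term drops out can be checked numerically: for a linear (Lorentz-boost-like) coordinate change, the second partials $\partial^2 x^m / \partial y^i \partial y^j$ in the Christoffel transformation law vanish, while for a nonlinear change they do not. A small Python sketch (added here, not from the thread; the specific maps are illustrative):

```python
def second_partial(f, y, i, j, h=1e-3):
    """Central finite-difference estimate of d^2 f / (dy_i dy_j)."""
    def shift(v, k, d):
        w = list(v)
        w[k] += d
        return w
    return (f(shift(shift(y, i, h), j, h)) - f(shift(shift(y, i, h), j, -h))
            - f(shift(shift(y, i, -h), j, h)) + f(shift(shift(y, i, -h), j, -h))) / (4 * h * h)

beta = 0.3
gamma = 1.0 / (1.0 - beta ** 2) ** 0.5

# Linear map (a boost): x^0 = gamma*(y^0 + beta*y^1). Its second partials vanish,
# so the inhomogeneous term is zero and Christoffel symbols transform tensorially.
boost_x0 = lambda y: gamma * (y[0] + beta * y[1])

# Nonlinear map for contrast: x^0 = y^0 + (y^1)^2. The second partial survives.
curved_x0 = lambda y: y[0] + y[1] ** 2

y = [0.5, 0.7]
print(second_partial(boost_x0, y, 1, 1))   # ~0
print(second_partial(curved_x0, y, 1, 1))  # ~2
```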
https://electronics.stackexchange.com/questions/21243/how-does-the-current-limiting-resistor-for-an-led-affect-current-and-voltage-dro
# How does the current limiting resistor for an LED affect current and voltage drops?
I'm having a bit of trouble understanding currrent limiting resistors in simple LED circuits. I know that I can determine the optimal resistor like so:
$\displaystyle R=\frac{V_{s}-V_{f}}{I_{f}}$
But I'm having a hard time understanding how this one value modifies the voltage and the current to the correct values for the LED. For example, if my calculations for a super bright blue LED (with $V_{f}$ being 3.0-3.4 V and $I_{f}$ being 80 mA, and a voltage source of 5 V) give me 25 ohms (using the lower bound of the forward voltage), that's fine. So the current throughout should be 80 mA and the voltage drop for the resistor and LED should be 2 and 3 volts, respectively.
But what if I used a 100 ohm resistor instead? Or any other value—how would I calculate the voltage drops and current? Would I assume one of them stays the same?
The LED forward voltage drop will remain (roughly) the same, but the current can change, so the calculation becomes (same equation solving for I):
$$I_{LED} = {(V_s - V_f)\over{R}}$$
So for a 3V ${V_f}$ and a 5V supply, the $100\Omega$ resistor would give ${(5V - 3V)\over{100 \Omega }} = 20 mA$.
So if you know what current you want, just plug the values in, e.g. for 10mA:
$$R = {(5V - 3V)\over{0.01 A}} = {200 \Omega}$$
Basically, the fact that the supply and the LED forward voltage can be relied upon to be pretty static, means that whatever value resistor you put in will also have a static voltage across it (e.g ~2V in this case), so it just leaves you to find out that voltage and select a resistance value according to the current you want.
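The arithmetic above is simple enough to wrap in a pair of helper functions (a sketch, assuming the forward drop stays fixed at $V_f$ once the LED is on, as the answer describes):

```python
def series_resistor(v_supply, v_f, i_f):
    """Resistor (ohms) that sets the LED current to i_f amps: R = (Vs - Vf) / If."""
    return (v_supply - v_f) / i_f

def led_current(v_supply, v_f, r):
    """Current (amps) through the LED for a given series resistor r (ohms)."""
    return (v_supply - v_f) / r

print(series_resistor(5.0, 3.0, 0.080))  # ~25 ohms, as in the question
print(led_current(5.0, 3.0, 100.0))      # ~0.02 A = 20 mA, as in the answer
```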
Below is the V-I curve of a diode (from the wiki LED page), notice the current sharply rises (exponentially) but voltage stays roughly the same when the "on" voltage is reached.
For more accurate control of the current you would use a constant-current source, which is what most LED driver ICs provide.
• Ok this makes sense, when I connected a 10k resistor to experiment the voltage drops went from 2-3 to 2.5-2.5 (and the LED was very dim), so the high resistance must have brought the voltage drop of the LED left near the beginning of the curve. But otherwise I can assume the voltage to be static, or solve graphically if possible. Thanks I think I understand now. – mk12 Oct 23 '11 at 22:05
|
2019-07-16 22:40:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5467244982719421, "perplexity": 584.4068662212271}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524972.66/warc/CC-MAIN-20190716221441-20190717003441-00148.warc.gz"}
|
https://cstheory.stackexchange.com/questions/31452/a-few-questions-about-ksvd-algorithm-dictionary-learning-in-a-paper
# A few questions about KSVD algorithm (dictionary learning) in a paper [closed]
To learn more about dictionary learning, I am currently trying to understand the concept in detail and to do so, I've found the following paper quite informative:
I have a few questions:
• does the jth column of D (the overcomplete dictionary) correspond to the jth row of X or its transpose? (where $Y=DX$; Y is our data/signals, D is the dictionary, and X is the sparse representation of the signals) So do we work on the transpose of X and not the matrix X itself? Why is the notation $x_T ^K$ used and not $x_K ^T$?
• somewhere in the middle, they suggest to initialize $w_k$ which defines as follows: $$w_k=\{i|1 \leq k \leq K: w_k(i) \neq 0\}$$ is defining this variable necessary in spite of the fact that we will define another variable afterwards. $\Omega _k$ is a matrix of size $N$ (number of signals (Y)) $\times |w_k|$ where it is one for $w_k(i)$ and zero otherwise. why do we need to define $w_k$?
## closed as off-topic by D.W., Lev Reyzin♦ Aug 22 '16 at 18:57
This question appears to be off-topic. The users who voted to close gave this specific reason:
• "Our site policy prohibits simultaneous crossposting: it duplicates effort and fractures discussion. Crossposting is permitted after a week has passed without a satisfying answer elsewhere. When crossposting please summarize the relevant discussions from other sites in your question and link between the copies in both directions." – D.W., Lev Reyzin
http://mqasim.me/?m=201708
## Mapping to a ‘t'(map)
(This article was first published on HighlandR, and kindly contributed to R-bloggers)
tmap: More maps of the Highlands?
Yep, same as last time, but no need to install dev versions of anything, we can get awesome maps courtesy of the tmap package.
Get the shapefile from the last post
``````
library(tmap)
library(tmaptools)
library(viridis)
highland <- scot[scot$LAName == "Highland", ]
#replicate plot from previous blog post:
quint <- tm_shape(highland) +
tm_fill(col = "Quintile",
palette = viridis(n=5, direction = -1,option = "C"),
fill.title = "Quintile",
title = "SIMD 2016 - Highland Council Area by Quintile")
quint # plot
ttm() #switch between static and interactive - this will use interactive
quint # or use last_map()
# in R Studio you will find leaflet map in your Viewer tab
``````
The results:
One really nice thing is that because the polygons don’t have outlines, the DataZones that are really densely packed still render nicely – so no ‘missing’ areas.
A static image of the leaflet map:
Here I take the rank for all the Highland data zones, and the overall SIMD rank, and create a small multiple
``````small_mult <- tm_shape(highland) +
tm_fill(col = c("IncRank","EmpRank","HlthRank","EduRank",
"GAccRank","CrimeRank","HouseRank","Rank"),
palette = viridis(n=5, direction = -1,option = "C"),
title=c("Income Rank", "Employment Rank","Health Rank","Education Rank",
"General Access Rank","Crime Rank", "Housing Rank","Overall Rank"))
small_mult
``````
Let’s take a look at Scotland as a whole, as I assume everyone’s pretty bored of the Highlands by now:
``````#try a map of scotland
scotplot <- tm_shape(scot) +
tm_fill(col = "Rank",
palette = viridis(n=5, direction = -1,option = "C"),
fill.title = "Overall Rank",
title = "Overall-Rank")
scotplot # bit of a monster
``````
With the interactive plot, we can really appreciate the density of these datazones in the Central belt.
Here, I’ve zoomed in a bit on the region around Glasgow, and then zoomed in some more:
I couldn’t figure out how to host the leaflet map within the page (Jekyll / Github / Leaflet experts please feel free to educate me on that ) but, given the size of the file, I very much doubt I could have uploaded it to Github anyway.
Thanks to Roger Bivand (@RogerBivand) for getting in touch and pointing me towards the tmap package! It’s really good fun and an easy way to get interactive maps up and running.
R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more…
Source:: R News
## Abstract
The proof-calculation ping-pong is the process of iteratively improving a statistical analysis by comparing results from two independent analysis approaches until agreement. We use the `daff` package to simplify the comparison of the two results.
## Introduction
If you are a statistician working in climate science, data driven journalism, official statistics, public health, economics or any related field working with real data, chances are that you have to perform analyses, where you know there is zero tolerance for errors. The easiest way to ensure the correctness of such an analysis is to check your results over and over again (the iterated 2-eye principle). A better approach is to pair-program the analysis by either having a colleague read through your code (the 4-eye principle) or have a skilled colleague completely redo your analysis from scratch using her favorite toolchain (the 2-mindset principle). Structured software development in the form of, e.g. version control and unit tests, provides valuable inspiration on how to ensure the quality of your code. However, when it comes to pair-programming analyses, surprisingly many steps remain manual. The `daff` package provides the equivalent of a `diff` statement on data frames and we shall illustrate its use by automatizing the comparison step of the statistical proof-calculation ping-pong process.
## Case Story
Ada and Bob have to proof-calculate their country’s quadrennial official statistics on the total income and number of employed people in fitness centers. A sample of fitness centers is asked to fill out a questionnaire containing their yearly sales volume, staff costs and number of employees. For this post we will ignore the complex survey part of the problem and just pretend that our sample corresponds to the population (complete inventory count). After the questionnaire phase, the following data are available to Ada and Bob.
| Id | Version | Region | Sales Volume | Staff Costs | People |
|----|---------|--------|--------------|-------------|--------|
| 01 | 1 | A | 23000 | 10003 | 5 |
| 02 | 1 | B | 12200 | 7200 | 1 |
| 03 | 1 | NA | 19500 | 7609 | 2 |
| 04 | 1 | A | 17500 | 13000 | 3 |
| 05 | 1 | B | 119500 | 90000 | NA |
| 05 | 2 | B | 119500 | 95691 | 19 |
| 06 | 1 | B | NA | 19990 | 6 |
| 07 | 1 | A | 19123 | 20100 | 8 |
| 08 | 1 | D | 25000 | 100 | NA |
| 09 | 1 | D | 45860 | 32555 | 9 |
| 10 | 1 | E | 33020 | 25010 | 7 |
Here `Id` denotes the unique identifier of the sampled fitness center, `Version` indicates the version of a center’s questionnaire and `Region` denotes the region in which the center is located. In case a center’s questionnaire lacks information or has inconsistent information, the protocol is to get back to the center and have it send a revised questionnaire. All Ada and Bob now need to do is aggregate the data per region in order to obtain region stratified estimates of:
• the overall number of fitness centres
• total sales volume
• total staff cost
• total number of people employed in fitness centres
However, the analysis protocol instructs that only fitness centers with an annual sales volume larger than or equal to €17,500 are to be included in the analysis. Furthermore, if missing values occur, they are to be ignored in the summations. Since a lot of muscle will be angered in case of errors, Ada and Bob agree on following the 2-mindset procedure and meet after an hour to discuss their results. Here is what each of them came up with.
### Ada
Ada loves the tidyverse and in particular `dplyr`. This is her solution:
``````ada <- fitness %>% na.omit() %>% group_by(Region,Id) %>% top_n(1,Version) %>%
group_by(Region) %>%
filter(`Sales Volume` >= 17500) %>%
summarise(`NoOfUnits`=n(),
`Sales Volume`=sum(`Sales Volume`),
`Staff Costs`=sum(`Staff Costs`),
People=sum(People))
``````## # A tibble: 4 x 5
## Region NoOfUnits `Sales Volume` `Staff Costs` People
##
## 1 A 3 59623 43103 16
## 2 B 1 119500 95691 19
## 3 D 1 45860 32555 9
## 4 E 1 33020 25010 7``````
### Bob
Bob has a dark past as database developer and, hence, only recently experienced the joys of R. He therefore chooses a no-SQL-server-necessary `SQLite` within R approach:
``````library(RSQLite)
db <- dbConnect(SQLite(), dbname = file.path(filePath, "Test.sqlite"))
##Move fitness data into the ad-hoc DB
dbWriteTable(conn = db, name = "FITNESS", fitness, overwrite=TRUE, row.names=FALSE)
##Query using SQL
bob <- dbGetQuery(db, "
SELECT Region,
COUNT(*) As NoOfUnits,
SUM([Sales Volume]) As [Sales Volume],
SUM([Staff Costs]) AS [Staff Costs],
SUM(People) AS People
FROM FITNESS
WHERE [Sales Volume] > 17500 GROUP BY Region
")
bob``````
``````## Region NoOfUnits Sales Volume Staff Costs People
## 1 1 19500 7609 2
## 2 A 2 42123 30103 13
## 3 B 2 239000 185691 19
## 4 D 2 70860 32655 9
## 5 E 1 33020 25010 7``````
### The Ping-Pong phase
After Ada and Bob each have a result, they compare their resulting `data.frames`s using the `daff` package, which was recently presented by @edwindjonge at the useR in Brussels.
``````library(daff)
diff <- diff_data(ada, bob)
diff$get_data()``````
``````## @@ Region NoOfUnits Sales Volume Staff Costs People
## 1 +++ 1 19500 7609 2
## 2 -> A 3->2 59623->42123 43103->30103 16->13
## 3 -> B 1->2 119500->239000 95691->185691 19
## 4 -> D 1->2 45860->70860 32555->32655 9
## 5 E 1 33020 25010 7``````
After Ada’s and Bob’s serve, the two realize that their results just agree for one region (‘E’). Note: Currently, `daff` has the semi-annoying feature of not being able to show all the diffs when printing, but just `n` lines of the head and tail. As a consequence, for the purpose of this post, we overwrite the printing function such that it always shows all rows with differences.
``````print.data_diff <- function(x) x$get_data() %>% filter(`@@` != "")
print(diff)``````
``````## @@ Region NoOfUnits Sales Volume Staff Costs People
## 1 +++ 1 19500 7609 2
## 2 -> A 3->2 59623->42123 43103->30103 16->13
## 3 -> B 1->2 119500->239000 95691->185691 19
## 4 -> D 1->2 45860->70860 32555->32655 9``````
The two decide to first focus on agreeing on the number of units per region.
``````## @@ Region NoOfUnits
## 1 +++ 1
## 2 -> A 3->2
## 3 -> B 1->2
## 4 -> D 1->2``````
One obvious reason for the discrepancies appears to be that Bob has results for an extra region. Therefore, Bob takes another look at his management of missing values and decides to improve his code by:
#### Pong Bob
``````bob2 <- dbGetQuery(db, "
SELECT Region,
COUNT(*) As NoOfUnits,
SUM([Sales Volume]) As [Sales Volume],
SUM([Staff Costs]) AS [Staff Costs],
SUM(People) AS People
FROM FITNESS
WHERE ([Sales Volume] > 17500 AND REGION IS NOT NULL)
GROUP BY Region
")
diff2 <- diff_data(ada, bob2)
diff2 %>% print()``````
``````## @@ Region NoOfUnits Sales Volume Staff Costs People
## 1 -> A 3->2 59623->42123 43103->30103 16->13
## 2 -> B 1->2 119500->239000 95691->185691 19
## 3 -> D 1->2 45860->70860 32555->32655 9``````
#### Ping Bob
Better. Now the `NA` region is gone, but still quite some differences remain. Note: You may at this point want to stop reading and try yourself to fix the analysis – the data and code are available from the github repository.
#### Pong Bob
Now Bob notices that he forgot to handle the duplicate records and massages the SQL query as follows:
``````bob3 <- dbGetQuery(db, "
SELECT Region,
COUNT(*) As NoOfUnits,
SUM([Sales Volume]) As [Sales Volume],
SUM([Staff Costs]) AS [Staff Costs],
SUM(People) AS People FROM
(SELECT Id, MAX(Version), Region, [Sales Volume], [Staff Costs], People FROM FITNESS GROUP BY Id)
WHERE ([Sales Volume] >= 17500 AND REGION IS NOT NULL)
GROUP BY Region
")
diff3 <- diff_data(ada, bob3)
diff3 %>% print()``````
``````## @@ Region NoOfUnits Sales Volume Staff Costs People
## 1 ... ... ... ... ... ...
## 2 -> D 1->2 45860->70860 32555->32655 9``````
Comparing with Ada, Bob is sort of envious that she was able to just use `dplyr`'s `group_by` and `top_n` functions. However, `daff` shows that there still is one difference left. By looking more carefully at Ada's code it becomes clear that she accidentally leaves out one unit in region D. The reason is the too liberal use of `na.omit`, which also removes the one entry with an `NA` in one of the less important columns. They discuss whether one really wants to include partial records at all, because summation in the different columns is then over a different number of units. After consulting the standard operating procedure (SOP) for these kinds of surveys they decide to include the observation where possible. Here is Ada's modified code:
``````ada2 <- fitness %>% filter(!is.na(Region)) %>% group_by(Region,Id) %>% top_n(1,Version) %>%
group_by(Region) %>%
filter(`Sales Volume` >= 17500) %>%
summarise(`NoOfUnits`=n(),
`Sales Volume`=sum(`Sales Volume`),
`Staff Costs`=sum(`Staff Costs`),
People=sum(People))
``````## @@ Region NoOfUnits ... Staff Costs People
## 1 ... ... ... ... ... ...
## 2 -> D 2 ... 32655 NA->9``````
Oops, forgot to take care of the `NA` in the summation:
``````ada3 <- fitness %>% filter(!is.na(Region)) %>% group_by(Region,Id) %>% top_n(1,Version) %>%
group_by(Region) %>%
filter(`Sales Volume` >= 17500) %>%
summarise(`NoOfUnits`=n(),
`Sales Volume`=sum(`Sales Volume`),
`Staff Costs`=sum(`Staff Costs`),
People=sum(People,na.rm=TRUE))
diff_final <- diff_data(ada3, bob3)
length(diff_final$get_data()) == 0``````
``## [1] TRUE``
## Conclusion
Finally, their results agree and they move on to production and their results are published in a nice report.
Question 1: Do you agree with their results?
``ada3``
``````## # A tibble: 4 x 5
## Region NoOfUnits `Sales Volume` `Staff Costs` People
##
## 1 A 3 59623 43103 16
## 2 B 1 119500 95691 19
## 3 D 2 70860 32655 9
## 4 E 1 33020 25010 7``````
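To answer Question 1 independently, here is a quick re-implementation of the agreed rules (keep the latest questionnaire version per center, drop unknown regions, keep sales volume ≥ 17500, ignore `NA` in the sums) in plain Python rather than R — a third mindset, so to speak; the data are copied from the table above:

```python
# (Id, Version, Region, Sales Volume, Staff Costs, People); None marks NA
rows = [
    ("01", 1, "A", 23000, 10003, 5), ("02", 1, "B", 12200, 7200, 1),
    ("03", 1, None, 19500, 7609, 2), ("04", 1, "A", 17500, 13000, 3),
    ("05", 1, "B", 119500, 90000, None), ("05", 2, "B", 119500, 95691, 19),
    ("06", 1, "B", None, 19990, 6), ("07", 1, "A", 19123, 20100, 8),
    ("08", 1, "D", 25000, 100, None), ("09", 1, "D", 45860, 32555, 9),
    ("10", 1, "E", 33020, 25010, 7),
]

latest = {}
for r in rows:  # keep only the highest Version per Id
    if r[0] not in latest or r[1] > latest[r[0]][1]:
        latest[r[0]] = r

result = {}
for _, _, reg, sales, staff, people in latest.values():
    if reg is None or sales is None or sales < 17500:
        continue  # drop unknown region and small/missing sales volume
    n, s, c, p = result.get(reg, (0, 0, 0, 0))
    result[reg] = (n + 1, s + sales, c + staff, p + (people or 0))

print(sorted(result.items()))
```

The output reproduces Ada's and Bob's final table, region for region.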
As shown, the ping-pong game is quite manual and particularly annoying if at some point someone steps into the office with a statement like: Btw, I found some extra questionnaires, which need to be added to the analysis asap. However, the two now-aligned analysis scripts and the corresponding daff-overlay could be put into a script that is triggered every time the data change. In case new discrepancies emerge as `length(diff$get_data()) > 0`, the two could then be informed automatically.
Question 2: Are you aware of any other good ways and tools to structure and automate such a process? If so, please share your experiences as a Disqus comment below.
Source:: R News
## Multiplicative Congruential Generators in R
(This article was first published on R – Aaron Schlegel, and kindly contributed to R-bloggers)
Part 2 of 2 in the series Random Number Generation
Multiplicative congruential generators, also known as Lehmer random number generators, are a type of linear congruential generator for generating pseudorandom numbers in $[0, 1)$. The multiplicative congruential generator, often abbreviated as MLCG or MCG, is defined as a recurrence relation similar to the LCG with $c = 0$:
$X_{i+1} = a X_i \bmod m$
Unlike the LCG, the parameters $a$ and $m$ for multiplicative congruential generators are more restricted, and the initial seed $X_0$ must be relatively prime to the modulus $m$ (the greatest common divisor of $X_0$ and $m$ is $1$). Parameters in common use are $m = 2^{31} - 1 = 2{,}147{,}483{,}647$ and $a = 7^5 = 16{,}807$. However, in a correspondence from the Communications of the ACM, Park, Miller and Stockmeyer changed the value of the parameter $a$, stating:
The minimal standard Lehmer generator we advocated had a modulus of m = 2^31 – 1 and a multiplier of a = 16807. Relative to this particular choice of multiplier, we wrote “… if this paper were to be written again in a few years it is quite possible that we would advocate a different multiplier ….” We are now prepared to do so. That is, we now advocate a = 48271 and, indeed, have done so “officially” since July 1990. This new advocacy is consistent with the discussion on page 1198 of [10]. There is nothing wrong with 16807; we now believe, however, that 48271 is a little better (with q = 44488, r = 3399).
###### Multiplicative Congruential Generators with Schrage’s Method
When using a large prime modulus $m$ such as $2^{31} - 1$, the multiplicative congruential generator can overflow. Schrage's method was invented to overcome the possibility of overflow. It restates the modulus $m$ as a decomposition $m = aq + r$, where $q = \lfloor m / a \rfloor$ and $r = m \bmod a$, and it requires that $r < q$. We can check that the parameters in use satisfy this condition:

```r
a <- 48271
m <- 2^31 - 1
(m %% a) < (m %/% a)
## [1] TRUE
```

With this decomposition, the product can be computed without overflow as:

$ax \bmod m = \begin{cases} a(x \bmod q) - r\lfloor x/q \rfloor & \text{if } a(x \bmod q) - r\lfloor x/q \rfloor \geq 0 \\ a(x \bmod q) - r\lfloor x/q \rfloor + m & \text{otherwise} \end{cases}$

###### Multiplicative Congruential Generator in R

We can implement a Lehmer random number generator in R using the parameters mentioned earlier.
```r
lehmer.rng
```
```r
# Print the first 10 randomly generated numbers
lehmer.rng()
## [1] 0.68635675 0.12657390 0.84869106 0.16614698 0.08108171 0.89533896
## [7] 0.90708773 0.03195725 0.60847522 0.70736551
```
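A minimal generator along these lines, combining the recurrence with Schrage's decomposition, can be sketched as follows (in Python for illustration; the multiplier and modulus follow the post, the seed value is arbitrary):

```python
# Lehmer / multiplicative congruential generator sketch, using Schrage's
# decomposition m = a*q + r to avoid overflow in fixed-width arithmetic.
A = 48271
M = 2**31 - 1
Q, R = M // A, M % A  # q = 44488, r = 3399

def schrage_mul(x):
    """Compute (A * x) % M without forming the full product A * x."""
    t = A * (x % Q) - R * (x // Q)
    return t if t >= 0 else t + M

def lehmer(n, seed=42):
    """Return n pseudorandom numbers in [0, 1)."""
    out = []
    x = seed
    for _ in range(n):
        x = schrage_mul(x)
        out.append(x / M)
    return out

print(lehmer(5))
```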
Plotting our multiplicative congruential generator in three dimensions allows us to visualize the apparent 'randomness' of the generator. As before, we generate three random vectors $x, y, z$ with our Lehmer RNG function and plot the points. The `plot3D` package is used to create the scatterplot and the `animation` package is used to animate each scatterplot as the length of the random vectors, $n$, increases.
```r
library(plot3D)
library(animation)
```
```r
n
```

The generator appears to be generating suitably random numbers, demonstrated by the increasing swarm of points as $n$ increases.

References

Anne Gille-Genest (March 1, 2012). Implementation of the Pseudo-Random Number Generators and the Low Discrepancy Sequences.

Saucier, R. (2000). Computer Generation of Statistical Distributions (1st ed.). Aberdeen, MD. Army Research Lab.

Stephen K. Park; Keith W. Miller; Paul K. Stockmeyer (1993). "Technical Correspondence". Communications of the ACM. 36 (7): 105–110.

The post Multiplicative Congruential Generators in R appeared first on Aaron Schlegel.

R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more...
Source:: R News
## Probability functions intermediate
(This article was first published on R-exercises, and kindly contributed to R-bloggers)
In this set of exercises, we are going to explore some of the probability functions in R through practical applications. Basic probability knowledge is required. In case you are not familiar with the function `apply`, check the R documentation.
Note: We are going to use random number functions and random process functions in R, such as `runif`. A problem with these functions is that every time you run them, you will obtain a different value. To make your results reproducible you can specify the value of the seed using `set.seed(any number)` before calling a random function. (If you are not familiar with seeds, think of them as the tracking number of your random number process.) For this set of exercises, we will use `set.seed(1)`. Don't forget to specify it before every exercise that includes random numbers.
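The effect of a fixed seed is the same in any language; a quick sketch in Python's `random` module:

```python
import random

# Fixing the seed makes a "random" sequence reproducible:
random.seed(1)
first = [random.random() for _ in range(3)]

random.seed(1)  # reset to the same seed ...
second = [random.random() for _ in range(3)]

print(first == second)  # the two runs are identical
```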
Answers to the exercises are available here. If you obtained a different (correct) answer than those listed on the solutions page, please feel free to post your answer as a comment on that page.
Exercise 1
Generating dice rolls: set your seed to 1 and generate 30 random numbers using `runif`. Save them in an object called `random_numbers`. Then use the `ceiling` function to round the values up. These values represent dice rolls.
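The mapping from uniform draws to die faces can be sketched as follows (Python for illustration; note that the draws are scaled to (0, 6] before applying the ceiling, which is an assumption about the intended `runif` call):

```python
import math
import random

random.seed(1)
# Uniform draws scaled to (0, 6]; the ceiling then maps them to faces 1..6.
random_numbers = [random.random() * 6 for _ in range(30)]
rolls = [math.ceil(u) for u in random_numbers]
print(rolls[:5])
```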
Exercise 2
Simulate one die roll using the function `rmultinom`. Make sure `n = 1` is inside the function, and save it in an object called `die_result`. The matrix `die_result` is a collection of one 1 and five 0s, with the 1 indicating which value was obtained during the process. Use the function `which` to create an output that shows only the value obtained after the die is rolled.
Exercise 3
Using `rmultinom`, simulate 30 dice rolls. Save them in a variable called `dice_result` and use `apply` to transform the matrix into a vector with the result of each die.
Exercise 4
Some gambling games use 2 dice, and after they are rolled the two values are summed. Simulate throwing 2 dice 30 times and record the sum of each pair: use `rmultinom` to simulate the throws, then use the function `apply` to record the sum of the values of each experiment.
Learn more about probability functions in the online course Statistics with R – Advanced Level. In this course you will learn how to
• work with different binomial and logistic regression techniques,
• know how to compare regression models and choose the right fit,
• and much more.
Exercise 5
Simulate normal distribution values. Imagine a population in which the average height is 1.70 m with a standard deviation of 0.1. Using `rnorm`, simulate the height of 100 people and save it in an object called `heights`.
To get an idea of the values of heights, use the function `summary`.
Exercise 6
90% of the population is shorter than ____________?
Exercise 7
What percentage of the population is taller than 1.60 m?
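Exercises 6 and 7 have closed-form population answers that the `rnorm` simulation should approximate; a quick check of those theoretical values, sketched with Python's `statistics.NormalDist`:

```python
from statistics import NormalDist

heights = NormalDist(mu=1.70, sigma=0.1)

# Exercise 6: the height below which 90% of the population falls
p90 = heights.inv_cdf(0.90)

# Exercise 7: the share of the population taller than 1.60 m
taller_than_160 = 1 - heights.cdf(1.60)

print(round(p90, 3), round(taller_than_160, 3))
```

The simulated `heights` vector will give slightly different values, since it is a finite sample.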
Exercise 8
Run the following line of code before this exercise. It will load a library required for the exercise.
```r
if (!'MASS' %in% installed.packages()) install.packages('MASS')
library(MASS)
```
Simulate 1000 people with height and weight using the function `mvrnorm` with `mu = c(1.70, 60)` and `Sigma = matrix(c(.1, 3.1, 3.1, 100), nrow = 2)`.
Exercise 9
How many people from the simulated population are taller than 1.70 m and heavier than 60 kg?
Exercise 10
How many people from the simulated population are taller than 1.75 m and lighter than 60 kg?
Source:: R News
## DEADLINE EXTENDED: Last call for Boston EARL abstracts
(This article was first published on Mango Solutions, and kindly contributed to R-bloggers)
Are you solving problems and innovating with R?
Are you working with R in a commercial setting?
Do you enjoy sharing your knowledge?
If you said yes to any of the above, we want your abstract!
Share your commercial R stories with your peers at EARL Boston this November.
EARL isn’t about knowing the most or being the best in your field – it’s about taking what you’ve learnt and sharing it with others, so they can learn from your wins (and sometimes your failures, because we all have them!).
As long as your proposed presentation is focused on the commercial use of R, any topic from any industry is welcome!
The abstract submission deadline has been extended to Sunday 3 September.
Join David Robinson, Mara Averick and Tareef Kawaf on 1-3 November 2017 at The Charles Hotel in Cambridge.
See you in Boston!
Source:: R News
## Text featurization with the Microsoft ML package
(This article was first published on Revolutions, and kindly contributed to R-bloggers)
Last week I wrote about how you can use the MicrosoftML package in Microsoft R to featurize images: reduce an image to a vector of 4096 numbers that quantify the essential characteristics of the image, according to an AI vision model. You can perform a similar featurization process with text as well, but in this case you have a lot more control of the features used to represent the text.
Tsuyoshi Matsuzaki demonstrates the process in a post at the MSDN Blog. The post explores the Multi-Domain Sentiment Dataset, a collection of product reviews from Amazon.com. The dataset includes reviews from 975,194 products on Amazon.com from a variety of domains, and for each product there is a text review and a star rating of 1, 2, 4, or 5. (There are no 3-star rated reviews in the data set.) Here’s one example, selected at random:
What a useful reference! I bought this book hoping to brush up on my French after a few years of absence, and found it to be indispensable. It’s great for quickly looking up grammatical rules and structures as well as vocabulary-building using the helpful vocabulary lists throughout the book. My personal favorite feature of this text is Part V, Idiomatic Usage. This section contains extensive lists of idioms, grouped by their root nouns or verbs. Memorizing one or two of these a day will do wonders for your confidence in French. This book is highly recommended either as a standalone text, or, preferably, as a supplement to a more traditional textbook. In either case, it will serve you well in your continuing education in the French language.
The review contains many positive terms ("useful", "indispensable", "highly recommended"), and in fact is associated with a 5-star rating for this book. The goal of the blog post was to find the terms most associated with positive (or negative) reviews. One way to do this is to use the `featurizeText` function in the MicrosoftML package included with Microsoft R Client and Microsoft R Server. Among other things, this function can be used to extract ngrams (sequences of one, two, or more words) from arbitrary text. In this example, we extract all of the one- and two-word sequences represented at least 500 times in the reviews. Then, to assess which have the most impact on ratings, we use their presence or absence as predictors in a linear model:
```r
transformRule = list(
featurizeText(
vars = c(Features = "REVIEW_TEXT"),
# skipLength=1 : "computer" and "compuuter" is the same
wordFeatureExtractor = ngramCount(
weighting = "tfidf",
ngramLength = 2,
skipLength = 1),
language = "English"
),
selectFeatures(
vars = c("Features"),
mode = minCount(500)
)
)
# train using transforms!
model <- rxFastLinear(
RATING ~ Features,
data = train,
mlTransforms = transformRule,
type = "regression" # not binary (numeric regression)
)
```
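Conceptually, extraction with `ngramLength = 2` and `skipLength = 1` collects word pairs that may skip one intervening token; here is an illustrative sketch in plain Python (not MicrosoftML's implementation, which additionally applies tf-idf weighting and count thresholds):

```python
def ngrams_with_skips(tokens, ngram_length=2, skip_length=1):
    """Collect unigrams plus word pairs allowing up to `skip_length`
    skipped tokens between the two words (a 'skip-gram')."""
    feats = set()
    for i, tok in enumerate(tokens):
        feats.add((tok,))  # unigrams
        for gap in range(skip_length + 1):
            j = i + 1 + gap
            if ngram_length >= 2 and j < len(tokens):
                feats.add((tok, tokens[j]))
    return feats

feats = ngrams_with_skips("this book is highly recommended".split())
print(("highly", "recommended") in feats)  # adjacent pair
print(("book", "highly") in feats)         # pair with one token skipped
```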
We can then look at the coefficients associated with these features (presence of n-grams) to assess their impact on the overall rating. By this standard, the top 10 words or word-pairs contributing to a negative rating are:
```
boring -7.647399
waste -7.537471
not -6.355953
nothing -6.149342
money -5.386262
no -5.210301
worst -5.051558
poorly -4.962763
disappointed -4.890280
```
Similarly, the top 10 words or word-pairs associated with a positive rating are:
```
will 3.073104
the|best 3.265797
love 3.290348
life 3.562267
wonderful 3.652950
,|and 3.762862
you 3.889580
excellent 3.902497
my 4.454115
great 4.552569
```
Another option is simply to look at the sentiment score for each review, which can be extracted using the `getSentiment` function.
```r
sentimentScores <- rxFeaturize(data = data,
mlTransforms = getSentiment(vars =
list(SentimentScore = "REVIEW_TEXT")))
```
As we expect, a negative sentiment (in the 0-0.5 range) is associated with 1- and 2-star reviews, while a positive sentiment (0.5-1.0) is associated with the 4- and 5-star reviews.
You can find more details on this analysis, including the Microsoft R code, at the link below.
Microsoft Technologies Blog for Enterprise Developers: Analyze your text in R (MicrosoftML)
Source:: R News
## Why to use the replyr R package
Recently I noticed that the `R` package `sparklyr` had the following odd behavior:
``````r
suppressPackageStartupMessages(library("dplyr"))
library("sparklyr")
packageVersion("dplyr")
#> [1] '0.7.2.9000'
packageVersion("sparklyr")
#> [1] '0.6.2'
packageVersion("dbplyr")
#> [1] '1.1.0.9000'
sc <- spark_connect(master = "local")
#> * Using Spark: 2.1.0
d <- copy_to(sc, data.frame(x = 1:2))
dim(d)
#> [1] NA
ncol(d)
#> [1] NA
nrow(d)
#> [1] NA
``````
This means user code or user analyses that depend on one of `dim()`, `ncol()` or `nrow()` may break. `nrow()` used to return something other than `NA`, so older work may not be reproducible.
In fact: where I actually noticed this was deep in debugging a client project (not in a trivial example, such as above).
Tron: fights for the users.
In my opinion: this choice is going to be a great source of surprises, unexpected behavior, and bugs going forward for both `sparklyr` and `dbplyr` users.
A little digging gets us to this:
The above might make sense if `tibble` and `dbplyr` were the only users of `dim()`, `ncol()` or `nrow()`.
Frankly if I call `nrow()` I expect to learn the number of rows in a table.
The suggestion is for all user code to adapt to use `sdf_dim()`, `sdf_ncol()` and `sdf_nrow()` (instead of `tibble` adapting). Even if practical (there are already a lot of existing `sparklyr` analyses), this prohibits the writing of generic `dplyr` code that works the same over local data, databases, and `Spark` (by generic code, we mean code that does not check the data source type and adapt). The situation is possibly even worse for non-`sparklyr` `dbplyr` users (i.e., databases such as `PostgreSQL`), as I don’t see any obvious convenient “no please really calculate the number of rows for me” (other than “`d %>% tally %>% pull`“).
I admit, calling `nrow()` against an arbitrary query can be expensive. However, I am usually calling `nrow()` on physical tables (not on arbitrary `dplyr` queries or pipelines). Physical tables often deliberately carry explicit meta-data to make it possible for `nrow()` to be a cheap operation.
Allowing the user to write reliable generic code that works against many `dplyr` data sources is the purpose of our `replyr` package. Being able to use the same code many places increases the value of the code (without user facing complexity) and allows one to rehearse procedures in-memory before trying databases or `Spark`. Below are the functions `replyr` supplies for examining the size of tables:
``````r
library("replyr")
packageVersion("replyr")
#> [1] '0.5.4'
replyr_hasrows(d)
#> [1] TRUE
replyr_dim(d)
#> [1] 2 1
replyr_ncol(d)
#> [1] 1
replyr_nrow(d)
#> [1] 2
spark_disconnect(sc)
``````
Note: the above is only working properly in the development version of `replyr`, as I only found out about the issue and made the fix recently.
`replyr_hasrows()` was added as I found in many projects the primary use of `nrow()` was to determine if there was any data in a table. The idea is: user code uses the `replyr` functions, and the `replyr` functions deal with the complexities of dealing with different data sources. This also gives us a central place to collect patches and fixes as we run into future problems. `replyr` accretes functionality as our group runs into different use cases (and we try to put use cases first, prior to other design considerations).
The point of `replyr` is to provide re-usable work arounds of design choices far away from our influence.
Source:: R News
## Community Call – rOpenSci Software Review and Onboarding
(This article was first published on rOpenSci Blog, and kindly contributed to R-bloggers)
Are you thinking about submitting a package to rOpenSci’s open peer software review? Considering volunteering to review for the first time? Maybe you’re an experienced package author or reviewer and have ideas about how we can improve.
## Agenda
1. Welcome (Stefanie Butland, rOpenSci Community Manager, 5 min)
2. guest: Noam Ross, editor (15 min)
Noam will give an overview of the rOpenSci software review and onboarding, highlighting the role editors play and how decisions are made about policies and changes to the process.
3. guest: Andee Kaplan, reviewer (15 min)
Andee will give her perspective as a package reviewer, sharing specifics about her workflow and her motivation for doing this.
4. Q & A (25 min, moderated by Noam Ross)
## Speaker bios
Andee Kaplan is a Postdoctoral Fellow at Duke University. She is a recent PhD graduate from the Iowa State University Department of Statistics, where she learned a lot about R and reproducibility by developing a class on data stewardship for Agronomists. Andee has reviewed multiple (two!) packages for rOpenSci, `iheatmapr` and `getlandsat`, and hopes to one day be on the receiving end of the review process.
Noam Ross is one of rOpenSci’s four editors for software peer review. Noam is a Senior Research Scientist at EcoHealth Alliance in New York, specializing in mathematical modeling of disease outbreaks, as well as training and standards for data science and reproducibility. Noam earned his Ph.D. in Ecology from the University of California-Davis, where he founded the Davis R Users’ Group.
## Resources
Source:: R News
## Create and Update PowerPoint Reports using R
By Tim Bock
(This article was first published on R – Displayr, and kindly contributed to R-bloggers)
In my sordid past, I was a data science consultant. One thing about data science that they don't teach you at school is that senior managers in most large companies require reports to be in PowerPoint. Yet, I like to do my more complex data science in R – PowerPoint and R are not natural allies. As a result, creating and updating PowerPoint reports using R can be painful.
In this post, I discuss how to make R and PowerPoint work efficiently together. The underlying assumption is that R is your computational engine and that you are trying to get outputs into PowerPoint. I compare and contrast three tools for creating and updating PowerPoint reports using R: the free ReporteRs package and two commercial products, Displayr and Q.
## Option 1: ReporteRs
The first approach to getting R and PowerPoint to work together is to use David Gohel’s ReporteRs. To my mind, this is the most “pure” of the approaches from an R perspective. If you are an experienced R user, this approach works in pretty much the way that you will expect it to work.
The code below creates 250 crosstabs, conducts significance tests, and, if the p-value is less than 0.05, presents a slide containing each. And, yes, I know this is p-hacking, but this post is about how to use PowerPoint and R, not how to do statistics…
```
library(devtools)
devtools::install_github('davidgohel/ReporteRsjars')
devtools::install_github('davidgohel/ReporteRs')
install.packages(c('ReporteRs', 'haven', 'vcd', 'ggplot2', 'reshape2'))
library(ReporteRs)
library(haven)
library(vcd)
library(ggplot2)
library(reshape2)
filename = "c://delete//Significant crosstabs.pptx" # the document to produce
document = pptx(title = "My significant crosstabs!")
alpha = 0.05 # The level at which the statistical testing is to be done.
dependent.variable.names = c("wrkstat", "marital", "sibs", "age", "educ")
all.names = names(dat)[6:55] # The first 50 variables in the file.
counter = 0
for (nm in all.names)
for (dp in dependent.variable.names)
{
if (nm != dp)
{
v1 = dat[[nm]]
if (is.labelled(v1))
v1 = as_factor(v1)
v2 = dat[[dp]]
l1 = attr(v1, "label")
l2 = attr(v2, "label")
if (is.labelled(v2))
v2 = as_factor(v2)
if (length(unique(v1)) <= 10 && length(unique(v2)) <= 10)
{
x = table(v1, v2)
x = x[rowSums(x) > 0, colSums(x) > 0]
ch = chisq.test(x)
p = ch$p.value
if (!is.na(p) && p < alpha)
{
counter = counter + 1
# add a slide with this crosstab to the document
}
}
}
}
writeDoc(document, file = filename)
Below we see one of the admittedly ugly slides created using this code. With more time and expertise, I am sure I could have done something prettier. A cool aspect of the ReporteRs package is that you can then edit the file in PowerPoint. You can then get R to update any charts and other outputs originally created in R.
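The statistical gate in the loop, keeping a slide only when the crosstab's chi-squared test is significant, rests on Pearson's statistic that `chisq.test` computes; a bare-bones version of the statistic itself (a Python sketch, statistic only, no p-value):

```python
def chi_square_stat(table):
    """Pearson's chi-squared statistic for a two-way contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, r in enumerate(row_totals):
        for j, c in enumerate(col_totals):
            expected = r * c / grand
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# A strongly associated 2x2 table yields a large statistic:
print(chi_square_stat([[10, 20], [20, 10]]))
```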
## Option 2: Displayr
A completely different approach is to author the report in Displayr, and then export the resulting report from Displayr to PowerPoint.
This has advantages and disadvantages relative to using ReporteRs. First, I will start with the big disadvantage, in the hope of persuading you of my objectivity (disclaimer: I have no objectivity, I work at Displayr).
Each page of a Displayr report is created interactively, using a mouse and clicking and dragging things. In my earlier example using ReporteRs, I only created pages where there was a statistically significant association. Currently, there is no way of doing such a thing in Displayr.
The flipside of using a graphical user interface like Displayr is that it is a lot easier to create attractive visualizations. As a result, the user has much greater control over the look and feel of the report. For example, the screenshot below shows a PowerPoint document created by Displayr. All but one of the charts have been created using R, and the first two are based on a moderately complicated statistical model (a latent class rank-ordered logit model).
You can access the document used to create the PowerPoint report with R here (just sign in to Displayr first) – you can poke around and see how it all works.
A benefit of authoring a report using Displayr is that the user can access the report online, interact with it (e.g., filter the data), and then export precisely what they want. You can see this document as it is viewed by a user of the online report here.
Option 3: Q
A third approach for authoring and updating PowerPoint reports using R is to use Q, which is a Windows program designed for survey reporting (same disclaimer as with Displayr). It works by exporting and updating results to a PowerPoint document. Q has two different mechanisms for exporting R analyses to PowerPoint. First, you can export R outputs, including HTMLwidgets, created in Q directly to PowerPoint as images. Second, you can create tables using R and then have these exported as native PowerPoint objects, such as Excel charts and PowerPoint tables.
In Q, a Report contains a series of analyses. Analyses can be created either using R or using Q's own internal calculation engine, which is designed for producing tables from survey data.
The map above (in the Displayr report) is an HTMLwidget created using the plotly R package. It draws data from a table called Region, which would also be shown in the report. (The same R code in the Displayr example can be used in an R object within Q). So when exported into PowerPoint, it creates a page, using the PowerPoint template, where the title is Responses by region and the map appears in the middle of the page.
The screenshot below is showing another R chart created in PowerPoint. The data has been extracted from Google Trends using the gtrendsR R package. However, the chart itself is a standard Excel chart, attached to a spreadsheet containing the data. These slides can then be customized using all the normal PowerPoint tools and can be automatically updated when the data is revised.
## Explore the Displayr example
You can access the Displayr document used to create and update the PowerPoint report with R here (just sign in to Displayr first). Here, you can poke around and see how it all works or create your own document.
```
Source:: R News
## Pacific Island Hopping using R and iGraph
(This article was first published on The Devil is in the Data, and kindly contributed to R-bloggers)
Last month I enjoyed a relaxing holiday in the tropical paradise of Vanuatu. One rainy day I contemplated how to go island hopping across the Pacific Ocean, visiting as many island nations as possible. The Pacific Ocean is a massive body of water between Asia and the Americas, which covers almost half the surface of the earth. The southern Pacific is strewn with island nations, from Australia to Chile. In this post, I describe how to use R to plan your next Pacific island hopping journey.
The Pacific Ocean.
## Listing all airports
My first step was to create a list of flight connections between each of the island nations in the Pacific Ocean. I am not aware of a publicly available data set of international flights, so unfortunately I created the list manually (if you do know of such a data set, then please leave a comment).
My manual research resulted in a list of international flights from or to island airports. This list might not be complete, but it is a start. My Pinterest board with Pacific island airline route maps was the information source for this list.
The first code section reads the list of airline routes and uses the `ggmap` package to extract their coordinates from Google maps. The data frame with airport coordinates is saved for future reference to avoid repeatedly pinging Google for the same information.
```r
# Init
library(tidyverse)
library(ggmap)
library(ggrepel)
library(geosphere)
# Read flight list and airport list
flights
```
## Create the map
To create a map, I modified the code to create flight maps I published in an earlier post. This code had to be changed to centre the map on the Pacific. Mapping the Pacific ocean is problematic because the -180 and +180 degree meridians meet around the date line. Longitudes west of the antemeridian are positive, while longitudes east are negative.
The `world2` data set in the borders function of the `ggplot2` package is centred on the Pacific ocean. To enable plotting on this map, all negative longitudes are made positive by adding 360 degrees to them.
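The recentering rule is simple enough to state as a function; a sketch of the same arithmetic in Python:

```python
def pacific_centric(lon):
    """Map a longitude in [-180, 180) onto [0, 360),
    so the Pacific is not split at the antemeridian."""
    return lon + 360 if lon < 0 else lon

print(pacific_centric(-170.0))  # 170W plots at 190 degrees
print(pacific_centric(147.2))   # eastern-hemisphere longitudes are unchanged
```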
```r
# Pacific centric
flights$lon.x[flights$lon.x < 0] <- flights$lon.x[flights$lon.x < 0] + 360
flights$lon.y[flights$lon.y < 0] <- flights$lon.y[flights$lon.y < 0] + 360
```
## Pacific Island Hopping
This visualisation is aesthetic and full of context, but it is not the best visualisation to solve the travel problem. This map can also be expressed as a graph with nodes (airports) and edges (routes). Once the map is represented mathematically, we can generate travel routes and begin our Pacific Island hopping.
The igraph package converts the flight list to a graph that can be analysed and plotted. The `shortest_paths` function can then be used to plan routes. If I wanted to travel from Auckland to Saipan in the Northern Mariana Islands, I would have to go through Port Vila, Honiara, Port Moresby, Chuuk and Guam, and then to Saipan. I am pretty sure there are quicker ways to get there, but this would be an exciting journey through the Pacific.
```r
library(igraph)
g <- graph_from_edgelist(as.matrix(flights[, 1:2]), directed = FALSE)
shortest_paths(g, from = "Auckland", to = "Saipan")
```
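The route search itself is plain breadth-first search over the flight graph; a sketch in Python, using a toy edge list reduced to the itinerary named above plus one invented Auckland-Nadi leg (the real network has many more airports and routes):

```python
from collections import deque

# Toy undirected flight network (subset, for illustration only)
routes = [
    ("Auckland", "Port Vila"), ("Port Vila", "Honiara"),
    ("Honiara", "Port Moresby"), ("Port Moresby", "Chuuk"),
    ("Chuuk", "Guam"), ("Guam", "Saipan"), ("Auckland", "Nadi"),
]
graph = {}
for a, b in routes:
    graph.setdefault(a, []).append(b)
    graph.setdefault(b, []).append(a)

def fewest_hops(start, goal):
    """Breadth-first search returning a minimum-hop itinerary."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

print(fewest_hops("Auckland", "Saipan"))
```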
The post Pacific Island Hopping using R and iGraph appeared first on The Devil is in the Data.
Source:: R News
https://www.illustrativemathematics.org/content-standards/6/NS/B/3/tasks/2216
# Changing Currency
Alignments to Content Standards: 6.NS.B.3
1. How many one-hundred dollar bills do you need to make \$2,000? \$20,000?
2. How many ten dollar bills do you need to make \$2,000? \$20,000?
3. How many dimes do you need to make \$0.20? \$2? \$20?
4. How many pennies do you need to make \$0.02? \$0.20? \$2?
5. Use the answers to the questions above to fill in the table:
| $A$ | $B$ | $A\div B$ |
| --- | --- | --- |
| 2 | 0.01 | |
| 20 | 0.1 | |
| 200 | 1 | |
| 2,000 | 10 | |
| 20,000 | 100 | |
6. What changes (and how), and what stays the same, as you move down the table from row to row?
## IM Commentary
The purpose of this task is for students to notice that if the dividend and divisor both increase by a factor of 10, the quotient remains the same. This sets them up to understand the rules for moving decimal points when performing long division. After students have described the pattern in the table, the teacher can challenge them to explain why this pattern must always hold, or can explain the pattern to the students.
• One way to do this is to write the first division question $2\div 0.01 = ?$ as $0.01\times ? = 2$ and note that if we multiply both sides by 10, we get the second division problem. Multiplying both sides by 10 again gives the third, and so on.
• For students who are familiar with complex fractions, we can also explain this by thinking of $2\div 0.01$ as $\frac{2}{0.01}$ and can note that if we multiply this fraction by $\frac{10}{10}$, which is 1, we get $\frac{20}{0.1}$. Multiplying this fraction by $\frac{10}{10}$ again gives us $\frac{200}{1}$.
• We can also explain it by thinking about it in terms of the context. If we are finding out how many coins are needed to make a certain total, then we need the same number of coins if we are making an amount that is ten times as great with a coin worth ten times as much.
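The invariance described in the commentary is easy to confirm numerically. A minimal Python sketch (note that 0.01 and 0.1 are not exact in binary floating point, so the computed quotients agree with 200 only up to rounding):

```python
# Multiplying dividend and divisor by the same factor of 10
# leaves the quotient unchanged.
pairs = [(2, 0.01), (20, 0.1), (200, 1), (2000, 10), (20000, 100)]
for a, b in pairs:
    print(a, "/", b, "=", a / b)  # each quotient is 200, up to float rounding
```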
A task and discussion like this help prepare students to understand why $1.2 \overline{)2.4} = 12 \overline{)24}$.
One approach to parts (a) through (d) would be to use language. One might think of \$2,000 as 2,000 ones, 200 tens, and 20 hundreds. This is a fine approach, as the connection to division is made in part (e). One idea might be to give students only parts (a) through (d) first, discuss, and then give them parts (e) and (f) (the table and the generalizing question).

## Solution

1. $2,000\div100=20$, so you need 20 one-hundred dollar bills to make \$2,000. You need ten times as many as that, or 200, to make \$20,000.
2. $2,000\div10=200$, so you need 200 ten dollar bills to make \$2,000. You need ten times as many as that, or 2,000, to make \$20,000.
3. $0.20\div0.10=2$ (or more intuitively, 2 dimes are worth \$0.20). You need ten times as many, or 20 dimes, to make \$2.00. You need ten times as many as that, or 200, to make \$20.
4. $0.02\div0.01=2$ (or more intuitively, 2 pennies are worth \$0.02). You need ten times as many, or 20 pennies, to make \$0.20. You need ten times as many as that, or 200, to make \$2.
5. Use the answers to the questions above to fill in the table:

   | $A$    | $B$  | $A\div B$ |
   |--------|------|-----------|
   | 2      | 0.01 | 200       |
   | 20     | 0.1  | 200       |
   | 200    | 1    | 200       |
   | 2,000  | 10   | 200       |
   | 20,000 | 100  | 200       |
6. As we go down the first column, the value is ten times bigger than the value in the row above. The same is true with the second column. But the quotient always stays the same. So the number being divided and the number dividing into it increase by a factor of ten from row to row, but the quotient never changes.
http://www.physicsforums.com/showthread.php?p=3498031
# Solving 2nd order differential equation with non-constant coefficients
P: 10 Hi~ I'm having trouble with solving a certain differential equation: ##x^2 y'' + x y' + (k^2 x^2 - 1)y = 0##. I'm tasked to find a solution that satisfies the boundary conditions y(0)=0 and y(1)=0. I have tried solving this using the characteristic equation, but I arrived at a solution that is unable to satisfy the boundary conditions except for when y(x)=0 (which is trivial). Any pointers on how I should go about this? Actually, I am trying to find the Green's function for this differential equation, which is why I need the solution to the said equation first. Thanks so much for any help :)
Math Emeritus Sci Advisor Thanks PF Gold P: 39,363 There is at least one obvious solution: y= 0 for all x. Since that is a regular singular equation at 0, you probably will want to use "Frobenius' method", using power series. That is much too complicated to explain here- try http://en.wikipedia.org/wiki/Frobenius_method
P: 607 See the "Bessel" differential equation. Change variables to convert yours to that. So what you need to do is select k so that your solution has a zero at 1.
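Following the Bessel suggestion: substituting ##t = kx## turns ##x^2 y'' + x y' + (k^2 x^2 - 1)y = 0## into Bessel's equation of order 1, so ##y(x) = J_1(kx)## satisfies ##y(0)=0##, and ##y(1)=0## forces ##k## to be a zero of ##J_1##. A rough pure-Python sanity check (the series truncation, the sample point, and the quoted zero of ##J_1## are my own additions):

```python
import math

def j1(x, terms=30):
    # Series J_1(x) = sum_{m>=0} (-1)^m (x/2)^(2m+1) / (m! (m+1)!)
    return sum((-1) ** m * (x / 2) ** (2 * m + 1)
               / (math.factorial(m) * math.factorial(m + 1))
               for m in range(terms))

def residual(k, x, h=1e-5):
    # Finite-difference residual of x^2 y'' + x y' + (k^2 x^2 - 1) y
    # with y(x) = J_1(k x); should be near zero if the ODE is satisfied.
    y = j1(k * x)
    yp = (j1(k * (x + h)) - j1(k * (x - h))) / (2 * h)
    ypp = (j1(k * (x + h)) - 2 * y + j1(k * (x - h))) / h ** 2
    return x * x * ypp + x * yp + (k * k * x * x - 1) * y

k = 3.8317059702  # first positive zero of J_1, so y(1) = J_1(k) = 0
print(abs(j1(k)))             # ~ 0: boundary condition at x = 1
print(abs(residual(k, 0.5)))  # ~ 0: the ODE holds at an interior point
```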
P: 10 Solving 2nd order differential equation with non-constant coefficients Thanks for the tip! Are there any possible way to solve this? Like possibly an out of this world change in variables (ex let u = lnx)? or series substitution?
P: 2 Hi everybody, I have another kind of equation which seems rather difficult to solve: (1 + a Sin(x)) y'' + a Cos(x) y' + b Sin(x) y = 0. Should I first expand sin and cos in their series and then try Laplace or series solution methods? Or is there a method that can solve it directly? Please help!
P: 100
Quote by river_boy Hi everybody, I have another kind of equation which seems rather difficult to solve: (1 + a Sin(x)) y'' + a Cos(x) y' + b Sin(x) y = 0. Should I first expand sin and cos in their series and then try Laplace or series solution methods? Or is there a method that can solve it directly? Please help!
Try substitution
u=(1+aSin(x))
P: 2
Quote by stallionx Try substitution u=(1+aSin(x))
Thanks its really helping.
P: 6 Please help me to solve this DE: ##y'' = y\sin x## (I think I should multiply both sides by 2y' but I don't know what to do next). Thanks in advance ^^
P: 756
Quote by Ceria_land Please help me to solve this DE: ##y'' = y\sin x## (I think I should multiply both sides by 2y' but I don't know what to do next). Thanks in advance ^^
(y')² = y²sin(x)+C
You can find the solutions in the particular case C=0 in terms of exponential of Incomplete elliptic integral of the second kind.
HW Helper
P: 1,391
Quote by JJacquelin (y')² = y²sin(x)+C You can find the solutions in the particular case C=0 in terms of exponential of Incomplete elliptic integral of the second kind.
I'm afraid that sin(x) ruins the integration, and as such you're missing a term ##-\int dx y(x) \cos(x)##. After multiplying by 2y' you get
$$\frac{d}{dx}(y')^2 = \left[\frac{d}{dx}(y^2)\right] \sin(x),$$
which can't be integrated exactly - you get ##(y')^2 = y^2\sin(x) + C - \int dx~y(x) \cos(x)##.
P: 756
Quote by Mute I'm afraid that sin(x) ruins the integration, and as such you're missing a term ##-\int dx y(x) \cos(x)##. .
You are right. My mistake !
Damn ODE !
P: 756 A closed form for the solutions of y''=sin(x)*y involves the Mathieu special functions. http://mathworld.wolfram.com/MathieuFunction.html
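Since the closed form involves Mathieu functions, a direct numerical integration is often more practical for checking work. A small RK4 sketch for ##y'' = y\sin x## (the step count and initial conditions are arbitrary choices; integrating forward and then backward should recover the starting data):

```python
import math

def rk4_step(x, y, v, h):
    # One RK4 step for the first-order system y' = v, v' = sin(x) * y.
    def f(x, y, v):
        return v, math.sin(x) * y
    k1y, k1v = f(x, y, v)
    k2y, k2v = f(x + h / 2, y + h / 2 * k1y, v + h / 2 * k1v)
    k3y, k3v = f(x + h / 2, y + h / 2 * k2y, v + h / 2 * k2v)
    k4y, k4v = f(x + h, y + h * k3y, v + h * k3v)
    return (y + h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y),
            v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

def integrate(x0, x1, y, v, n=2000):
    h = (x1 - x0) / n
    x = x0
    for _ in range(n):
        y, v = rk4_step(x, y, v, h)
        x += h
    return y, v

# Integrate from x = 0 with y(0) = 1, y'(0) = 0 ...
y1, v1 = integrate(0.0, 2.0, 1.0, 0.0)
# ... then integrate back; we should recover the initial data.
y0, v0 = integrate(2.0, 0.0, y1, v1)
print(y1, v1)
print(y0, v0)  # ~ (1.0, 0.0)
```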
P: 6
Quote by JJacquelin A closed form for the solutions of y''=sin(x)*y involves the Mathieu's special functions. http://mathworld.wolfram.com/MathieuFunction.html
Thanks to JJacquelin and Mute for helping!!!! it's really helpful :)
Math
Emeritus
Thanks
PF Gold
P: 39,363
Quote by paul143 Thanks for the tip! Are there any possible way to solve this? Like possibly an out of this world change in variables (ex let u = lnx)? or series substitution?
You have already been given two methods, Frobenius and a change of variables that changes this to a Bessel equation, and a complete solution: y(x)= 0. What more do you want?
Math
Emeritus
Thanks
PF Gold
P: 39,363
Quote by Ceria_land Thanks to JJacquelin and Mute for helping!!!! it's really helpful :)
P: 6
Thanks for your warning. I won't do it again :D
https://danieltakeshi.github.io/2017/03/11/what-biracial-people-know/
There’s an opinion piece in the New York Times by Moises Velasquez-Manoff which talks about (drum roll please) biracial people. As he mentions:
Multiracials make up an estimated 7 percent of Americans, according to the Pew Research Center, and they’re predicted to grow to 20 percent by 2050.
Thus, I suspect that sometime in the next few decades, we will start talking about race in terms of precise racial percentages, such as “100 percent White” or in rarer cases, “25 percent White, 25 percent Asian, 25 percent Black, and 25 percent Native American.” (Incidentally, I’m not sure why the article uses “Biracial” when “Multiracial” would clearly have been a more appropriate term; it was likely due to the Barack Obama factor.)
The phrase “precise racial percentages” is misleading. Since all humans came from the same ancestor, at some point in history we must have been “one race.” For the sake of defining these racial percentages, we can take a date — say 4000BC — when, presumably, the various races were sufficiently different, ensconced in their respective geographic regions, and when interracial marriages (or rape) was at a minimum. All humans alive at that point thus get a “100 percent [insert_race_here]” attached to them, and we do the arithmetic from there.
What usually happens in practice, though, is that we often default to describing one part of one race, particularly with people who are $X$ percent Black, where $X > 0$. This is a relic of the embarrassing “One Drop Rule” the United States had, but for now it’s probably — well, I hope — more for self-selecting racial identity.
Listing precise racial percentages would help us better identify people who are not easy to immediately peg in racial categories, which will increasingly become an issue as more and more multiracial people like me blur the lines between the races. In fact, this is already a problem for me even with single-race people: I sometimes cannot distinguish between Hispanics versus Whites. For instance, I thought Ted Cruz and Marco Rubio were 100 percent White.
Understanding race is also important when considering racial diversity and various ethical or sensitive questions over who should get “preferences.” For instance, I wonder if people label me as a “privileged white male” or if I get a pass for being biracial? Another question: for a job at a firm which has had a history of racial discrimination and is trying to make up for that, should the applicant who is 75 percent Black, 25 percent White, get a hair’s preference versus someone who is 25 percent Black and 75 percent White? Would this also apply if they actually have very similar skin color?
In other words, does one weigh more towards the looks or the precise percentages? I think the precise percentages method is the way schools, businesses, and government operate, despite how this isn’t the case in casual conversations.
Anyway, these are some of the thoughts that I have as we move towards a more racially diverse society, as multiracial people cannot have single-race children outside of adoption.
Back to the article: as one would expect, it discusses the benefits of racial diversity. I can agree with the following passage:
Social scientists find that homogeneous groups like [Donald Trump’s] cabinet can be less creative and insightful than diverse ones. They are more prone to groupthink and less likely to question faulty assumptions.
The caveat is that this assumes the people involved are equally qualified; a racially homogeneous (in whatever race), but extremely well-educated cabinet would be much better than a racially diverse cabinet where no one even finished high school. But controlling for quality, I can agree.
Diversity also benefits individuals, as the author notes. It is here where Mr. Velasquez-Manoff points out that Barack Obama was not just Black, but also biracial, which may have benefited his personal development. Multiracials make up a large fraction of the population in racially diverse Hawaii, where Obama was born (albeit, probably with more Asian-White overlap).
Yes, I agree that diversity is important for a variety of reasons. It is not easy, however:
It’s hard to know what to do about this except to acknowledge that diversity isn’t easy. It’s uncomfortable. It can make people feel threatened. “We promote diversity. We believe in diversity. But diversity is hard,” Sophie Trawalter, a psychologist at the University of Virginia, told me.
That very difficulty, though, may be why diversity is so good for us. “The pain associated with diversity can be thought of as the pain of exercise,” Katherine Phillips, a senior vice dean at Columbia Business School, writes. “You have to push yourself to grow your muscles.”
I cannot agree more.
Moving on:
Closer, more meaningful contact with those of other races may help assuage the underlying anxiety. Some years back, Dr. Gaither of Duke ran an intriguing study in which incoming white college students were paired with either same-race or different-race roommates. After four months, roommates who lived with different races had a more diverse group of friends and considered diversity more important, compared with those with same-race roommates. After six months, they were less anxious and more pleasant in interracial interactions.
Ouch, this felt like a blindsiding attack, and is definitely my main gripe with this article. In college, I had two roommates, both of whom have a different racial makeup than me. They both seemed to be relatively popular and had little difficulty mingling with a diverse group of students. Unfortunately, I certainly did not have a “diverse group of friends.” After all, if there was a prize for college for “least popular student” I would be a perennial contender. (As incredible as it may sound, in high school, where things were worse for me, I can remember a handful of people who might have been even lower on the social hierarchy.)
Well, I guess what I want to say is that, this attack notwithstanding, Mr. Velasquez-Manoff’s article brings up interesting and reasonably accurate points about biracial people. At the very least, he writes about concepts which are sometimes glossed over or under-appreciated nowadays in our discussions about race.
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=129&t=58562&p=222970
## Midterm equation sheet
$w=-P\Delta V$
and
$w=-\int_{V_{1}}^{V_{2}}PdV=-nRTln\frac{V_{2}}{V_{1}}$
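The two work expressions above can be tied together with a quick calculation. A small sketch of the reversible isothermal case (the values of n, T, and the volumes are made up for illustration):

```python
import math

# Reversible isothermal expansion: w = -nRT ln(V2/V1)
n = 1.0            # mol (assumed)
R = 8.314          # J/(mol K)
T = 298.15         # K (assumed)
V1, V2 = 1.0, 2.0  # only the ratio V2/V1 matters

w = -n * R * T * math.log(V2 / V1)
print(w)  # about -1718 J: the expanding gas does work on the surroundings
```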
Alicia Lin 2F
Posts: 83
Joined: Wed Sep 18, 2019 12:17 am
### Midterm equation sheet
Will the same exact equation/constant sheet on the course website also be given to us to use on the midterm?
JohnWalkiewicz2J
Posts: 103
Joined: Thu Jul 11, 2019 12:17 am
Been upvoted: 1 time
### Re: Midterm equation sheet
Yeah the equation sheet you see on the website will be the one given to us on the midterm.
Betania Hernandez 2E
Posts: 107
Joined: Fri Aug 02, 2019 12:15 am
### Re: Midterm equation sheet
Yes, the same constants and equation sheet will be be provided on the midterm.
Yailin Romo 4G
Posts: 109
Joined: Wed Feb 20, 2019 12:16 am
### Re: Midterm equation sheet
yes it should be the same one!
faithkim1L
Posts: 105
Joined: Fri Aug 09, 2019 12:17 am
### Re: Midterm equation sheet
The equation sheet on the website will be the equation sheet given on the midterm. He also uses the IUPAC periodic table, but it dates back to 2011 or so I believe. The molar masses are slightly different than the ones he uses in class (he uses different periodic tables), so be careful on tests.
Daniela Shatzki 2E
Posts: 53
Joined: Sat Aug 24, 2019 12:16 am
### Re: Midterm equation sheet
yes, I think it is also the same as the one given on the first test we had.
Nare Nazaryan 1F
Posts: 101
Joined: Fri Aug 09, 2019 12:17 am
### Re: Midterm equation sheet
Yes, it is always the same as on every test/midterm/final.
Jesalynne 2F
Posts: 100
Joined: Wed Sep 18, 2019 12:18 am
### Re: Midterm equation sheet
We will get the same equation sheet that is on his 14B website.
AGaeta_2C
Posts: 112
Joined: Wed Sep 18, 2019 12:21 am
### Re: Midterm equation sheet
Thankfully it is always the same sheet used so it's pretty essential to understand it and how each equation is derived. :)
rohun2H
Posts: 100
Joined: Wed Sep 18, 2019 12:19 am
### Re: Midterm equation sheet
Yes, it is the same sheet.
Patricia Cardenas
Posts: 103
Joined: Fri Aug 09, 2019 12:17 am
### Re: Midterm equation sheet
Yes, it should be the same sheet on every test & midterm.
Megan Kirschner
Posts: 46
Joined: Wed Feb 20, 2019 12:17 am
### Re: Midterm equation sheet
Yes, it's the same.
I often find it helpful to cross out the equations we don't know anything about- so the equations sheet is less overwhelming.
Bryan Chen 1H
Posts: 58
Joined: Mon Jun 17, 2019 7:24 am
### Re: Midterm equation sheet
yes im pretty sure
805097738
Posts: 180
Joined: Wed Sep 18, 2019 12:20 am
### Re: Midterm equation sheet
yes, it is the same one
Morgan Carrington 2H
Posts: 54
Joined: Wed Nov 14, 2018 12:22 am
### Re: Midterm equation sheet
Alicia Lin 2F wrote:Will the same exact equation/constant sheet on the course website also be given to us to use on the midterm?
Not necessarily as relevant, but is the equation sheet used on the first test the same as the one that will be given tomorrow?
chimerila
Posts: 53
Joined: Wed Nov 14, 2018 12:23 am
Been upvoted: 1 time
### Re: Midterm equation sheet
Megan Kirschner wrote:Yes, it's the same.
I often find it helpful to cross out the equations we don't know anything about- so the equations sheet is less overwhelming.
That's such a smart idea
Omar Selim 1D
Posts: 108
Joined: Sat Jul 20, 2019 12:16 am
### Re: Midterm equation sheet
Yes; however, it is helpful to know the equations beforehand because the units, definitions, and terms are not provided
http://newsgroups.derkeiler.com/Archive/Comp/comp.text.tex/2012-06/msg00169.html
# Alternative definition of overline, comments?
Hi,
after learning that overline uses a fixed distance of 3
default_rule_thickness over the symbol, I decided to write an
alternative command with a fixed height. In MdSymbol, I use a rather
large rule_thickness which results in a to large height of the line. Not
knowing much of TeX, I tried to combine solutions for other problems:
http://tex.stackexchange.com/a/24134/11605
http://tex.stackexchange.com/a/43906/11605
The following works well for me. Anything I missed? Or is there an
easier solution? As far as I can see, the problem is to define a box in
math mode. Herbert's solution in the first link works for text only.
Thanks!
\documentclass{article}
\usepackage{amsmath,calc}
\newsavebox\overliningbox
\makeatletter
\def\fb@eat#1#2#3#4#5{\futurelet\fb@let@token\fb@eat@}
\def\fb@eat@#1\fb@eat{%
\ifx\fb@let@token\bgroup
\else\ifx\fb@let@token\mathop
\mathop
\else\ifx\fb@let@token\mathbin
\mathbin
\else\ifx\fb@let@token\mathrel
\mathrel
\else\ifx\fb@let@token\mathopen
\mathopen
\else\ifx\fb@let@token\mathclose
\mathclose
\else\ifx\fb@let@token\mathpunct
\mathpunct
\else\ifcat.\ifcat a\noexpand\fb@let@token.\else\noexpand\fb@let@token\fi
\afterassignment\fb@mathchar\count@\mathcode`#1\relax\fb@eat
\else\ifx\fb@let@token\mathchar
\afterassignment\fb@mathchar\expandafter\count@\@gobble#1\relax\fb@eat
\else
\xdef\meaning@{\meaning\fb@let@token}%
\expandafter\fb@mchar@test\meaning@""\@nil
\fi\fi\fi\fi\fi\fi\fi\fi\fi
}
\def\overlining#1{%
\begingroup
\let\protect\empty
\expandafter\fb@eat\romannumeral-`\Q#1\relax\fb@eat
\ifcase\count@
\or
\mathop\or
\mathbin\or
\mathrel\or
\mathopen\or
\mathclose\or
\mathpunct\or
\fi
{\text{\savebox\overliningbox{$\m@th#1$}\fboxsep\z@\makebox[0pt]%
[l]{$\m@th#1$}\rule[\ht\overliningbox+1.2pt]{\wd\overliningbox}%
{\fontdimen8\textfont3}}}%
\endgroup}
\edef\fb@mchar@{\meaning\mathchar}
\def\fb@mchar@test#1"#2"#3\@nil{%
\xdef\meaning@{#1}%
\ifx\meaning@\fb@mchar@
\count@"#2\relax
\fb@mathchar\fb@eat
\fi
}
\def\fb@mathchar#1\fb@eat{%
\divide\count@"1000 }
\makeatother
\begin{document}
$\overlining{abcd} \overlining{\sin{abcd}}$
$\overlining{\frac{abcd}{abcd}} \overlining{\int dx}$
\end{document}
https://libguides.nwpolytech.ca/math/Statistics/HypothesisProcess
# Math
Unless otherwise stated, the material in this guide is from The Learning Centre at Centennial College. Content has been adapted for the NWP Learning Commons in March 2022. This work is licensed under a Creative Commons BY 4.0 International License.
Hypothesis Testing Process
There are many different test statistics to choose from. It depends on what parameter you are testing (e.g., $$\mu,\sigma,p$$), what variables are given (is $$\sigma$$ known?), and the distribution of the population (e.g., normally distributed). The following are some test statistics you will encounter for hypothesis testing with one sample.
| Parameter | Sampling Distribution | Requirements | Test Statistic |
|---|---|---|---|
| Proportion $$p$$ | Normal ($$z$$) | $$np\geq 5$$ and $$nq\geq 5$$ | $$z=\frac{\hat{p}-p}{\sqrt{\frac{pq}{n}}}$$ |
| Mean $$\mu$$ | $$t$$ | $$\sigma$$ is not known, and the population is normally distributed or $$n>30$$ | $$t=\frac{\bar{x}-\mu}{\frac{s}{\sqrt{n}}}$$ |
| Mean $$\mu$$ | Normal ($$z$$) | $$\sigma$$ is known, and the population is normally distributed or $$n>30$$ | $$z=\frac{\bar{x}-\mu}{\frac{\sigma}{\sqrt{n}}}$$ |
| Standard deviation $$\sigma$$ | $$\chi^2$$ | Strict requirement: normally distributed population | $$\chi^2=\frac{(n-1)s^2}{\sigma^2}$$ |
Example 1: 93 student course evaluations report an average rating of 3.91 with standard deviation 0.53. What test statistic should be used to test the hypothesis that the population student course evaluations has a mean equal to 4.00?
Solution: The given values are $$\bar{x}=3.91$$, $$s=0.53$$ and $$\mu=4.00$$. $$\sigma$$ is not known and $$n>30$$ so the t-statistic should be used.
\begin{align} t&=\frac{\bar{x}-\mu}{\frac{s}{\sqrt{n}}}\\ &=\frac{3.91-4.00}{\frac{0.53}{\sqrt{93}}}\\&=-1.63760\end{align}
Example 2: A study of 19,136 people found that 29.2% of the people sleep-walked. Would a reporter be justified in stating that "fewer than 30% of adults have sleep-walked"? What test statistic should be used?
Solution: The given value is in proportions with $$\hat{p}=0.292$$, $$p=0.30$$, which means $$q=1-p=0.7$$.
First, we have to meet the requirements $$np\geq 5$$ and $$nq \geq 5$$ to apply the test statistic.
$$np=(19,136)(0.3)=5740.8\geq 5$$ and $$nq=(19,136)(0.7)=13,395.2\geq 5$$
With the conditions satisfied, we can calculate the test statistic for proportions.
\begin{align} z&=\frac{\hat{p}-p}{\sqrt{\frac{pq}{n}}}\\ &=\frac{0.292-0.30}{\sqrt{\frac{(0.3)(0.7)}{19,136}}} \\ &=-2.41494\end{align}
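Both worked examples can be reproduced in a few lines of Python; the helper names below are my own, and the numbers are taken from the examples above:

```python
import math

def t_stat(xbar, mu, s, n):
    # t = (x-bar - mu) / (s / sqrt(n))
    return (xbar - mu) / (s / math.sqrt(n))

def z_prop(phat, p, n):
    # z = (p-hat - p) / sqrt(pq/n), with q = 1 - p
    q = 1 - p
    return (phat - p) / math.sqrt(p * q / n)

print(round(t_stat(3.91, 4.00, 0.53, 93), 5))  # Example 1: -1.6376
print(round(z_prop(0.292, 0.30, 19136), 5))    # Example 2: -2.41494
```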
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=E1BMAX_2014_v51n5_1299
UNIFORM ATTRACTORS FOR NON-AUTONOMOUS NONCLASSICAL DIFFUSION EQUATIONS ON ℝN
Title & Authors
UNIFORM ATTRACTORS FOR NON-AUTONOMOUS NONCLASSICAL DIFFUSION EQUATIONS ON ℝN
Anh, Cung The; Nguyen, Duong Toan;
Abstract
We prove the existence of uniform attractors $\mathcal{A}_{\varepsilon}$ in the space $H^1(\mathbb{R}^N)\cap L^p(\mathbb{R}^N)$ for the following non-autonomous nonclassical diffusion equation on $\mathbb{R}^N$: $u_t-\varepsilon\Delta u_t-\Delta u+f(x,u)+\lambda u=g(x,t)$.
Keywords
nonclassical diffusion equation;uniform attractor;unbounded domain;upper semicontinuity;tail estimates method;asymptotic a priori estimate method;
Language
English
Cited by
1.
Strong global attractors for nonclassical diffusion equation with fading memory, Advances in Difference Equations, 2017, 2017, 1
2.
Attractors for nonclassical diffusion equations with arbitrary polynomial growth nonlinearity, Nonlinear Analysis: Real World Applications, 2016, 31, 23
References
1.
E. C. Aifantis, On the problem of diffusion in solids, Acta Mech. 37 (1980), no. 3-4, 265-296.
2.
C. T. Anh and T. Q. Bao, Pullback attractors for a class of non-autonomous nonclassical diffusion equations, Nonlinear Anal. 73 (2010), no. 2, 399-412.
3.
C. T. Anh and T. Q. Bao, Dynamics of non-autonomous nonclassical diffusion equations on $\mathbb{R}^N$, Commun. Pure Appl. Anal. 11 (2012), no. 3, 1231-1252.
4.
G. Chen and C. K. Zhong, Uniform attractors for non-autonomous p-Laplacian equation, Nonlinear Anal. 68 (2008), no. 11, 3349-3363.
5.
V. V. Chepyzhov and M. I. Vishik, Attractors for Equations of Mathematical Physics, Amer. Math. Soc. Colloq. Publ., Vol. 49, Amer. Math. Soc., Providence, RI, 2002.
6.
J.-L. Lions, Quelques Methodes de Resolution des Problemes aux Limites Non Lineaires, Dunod, Paris, 1969.
7.
J. C. Peter and M. E. Gurtin, On a theory of heat conduction involving two temperatures, Z. Angew. Math. Phys. 19 (1968), no. 4, 614-627.
8.
H. Song, S. Ma, and C. K. Zhong, Attractors of non-autonomous reaction-diffusion equations, Nonlinearity 22 (2009), no. 3, 667-681.
9.
H. Song and C. K. Zhong, Attractors of non-autonomous reaction-diffusion equations in Lp, Nonlinear Anal. 68 (2008), no. 7, 1890-1897.
10.
C. Sun, S. Wang, and C. K. Zhong, Global attractors for a nonclassical diffusion equation, Acta Math. Appl. Sin. Engl. Ser. 23 (2007), no. 7, 1271-1280.
11.
C. Sun and M. Yang, Dynamics of the nonclassical diffusion equations, Asymp. Anal. 59 (2008), no. 1-2, 51-81.
12.
R. Temam, Navier-Stokes Equations and Nonlinear Functional Analysis, 2nd edition, Philadelphia, 1995.
13.
T. W. Ting, Certain non-steady flows of second-order fluids, Arch. Ration. Mech. Anal. 14 (1963), 1-26.
14.
C. Truesdell and W. Noll, The Nonlinear Field Theories of Mechanics, Encyclomedia of Physics, Springer, Berlin, 1995.
15.
B. Wang, Attractors for reaction-diffusion equations in unbounded domains, Phys. D 179 (1999), no. 1, 41-52.
16.
S. Wang, D. Li, and C. K. Zhong, On the dynamic of a class of nonclassical parabolic equations, J. Math. Anal. Appl. 317 (2006), no. 2, 565-582.
17.
H. Wu and Z. Zhang, Asymptotic regularity for the nonclassical diffusion equation with lower regular forcing term, Dyn. Syst. 26 (2011), no. 4, 391-400.
18.
Y. Xiao, Attractors for a nonclassical diffusion equation, Acta Math. Appl. Sin. Engl. Ser. 18 (2002), no. 2, 273-276.
https://proofwiki.org/wiki/Mapping_from_Totally_Ordered_Set_is_Order_Embedding_iff_Strictly_Increasing/Reverse_Implication
# Mapping from Totally Ordered Set is Order Embedding iff Strictly Increasing/Reverse Implication
## Theorem
Let $\struct {S, \preceq_1}$ be a totally ordered set and let $\struct {T, \preceq_2}$ be an ordered set.
Let $\phi: S \to T$ be a strictly increasing mapping.
Then $\phi$ is an order embedding.
## Proof 1
Let $x \preceq_1 y$.
Then $x = y$ or $x \prec_1 y$.
Let $x = y$.
Then
$\phi \left({x}\right) = \phi \left({y}\right)$
so:
$\phi \left({x}\right) \preceq_2 \phi \left({y}\right)$
Let $x \prec_1 y$.
Then by the definition of strictly increasing mapping:
$\phi \left({x}\right) \prec_2 \phi \left({y}\right)$
so by the definition of $\prec_2$:
$\phi \left({x}\right) \preceq_2 \phi \left({y}\right)$
Thus:
$x \preceq_1 y \implies \phi \left({x}\right) \preceq_2 \phi \left({y}\right)$
It remains to be shown that:
$\phi \left({x}\right) \preceq_2 \phi \left({y}\right) \implies x \preceq_1 y$
Suppose that $x \npreceq_1 y$.
Since $\preceq_1$ is a total ordering:
$y \prec_1 x$
Thus since $\phi$ is strictly increasing:
$\phi \left({y}\right) \prec_2 \phi \left({x}\right)$
Thus, as $\preceq_2$ is antisymmetric:
$\phi \left({x}\right) \not\preceq_2 \phi \left({y}\right)$
Therefore:
$x \npreceq_1 y \implies \phi \left({x}\right) \npreceq_2 \phi \left({y}\right)$
By the Rule of Transposition:
$\phi \left({x}\right) \preceq_2 \phi \left({y}\right) \implies x \preceq_1 y$
$\blacksquare$
## Proof 2
Let $\phi$ be strictly increasing.
Let $\map \phi x \preceq_2 \map \phi y$.
As $\struct {S, \prec_1}$ is a strictly totally ordered set:
Either $y \prec_1 x$, $y = x$, or $x \prec_1 y$.
Aiming for a contradiction, suppose that $y \prec_1 x$.
By the definition of a strictly increasing mapping:
$\map \phi y \prec_2 \map \phi x$
which contradicts the fact that $\map \phi x \preceq_2 \map \phi y$.
Therefore $y \nprec_1 x$.
Thus $y = x$ or $x \prec_1 y$, so $x \preceq_1 y$.
Conversely, let $x \preceq_1 y$.
If $x = y$ then $\map \phi x = \map \phi y$, while if $x \prec_1 y$ then $\map \phi x \prec_2 \map \phi y$ by the definition of strictly increasing mapping.
In either case:
$\map \phi x \preceq_2 \map \phi y$
Hence:
$\map \phi x \preceq_2 \map \phi y \iff x \preceq_1 y$
and $\phi$ has been proved to be an order embedding.
$\blacksquare$
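For a machine-checked counterpart: assuming Mathlib's current naming and import layout, this reverse implication is essentially `OrderEmbedding.ofStrictMono`, which packages a strictly monotone map out of a linear order as an order embedding.

```lean
-- Sketch only: the lemma name and import path assume current Mathlib conventions.
import Mathlib.Order.Hom.Basic

example {S T : Type*} [LinearOrder S] [Preorder T]
    (φ : S → T) (h : StrictMono φ) : S ↪o T :=
  OrderEmbedding.ofStrictMono φ h
```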
https://raylib.handmade.network/forums/t/1985-rres_-_raylib_resources_custom_fileformat
Ray
62 posts / 1 project
I like videogames/gametools development.
rRES - raylib resources custom fileformat
By default, raylib supports the following file formats for resources:
- IMAGE (Uncompressed): PNG, BMP, TGA, JPG, GIF, HDR (stb_image.h)
- IMAGE (Compressed): DDS, PKM, KTX, PVR, ASTC
- AUDIO (Sound): WAV, OGG (stb_vorbis.c), FLAC (dr_flac.h)
- AUDIO (Streaming): OGG, FLAC, XM (jar_xm.h), MOD (jar_mod.h)
- FONTS: BMFont (FNT), TTF (stb_truetype.h), IMAGE-based
- MODEL (Mesh + Material): OBJ, MTL
Since raylib 1.0, I also added support for a custom fileformat: RRES
At that moment (3 years ago) I didn't have the experience I have now, so
I decided to redesign it to be more generic, versatile and useful (while keeping it simple):
I'm not an expert at designing file formats, so I'd appreciate any feedback about it.
Mārtiņš Možeiko
2406 posts / 2 projects
What does "RSA" mean for the crypto type? You cannot encrypt arbitrarily large data with RSA. You typically encrypt the symmetric cipher key (e.g. an AES key) with RSA and then use the symmetric cipher to encrypt the rest of the data. So you need a place to store the encrypted key as part of the data.
Using just an unauthenticated crypto algorithm (AES/XOR/Blowfish) is a bad idea. Always use authentication: something like HMAC or the GCM block mode. So you'll need a place for the signature. Alternatively, don't roll your own crypto; take a good high-level implementation like NaCl (or libsodium), which is proven to be resistant to crypto attacks (for example, its code doesn't use data-dependent branches). Instead of the ancient Blowfish I would take much more modern crypto primitives: Salsa20 or ChaCha20 for the stream cipher and Poly1305 for authentication; both can be implemented very efficiently in software.
I would also include zstd for compression. It is from the same author as lz4, and offers better compression than lz4 with faster speeds than zlib.
For vertex formats: does each component (Normal, Position) need to be in a separate resource block? Is vertexType an enum or a bitset?
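If vertexType were a bitset rather than an enum, a single vertex resource block could declare several components at once. A minimal C sketch (the flag names are illustrative, not part of rRES):

```c
#include <stdint.h>

// Hypothetical component flags: with a bitset, one vertexType value can
// declare every attribute stored in a single resource block.
enum {
    VERT_POSITION = 1 << 0,
    VERT_TEXCOORD = 1 << 1,
    VERT_NORMAL   = 1 << 2,
    VERT_COLOR    = 1 << 3,
};

// A mesh carrying positions, texcoords and normals in one block:
static const uint32_t vertexType = VERT_POSITION | VERT_TEXCOORD | VERT_NORMAL;

// Checking whether a component is present is a simple mask test.
static int HasComponent(uint32_t type, uint32_t flag)
{
    return (type & flag) != 0;
}
```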
Ray
Hi mmozeiko! Thank you very much for your feedback! :D
Sorry, I don't know much about cryptography; actually, the current rRES implementation doesn't consider any kind of encryption, I just added that field on the recommendation of some gamedev friends. Thanks for the libsodium reference; I've just been checking libtomcrypt.
About compression modes, I just added the most common ones for reference; currently only DEFLATE compression is supported. Again, I'm not an expert in that field either.
About vertex data, I first designed it to be the full mesh (one single resource) and use bit fields to define the attributes provided, but I realized it was simpler to just store every vertex array independently; it gives the user more control over the stored vertex data. Additionally, I added the partsCount field to the InfoHeader; it's used to define resources that consist of multiple parts (one after the other, same resource id).
DataType, CompressionType, EncryptionType, ImageFormat and VertexFormat are just enums; that way rRES can be extended by just adding the required formats, for example: RRES_VERT_CUSTOM1 in VertexType (actually, the enums' naming has been adapted to be more descriptive).
Jeroen van Rijn
248 posts
A big ball of Wibbly-Wobbly, Timey-Wimey _stuff_
Edited by Jeroen van Rijn on Reason: parts
Let me preface this by saying it's not a bad design, but I think improvements can be had.
While I agree with Mārtiņš Možeiko about the encryption part of the format, my main concern with the file format as proposed is perhaps a bit more fundamental still: rRES seems to be structured in a way that's perhaps too specific.
While it's certainly a good thing to specify the various types of resources, maybe having 4 parameters - which can be used or reserved for future use - isn't the best idea. What if you want to add a new type of resource that needs 5 or more parameters? What if the majority of the objects you're packing into this resource file need no parameters at all?
I don't know what the 'part' field does in type, comp, crypt, part. Maybe it's there to split up a given resource into more than 1 part? Why would you want to split up an image into more than 1 part? Or is this intended for a font with at most 256 glyphs, where each glyph is its own part?
This is one of those things that feels very specific, but where it's unclear that you're actually going to use this option field in question. Might this byte instead be used as a 'param count' or 'param bytes'? The latter would give you a bit of flexibility, where each resource can have between 0 and 255 bytes worth of params, laid out as makes sense for that resource type.
(Edit: I see now in your reply above that it stands for parts count. Might this not be a parameter? Some types of resources wouldn't be split up like this.)
You can still use width, height, format and mipmaps for RRES_IMAGE; you'd just set param_bytes to 16.
Alternatively, let the type of resource determine how many bytes of parameters directly follow the header, preceding the actual payload. If you see it's a type RRES_IMAGE, you know the next 16 bytes are parameters. If you see it's RRES_TEXT, there's only 8 bytes worth of params.
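The type-determined variant could look like this in C. The type names and byte counts are hypothetical, following the sizes mentioned above (16 bytes for an image, 8 for text):

```c
#include <stdint.h>

// Hypothetical resource types (values are illustrative).
typedef enum { RRES_IMAGE = 1, RRES_TEXT = 2, RRES_VERTEX = 3 } ResType;

// Parameter bytes that directly follow the info header, fixed per resource type.
static uint32_t ParamBytesForType(ResType type)
{
    switch (type) {
        case RRES_IMAGE:  return 16;  // width, height, format, mipmaps (4 x int32)
        case RRES_TEXT:   return 8;   // e.g. length, encoding
        case RRES_VERTEX: return 12;  // e.g. vertexType, count, stride
        default:          return 0;   // unknown types carry no params
    }
}
```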
Additionally, for a resource pack file format, one thing I seem to be missing is a central directory. It seems you need to know the ID of the resource you need and keep this information elsewhere. The resource blocks themselves don't appear to have a place to store a filename or other identifier, and there's no directory in the diagram.
Perhaps a good addition would be to have the final entry be an RRES_DIRECTORY, which allows you to identify the resources by a name or some other identifier, and which has an offset into the file for the resource in question.
The last 32 bits of the file could be the directory length, so you could open the file and verify the rRES header, then seek to the end, read the directory length and jump back that many bytes and confirm you've landed on RRES_DIRECTORY. Alternatively, add a 32 bit DirectoryOffset to the FileHeader after count? If that DirectoryOffset == 0, it means you know what each of the resources are and can identify them by their ID in another way, with the directory omitted from the file. This would give you an optional directory.
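That end-of-file lookup can be sketched in a few lines of C. The layout is hypothetical: a 32-bit directory length stored as the last 4 bytes of the pack, with little-endian storage assumed to match the host:

```c
#include <stdio.h>
#include <stdint.h>

// Returns the file offset where the directory starts, or -1 on error.
// Assumes the last 4 bytes of the file hold the directory length
// (little-endian, matching the host for this sketch).
long FindDirectoryOffset(FILE *f)
{
    if (fseek(f, -4L, SEEK_END) != 0) return -1;
    long end = ftell(f);                  // offset of the length field itself
    uint32_t dirLength = 0;
    if (fread(&dirLength, sizeof(dirLength), 1, f) != 1) return -1;
    if ((long)dirLength > end) return -1; // corrupt length field
    return end - (long)dirLength;         // directory starts this many bytes back
}
```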
Some ideas to ponder :)
It's not a bad design, but I think there's probably a few lessons to be learned from the RIFF format and how extensible it is, the PKZIP format and its directory structure, and Per Vognsen's GOB (which riffs on RIFF and allows zero-copy use of assets).
Ray
Kelimion
While it's certainly a good thing to specify the various types of resources, maybe having 4 parameters - which can be used or reserved for future use - isn't the best idea. What if you want to add a new type of resource that needs 5 or more parameters? What if the majority of the objects you're packing into this resource file need no parameters at all?
Actually, the current implementation (3 years old) uses a custom number and size of parameters per type; I just changed it now for simplicity. My reasoning: every resource has 4 int parameters, use them as you like; if a resource type needs more than that, make them fit or divide the resource type into multiple resource types, using partsCount to link the multiple resource parts with the same ID.
For example:
Mesh (3 vertex arrays with position, texcoords, normals) --> can be packed as 3 resources of type RRES_VERTEX with vertexType POSITION, TEXCOORD1, NORMAL; partsCount for the 3 resources would be 3, same ID, one after the other.
SpriteFont (spritefont image, chars data) --> can be packed as 2 resources of type RRES_IMAGE and RRES_FONT_INFO (baseSize, charsCount, reserved, reserved) - just an int array with the required font data (value, rectangle, offsets, xadvance); partsCount is 2, same ID.
Well, it seemed simpler to me to just set a fixed InfoHeader size with a fixed number of parameters, even at the cost of wasting some bytes (and I have to say that I come from microcontroller programming, where every bit counts!). I'll think a bit more about it...
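Such a fixed-size header might look like this in C; the field names and widths are my guesses from this thread, not the actual rRES layout:

```c
#include <stdint.h>

// Hypothetical fixed-size info header: every resource gets exactly four
// int parameters, and partsCount links the parts sharing one id (e.g. a
// mesh stored as 3 consecutive RRES_VERTEX parts with the same id).
typedef struct {
    uint32_t id;          // unique resource identifier (shared by all parts)
    uint8_t  dataType;    // e.g. RRES_VERTEX
    uint8_t  compType;    // compression algorithm
    uint8_t  cryptoType;  // encryption algorithm
    uint8_t  partsCount;  // number of parts making up this resource
    uint32_t dataSize;    // packed data size in bytes
    int32_t  param[4];    // fixed parameter slots; meaning depends on dataType
} InfoHeader;             // 28 bytes on typical ABIs, regardless of type
```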
Kelimion
Perhaps a good addition would be to have the final entry be an RRES_DIRECTORY, which allows you to identify the resources by a name or some other identifier, and which has an offset into the file for the resource in question.
The idea was to keep the resource ID as the unique identifier for every resource (or resource part); in the current implementation, when the .rres is generated from files, a .h file is also generated containing a bunch of:
#define RRES_background_png 0x0204F75A
The .h file can be included in the project, and then every resource is loaded by its ID, like:
Exposing only the ID adds an extra level of security over the assets (resource crypto keys could belong to the program the same way...). That's how it works now, but I liked the idea of the RRES_DIRECTORY to store that table data in the same rres, just in case.
I'll check the RIFF format and GOB, thanks for the link and the extensive review! :)
Jeroen van Rijn
raysan5
That's how it works now but I liked the idea of the RRES_DIRECTORY to store that table data in the same rres, just in case.
I'll check the RIFF format and GOB, thanks for the link and the extensive review! :)
No problem :)
What I like about the central directory is that you can more quickly jump to a specific resource within the pack without having to read each resource's info struct, determine its length and jump to the next one until you land on the resource you're interested in. Perhaps you do want to load all of them in some cases, but there will also be cases where you'd want to page them in.
Of course you might say that this .h file includes not only the ID but also the offset into the pack file (or just the offset for that matter, because your asset loader can then read its info struct), but then you'd need to recompile your program every time you want to ship a new version of your assets, which feels less than ideal.
There's a trade-off to be had there, for sure. If you wanted to make it less obvious what's in the pack even when you have this central directory, you could consider encrypting the identifiers as well, so this directory would have an 'encrypted' field just like the other resources.
Ray
This weekend I've been working again on my custom rRES file format.
Last week Milan Nikolic (github: gen2brain), creator of raylib-go, implemented the initial design in Go and also created a command line tool.
Using the advice provided in this forum and carefully reviewing the RIFF and ZIP file formats, I improved my previous design:
Now every resource can be divided into several chunks (useful for resources like SpriteFonts or Meshes that consist of several sets of data); every chunk is defined by a chunkType, compressionType and cryptoType, and can contain a variable number of parameters (4 bytes each). Also added CRC32 support for every chunk.
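As a side note on the per-chunk CRC32: the standard IEEE CRC-32 can be computed with a short bitwise loop. A minimal sketch (real code would use a lookup table for speed):

```c
#include <stdint.h>
#include <stddef.h>

// Bitwise CRC-32 (IEEE 802.3, reflected polynomial 0xEDB88320).
// Init 0xFFFFFFFF, final XOR 0xFFFFFFFF, as used by zlib and PNG.
uint32_t Crc32(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)(-(int32_t)(crc & 1)));
    }
    return ~crc;
}
```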
Central Directory is also supported as an additional Resource type (that could be available or not).
As usual, I tried to keep it simple, adding just the most relevant information... chunk support complicates the design a bit but makes it more versatile and customizable.
Any feedback is very welcome! :)