| url | text | date | metadata |
|---|---|---|---|
https://puzzling.meta.stackexchange.com/questions/5437/what-is-a-puzzle-tag/5443
|
# What is a Puzzle Tag™?
We've probably all seen at least one of the "What is a XXX Word™?" puzzles. They were started off by JLee in July 2015 and have been going strong ever since, with 30 posted to date.
Unlike most of the 'fads' Puzzling SE has seen (the Security to the Party puzzles of late 2014, the rebus craze of early 2015, the $n$ Words (----||||) puzzles of mid-2015, and so on), these puzzles have withstood the twin tests of time and votes. Usually such 'fads' don't last long - a few months at most - and the question scores rapidly decline after the first one or two posted as people get bored of the idea. But the ™ puzzles have lasted for over a year and are still popular today, with at least one of them on the HNQ list at the time of writing. It looks like they're good quality and here to stay.
Moreover, there really is a specific type of puzzle here: the ™ thing isn't just a gimmick that could be put on many more puzzles than it is. They're a 'sequence' defined not just by a common flavour text and OP like the Ernie or Mysterious Email puzzles, but by the following definitive puzzle type:
• some unknown property of certain words/phrases is to be found
• a list of words with this property is provided, and also some without it for comparison.
## I propose that we create a tag for this type of puzzle.
For want of a better tag name, how about [tm-puzzle]?
(Yes, I realise that this would be a sub-tag of , but we've already established that sub-tags and super-tags are OK here. Popular tags such as and already have various sub-tags covered by them.)
• The title of this post is meant to be a humorous reference to the titles of the puzzles under discussion. If you think it's too confusing, feel free to change it to something more descriptive of this question. – Rand al'Thor Sep 7 '16 at 21:35
• I think the title is fine! Not sure about "tm-puzzle" as a name though... – Deusovi Sep 7 '16 at 21:40
• tm-word or tm-words seems a bit more accurate. Any tag that includes "puzzle" runs the risk of being redundant. – Dan Russell Sep 9 '16 at 20:23
• @Dan People don't seem to like the idea of putting tm in the tag name. See my answer here, suggesting word-property, and Deusovi's answer with several more suggestions. – Rand al'Thor Sep 9 '16 at 20:37
I think a tag would be a great idea! Not sure if there's anything I can really say - you articulated pretty much everything I would've tried to say much better than I ever could, and a lot more too.
I'm not really a fan of [tm-puzzle] though - it just doesn't seem descriptive. Since many of the tags here are Puzzling-exclusive, I'd prefer to have them be as understandable as possible to outsiders who come across them for the first time. (If I had it my way, would be changed to something like (though probably not that exactly, since it doesn't sound very good), and would be... well, it wouldn't exist, but that's another issue entirely.)
Maybe something like or ? Or possibly , though that might give the impression that the goal is classification of new words rather than finding the pattern.
• The voting on this answer suggests people support the creation of a tag for these puzzles. But there's no clear consensus on a name for the tag (except probably not tm-puzzle): you've included a few possibilities here, plus there's another one in my answer below which also got an upvote. Shall I go ahead and create a word-property tag? – Rand al'Thor Sep 10 '16 at 14:33
Suggested tag name: [word-properties].
Suggested tag wiki excerpt:
For puzzles which ask the solver to determine a hidden property of certain words (or phrases), given a set of words with that property and usually also a set of words without it for comparison. Puzzles with this tag usually have titles of the form "What is a [...] Word™?"
Suggested tag wiki:
Puzzles with this tag usually take approximately the following form:
Title: What is a XXX Word™?
If a word conforms to a special rule, I call it a XXX Word™.
Use the list of examples below to find the rule.
[list of XXX words]
[list of non-XXX words]
The trend was started by user JLee in July 2015 with the puzzle What is a Versatile Word™? and has been kept up ever since. XXX is normally an adjective which cryptically describes some property of the word. This property is what solvers are expected to find, and it can be pretty much anything you like: it can be about the meaning of the word, the letters within it, or anything else.
Sometimes this tag can also be used for puzzles about properties of phrases instead of words, e.g. What is a Surpassing Phrase™?
• A couple of points I'm unsure of: 1) the tag name - is word-properties a good name? should it be word-property instead? 2) phrases - should the tag cover both "What is a XXX Word™?" and "What is a XXX Phrase™?" puzzles, or only the word ones with maybe a different tag for the phrase ones? – Rand al'Thor Sep 7 '16 at 22:04
• I think the tag should cover both Word™ and Phrase™ puzzles, but using "word" may be fine (since phrases are just multiple words anyway). – Deusovi Sep 8 '16 at 2:28
• Couldn't a word-something tag be used for both, with phrase-something as a synonym? – M Oehm Sep 9 '16 at 7:57
• JLee's first (tm) puzzle was earlier than the cited "Versatile Word" one. I think the first may be this: puzzling.stackexchange.com/questions/16219/… – Gareth McCaughan Sep 12 '16 at 15:16
• @Gareth You're right, that seems to be the first of JLee's ™ puzzles. However, I also discovered that the puzzle type is actually older than we'd thought and wasn't invented by JLee: see these older puzzles which should also be tagged [word-property]. – Rand al'Thor Sep 18 '16 at 20:08
• Oh, good discovery! I see that one of them is by some guy called Rand al'Thor -- any relation? :-) – Gareth McCaughan Sep 18 '16 at 20:41
I agree with the sentiment of somehow nicely tagging/grouping these puzzles, but only if they really fit into a specific category - and I haven't checked all of them.
I do not like the idea of grouping these questions with a tag purely to provide a convenient link. They can already be easily searched for by their common style.
So, if we're going to introduce a new tag, we need to find out what really defines this type of puzzle. (In what way are they different from the connected-wall puzzles?)
Is it that the pattern is not in the meaning of the words, but is usually a common property of the typeface, font or similar?
If so, that is what should be indicated in the tag.
• Yes. I already covered this in the question here, and established that they do fit into a specific category: they're a puzzle type defined by the requirement to identify a property of words given a list of words with that property. The property could relate to the meaning, the type-face, or pretty much anything else; it doesn't matter. (See also the tag wiki suggested in my answer here.) – Rand al'Thor Sep 8 '16 at 11:49
• @randal'thor Exactly. My issue here is that unless there is a more restrictive common property of all those puzzles (which could be the tag name), a new tag is not needed and 'pattern' does just fine. In particular, if meaning etc. can also be the property, then the connected-wall puzzles would also fall under the same tag, wouldn't they? – BmyGuest Sep 8 '16 at 11:59
• No, pattern doesn't do just fine, because that tag is also used to cover hundreds of other puzzles: number patterns, colour patterns, letter patterns, different kinds of word pattern questions. And the connected-wall puzzles (assuming you mean ones like this) don't fall into the same tag because they're not about identifying a specific property of words given a list of words with that property. – Rand al'Thor Sep 8 '16 at 12:05
• Hmm, slightly but not fully convinced, @randal'thor. Maybe 'common-property' would be a better tag name, then? It could be used for different non-word-related puzzles of a similar kind as well, and is sufficiently different from 'pattern'. – BmyGuest Sep 8 '16 at 12:36
• That could certainly be a good idea for a tag, but then we'd have to do a lot more retagging than just those 30-odd ™ questions (and spend more time finding the questions that need the tag, too). I wonder what proportion of the pattern questions would count as common-property? We could take a random sample and check ... – Rand al'Thor Sep 8 '16 at 12:44
Just to toss another idea for the tag name into the ring:
• Hmm ... would this tag be for all pattern puzzles about words, or just for those of the type specified here (where there's a specific property of certain words to be worked out, given a list)? – Rand al'Thor Sep 8 '16 at 14:58
• I think it's best to have a tag that just describes the ™-style puzzles. If you don't think this fits for that purpose, please downvote it. – GentlePurpleRain Sep 8 '16 at 15:00
|
2019-10-17 23:14:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43922021985054016, "perplexity": 1443.7715601470138}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677230.18/warc/CC-MAIN-20191017222820-20191018010320-00377.warc.gz"}
|
https://www.encyclopediaofmath.org/index.php?title=Conical_surface&oldid=31530
|
# Conical surface
cone
The surface formed by the motion of a straight line (the generator) passing through a given point (the vertex) and intersecting a given curve (the directrix). A conical surface consists of two concave pieces positioned symmetrically about the vertex.
A second-order cone is a conical surface that is also a surface of the second order. The canonical equation of a real second-order conical surface is
$$\frac{x^2}{a^2}+\frac{y^2}{b^2}-\frac{z^2}{c^2}=0;$$
if $a=b$, the surface is said to be circular, or to be a conical surface of rotation; the canonical equation of an imaginary second-order conical surface is
$$\frac{x^2}{a^2}+\frac{y^2}{b^2}+\frac{z^2}{c^2}=0;$$
the only real point of an imaginary conical surface is $(0,0,0)$.
An $n$-th order cone is an algebraic surface given in affine coordinates $x,y,z$ by the equation
$$f(x,y,z)=0,$$
where $f(x,y,z)$ is a homogeneous polynomial of degree $n$ (a form of degree $n$ in $x,y,z$). If the point $M(x_0,y_0,z_0)$ lies on the cone, then the line $OM$ also lies on the cone ($O$ being the coordinate origin). The converse is also true: every algebraic surface consisting of lines passing through a single point is a conical surface.
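Both closure statements come down to homogeneity; the one-line check (standard, supplied here for completeness) is that if $f$ is a form of degree $n$ and $f(x_0,y_0,z_0)=0$, then for every $t$

$$f(tx_0,ty_0,tz_0)=t^nf(x_0,y_0,z_0)=0,$$

so every point of the line $OM$ lies on the surface as well.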
How to Cite This Entry:
Conical surface. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Conical_surface&oldid=31530
This article was adapted from an original article by A.B. Ivanov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
|
2019-05-20 22:56:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9388046860694885, "perplexity": 475.471912763317}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256163.40/warc/CC-MAIN-20190520222102-20190521004102-00349.warc.gz"}
|
http://quant.stackexchange.com/questions?page=28&sort=active
|
# All Questions
177 views
### PCA Variances and Principal Portfolio Variances
In Meucci's paper called "Managing Diversification" he mentions that: "Indeed, the eigenvalues A correspond to the variances of these uncorrelated portfolios" I tried to replicate it but found they ...
277 views
### How do you know if an option is priced correctly?
Besides obvious extreme examples (ie volatility going to infinity, infinite time, zero time, or zero volatility, deep OTM/ITM ) how does one gauge if an option is 'correct' or at least in the ...
54 views
### earnings reports and option pricing
Let's assume that company XYZ reports earnings in a 0% interest rate environment and the option expires shortly after earnings. And there is a 50% chance the earnings are good (an upmove) and 50% bad ...
120 views
I am considering a product composed of 10 underlying assets. The maturity is 5 year. Each year if the performance of the equi-weighted portfolio reach a barrier, it pays a coupon. My question concern ...
595 views
### Quantitative before/after or financial engineering studies of a bid or ask tax?
Has anyone in the quantitative finance or financial engineering community studied the effects of a bid or ask tax with actual or simulated data? If so, what were the quantitative results or ...
359 views
### How to correctly construct a value- and equally weighted portfolio consisting of property-types?
A problem of which I couldn’t find the answer on the forum is about the construction of equally-weighted and value-weighted portfolio. I want to compute the equally-weighted property-type portfolio ...
89 views
### Benchmarking risk
Given the portfolio return $R$ and the benchmark return $B$, I want to define a risk indicator, measuring the ability to beat the benchmark ($R>B$), given the downside risk taken; the latter not ...
2k views
### t-statistics for the mean return, using Newey-West standard errors
I have seen that in several papers, where the aim was to evaluate the performance of a certain investment strategy, they use t-statistics to test for significance in the results. However, this seems a ...
273 views
I've been trying to download the national interest rates for some countries. When i use Datastream, it only gives me the currency return (while i need yield). Can someone please tell how to ...
451 views
### How do I determine the maturity date from a T-bill's CUSIP?
Is there a way to determine a government bill's or bond's maturity date by looking at its CUSIP? For example, the CUSIP for US T-Bills with a maturity of 12/1/11 is 9127953V1. As you probably know, ...
181 views
### Ito's Lemma - Integrand depends on upper limit of integration
A problem I came across while practicing using Ito's Lemma had a process with an integral whose integrand depends on the upper limit of integration (the goal is to find $dZ_{t}$): ...
91 views
### What does negative gamma mean in APGARCH model?
I got a gamma of -0.1321677. ...
142 views
### Statistics of difference between two GBMs
if I have two asset prices modeled separately as geometric brownian motions. How do i go about calculating the expected statistics of their difference? Like given the sigmas and mus of both processes, ...
117 views
### Modelling long run relationship between dividend and earnings
I am working on a paper where I have to model the long run relationship between earnings and dividends. I have downloaded the raw data from shillers website. I have converted the series to ...
6k views
### Python library for Portfolio Optimization
Does anyone know of a python library/source that is able to calculate the traditional mean-variance portfolio? To press my luck, any resources where the library/source also contains functions such as ...
462 views
According to this link I try to get intraday data of SAP listed at Xetra. Intraday data with timestep of 1 second would be great. I do not understand parts of the command, I try ...
280 views
### Are BSDE's used in practice?
In the academic applied probability/math finance community, Backwards Stochastic Differential Equations (BSDE's) are extremely popular, and they provide a single framework for several different ...
|
2014-04-17 21:45:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8774798512458801, "perplexity": 2401.948212853719}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
|
http://zbmath.org/?q=an:1234.35193
|
# zbMATH — the first resource for mathematics
Global well-posedness for the micropolar fluid system in critical Besov spaces. (English) Zbl 1234.35193
The authors consider an incompressible micropolar fluid system. This is a kind of non-Newtonian fluid, serving as a model of suspensions, animal blood and liquid crystals, which cannot be characterized appropriately by the Navier-Stokes system. It is described by the fluid velocity $u(x,t)=(u_1,u_2,u_3)$, the velocity of rotation of the particles $\omega(x,t)=(\omega_1,\omega_2,\omega_3)$, and the pressure $\pi(x,t)$, in the following form:
$$\begin{cases} \partial_t u-\Delta u+u\cdot\nabla u+\nabla\pi-\nabla\times\omega=0,\\ \partial_t\omega-\Delta\omega+u\cdot\nabla\omega+2\omega-\nabla\operatorname{div}\omega-\nabla\times u=0,\\ \operatorname{div}u=0,\\ u(x,0)=u_0(x),\quad \omega(x,0)=\omega_0(x). \end{cases}$$
They assume that the initial values $u_0,\omega_0$ belong to the Besov space $\dot{B}^{\frac{3}{p}-1}_{p,\infty}$ for some $1\le p<6$ and have small norms (this type of Besov space is called critical). They prove the existence of a solution in $C(0,\infty;\dot{B}^{\frac{3}{p}-1}_{p,\infty})$. They also prove uniqueness under an additional assumption. For this purpose they consider an associated linear system
$$\begin{cases} \partial_t u-\Delta u-\nabla\times\omega=0,\\ \partial_t\omega-\Delta\omega+2\omega-\nabla\times u=0, \end{cases}$$
and study the action of its Green matrix.
One can apply their result directly to the incompressible Navier-Stokes equations by setting $\omega=0$.
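For context (a standard scaling fact, not taken from the review): the space is called critical because the data norm is invariant under the Navier-Stokes scaling $u_\lambda(x,t)=\lambda u(\lambda x,\lambda^2t)$, under which the initial value rescales as $u_{0,\lambda}(x)=\lambda u_0(\lambda x)$ and

$$\|\lambda u_0(\lambda\,\cdot\,)\|_{\dot{B}^{\frac{3}{p}-1}_{p,\infty}}=\|u_0\|_{\dot{B}^{\frac{3}{p}-1}_{p,\infty}},$$

so smallness of the data is a scale-invariant condition.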
##### MSC:
35Q35 PDEs in connection with fluid mechanics
76A05 Non-Newtonian fluids
76B03 Existence, uniqueness, and regularity theory (fluid mechanics)
35Q30 Stokes and Navier-Stokes equations
##### Keywords:
micropolar fluid; Besov space
|
2013-12-04 23:55:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 10, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7833266258239746, "perplexity": 5254.015893279646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163037851/warc/CC-MAIN-20131204131717-00039-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://codegolf.meta.stackexchange.com/questions/2140/sandbox-for-proposed-challenges/374
|
# What is the Sandbox?
This "Sandbox" is a place where Code Golf users can get feedback on prospective challenges they wish to post to the main page. This is useful because writing a clear and fully specified challenge on the first try can be difficult. There is a much better chance of your challenge being well received if you post it in the Sandbox first.
See the Sandbox FAQ for more information on how to use the Sandbox.
## Get the Sandbox Viewer to view the sandbox more easily
To add an inline tag to a proposal use shortcut link syntax with a prefix: [tag:king-of-the-hill]
# Metagolf: Catlike Piet
The goal of this is to write a catlike program, which would be executed (in a Unix environment, though you needn't stick to that) by the following:
yourprogram < file > output
piet output
where piet output writes the contents of file to stdout. That is, you're to generate a Piet program which prints the input to yourprogram.
One-liners
Straight line programs can be written in Piet... in straight lines. If you're willing to take a hit to your score, your output can take the form of a string of commands:
= none (continue color block)
| push
^ pop
+ add
- subtract
* multiply
/ divide
% mod
~ not
> greater
. pointer
, switch
: duplicate
@ roll
$ input number
? input character
# output number
! output character

which is trivial to convert to a Piet program with the following (partially golfed) Python code:

def P(s):
    h=v=0;l=len(s)+1;R="P3 %i 2 255 192 0 0 "%(l+2)
    C=[1,3,2,6,4,5];V=[0,192,192,255,0,255]
    for x in map("=|^+-*/%~>.,:@$?#!".find,s):
        C=C[x//3:]+C[:x//3];V=V[x%3*2:]+V[:x%3*2]
        for i in [1,2,4]:R+="%i "%V[(C[0]//i)%2]
    return R+"255 "*4+"0 0 "+"255 "*l*3+"255 0 0 "*2
The dimension of said program is (n+3) x 2 if there are n characters in the string.
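As a quick sanity check (my own example, not from the post), feeding P a two-command stream gives a PPM header whose dimensions match the (n+3) x 2 formula above:

ppm = P("|!")              # push, then output character
print(ppm.split()[:3])     # ['P3', '5', '2'] - a 5x2 image for n=2 commands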
Scoring
Your code will be judged on the maximum dimension of the images that it outputs.
• Part 1: Take the maximum score over all ASCII codes (that is, single-character inputs), discounting EOF.
• Part 2: Take the score for the input "Hello. My name is Inigo Montoya. You killed my father. Prepare to die."
Your score is the product of the scores in part 1 and part 2.
Punishment: Double your score if you write one-liners as above (that is, if you don't output an image).
Bonus: If your program is written in Piet, take the square root of your score above.
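For a purely hypothetical illustration of the scoring: if the largest image produced for any single-character input is 10 in its larger dimension, and the Montoya sentence yields an image of dimension 80, the base score is 10 × 80 = 800. Emitting a command string instead of an image doubles that to 1600, while writing the generator itself in Piet would instead bring 800 down to √800 ≈ 28.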
• It took me a while to understand the task as "Write a program taking INPUT which produces as output a piet program that takes no input but produces INPUT." I think it is interesting and challenging, but its reception will depend entirely on how many people are willing to learn/futz-around-in/deal-with piet. And I have no feel for how many that is. – dmckee --- ex-moderator kitten Jul 7 '11 at 3:12
• @dmckee; would it be better if I just used a reduced instruction set, and only ask for the instruction stream? I think this is still challenging with {push 1,duplicate,add,subtract,multiply,output}. Come to think of it, if I restrict to {push 1,duplicate,add,output}, there's a reduction to some awesome algorithms. – boothby Jul 7 '11 at 4:48
• I did this in piet some time ago: craigoclock.blogspot.com/2011/05/metaprogramming-in-piet.html – captncraig May 21 '12 at 18:31
• Hello! This looks like a good but abandoned meta post, would you be willing to offer it for adoption? (If you want to, you can still post to main.) – programmer5000 Jun 9 '17 at 15:22
## Chess move
The Challenge
Write a program that gets a string containing a chess move and a chessboard as input, and then outputs the resulting chessboard.
Requirements
The chess move will have this format:
<from square><to square>[<promoted to>]
Examples:
d2d4
f8g7
a7a8R
The chessboard format is not fixed, but there must be a 1-to-1 relation between the board and the string representing it. Also, the format of the input must be the same as the format of the output. Two suggestions of what it could look like:
rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR
rnbqkbnr pppppppp 00000000 00000000 00000000 00000000 PPPPPPPP RNBQKBNR
It is not required to store anything except the location of the pieces, and validity of moves can be assumed.
Scoring
Base score is character count (assuming your program can move pieces for all moves)
Bonus multipliers:
• If the program updates the promoted piece, divide by 2
• If the program also moves the rook when castling, divide by 2
• If the program also removes the pawn when capturing en passant, divide by 2
The moves, and castling & en passant in particular, are explained on Wikipedia.
So basically writing a 100 character solution for the base problem gives the same score as an 800 character solution with all bonus multipliers.
Examples
If you would choose to use one of the board formats above, your input would look like one of these strings:
e2e4 rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR
e2e4 rnbqkbnr pppppppp 00000000 00000000 00000000 00000000 PPPPPPPP RNBQKBNR
Your corresponding output string would then be one of these:
rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR
rnbqkbnr pppppppp 00000000 00000000 0000P000 00000000 PPPP0PPP RNBQKBNR
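As a rough size reference (my own ungolfed sketch, not a submission; the function name and I/O choices are illustrative only), the base task on the space-separated board format comes down to a couple of index computations and one swap:

# Ungolfed sketch of the base task (no castling, en passant or promotion),
# using the space-separated example format where rank 8 is the first text row.
def apply_move(line):
    mv, board = line.split(None, 1)            # "e2e4", then the board
    rows = [list(r) for r in board.split()]
    c1, r1, c2, r2 = (ord(ch) - ord(ref) for ch, ref in zip(mv[:4], "a1a1"))
    r1, r2 = 7 - r1, 7 - r2                    # rank 1 is the last text row
    rows[r2][c2] = rows[r1][c1]                # move the piece...
    rows[r1][c1] = "0"                         # ...and empty the source square
    return " ".join("".join(r) for r in rows)

print(apply_move("e2e4 rnbqkbnr pppppppp 00000000 00000000"
                 " 00000000 00000000 PPPPPPPP RNBQKBNR"))
# rnbqkbnr pppppppp 00000000 00000000 0000P000 00000000 PPPP0PPP RNBQKBNR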
• Before I get on to more specific criticisms: as presented, without the bonus this is too trivial to be interesting. I suggest removing some flexibility: require Fen notation for the board position and algebraic notation for the move, and making the current bonus options mandatory. On specifics: it's not clear why you talk about storage; and the board position notations you suggest don't include enough information to know whether en passant is possible. – Peter Taylor Dec 22 '13 at 23:56
• @PeterTaylor I agree that compared to chess programs this may be trivial, but I would like to make it a golf challenge. Compared to the hot code golf questions this is quite elaborate already in its basic form. (For a good solution the board design may need to be changed drastically). It is true that there is no attention to the legality of moves (whether it is possible to capture en passent) but for a mere viewer this is not required so I am not too worried about this. So far the chess questions seem to get very few answers as they tend to be complex and I hope to offer relatively easy entry. – Dennis Jaheruddin Dec 30 '13 at 11:02
• Your point about en passant is valid - you had said in the spec to not worry about legality. I'll try to convince you of my first point: without the bonus, this reduces to: a) parse first four characters into (col 1, row 1, col 2, row 2); b) take board as a 64-char string; c) board[8*row_2+col_2] := board[8*row_1+col_1]; board[8*row_1+col_1] := ' '; print board. This is trivial compared to any good golf question. (Note that the hot questions at the moment are neither golf questions nor good questions). – Peter Taylor Dec 30 '13 at 12:14
• This sandbox post has had little activity in a while. Please improve / edit it or delete it to help us clean up the sandbox. Due to community guidelines, if you don't respond to this comment in 7 days I have permission to vote to delete this. – programmer5000 Jun 9 '17 at 15:40
# Black Box
Your task is to analyze a given situation for the game Black Box. Given a sequence of guesses and answers, your program is to either print the solution or suggest the next move.
## The game
The board consists of 8×8 cells, with edges labeled like this:
I'll probably create nice images here, particularly to make sure that the squares of the board are really square.
abcdefgh
i I
j J
k K
l L
m M
n N
o O
p P
ABCDEFGH
The player shoots rays into the interior of the box, where they might get deflected, reflected or absorbed. He is told the position where the ray leaves the black box again, and from that has to deduce the positions of 4 atoms inside the black box.
I'll have to include more of the game rules here, but for now see Wikipedia.
# Input and output
Input is a sequence of lines, each consisting of two characters. The first denotes the point where the ray of light enters the black box, the second the place where it comes out again. In the case of a reflection, both characters will be equal. In the case of a hit, the second character will be -.
If the input is enough to fully determine the locations of the atoms, then output should be four lines giving the coordinates of each atom. The lines should be two lower case characters each, the first giving the row and the second giving the column of the found solution. The atom positions must be printed in lexicographical order.
If the input is consistent with more than one set of atom positions, then the output should consist of a single line containing a single character, which is the location where the next ray should be shot. That location has to be chosen in such a way that it can help find the solution. This is the case unless all of the atom positions consistent with the input so far would produce the same output for this next ray as well.
Your output has to be terminated by a newline character.
# Examples
Let's take the atom configuration the Wikipedia article uses as an example as well:
abcdefgh
i I
j J
k O O K
l L
m M
n O N
o O
p O P
ABCDEFGH
If the input were
cf
D-
Em
HH
Co
then the output should be
kb
kg
nd
pg
but if the input were only
Em
HH
then the output might be for example
K
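To make the edge labelling concrete, here is a small sketch (mine, not part of the spec; the helper name is illustrative) that turns an edge label from the diagram above into the cell where a ray enters the box and its direction of travel, as (row, column) and (drow, dcolumn) pairs:

# Rows are labelled i..p and columns a..h; A..H is the bottom edge, I..P the right edge.
def entry(label):
    if label in "abcdefgh":    # top edge: enter in row 0, heading down
        return (0, ord(label) - ord("a")), (1, 0)
    if label in "ijklmnop":    # left edge: enter in column 0, heading right
        return (ord(label) - ord("i"), 0), (0, 1)
    if label in "ABCDEFGH":    # bottom edge: enter in row 7, heading up
        return (7, ord(label) - ord("A")), (-1, 0)
    if label in "IJKLMNOP":    # right edge: enter in column 7, heading left
        return (ord(label) - ord("I"), 7), (0, -1)

print(entry("E"))  # ((7, 4), (-1, 0)): bottom edge, column e, travelling up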
## Scoring
This is code golf, so shortest answer wins. However, I'll only accept answers which are practical in so far as they compute their result in reasonable time. I'd say no more than five minutes on my system where I'll evaluate the answers, and I'll simply hope that correct solutions will be much faster and incorrect ones much slower, so that the speed of my system doesn't make a difference. A submission which gives a wrong answer for one of my test cases will be disqualified until it gets fixed. I will probably point out the problem in a comment to that post.
# Create a program with "exact repetition" in its source code
The task is to create a program, with the following restrictions placed on the printable ASCII characters in the source code: choose some k > 0.
• Every non-alphabetic character has to appear exactly k times.
• Every alphabetic character has to appear at most k times.
• This rule differs from the former in order to avoid boring dummy identifiers while still making it a challenge to choose good library functions to call.
Character set definitions used:
• Non-alphabetic characters are !"#$%&'()*+,-./0123456789:;<=>?@[\]^_{|}~ and ` (backtick).
• Alphabetic characters are ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz.

Note that no restriction is placed on characters outside of the range of printable ASCII characters (including control codes, tabs, newlines, higher Unicode codepoints, etc).

What the program does is up to you; be creative. Some general guidelines:

• Programs that do something interesting might have better chances, although more impressive code structure (i.e. fewer comments) is also beneficial.
• Stuffing excess characters in comments is boring, and should be avoided/is discouraged.
• Dead/no-op code isn't terribly interesting either, but is probably unavoidable and at least has to conform to the language's grammar.

This is a popularity-contest: whatever has the most upvotes at Feb 1, 2014 gets accepted as the winner.

### Example answer (C)

#
#
/*$$@*/_[]={9.};main()
{printf("He%clo \
world!%c\
",2^7&!8.&~1|~-1?4|5?0x6C:48:6<3>2>=3<++_[0],'@'^79-5);}

Prints "Hello world!" (adapted from an answer to another question). Probably wouldn't score a lot (since what it does isn't terribly interesting). Each of the non-alphabetic characters appears exactly twice, and no alphabetic character appears more than twice.

For meta: I want to post this, but I'm worried that "do something interesting" might give too little guidance and the question won't receive many answers... thoughts? Is it good as-is, or should I come up with some task that one should be required to implement (and possibly change the ruling to code-challenge, with length + 2^(characters-in-comments) as the score)?

## 4 and 20 baked in a π

While some might describe π as a string of seemingly random numbers, one can also look at it in a way similar to a monkey with a typewriter. Eventually, it should calculate out to something more interesting. For example, the sequence 1337 shows up 4,814 places to the right of the decimal. At 700,731 places right of the decimal, you'll find the sequence 160151, which is "pi" represented as ASCII (although you'll find a 'pointer' to it much faster, as the sequence 700731 begins at 29,830 digits to the right).

So, your task is to make a program to find things in π. Your program will accept a positive integer and output the number of places right of the decimal point that number appears. To keep the run times down, input can be limited to numbers in the range of 0 to 1000 (without leading zeros).

Example: Using 415 as the input, the output should be 2:

3.14159
   ^

Rules:

• You can not use any precalculated values of π, including language constants, built-in functions that return π or digits of π, or any resource outside the code itself (such as files or websites).
• You can not use any trig functions to calculate π.

Bonus points if you find the sequence 072 101 108 108 111 044 032 087 111 114 108 100 033.

This is code golf, so lowest score wins.

• It's not clear to me whether you require answers to support leading zeroes. Also: program, named function or snippet? And how indexed? (Giving 415 as a test case would be a good way to answer the last question) – Peter Taylor Mar 11 '14 at 6:58
• Isn't this just Calculate 500 digits of pi with a search function tagged on at the end? By the way, your bonus points are quite safe — even if you searched a trillion trillion trillion digits of pi, your chance of finding an arbitrary 39-digit sequence would still be less than 0.1%. – r3mainer Mar 11 '14 at 14:59
• Edited to clarify leading zeros and indexing.
@squeamishossifrage - Yes and no. The number of digits to find the answer depends on the input, which both limits the choice of algorithm to generate the search space and gives more ample room to golf the integration of the search function. The worst case is under 10000 digits for n between 0 and 1000. I suppose I could put in a time limit of a couple minutes and expand the range of n to 10000 (worst case is just under 390k), but that seems obnoxious. Thoughts? – Comintern Mar 11 '14 at 17:20
• @AlexA. - Not a drug reference. – Comintern Apr 1 '15 at 22:34
• Hello! This looks like a good but abandoned meta post, would you be willing to offer it for adoption? (If you want to, you can still post to main.) Due to community guidelines, if you don't respond to this comment in 7 days I have permission to adopt this. – programmer5000 Jun 9 '17 at 16:15

# Create a calendar

We all know HDD space is precious and bandwidth is expensive, so it is best to reduce the size of your executables. Let's start with your calendar:

Your task is to build a calendar app in at most 512 bytes. The calendar must at least support the following features, but additional features may gain you additional upvotes:

• It must be able to show the current month with the current day highlighted
• The user must be able to find out the week day of each day

Rules:

• Maximum code length is 512 bytes (counted as UTF-8 without BOM)
• You may subtract the bootstrapping code (i.e. int main(int argc, char **argv) in C or <?php in PHP) and imports from the final size, to allow more verbose languages to compete
• You may use standard time / date functions of your programming language, as long as they don't allow you to output a ready-to-use calendar
• No network access (I said bandwidth is expensive!)
• Voters decide on the amount of features / look and feel / creativity

This needs a tag for the size restriction, any suggestions?

• "bandwidth is expensive" <sup>[citation needed]</sup> – John Dvorak Mar 22 '14 at 5:27
• Seems rather close to Output: Calendar Month – Peter Taylor Mar 22 '14 at 5:33
• Who decides what counts as bootstrapping code? It seems odd to arbitrarily exclude code like that, and the examples you gave can be golfed a lot: they're more or less equivalent to main(){ and <? respectively. – Wander Nauta Mar 24 '14 at 20:49
• @WanderNauta Bootstrapping code is the code that's essential to get a working noop program. – TimWolla Mar 24 '14 at 21:00
• @TimWolla That definition won't fly. A zero-byte file is a working noop PHP script, for example. – Wander Nauta Mar 24 '14 at 21:01
• @WanderNauta A zero byte file is a working noop in every language. – TimWolla Mar 24 '14 at 22:12
• So what's bootstrapping code then? :) – Wander Nauta Mar 24 '14 at 22:53
• for the limit I'd say code-shuffleboard or restricted-source – Einacio Mar 26 '14 at 15:57
• This sandbox post has had little activity in a while. Please improve / edit it or delete it to help us clean up the sandbox. Due to community guidelines, if you don't respond to this comment in 7 days I have permission to vote to delete this. – programmer5000 Jun 9 '17 at 16:28

# ASCII ART edge detection

As the title says, I was thinking of a contest in which one must detect the edges of an ASCII art. The code should accept a B/W ASCII art as input. A B/W ASCII art is defined (by me) as an ASCII art with only one kind of non-white-space character (in our case: an asterisk *).
And as output it should produce a standard ASCII art (all ASCII characters are accepted) which traces the contour of the first. The purpose of using more than one character in the output is to make some edges smoother. For instance, one could let this input

***
****
******
******
******
******
****
***

become an outline drawn with characters such as _, /, \, | and ). (The example output art was flattened during extraction and its alignment is no longer recoverable.)

The input is a \n-separated string. Each line has a maximum of 80 characters. The number of rows is not specified.

I'd put it as a popularity-contest since, besides my simple code, I'd like to see more "round" edge detections which use more than one character to smooth edges. Also, I don't want to tag it as code-golf since I'm quite sure one can do this job using aplay (with an ASCII art renderer) and command-line GIMP (to apply edge detection).

As a popularity contest, there are no strict rules on what the output should look like - just use your imagination!

This is my sample program:

import fileinput as f
import re as r
import copy as c
a,s,p='*',' ','+'
def read(n):
    s=[list(' '*n)]
    for l in f.input():
        if(len(l)>n):l=l[:n]
        k=list(r.sub('[^ ^\%c]'%a,'',' '+l+' '))
        s.append(k+[' ']*(n-len(k)))
    s.append([' ']*n)
    return s
def np(s):
    s=c.deepcopy(s)
    for l in s[1:-1]:
        for w in l[1:-1]:
            print(w,end='')
        print()
def grow(i):
    o=c.deepcopy(i)
    for x in range(1,len(o)-1):
        for y in range(1,len(o[x])-1):
            if(i[x][y]==a):
                o[x-1][y-1]=o[x-1][y+1]=o[x-1][y]=o[x+1][y]=o[x+1][y-1]=o[x+1][y+1]=o[x][y+1]=o[x][y-1]=a
    return o
def diff(i,o):
    c=[]
    for x in range(0,len(i)):
        l=[]
        for y in range(0,len(i[x])):
            if(i[x][y]==a and o[x][y]==s):
                l.append(p)
            else:
                l.append(s)
        c.append(l)
    return c
I=read(80)
np(diff(grow(I),I))

Here below I put both the input and the output of the program. It is an 80x70 ASCII art: 70 lines of 80 characters, each separated by \n.
(The sample input, an 80x70 ASCII-art panda drawn in asterisks, and the corresponding outline of + characters given as a possible output, were flattened during extraction; their line structure is not recoverable here.) The output shown was also the output produced by the script above. Of course it is not the best output, and I'm sure one can easily produce a smoother one.

• It would be useful to be more precise about which characters should be non-blank in the output: characters which were non-blank in the input but adjacent to blanks, or characters which were blank in the input but adjacent to non-blanks? – Peter Taylor Apr 4 '14 at 9:50
• Thanks for pointing that out. I've rewritten the phrase in the answer. You can use every ASCII character in the output (as in usual ASCII art). E.g. I used only the + symbol, but one could make round edges using symbols like \ or / etc. – Antonio Ragagnin Apr 4 '14 at 9:55
• edited again... – Antonio Ragagnin Apr 4 '14 at 10:09
• Can you define the input that will be used by all the participants? It's necessary to have only one input to compare the outputs of the different answers. The first example is too simple and the last one is too long. So I suggest using something between these 2 examples. – A.L Apr 4 '14 at 17:29
• Thanks, I chose a cute panda as input. – Antonio Ragagnin Apr 6 '14 at 8:09
• one could let this input (…) could became → try something like "this input (…) could become"; outpuit → output – user2428118 Apr 7 '14 at 13:31
• I edited it now, so do you people think it is a good question? – Antonio Ragagnin Apr 9 '14 at 17:15
• Hello! This looks like a good but abandoned meta post, would you be willing to offer it for adoption? (If you want to, you can still post to main.) Due to community guidelines, if you don't respond to this comment in 7 days I have permission to adopt this. – programmer5000 Jun 9 '17 at 16:32
• Hi @programmer5000, I already asked such a question. Do you mean to re-use it again? See: codegolf.stackexchange.com/questions/26139/… – Antonio Ragagnin Jun 12 '17 at 13:39

Hi, first time golf questioner, hopefully I'm doing it right!

# Maths Trade Calculator

A maths trade (or "math" trade if you prefer) is a way of calculating complex trades of arbitrary items in a circle of participants where not all participants want all items. X participants have an item they would like to trade. Each participant is assigned a unique number, and provides a list of (numbers identifying) the items they would willingly trade their item for. They may provide an empty list (i.e. they would rather not trade).

## Input

X lines, one for each participant, comprising a unique number identifying them, followed by a colon, then a comma-separated list of numbers identifying other items that they would trade for. e.g.:

1:2,3,4
2:
3:1,4
4:2

The numbers identifying the participants will not necessarily be in order, nor will they necessarily be 1 to X. You may assume that they will be numeric. This string can be in STDIN, or an argument to a function, or similar, and can be followed by a new-line or not, whatever the coder prefers.

## Output

One or more trade loops in which all participants are making trades they're happy with.
Each loop should be on a new line and comprise a participant number, followed by "->", followed by the participant they should give their item to, then another "->", and another participant number etc, until the loop is closed and the last participant number matches the first one. Another line is added with the number of completed trades. e.g.:

1->3->1
2

Participants for which no valid trade is possible are omitted. Output can be via STDOUT, or returned as a string, or something else, with an optional final new-line.

## Trade rules

1. A participant may not be involved in more than one trade
2. A participant may not receive an item that they didn't want
3. All loops must be closed
4. The maximum number of possible trades should be completed (i.e. no submitting a zero-trade output and claiming it's valid). If there are multiple permutations, pick whichever you prefer.

This is a code golf challenge, so shortest working code wins.

### Some more example inputs and possible outputs

### 1

1:2,3,4,5
2:3,5,7,9
3:1,2,5,6,10
4:
5:1,2,3,4,10
6:5,7,9
7:3,6,9,10
8:1,2,4,10
9:1
10:9

1->9->10->3->1
7->2->5->6->7
8

For instance, in this trade: 9 stated that he would accept 1's item in a trade, 10 stated that he would accept 9's item, 3 would accept 10's and 1 would accept 3's. In the second loop, 2 receives 7's item, 5 receives 2's, 6 receives 5's and 7 receives 6's. (Other outputs are possible from this input.)

### 2

1:2
4:
2:3
5:1
3:4

0

### 3

1:5,9
5:1
9:1

1->5->1
2

1->9->1 is also valid in this case, but both cannot be completed. Either is acceptable.

Thanks for reading guys! Let me know if there are any improvements I can make.

• "can be followed by a new-line or not, whatever the coder prefers." How flexible is this? For instance, can I use trailing commas, like 1:2,4,7, if it makes my code shorter? – Martin Ender May 2 '14 at 17:28
• Will the participants always be numbered 1 to n and their input lines provided in order? If so, state it. If not, include a test case which fails if an implementation decides to ignore everything before the : in each input line. – Peter Taylor May 5 '14 at 10:02
• @m.buettner I would say a trailing comma is not acceptable, on the end of any line, or the end of the input/output. – Johno May 6 '14 at 8:55
• @PeterTaylor Good tip. I'll correct the question to state that you can't assume that the numbers will be 1 to n, in order. – Johno May 6 '14 at 8:57
• Hello! This looks like a good but abandoned meta post, would you be willing to offer it for adoption? (If you want to, you can still post to main.) Due to community guidelines, if you don't respond to this comment in 7 days I have permission to adopt this. – programmer5000 Jun 9 '17 at 16:39

## Design and Solve a Maze

(this question on hold while the details are ironed out)

Your task is to play the roles of both characters in this scene from Inception. In it, Cobb gives Ariadne a challenge: You have two minutes to design a maze that takes one minute to solve.

Some liberties will be taken with that description. Most importantly, this challenge is not time-based; rather, scores are based on the effectiveness of your mazes and maze-solvers. I apologize for the many edits to this challenge as we iterate towards an easy and fair format.

### Part I: Maze format

All mazes are square. A cell in the maze is represented as a zero-indexed tuple row column. Walls are represented by two binary strings: one for horizontal walls (which block movement between rows) and one for vertical walls (vice versa).
On an NxN maze, there are Nx(N-1) possible walls of each type. Let's take a 3x3 example where the cells are labelled:

A B|C
---
D|E F
---
G H|I

all possible vertical walls are: AB BC DE EF GH HI. Translated into a string, the walls shown are 011001 for vertical walls and 010010 for horizontal walls. Also, by "binary string" I mean "the characters '0' and '1'".

The full maze format is a string which contains, in this order:

• width
• start cell tuple
• end cell tuple
• horizontal walls
• vertical walls

For example, this maze:

  0 1 2 3 4
  _________
0 | | E| _|
1 | _|_|_ |
2 |_ _ _ | |
3 | _ _ | |
4 |____S|___|

start:(4,2) end:(0,2)

is formatted to this:

5
4 2
0 2
00001011101110001100
10100110000100010010

### Part II: The Architect

The Architect program creates the maze. It must play by the rules and provide a valid maze (one where a solution exists, and the end is not on top of the start).

input via stdin: Two positive integers:

size [random seed]

Where size will be in [15, 50]. You are encouraged to make use of the random seed so that matches can be replayed, although it is not required.

output to stdout: A valid size x size (square) maze using the format described in Part I. "valid" means that a solution exists, and the start cell is not equal to the end cell.

The score of an Architect on a given maze is

    # steps taken to solve
------------------------------
max(dist(start,end),(# walls))

So architects are rewarded for complex mazes, but penalized for each wall built (this is a substitute for Ariadne's time restriction). The dist() function ensures that a maze with no walls does not get an infinite score. The outside borders of the maze do not contribute to the wall count.

### Part III: The Solver

The Solver attempts to solve mazes generated by others' architects. There is a sort of fog-of-war: only walls adjacent to visited cells are included (all others are replaced with '?').

input via stdin: the same maze format, but with '?' where walls are unknown, an extra line for the current location, and a comma-separated list of valid choices from this location. (This is a big edit that is meant to make it simpler to write a maze-parsing function.)

example (same as the above 5x5 maze after taking one step left):

5
4 2
0 2
???????????????011??
????????????????001?
4 1
4 0,4 2

Which corresponds to something like this, where ? is fog:

  0 1 2 3 4
  _________
0 |????E????|
1 |?????????|
2 |?????????|
3 | ?_?_????|
4 |__C_S|_?_|

output to stdout: One of the tuples from the list of valid choices.

Each Solver's score is the inverse of the Architect's score.

### Part IV: King of the Hill

Architects and Solvers are given separate scores, so there could potentially be two winners. Each pair of architects and solvers will have many chances to outwit each other. Scores will be averaged over all tests and opponents. Contrary to code golf conventions, highest average score wins! I intend for this to be ongoing, but I can't guarantee continued testing forever! Let's say for now that a winner will be declared in one week.

### Part V: Testing

I have written a Python testing kit which includes a Maze class for parsing and writing in the proper formats, as well as an example architect/solver pair: Daedalus and the Minotaur. Available on both Dropbox and GitHub.

### Part VI: Submitting

• I maintain veto power over all submissions - cleverness is encouraged, but not if it breaks the competition or my computer! (If I can't tell what your code does, I will probably veto it)
• Come up with a name for your Architect/Solver pair.
• Post your code along with instructions on how to run it.

• I suppose input is via STDIN? You might want to mention that explicitly, because at least the architect could just as well take the input via command-line arguments. – Martin Ender May 15 '14 at 15:34
• updated. I have a driver/referee program which will handle I/O; I'll update it to use stdin/stdout since that will no doubt be the easiest standard. – wrongu May 15 '14 at 16:04
• @m.buettner before de-sandboxing this, would you be willing to try the test kit? – wrongu May 15 '14 at 18:20
• I'd love to, but I'm afraid I'm too busy this week. Try asking for help in the chatroom. – Martin Ender May 15 '14 at 18:35
• Possible architect issue: With this scoring method (steps/walls), you can get a minimum score of 3 by simply putting the start/finish right next to each other with a single wall between. It takes three steps to go around. Most actual mazes I've seen have too many walls to make a score of 3 likely, much less guaranteed. – Geobits May 16 '14 at 13:51
• That's a problem. What if the dist function was shortest path? Then only mazes which cause detours could get a score > 1 – wrongu May 16 '14 at 15:09
• That would probably be better. That way it's scored on best vs actual. It would take away the incentive to figure out how to build hard mazes with few walls, though, which was interesting itself. – Geobits May 17 '14 at 3:01
• Hey rangu... not sure if you're still planning to do this thing, but overactor just said something in chat which reminded me of your challenge and might be a neat way to avoid the combined score: split this up into two code-challenges, one for maze generation and one for maze solving. Each code-challenge's benchmark set (to determine the scores) would be the outputs of the other challenge's participants. Then you could just pick a best solver and a best generator independently. – Martin Ender Aug 1 '14 at 11:18

Author note: I was thinking about new genres today, and I had an idea. What if there could be a challenge that encourages people to write good code, instead of the code-golf gibberish we all know? Here's a challenge that attempts to do that. (This could even possibly be a , which would be great because it would bring a greater volume of high-quality questions to the site, but I'm terrible at coming up with names. Feel free to suggest something in the comments.)

# Build your own image editor

For this challenge, you will create the best GUI image editor you can, performing as many of the tasks below as possible... from scratch.

## Tasks and scoring

Here are the features / tasks used to score your program. Each task is worth a certain amount of points, which is specified in brackets before the task description. For convenience, each task will also be prefixed by an ID string so that you can refer to them when describing your program.

• [1 A] Brush tool: Simple, click and drag the mouse to draw freestyle doodles. Must draw a contiguous path.
• [1 A1] Ability to change the brush size.
• {TODO: etc., add more}

## Requirements

Your editor must conform to the following requirements:

• Must accept input via the mouse. Tools (brush, flood fill, etc.) can be switched and configured with keyboard shortcuts, by clicking icons with the mouse, through a menu, or however you would like.
• You may not use a single built-in function to accomplish one or more of the tasks. For example, if your language has a built-in image flood fill function, you may not use it and must build your flood fill from scratch.
## Final score and voting

This is the syntax you should use to describe your score in your answer:

    # {language}, {your score} score
    <sup>(features implemented: {A, A1, ...})</sup>

    {your code here}

    {description, comments, other notes, etc. here}

The number of votes your post has (upvotes minus downvotes) will be multiplied by {TODO: figure out a good number} and added to your score. (Do not add this to the score in your post, since votes change constantly; I will add them manually.)

Voters, please vote according to the following criteria:

• elegance and readability of the source code
• ease of use of the image editor and how powerful it is
• remember to sort by "active" so that you're voting for new answers too, and not just the top-voted ones!

• Your questions should be reasonably scoped. If you can imagine an entire book that answers your question, you're asking too much. A really good answer to this would run into millions of lines of code. – Peter Taylor Jun 4 '14 at 15:52
• @PeterTaylor Yes, I was a little worried about that, but if it's not broad enough, it will be easy to just implement all the features. Any suggestions for fixing this? I was thinking of adding a "brevity" criterion in the voting section, but that doesn't seem like an ideal solution. – Doorknob Jun 4 '14 at 16:00
• To be honest, the site for good code is Code Review. They already have a monthly challenge, for which they post snippets for review. I don't see a need to copy them. – Peter Taylor Jun 4 '14 at 16:12
• @PeterTaylor Wait, isn't Code Review for questions and answers, not challenges and contests? In any case, is there any reason for that to prevent us from posting challenges like we always have? – Doorknob Jun 4 '14 at 16:17
• meta.codereview.stackexchange.com/questions/tagged/… . Surprised you don't know about it, given how dedicated you are to spying on them ;) In general, if a question is on topic for multiple stacks then there's no obligation to do the sensible thing and post it on the one which it best fits, but you should expect people to ask why you're not doing the sensible thing. I think you're going to have to work hard to turn this into a question which fits this site, whereas it's already a good fit for CR's challenge programme. – Peter Taylor Jun 4 '14 at 16:31
• @PeterTaylor Hmm, that's strange. Wouldn't that be more on-topic here? (And I only occasionally pop in to their chat/meta to see what they're up to. :-P) – Doorknob Jun 4 '14 at 16:32

# Help Joe Bloggs with his password hash

Joe was confidently using "password1" as his main password for all his accounts until one day he received an e-mail from fBay: his account had been compromised and he had to change his password immediately. Worse yet, the attacker had access to all of Joe's accounts.

Being an engineer, Joe thought: what if I could somehow hash my password using a keyword? I wouldn't need to remember any passwords and I would have a different one for each account.

Joe then creates an algorithm - he takes the domain name as a key and creates the password for each of his accounts, consisting of:

1. (<consonants><vowels>) (in alternating case: lower, capital, lower...)
2. <number of consonants><number of vowels>
3. <the sum of the consonant and vowel counts, converted to a character on a US QWERTY keyboard>

Joe then opens an account on SO to create a new code golf challenge. He uses stackoverflow as a key to generate a password:

1. sTcKvRfLwAoEo - consonants and vowels in alternating case
2. 94 - 9 consonants, 4 vowels
3. 9+4=13, 1+3=4, Shift+4=$
Therefore, Joe's password for stackoverflow is: sTcKvRfLwAoEo94$

### Challenge

Create the shortest function to generate a password given the rules above. The code should accept a string parameter d and return/display the generated password.

### Rules

1. Only Latin letters from the input should be used. Any other characters should be ignored.
2. Minimum input length is 1 letter. (Guys at q.com need passwords as well!)
3. Assume Y is a vowel.
4. If vowels or consonants are missing, use 0 accordingly. E.g. the input a would result in a01!
5. Shortest code wins.

List of vowels and consonants
US QWERTY keyboard

• Thanks for the feedback @m.buettner. I meant to say, without using any libraries. The problem is that people sometimes become lazy and dive straight into using Linq where a bit of thought will do. – mai May 28 '14 at 13:14
• Well actually you can, I'm just checking now. You can do a lot of manipulations on strings without libraries. – mai May 28 '14 at 13:18
• Looping over string characters and concatenation work perfectly. Nevertheless, I've updated the challenge. If a function depends on a library, the library must be included in the character count. – mai May 28 '14 at 13:21
• 1. Strictly speaking, in .Net you don't have strings without libraries. The string keyword is syntactic sugar for a class in mscorlib. 2. As things currently stand, your rule 1 strictly prohibits something and then says what to do if you ignore that prohibition. This is illogical. It's also unclear what "that" in "please inlcude that in characters count" means. Does it mean that each submission should be a program as opposed to a code snippet? If so, state it explicitly. – Peter Taylor May 28 '14 at 13:32
• Hmm.. I don't know how to write it the best way. mscorlib is included by default so that is permissible. I don't want the code to use other libraries such as Linq, as it's less fun. – mai May 28 '14 at 13:47
• @m.buettner I agree with you. Nevertheless, there will be solutions provided in other languages as well (there always are). And I would like the authors of those solutions to think about the best approach in their language of choice without depending on libraries like Linq. – mai May 28 '14 at 14:00
• Does Rule 2 mean ONLY vowels/consonants are to be used from the input? What about symbols *@#$ etc.? Depending on that answer, potentially clarify Rule 5 regarding symbol input. As for Step 3 in the hash, should that progress further, similar to my Appended Numbers game, so 103 consonants and 5 vowels would follow as 103+5 = 108, 1+0+8/10+8, etc.? – Matt Jun 4 '14 at 2:35
• @Matt, clarified - only Latin letters are used from the input. If consonants or vowels are missing, use 0 instead. The sum should progress, until it's <=9. E.g. 103 consonants, 5 vowels: 103+5=108, 1+0+8=9. Then, Shift+9='(' – mai Jun 18 '14 at 10:36
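For concreteness, here is an ungolfed reference sketch of the scheme in Python, incorporating the digit-sum clarification from the comment above (it assumes plain ASCII input; the function name is mine):

    def joe_hash(d):
        letters = [c for c in d.lower() if c.isalpha()]
        vowels = [c for c in letters if c in 'aeiouy']      # rule 3: Y is a vowel
        cons = [c for c in letters if c not in 'aeiouy']
        # consonants then vowels, in alternating case starting lowercase
        body = ''.join(c.upper() if i % 2 else c
                       for i, c in enumerate(cons + vowels))
        n = len(cons) + len(vowels)
        while n > 9:                                        # digit-sum down to one digit
            n = sum(int(ch) for ch in str(n))
        return body + str(len(cons)) + str(len(vowels)) + ')!@#$%^&*('[n]

    # joe_hash('stackoverflow') -> 'sTcKvRfLwAoEo94$'
    # joe_hash('a')             -> 'a01!'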
# Diplomacy
Note for Sandbox: I have not finished (or really started) the control program for this game, because I wanted to see if there was interest in it before I dedicated too much time to the project. That means the rules are still open to tweaking, so please leave a comment if you have a suggestion, and comment or vote if you are interested in seeing this happen.
Diplomacy is a complex strategy game, with a very entertaining combat system. This challenge will be to write a bot to compete in a simplified version of diplomacy combat.
## Rules
### Rounds
Countries (bots) will begin the game with 10 health, representing their remaining will to fight. The goal is to eliminate all other Nations by attacking them until they have 0 health.
The game will consist of several rounds. On the first round, all bots will receive 2 numbers as command line arguments: the first will be the total number of countries fighting, and the second will be their number in the list. Each following round, bots will receive command line arguments containing the actions taken by each player last round and a list of all bots and their remaining health, separated by commas, like so:
1:A2,2:S3,3:A4,4:A3 1:10,2:7,3:7,4:1
Each bot must then output a desired action, which is one of two commands:
1. Attack a player. This is done by printing the letter A followed by the number of the player you wish to attack. For instance, A3
2. Support a player. This will give the player you support a boosted attack.
### Resolving combat
After players have sent in their moves, attack scores will be calculated as follows (a short sketch of the resolution follows the list):
1. All players start with a strength of 1, and one point is added for every player supporting them. For instance, if the moves are 1:A3,2:S1,3:A2,4:S2 then bot 1 has strength 2, bot 2 has strength 2, bot 3 has strength 1, and bot 4 has strength 1.
2. After strength has been calculated, bots will deal damage based on their strength. The formula for damage is (Attacker's strength + 1) - (Defender's Strength) In the above situation, player 3 would take 2 damage and player 2 would take 0 damage. Note that, unlike regular diplomacy, attacking a supporter does not cut support.
3. All attacks take place simultaneously and independently. This means that if players 1 and 2 both attack player 4, then they each deal 1 damage. If player 3 were to support player 4, then player 4 would take no damage.
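As a sanity check of the rules above, a small Python sketch of the resolution (a dict keyed by player number stands in for the comma-separated input; clamping damage at 0 is an assumption the spec doesn't state):

    def resolve(moves):
        # moves: player number -> move string, e.g. {1: 'A3', 2: 'S1', 3: 'A2', 4: 'S2'}
        strength = {p: 1 for p in moves}
        for p, m in moves.items():
            if m[0] == 'S':
                strength[int(m[1:])] += 1
        damage = {p: 0 for p in moves}
        for p, m in moves.items():
            if m[0] == 'A':
                t = int(m[1:])
                damage[t] += max(0, strength[p] + 1 - strength[t])
        return damage

    # resolve({1: 'A3', 2: 'S1', 3: 'A2', 4: 'S2'}) -> {1: 0, 2: 0, 3: 2, 4: 0},
    # matching the worked example above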
### Round Ends
After combat has been resolved, countries that have 0 health will no longer be able to attack or support. However, they will still be listed in the input with a health of 0. When a bot is eliminated, all remaining bots will receive a single point.
### Ending the game
The game ends when either 100 turns have elapsed or 2 or fewer players remain. At this point, the player with the highest remaining health is the winner and receives 1 point. In case of ties, all tied bots will receive 1 point. If all bots die on the same turn, this is not a tied victory but mutually assured destruction, and all bots will receive 0 points.
### Scoring
The control program will run 100 rounds of the game. The winner will be the country with the most points at the end of 100 rounds.
## Code
You may write in any language I can reasonably compile. I will make an effort to compile odd languages, but make no promises as to my ability to do so. Please provide your source code, an explanation, and a command line command to run your program.
Notes
• You are allowed to write to a file. In fact, you are encouraged to do so.
• Because this is a game where cooperation is paramount, you are allowed to write bots that work together, with the following restriction:
• Only two bots can be written by a single player to work together at a time.
• Standard loopholes apply. You are not allowed to change the way the control program runs. If you provide invalid input to the control program, the program will just skip your turn. However, you are allowed to spy on other countries' files, and all bot programs will be in the same folder at runtime. This is war, after all!
• I reserve the right to disqualify any country that takes more than about a second to run, or that tries a loophole not mentioned within. That being said, if it is sufficiently clever I will probably let it go.
I will have source code up soon for a sample country that will be competing, and will post the control program when I finish it.
• "In case of ties, all tied bots will revive 1 point". Is that supposed to say "receive"? "If all bots die on the same turn, ... all bots will receive 0 points." If there are two bots left, each of which has received 1 point from the earlier death of a third bot, and the two bots destroy each other on the same turn, what's the final score for the round? I'm not sure whether it's 0-0-0 or 1-1-0. "You are allowed to write bots that work together": but how can they identify each other? Do they have to use their moves as a covert channel? – Peter Taylor Aug 29 '14 at 14:28
• "Support a player. This will give the player you support a boosted attack." Or defence. Might be clearer to say "boost that player's strength for the turn". Should also state whether or not it's possible to support yourself. – Peter Taylor Aug 29 '14 at 14:29
# Check GenericScript source code for compiler errors
Given the source code for a GenericScript program as input, parse the source code to check that it conforms to the syntax rules for the language. The syntax definition for GenericScript is below. If a part of the source code is found to be invalid, the program should output "Invalid syntax", otherwise it should output "Valid syntax".
Win Criteria
This is code golf. Shortest code wins.
Syntax
Source code will be considered to be valid if it matches the rule for "Program" below.
Program = Sequence
Sequence = Statement [Sequence]
Statement = SequenceBlock | Assignment | If | While | Output
SequenceBlock = "{" Sequence "}"
Assignment = Identifier "=" (String | Bool) ";"
If = "if(" Bool ")" Statement ["else" Statement]
While = "while(" Bool ")" Statement
Output = "print(" String ");"
Identifier = {Any sequence of alphanumeric characters prefixed with "var" }
Bool = StringEquals | Identifier
StringEquals = String "==" String
String = StringConstant | OperatorConcat | Input | Identifier
Input = "read()"
StringConstant = "'"StringContent"'"
StringContent = Character [StringContent]
Character = {Any character except for "'"}
OperatorConcat = String "&" String
Whitespace is defined as any sequence of the ascii characters 9, 10, 13 and 32. Whitespace characters are allowed between tokens but are not required.
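For illustration, the lexing half of a checker can be a single regex alternation, tried with longer literals first; a Python sketch (a full answer would add a recursive-descent parser with one function per grammar rule; this is not a golfed solution):

    import re

    # '==' before '=', keywords before identifiers
    TOKEN = re.compile(r"\s*(==|=|&|;|\{|\}|\)|'[^']*'|if\(|while\(|else|print\(|var\w*)")

    def tokenize(src):
        tokens, pos = [], 0
        src = src.rstrip()
        while pos < len(src):
            m = TOKEN.match(src, pos)
            if m is None:
                return None           # lexing failed -> "Invalid syntax"
            tokens.append(m.group(1))
            pos = m.end()
        return tokens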
Rules
1. The answer should be a complete program
2. Standard input/output allowed
3. Standard loopholes apply
Test Input
Valid syntax:
print('What is your name?');
print('Hello ' & varInput);
Invalid syntax:
if(read() == 'DoTask1')
print('Executing you'r command');
## Objective
Your goal is to develop a complete text-based adventure game with the shortest code possible. The player navigates a dungeon composed of rooms. The game objectives are to find the treasure, slay the dragon and rescue the princess.
## Rules
A room description is as follows:
You are in <description>.
You can go <exits>
You see <object> (optional)
• exits can be "north", "east", "west", "south".
• adjective can be "dark", "murky", "small", "large", "narrow", "gloomy", "huge", "strange", "tiny", "broad", "old".
• object can be "the princess", "the dragon", "a troll", "a goblin", "a sword", "gold", "a key", "a trunk".
The exit list must be comma-separated, with "and" before the final exit. If there is no object in the room, the last line is omitted.
Example of valid description:
You are in a murky room.
You can go north, east and south.
You see a goblin.
The game accepts the following commands (case is ignored; a small dispatch sketch in Python follows the list):
• GO direction: direction can be NORTH, EAST, WEST, SOUTH.
• TAKE item: item can be SWORD, GOLD, KEY.
• KILL monster: monster can be DRAGON, TROLL, GOBLIN. The DRAGON and the TROLL can be killed only if the user has the SWORD; if he doesn't, he loses the game. The weak GOBLIN can be killed with bare hands. When a monster dies, it disappears from the room. When the GOBLIN dies, it drops a SWORD. When the TROLL dies, it drops a KEY.
• KISS person: person can be PRINCESS, DRAGON, TROLL, GOBLIN. Kissing the princess validates one of the objectives of the game, and the princess disappears from the room. Kissing a monster results in the player's death.
• OPEN object: object can be TRUNK. If the player has the KEY, the TRUNK object disappears and is replaced with GOLD.
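As referenced above, a small dispatch sketch in Python (the room/inventory representation is illustrative; monster, kiss and open logic is elided):

    ITEMS = {"SWORD", "GOLD", "KEY"}

    def handle(command, room, inventory):
        # room: dict with an 'exits' map (direction -> neighbouring room) and an 'object' slot
        words = command.upper().split()
        if len(words) == 2:
            verb, noun = words
            if verb == "GO" and noun in room["exits"]:
                return "OK", room["exits"][noun]
            if verb == "TAKE" and noun in ITEMS and room.get("object") == noun:
                inventory.add(noun)
                room["object"] = None
                return "OK", room
        return "Sorry, I can't do that", room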
OBJECTS
The player can perform an action on an object only if the object is in the current room. A room can contain only one object; a given object can be found in only one room. At the beginning of the game, only the following objects are placed in the map: PRINCESS, DRAGON, TROLL, GOBLIN, TRUNK. Other objects are not yet created.
ACTIONS
• If an action cannot be performed (e.g. GO NORTH where there is no exit to the north, or TAKE DRAGON, or DANCE GANGNAM STYLE), the message "Sorry, I can't do that" must be displayed.
• If an action can be performed, the message "OK" and the current room description should be displayed.
• You can read game commands from console or as a program parameter, as you wish.
MAP
The dungeon should have at least 30 rooms. The dungeon should not contain a series of more than 5 exits in the same direction. The exits between rooms must be consistent, e.g. if you go north from room #1 to room #2, there must be a south exit in room #2 leading back to room #1. Every room description should be unique. There must be at least one room of each kind (hall, cavern, corridor...).
• A hall has at least 3 exits.
• A corridor can have only 2 exits.
• The cell has only one exit.
• There is only one dragon's lair and only one cell, containing respectively the dragon and the princess.
GAME END
The game ends when the player has been killed, or when he has taken the gold, slain the dragon and kissed the princess.
• If the player dies, the message "You have been killed by X !" is displayed, with X being the name of the monster.
• If the player wins, the message "Well done adventurer ! you've conquered the dungeon." is displayed.
Player should not be able to win the game in less than 40 turns.
Example
You are in a murky room.
You can go north, east and south.
You see a goblin.
> KILL GOBLIN
Ok.
You are in a murky room.
You can go north, east and south.
You see a sword.
> TAKE SWORD
Ok.
You are in a murky room.
You can go north, east and south.
> GO NORTH
Ok.
You are in a narrow corridor.
You can go south and east.
## Scoring
The shortest code wins.
• @Martin Thanks for your comments! I've updated the question. – Arnaud Aug 21 '14 at 7:59
• Provided the player ignores the troll and goblin (i.e. doesn't try to kiss or kill them), they don't do anything? – Peter Taylor Aug 21 '14 at 8:30
• @Peter you're right. Maybe the player should kill (with bare hands) the goblin in order to get the sword, and then kill the troll (with the sword) to get the key. – Arnaud Aug 21 '14 at 8:50
• "The map must be spatially coherent" still doesn't disallow always going left without ending up in the same room twice, unless you specify that the rooms are all meant to be square (which is what I think you had in mind). Also, I still think that "at least" 30 rooms is unnecessary. Who would implement 8 additional rooms if they don't have to. It will definitely be shorter if I omit the two longest adjectives and just use all available combinations of the remaining ones (giving 30 unique rooms). So you can omit two adjectives and the "at least" right away, I'd say. – Martin Ender Aug 21 '14 at 9:19
• I think it's fine to keep "at least" there for flavour, same with additional adjectives. Also, someone might figure out a way to make the code shorter with a longer adjective (for that reason, having a few more adjectives might be nice) – FireFly Aug 21 '14 at 9:38
• @Martin I've added a criteria "Player should not be able to win in less than 30 turns", to force the golfer to implement more rooms. – Arnaud Aug 21 '14 at 9:39
• @SuperChafouin That doesn't force it though. I just need to place the goblin at the end, troll at the beginning, trunk at the end, so that you need to traverse the map 3 times. – Martin Ender Aug 21 '14 at 9:51
• @Martin It's also here to prevent the dungeon to be too straightforward to solve, e.g. if all the objects are in 5 adjacent rooms near the player start location. – Arnaud Aug 21 '14 at 10:51
• +1 for golfing in Inform 7. – Lopsy Sep 20 '14 at 2:56
• Hello! This looks like a good but abandoned meta post, would you be willing to offer it for adoption? (If you want to, you can still post to main.) Due to community guidelines, if you don't respond to this comment in 7 days I have permission to adopt this. – programmer5000 Jun 9 '17 at 16:58
• @programmer5000 Yes no problem :-) – Arnaud Jun 10 '17 at 1:07
# Simulate a Quantum Circuit
Work-in-progress until I can make sure I know what I am doing and can finish the spec.
Quantum computers are the way of the future! Why wait, when you can simulate one now?
Your mission is to determine the output of a quantum circuit given its input and a diagram of logic gates.
# Details
You will simulate a single quantum register and apply a series of quantum logic gates to it. A quantum register is a group of qubits. The state of a register is described by a vector of 2^N complex numbers, where N is the number of qubits in the register.
a|000>
b|001>
c|010>
d|011>
e|100>
f|101>
g|110>
h|111>
Above is a representation of a 3-qubit register. Each letter (a, b, c, etc.) represents a complex number. There is an additional restriction that:
|a|^2 + |b|^2 + |c|^2 + |d|^2 + |e|^2 + |f|^2 + |g|^2 + |h|^2 = 1
## Quantum gates
Gates are represented by a 2^N x 2^N square unitary matrix, where N is the number of input qubits. All quantum gates have the same number of outputs and inputs: they neither create nor destroy qubits, they modify them.
A common quantum gate is called the Hadamard gate and acts on a single qubit. The matrix [H] looks like this:
1/Sqrt(2) 1/Sqrt(2)
1/Sqrt(2) -1/Sqrt(2)
If we let [R] represent the following 1-qubit register:
0.6|0>
0.8|1>
Then the application of the gate is represented by [H][R] and gives the following result:
7*Sqrt(2)/10|0>
-Sqrt(2)/10|1>
It is still true that the sum of the squares of the absolute values is equal to 1.
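To check the arithmetic, a quick NumPy sketch (the Kronecker-product line shows one common convention for extending a 1-qubit gate to a larger register; that ordering is an assumption, since this part of the spec is still a TODO):

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    R = np.array([0.6, 0.8])

    print(H @ R)   # [ 0.98994949 -0.14142136], i.e. [7*sqrt(2)/10, -sqrt(2)/10]

    # One convention: a 1-qubit gate G on one qubit of an N-qubit register is
    # I (x) ... (x) G (x) ... (x) I, built with Kronecker products.
    H_on_high_qubit = np.kron(H, np.eye(2))   # 4x4 unitary for a 2-qubit register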
(TODO: explain how to apply gates to larger registers)
## Measurement
Measurement collapses the state of the quantum register.
(Todo: Explain how measurement works)
# BS
The goal of this challenge is to implement an AI for the game of BS, also known as Bull Shit, Cheat, Bluff, and numerous other names.
The game is outlined in this wikipedia article.
# The Rules of the Game
For the purposes of this challenge, the game will work like this:
1. A standard 52-card deck is dealt out to the players
2. The current rank is set to Ace
3. The play order is randomized
4. The player holding the Ace of Hearts goes first
5. On each player's turn:
1. The current player plays some number of cards
2. The current player states how many of what rank they played
3. Other players may declare 'BS'.
4. If any player declares 'BS':
1. All players are notified of which players declared 'BS'.
2. The played cards are revealed to all players.
3. If the played cards are inconsistent with the current player's statement:
• The current player adds the played cards and all cards in the pile to their hand
4. If the played cards are consistent with the current player's statement:
• The last player to declare 'BS' that round adds the played cards and pile to their hand.
5. If no player declares 'BS':
1. The played cards are added to the pile, without revealing them.
2. If the played cards were inconsistent with the current player's statement, the current player may declare 'Peanut Butter'.
6. If the current player has no cards in their hand, the current player wins.
7. The current rank is incremented. (If the current rank is King, it becomes Ace.)
# The Messaging Protocol
Play will be conducted via messages passed to the standard input and received from the standard output of each program. Each message will be terminated with a single newline character.
## Cards
Card ranks are represented as one of A, 2, 3, 4, 5, 6, 7, 8, 9, T, J, Q, or K. Card suits will be represented as one of S, C, H, or D. Cards are represented as the rank, followed immediately by a suit. For instance, the Ten of Clubs would be represented by TC, and the Three of Hearts would be represented by 3H.
A hand of cards will be represented as a space-delimited sequence of cards. For instance, a hand containing the Queen of Spades and the Six of Diamonds could be represented as QS 6D or 6D QS.
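Parsing this representation is meant to be trivial; a Python sketch (function names are mine):

    RANKS = "A23456789TJQK"
    SUITS = "SCHD"

    def parse_hand(s):
        # 'QS 6D' -> [('Q', 'S'), ('6', 'D')]
        cards = [(tok[0], tok[1]) for tok in s.split()]
        assert all(r in RANKS and u in SUITS for r, u in cards)
        return cards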
## Player Identification
A player will be represented by their nickname, followed by a number from 0 to 32768, in parentheses, formatted as an integer. This number is guaranteed to be unique within a particular game. A player's nickname must have at least one character, can have up to 32 characters, and may only include letters, numbers, and underscores. For instance, a player with nickname ExampleAI and ID number 16480 would be identified in the game as ExampleAI(16480).
When the game begins, each program will receive a message containing their unique ID:
Unique ID: uniqueID
Each player will reply with their desired nickname:
Nickname: name
Names may contain only alphanumeric characters and underscores.
After all players have responded with their nickname, the standard play sequence begins.
## Standard Play Sequence
When a player's turn begins, each player will be given a list of the players and their card counts, in order of play:
Players: player[count], player[count], ... player[count]
Each player will be informed of the contents of their hands:
Hand: initial_hand
The current player will then receive this message:
Your turn: current_rank
The current player will reply with a space-separated list of cards:
Play: list_of_cards
Once they have submitted their play, all players will receive the number of cards, formatted as an integer, along with the current rank:
Player player plays: number_of_cards x current_rank
Each other player may then declare BS on that play by sending any message up to 32 characters, containing the capital letters B and S, and otherwise containing only lowercase letters and spaces. So any of Bull Shit, Banana Split or Bacon Sandwich would be acceptable.
During this period, the current player may declare Peanut Butter by sending any message up to 32 characters, as long as it contains the capital letters P and B, and otherwise only contains lowercase letters and spaces. So any of Peanut Butter, Pancake Batter or Polish Bacon would be acceptable.
In order to allow the game to move faster, if a player does not wish to declare either of these things, they must instead send:
Pass
After all players have responded, all players will receive a list of players who called BS, in the order they called it:
Called BS: player, player ... player
If no player called BS, this message will still be sent --- it just won't have any players listed. If any player did call BS, then all players will receive:
Player player had played: list_of_cards
If they were bluffing, all players receive:
Player player was bluffing.
Your bluff was called: list_of_cards_received
If they were not bluffing, all players receive:
Player player was not bluffing.
Player last_player receives the pile.
The last player who called BS receives this message:
You misjudged: list_of_cards_received
The list of cards received will contain, in reverse chronological order, the contents of each play since the last call. (Separate plays will not be delimited in the list.)
If no player declared BS, and the current player was bluffing and declared Peanut Butter, then all players receive the message:
Player player was bluffing.
If the current player has no cards left in their hand, all players receive this message, and the game terminates:
Player player won!
Otherwise, the next player's turn begins.
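To illustrate the protocol, a minimal (and terrible) bot skeleton in Python; it assumes one message per line on stdin, and the always-pass strategy and the name PassBot are mine, not part of the spec:

    import sys

    hand = []
    for line in sys.stdin:
        line = line.strip()
        if line.startswith("Unique ID:"):
            print("Nickname: PassBot", flush=True)
        elif line.startswith("Hand:"):
            hand = line.split(": ", 1)[1].split()
        elif line.startswith("Your turn:"):
            print("Play: " + hand[0], flush=True)   # one card; may well be a bluff
        elif " plays: " in line:
            print("Pass", flush=True)               # never call BS, never declare PB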
# Example Game
The following might be considered a typical (abbreviated) message transcript:
Unique ID: 16481
> Nickname: Alice
Players: Alice(16481)[18], Bob(16479)[17], Charlie(16480)[17]
Hand: 2D 7S AS TC 5S JS JC 3C 8H 9D 5D AH 7C 6C 4D KC KH KS
> Play: AS 2D AH
Player Alice(16481) plays: 3 x A
> PB
Called BS:
Player Alice(16481) was bluffing.
Players: Bob(16479)[17], Charlie(16480)[17], Alice(16481)[15]
Hand: 7S TC 5S JS JC 3C 8H 9D 5D 7C 6C 4D KC KH KS
Player Bob(16479) plays: 2 x 2
> BS
Called BS: Alice(16481)
Player Bob(16479) had played: 2H 2C
Player Bob(16479) was not bluffing.
Player Alice(16481) receives the pile.
You misjudged: 2H 2C AS 2D AH
Players: Charlie(16480)[17], Alice(16481)[20], Bob(16479)[15]
Hand: 7S TC 5S JS JC 3C 8H 9D 5D 7C 6C 4D KC KH KS 2H 2C AS 2D AH
.
.
.
Players: Alice(16481)[3], Bob(16479)[41], Charlie(16480)[8]
Hand: KC KH KS
> Play: KC KH KS
Called BS: Charlie(16480), Bob(16479)
Player Alice(16481) was not bluffing.
Player Alice(16481) won!
Your implementation may be written in any language, provided that you, upon request, provide a link to a suitable free-as-in-freedom compiler or interpreter that I can download and run at no cost. You also need to provide a UNIX command that can start your program.
# Sandbox Questions
I want to gauge the community's interest in my problem before finalizing the spec and writing the control program.
I also need to get some idea of what sort of time-limiting scheme would be reasonable. In order to be able to do a lot of runs, I will need to ensure that each AI doesn't take too much time to make its decisions, and to prevent a stuck AI from holding up a game. I also need to be able to ensure that there is no motivation to deliberately stall a game. For example, if an AI determines that it is very unlikely to win, it might stall in order to prevent the game from finishing.
I would also like feedback on the messaging protocol:
• Are there any additional messages that you think should be passed?
• Would it be more convenient/clear if one or more of them were formatted differently?
• Would it be better to use a different format for the plays message?
• Would it be better to use different words to help distinguish the plays and played messages?
• It looks like quite a tough challenge, but should be enjoyable! – Alexander Craggs Sep 4 '14 at 16:54
• @PopeyGilbert By the way, there was one thing I accidentally left out that I need feedback on. Specifically, time limits - to deal with intentional stalling, getting stuck, or taking too long to decide. – AJMansfield Sep 4 '14 at 17:05
• A question and a feedback. Does our program run as "stop and run" or must receive feedback continuously? And for feedback. I honestly think that the whole username things is kinda confusing. Maybe if you just use only unique id? (Like just simple 0,1,2,3 instead of username) – Realdeo Sep 5 '14 at 8:33
• Oh! More things! I also realize that suit doesn't really matter, right? (We only use suit for deciding who goes first), so fmpov, you can ditch the communication protocol for the suit. (No need for S,C,D,H) We can just use simple random from the computer. Question: What will happened if everybody make infinity loop of pass. For time limit, I prefer 1 second. If no response, make it auto pass. (KOTH chess time limit is 2 seconds. That's why 1 second is good enough) – Realdeo Sep 5 '14 at 8:38
• @Realdeo Card suit is also used to distinguish between separate instances of a card. If Alice plays 3x2 (2S 2D 2C), is not revealed, and Bob gets the pile later, and then Bob plays 3x2 (2S 2C 2H), and this is revealed, it is important for Alice that she knows all four Twoes have passed through Bob's hand. There are other ways that can be used as well. – AJMansfield Sep 5 '14 at 13:29
• @Realdeo I am not sure what you mean by "Does our program run as "stop and run" or must receive feedback continuously?". If you mean, "Does an AI program halt in the periods where no response is expected from it?" the answer is no. – AJMansfield Sep 5 '14 at 13:32
• @Realdeo If everybody makes an infinity loop of pass, then eventually someone will run out of cards, since you are required to play at least one card each turn. – AJMansfield Sep 5 '14 at 13:33
• @Realdeo Which is why making an automatic pass after a timeout would not work when waiting for a player to decide their play. Perhaps a simple rule like 'if you take more than 1 second to decide what to play, four cards are selected at random from your hand'. – AJMansfield Sep 5 '14 at 13:36
• And if a player has less than 4 cards? I think on some AI websites, like aigames.com, they're forced to give up that hand? You really want to test your entry before putting it in the arena (like versus a dummy bot?). Either way, this is a good challenge =) – Realdeo Sep 5 '14 at 13:40
• @Realdeo Also note that you can actually play more than four cards in one play. A case where you might wish to do this is when: the next player is very close to winning and some other player is close to winning and you believe(all opponents believe(your next opponent will bluff) and the next opponent will not bluff and the next opponent believes(the other opponent close to winning will call BS against them)). A little convoluted, but could happen. – AJMansfield Sep 5 '14 at 13:42
• @Realdeo Just to explain what I mean, is that there are two people close to winning, each of which would like to dump a large stack of cards on top of the other. Because of this, they both let your obvious bluff slide because they believe that will let them dump a large stack on the other. – AJMansfield Sep 5 '14 at 13:47
• Don't worry I understand. This is a really famous high school game in my country. It just... a little bit too complex for CR. When I saw chess KOTH, I was kinda pessimist. This one? This may deserve it's own AI website. #seriously. I'm just trying to simplify this game =) – Realdeo Sep 5 '14 at 13:50
• – AJMansfield Sep 5 '14 at 13:50
• RE the messaging service, I think the other players should be able to see how many cards the other players have. Also, card counting should be prohibited because that would make the game too easy. – Beta Decay Sep 10 '14 at 17:31
• @BetaDecay First off, according to the protocol, every player is informed of every other player's hand size at the beginning of every round. – AJMansfield Sep 10 '14 at 18:27
I am planning on hosting a King of the Hill challenge in which bots will have to coordinate each other in order to be successful. The idea is to play a Diplomacy-like game between bots. The engine (still in development) will start the bots and communicate with them via stdin/stdout. There will be three phases:
0. Initialization
Well, this is not a recurring phase; it is just the engine telling each bot its ID, the total number of bots participating, and a seed which can be used for the generation of pseudo-random numbers (bots need to be deterministic).
1. Talking Phase (10s)
In the Talking phase, bots can send messages to each other (via engine) in order to coordinate their actions. To this end, a common language is necessary. This language should be able to express any ideas, plans and opinions a bot could have. However, not every bot is forced to be able to understand everything. Simpler bots might just ignore messages they do not understand.
Since I would like each player to be able to submit more than one bot, it is forbidden to implement a "secret handshake" by which bots recognize each other and from then on work together unconditionally.
2. Planning Phase (2s)
In this phase, bots submit what they want to do this turn. Each bot has a certain amount of supply (initially five), and can command one action per supply point. There are three possible actions:
1. Attack another bot
2. Support another bot's attack against a third bot
3. Defend another bot
There are some restrictions:
• Per opponent, you can either attack or defend them, and only once
• You cannot support an attack against a bot you also defend
• You cannot attack, defend, support yourself or a dead bot, and neither can you support attacks against yourself
3. Resolution Phase (as short as possible)
After all orders have been submitted, the engine will resolve them simultaneously in the following way:
The defending strength of each bot is the number of bots defending that bot. The attacking strength of each attack is the number of support orders for that attack. Each attack with an attacking strength greater than the defending strength of the attacked bot results in the supply counter of the attacked bot being reduced by one, and the supply counter of the attacker being increased by one.
Support orders which support a non-existent attack do nothing.
Then, all bots with a supply of zero or less will be shut down by the engine: they are dead.
Afterwards, all remaining bots are informed about the decisions of other bots, and a new turn begins with its Talking Phase.
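To pin the resolution down, a sketch of one possible engine-side implementation in Python (the order encoding is illustrative; the real format is undecided):

    from collections import Counter

    def resolve(orders, supply):
        # orders: bot id -> list of ('A', target), ('D', target) or ('S', attacker, target)
        defence, support, attacks = Counter(), Counter(), []
        for bot, acts in orders.items():
            for a in acts:
                if a[0] == 'A':
                    attacks.append((bot, a[1]))
                elif a[0] == 'D':
                    defence[a[1]] += 1
                else:
                    support[(a[1], a[2])] += 1
        new_supply = dict(supply)
        for attacker, target in attacks:
            # per the spec as written: attacking strength is the number of
            # support orders for this exact attack
            if support[(attacker, target)] > defence[target]:
                new_supply[target] -= 1
                new_supply[attacker] += 1
        return new_supply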
Further Rules
A game will consist of ten plus a random number of turns, so that "last turn betrayals" are not possible. The supply count of each bot will count towards their total score. I plan an ensemble of about 100 games. The bot with the highest total score wins. The tie-breaker will be popularity (number of votes).
I am interested in your opinion: do you think that this challenge is too complex? I imagine that the code of a decent bot would be too long to fit in a post. So people would have to use github or pastebin or similar to submit their entries. The main problem imo is the interpretation of the (yet to be determined) common language.
• I like it a lot. One possible variation would be to make the "secret handshakes" a feature. To do this, you could allow multiple instances of the same bot. Then part of the challenge is to recognise your own kin and mutually support them; and a viable strategy is to try and work out other players' secret handshakes and imitate them. If you're ok with emphasising this aspect of it, then you can make the shared language pretty unrestricted, e.g. bots can just send arbitrary strings to each other. (I realise this is not what you have in mind, I just thought I'd mention the idea.) – Nathaniel Oct 31 '14 at 15:13
• @Nathaniel, what you propose is a battle of obfuscation/cryptography. What I would like to see is a battle of diplomacy. – M.Herzkamp Nov 2 '14 at 14:27
• Fair enough - I just thought I'd say it in case it sparked any interesting thoughts for you, but I knew it was probably too different from what you want to see. If I have any other ideas about your challenge I'll let you know. Designing the language really seems to be the hard part. – Nathaniel Nov 3 '14 at 0:06
• A diplomatic KOTH is something I've been wanting to see for a while. Working out the specifics of the "diplomatic language" is going to be the most difficult part. My proposal is that each message can either 1) state an intention to another bot, or 2) request an action from another bot. Each message would be formatted in a way similar to how a final command would be. – PhiNotPi Nov 9 '14 at 14:45
• @PhiNotPi: Thanks for the feedback! I also imagined something similar, with the ability to link atomic statements in a boolean fashion. – M.Herzkamp Nov 11 '14 at 9:23
# Happy Holidays!
## Introduction
With the holidays upon us, I decided to make an appropriately themed challenge. You are provided with a list of holidays and their respective date ranges, and given a date, you have to output a holiday greeting or the time remaining until the next holiday as appropriate.
## Challenge
The list of holidays is below. You have to include it in your program (so no using a library or other external resource for this). Feel free to use any convenient format.
Start | End | Name
------ | ------ | -------------------
Dec 6 | Dec 7 | Saint Nicholas' Day
Dec 13 | Dec 14 | Saint Lucy's Day
Dec 24 | Dec 27 | Christmas
Jan 1 | Jan 2 | New Year
Jan 6 | Jan 7 | Epiphany
Feb 14 | Feb 15 | Valentine's Day
You are given a date as input (STDIN, function argument, or anything convenient) in YYYY-mm-dd HH:MM:SS format (e.g.: 2014-12-30 11:15:00).
You may assume that the time zone is either UTC or the system's time zone. The holiday lasts from 00:00:00 on the start date (inclusive) to 00:00:00 on the end date (exclusive).
If the date falls within the range of the holiday, you must output Happy <holiday>!, except if it's Christmas, in which case you must output Merry Christmas!.
If it doesn't, but another holiday is coming at most a week in the future, you must output:
<time> left until <holiday>.
where <time> is in the following format:
<days>d <hours>h <minutes>m <seconds>s
You can't use a library for converting the time to that format.
If any component (days, hours, minutes or seconds) is zero, omit it entirely. For example, 1d 0h 3m 4s should be printed as 1d 3m 4s.
If there are no upcoming holidays, you must output (no pun intended):
There are no upcoming holidays.
A trailing newline is optional, but be consistent in your program—don't add a trailing newline in one case and omit it in another.
Standard loopholes are obviously forbidden.
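As an illustration of the duration format (not a golfed answer), a Python sketch that builds the string without any time-formatting library:

    def fmt_duration(seconds):
        parts = []
        for label, size in (("d", 86400), ("h", 3600), ("m", 60), ("s", 1)):
            n, seconds = divmod(seconds, size)
            if n:
                parts.append("%d%s" % (n, label))
        return " ".join(parts)

    print(fmt_duration(86400 + 3 * 60 + 4))   # 1d 3m 4s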
## Test cases
Date | Output
------------------- | ----------------------------------
2014-12-05 23:59:59 | 1s left until Saint Nicholas' Day.
2014-12-06 00:00:00 | Happy Saint Nicholas' Day!
2014-12-06 12:00:00 | Happy Saint Nicholas' Day!
2014-12-06 23:59:59 | Happy Saint Nicholas' Day!
2014-12-07 00:00:00 | 6d left until Saint Lucy's Day.
2014-12-14 00:00:00 | There are no upcoming holidays.
2014-12-24 00:00:00 | Merry Christmas!
Note that your program must work for any year, not just 2014.
## Winner
This is code golf, so the submission with the fewest number of bytes wins. An answer will be accepted after a week, but I'll be happy to change the accepted answer if a new valid submission beats the previous high score.
• How do you expect people to test the test cases? It would probably be better to take input than to use the current time, because then it actually makes sense to talk about test cases. You should check date for duplicates, and if there are none you should add that tag. – Peter Taylor Dec 29 '14 at 14:51
• @PeterTaylor You're right, I'll do that. – nyuszika7h Dec 29 '14 at 15:37
• @PeterTaylor I couldn't find any exact duplicates, only two holiday-themed questions, both of which ask for much less than my challenge. – nyuszika7h Dec 30 '14 at 11:28
# Find the Minimum Width of a Set of Points
Given a set of points in 2D space, you're to find the direction along which those points occupy the shortest width.
More formally, consider a set of n points P = {p_1, ..., p_n}, where p_i = (x_i, y_i), and a unit vector d = (x_d, y_d). Now K is the set of lengths obtained from the orthogonal projection of P onto d; in particular, k_i = x_i*x_d + y_i*y_d. The width L of P along d is defined as max k_i - min k_i. Your task is to find the d along which L is minimal.
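For concreteness, computing L for one candidate direction is straightforward; the challenge lies in minimising it over all directions within the complexity bound. A Python sketch of the projection width:

    import math

    def width_along(points, theta):
        # width of the point set along d = (cos theta, sin theta)
        xd, yd = math.cos(theta), math.sin(theta)
        k = [x * xd + y * yd for x, y in points]
        return max(k) - min(k)

    square = [(0, 0), (1, 0), (0, 1), (1, 1)]
    print(width_along(square, 0.0))           # 1.0 along the x-axis
    print(width_along(square, math.pi / 4))   # sqrt(2) along the diagonal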
To keep things interesting, your algorithm's time complexity must not exceed O(n log n).
You may write a program or function, taking input via STDIN, command-line argument or function argument. The result may be printed to STDOUT or returned.
You can expect the input P to be in any convenient list or string format, but the input must not be pre-processed (e.g. sorted by coordinates). You may assume that the input contains at least 2 points and that no two points coincide.
The output must be correct to 10 significant (decimal) digits. Of course, d is only unique up to the relative sign of the coordinates, so there are two correct answers for each input. You may return either of those.
You must not use built-in functions related to this problem, like finding the minimum width of a polygon, or computing the convex hull of a set of points. You may use built-in vector/matrix types and operations.
## Sandbox Notes
• I'll write my own solution at some point next week, and use it to provide a number of test cases.
• I'm also planning to add a handful of diagrams to clarify the definitions.
• The challenge was inspired by this proposal from Calvin's Hobbies. I think they are sufficiently different, as this problem here is only one approach to tackling his challenge (and even then it's only a subproblem). But if people think they are too similar, and posting this one would make his a duplicate in the future, I'll retract this challenge (as I'd really like to see his posted some time).
• I hope you're still planning on posting this. Just a few notes: (a) The minimal direction is not generally unique (e.g., if we have a regular polygon, or just a single point). You'd might want to make the actual (scalar) width the output instead. (b) I assume the input is never empty? (c) Can it contain duplicates? – Ell Jan 3 '15 at 16:01
• @Ell yes I still want to post this. Just didn't get around to writing the reference yet. a) good point. I'll think about asking for the width vs asking for any minimal direction. b) yes, will clarify. c) I'll think about that. Probably not. – Martin Ender Jan 3 '15 at 16:07
# The Genetic Game of Life
In this game, you play as cells (as in cellular automata). Your goal is to reproduce and kill off other cells on the board.
At the beginning of the game, two distinct configurations will be randomly chosen, one for reproduction, one for killing. The configurations will consist of 3 squares in a 5 by 5 area, not including the center square. For example,
OOXOX
OOOOO
OO OO
OOOXO
OOOOO
is an example configuration, where X represents a cell, and O can either be empty or filled. The configurations do not work if rotated.
The cells will be placed randomly, but equidistant from each other, on a toroidal board. Cells will be placed in a random turn order.
Each turn, each cell will move one square, one cell at a time. If a cell's movement creates the reproduction configuration with cells of the same type, and the center square is empty, then a new cell will spawn. If a cell's movement creates the killing configuration with cells of any type, and the center square is filled, then the cell in the center square will die.
When a new cell spawns, its DNA will be a conglomeration of the DNA of the bots in the configuration. It will take its first turn after all other cells have taken 1 turn.
A cell that has not been part of a killing configuration after 200 turns will die.
The cell type with the most cells after 100K turns wins.
# IO
Each turn, you will be passed a string of 1s and 0s representing your DNA, and a list of 49 integers representing a 7x7 grid of the vision around the cell. Specimens of the same type will have the same integer, and 0 will represent an empty square.
You must return a single character (N, E, S, W or X) representing the direction that the cell will travel. Attempting to move into another cell will result in your cell not moving.
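For what it's worth, a sketch of the bookkeeping a bot might do (row-major ordering of the 49 integers and the offset encoding are assumptions, not part of the spec):

    def to_grid(vision):
        # 49 integers -> 7x7 grid, row-major assumed
        return [vision[i * 7:(i + 1) * 7] for i in range(7)]

    def config_matches(grid, offsets, me):
        # offsets: the 3 marked squares of a 5x5 configuration, as (dr, dc)
        # offsets from the centre; the example configuration above would be
        # {(-2, 0), (-2, 2), (1, 1)}
        return all(grid[3 + dr][3 + dc] == me for dr, dc in offsets)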
• What size genome are you thinking? – trichoplax Jan 21 '15 at 15:18
• When designing a bot, I think it would make a big difference if the total number of players was known. You could say "a game consists of 50 players competing on a board" and make up the numbers with a simple example bot if there are less than 50 answers. Obviously 50 is just an example - it could be 10 or 2 or 20 or whatever works best. – trichoplax Jan 21 '15 at 18:54
• I say this because it means a bot writer will then know the maximum size of the 49 integers they need to process. – trichoplax Jan 21 '15 at 18:56
• Could you specify whether a killing configuration can be made up using cells of different teams? – trichoplax Feb 5 '15 at 0:38
• 1. The latest version of this question talks a lot about types, but doesn't define them. What is the type of a cell? Its genome? Its team? 2. What is meant by the "conglomeration" of the genomes of the bots in the spawning configuration? Is it the concatenation? Some kind of bit-by-bit random selection between the three parents' genomes? 3. What's the initial setup? How many cells do I get, and do they all have the same genome? – Peter Taylor Feb 5 '15 at 10:32
# Convert a Finite State Machine to a Regular Expression
Anyone can make a finite state machine for matching a regular expression. But what about a regular expression that emulates a finite state machine? This inverse operation is much more confusing.
## Input
• A positive integer N, denoting the number of states in the machine. They are labeled 0 to N - 1.
• A list of accepting states of the machine. A string is considered to be accepted by the machine if it ends in one of these when there are no characters left.
• A list of triples (integer a, character b, integer c) representing the transition rules: when the machine is at state a and the current character in the string is b, then it may advance one character and move to state c.
You may specify the ordering and formatting of input.
## Output
A regular expression that matches a string iff it is accepted by the finite state machine.
• An input for the state machine may contain only printable ASCII characters which are not in the set ^$()[]|+?*\.
• The machine begins in state 0.
• You should not use any regex features other than |, (), ?, *, +
• You may not use libraries designed for this task (which apparently exist).
• The regex should match full strings (assume it is surrounded by ^ and $).
• An answer is either a program which prints the regex to stdout, or a function which returns the regex.
This is code golf: write your code in as few bytes as possible.
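A tiny worked example, using one possible input encoding (the challenge lets you specify your own):

    N = 2
    accepting states: 1
    transitions: (0, 'a', 1), (1, 'b', 0)

    one valid output: a(ba)*

(The machine accepts exactly the strings a, aba, ababa, ...)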
## Sandbox Question
Should the FSM be deterministic or non?
• Why not allow character classes? Alteration seems like an overkill alternative, and answers might like to detect parallel edges. – John Dvorak Feb 5 '15 at 14:38
• "a function which returns the regex." - do we need to actually construct the regex object or are we allowed to return a string representation thereof? – John Dvorak Feb 5 '15 at 14:40
• General FSMs tend to produce really big regexes. Not sure if allowing non-determinism inflates that even further, but I believe it actually does not. – John Dvorak Feb 5 '15 at 14:46
• Are you sure you want to allow { and } in the input? Because we'd have to escape these... – John Dvorak Feb 5 '15 at 14:47
• @JanDvorak Thanks. I'll allow character classes although I doubt they would be used in golf. I'm aware they will be big (like this question). – feersum Feb 5 '15 at 14:52
• (a) Can we assume anything at all about the SM? Are all terminal states accepting? Are all states reachable? Can there be no reachable accepting states, and what would be the output in this case? (b) Are the blacklisted input chars also invalid as input for the regex? That is, can we use \* in our regex with the intention that it's never matched? (c) How liberal can we be with the input format? Can we take the transition table as, for example, {src_state: [(char1, dest_state1), (char2, dest_state2)], ... }, or must it be a list of triplets? – Ell Feb 8 '15 at 13:33
• @Ell For (a) and (b), the metacharacters are not allowed in the input strings, so one way to match nothing would be using one of them literally. About (c), it must be a list of triples. – feersum Feb 13 '15 at 3:39
# ASCII Robot Wars
This idea is based off of the game "Besiege" (which I've never played) and a previous sandboxed idea of mine called "Epic Customizable Tank Battle."
The main idea is that your program is the AI that controls a robot equipped with various weapons. In this challenge, however, you will also have the opportunity to design your robot.
# List of Parts
(completely arbitrary and subject to total replacement)
• Wooden planks + and armored metal plates # make up the body of the robot.
• Wheels @ allow your robot to move. TODO - turning
• Most weapons are formed with two parts, a body and a pointer v^<> to denote the direction of aim.
• Cannons have a body of %, ballistas have a body of (something), spikes and battering rams are (something).
• Maybe helicopter blades can be X.
• Banners, which serve no purpose but decoration, could be $.

## Example 5x5 robot

This robot has four wheels, three cannons facing forward, armored sides, and a flag in the middle.

    @^^^@
    #%%%#
    #+$+#
    #+++#
    @###@
## Controlling the Bot
I think this would make a really cool Stack-Snippet KOTH, since it is "visually interesting" to watch robots blow each other into pieces. Writing the controller will be hard because this is a major deviation from previous pixel-based KOTHs.
# Sorting Source Code
Your task in this challenge is to write a program that takes no input and outputs This program consists of followed by your program's source code characters in alphabetical order.
## Details
• You may only use printable ASCII characters and line breaks in your program.
• When outputting all source code characters, ignore line breaks and sort all characters by their ASCII number from lowest to highest.
• This is code-golf, so the shortest valid submission (in bytes) wins.
Here is a correctly ordered list of all printable ASCII characters (mind the space at the very beginning):
!"#\$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_abcdefghijklmnopqrstuvwxyz{|}~
## Example
If the source code for your program is
print 'This program consists of ';
print this.sort();
then output must be
This program consists of ''().;;Tacfghhiiiiimnnnooooppprrrrrssssssttttt
• You should probably add the quine tag to this. And you should also specify whether reading the source code is allowed. That said, I'm not sure how much it adds over existing (generalised) quine challenges. Ultimately, solutions will just be language's standard quine, followed by sorting the string and prepending This program consists of. – Martin Ender May 18 '15 at 17:33
• Are submissions in which the original source code is already sorted, allowed? – ProgramFOX May 18 '15 at 17:34
• @ProgramFOX: I don't see why they shouldn't be allowed. Is there any particular reason? – vauge May 18 '15 at 18:42
• @MartinBüttner: Good point. I've added the quine tag, but probably there's no point in posting this challenge if there are already enough similar questions! – vauge May 18 '15 at 18:45
• @vauge No, I missed that you had to print a sentence before the sorted characters. Initially I thought of a one-char solution like 1 which works in some languages (and which would pretty much miss the purpose of the challenge), but of course that doesn't print the sentence. – ProgramFOX May 18 '15 at 19:06
• In addition to not reading the source code, you might want to link to these working definitions of what counts as a proper quine. With the requirement of printing the additional string it's not very likely that there are a lot of loopholes left, but better safe than sorry. – Martin Ender May 19 '15 at 7:04
• The problem with quines is that there's a reusable technique to print any function of the source code, so twists like this don't actually change much. – xnor May 19 '15 at 7:20
# Strange question about transit schematics (title tbd)
In this challenge, your goal is to produce a schematic diagram of a transit network, given a list of lines and a list of stations as input. This is a popularity-contest -- the program's goal is to maximize the readability of the diagram by carefully choosing where to draw the stations and lines.
Each line in the transit network is formatted as a list of strings. For instance:
"Peel Line", "Douglas", "Port Erin", "Braddan Halt", "Union Mill", "Crosby"
The first string on the list is the name of the line. The remaining strings are the stations that that particular line stops at. In this example the Peel Line stops at Douglas, Port Erin, Braddan Halt, Union Mill, and terminates at Crosby. The lines are bidirectional so the Peel Line would also go back the other way.
Of course, most transit networks will have more than one line. Each line of the transit network will have its own line in the input. For instance:
"Peel Line", "Douglas", "Port Erin", "Braddan Halt", "Union Mill", "Crosby"
"Foxdale Line", "Ramsey", "St. John's", "Union Mill", "Bishop", "Foxdale"
Notice how both lines stop at Union Mill. This means that Union Mill is an interchange station for those two lines. A station is an interchange station if more than one line stops at it. Here is an attempt at what the network might look like:
This map does some things well but fails at others. The lines are coloured differently, which helps differentiate them, and the interchange station is emphasised with the white dot to show it is an interchange station. However, the most prominent failures are that the text "Union Mill" overlaps the line, and that there is no key showing which line is which. When we fix these elements the map looks like this:
Much better! (Another way I could have resolved the Union Mill overlapping issue was to change the paths of the lines.) In addition, we can also have lines being a loop. This is indicated by the first and last train stations being the same. For instance:
"Island Line", "Port Erin", "Kitterland", "Kalfr", "Ardglass", "Kearney", "Port Erin"
The Island Line in this case is a loop that goes from Port Erin to Kitterland to Kalfr, then to Ardglass and Kearney, before finally returning to Port Erin, completing the loop.
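For reference, parsing the input into lines, interchange stations and loops takes only a few lines; a Python sketch (names are illustrative):

    from collections import defaultdict

    def parse_network(rows):
        # rows: one list of strings per input line, e.g. ["Peel Line", "Douglas", ...]
        lines = {row[0]: row[1:] for row in rows}
        lines_at = defaultdict(set)
        for name, stations in lines.items():
            for s in stations:
                lines_at[s].add(name)
        interchanges = {s for s, ls in lines_at.items() if len(ls) > 1}
        loops = {n for n, st in lines.items() if st[0] == st[-1]}
        return lines, interchanges, loops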
With more complicated train networks, it becomes more difficult to arrange the stations and lines in a readable manner. Here are some inputs of varying complexity and density to try your program on. Some of them are based on actual networks, while others are made up for this challenge.
Challenge Input 1: Oslo, Norway:
"1", "Frognerseteren", "Voksenkollen", "Lillevann", "Skogen", "Voksenlia", "Holmenkollen", "Besserud", "Midtstuen", "Skådalen", "Vettakollen", "Gulleråsen", "Gråkammen", "Slemdal", "Ris", "Gaustad", "Vinderen", "Steinerud", "Frøen", "Majorstuen", "Nationaltheatret", "Stortinget", "Jernbanetorget (Oslo S)", "Grønland", "Tøyen", "Ensjø", "Helsfyr", "Brynseng", "Hellerud", "Tveita", "Haugerud", "Trosterud", "Lindeberg", "Furuset", "Ellingsrudåsen"
"2", "Østerås", "Lijordet", "Eiksmarka", "Ekravein", "Røa", "Hovseter", "Holmen", "Makrellbekken", "Smestad", "Borgen", "Majorstuen", "Nationaltheatret", "Stortinget", "Jernbanetorget (Oslo S)", "Grønland", "Tøyen", "Ensjø", "Helsfyr", "Brynseng", "Hellerud", "Tveita", "Haugerud", "Trosterud", "Lindeberg", "Furuset", "Ellingsrudåsen"
"3", "Mortensrud", "Skullerud", "Bogerud", "Bøler", "Ulsrud", "Oppsal", "Skøyenåsen", "Godlia", "Hellerud", "Brynseng", "Helsfyr", "Ensjø", "Tøyen", "Grønland", "Jernbanetorget (Oslo S)", "Stortinget", "Nationaltheatret", "Majorstuen", "Blindern", "Forskningsparken", "Ullevål stadion", "Berg", "Tåsen", "Østhorn", "Holstein", "Kringsjå", "Sognsvann"
"4", "Storo", "Nydalen", "Ullevål stadion", "Forskningsparken", "Blindern", "Majorstuen", "Nationaltheatret", "Stortinget", "Jernbanetorget (Oslo S)", "Grønland", "Tøyen", "Ensjø", "Helsfyr", "Brynseng", "Høyenhall", "Manglerud", "Ryen", "Brattlikollen", "Karlsrud", "Lambertseter", "Munkelia", "Bergkrystallen"
"5", "Storo", "Nydalen", "Ullevål stadion", "Forskningsparken", "Blindern", "Majorstuen", "Nationaltheatret", "Stortinget", "Jernbanetorget (Oslo S)", "Grønland", "Tøyen", "Carl Berners plass", "Hasle", "Økern", "Risløkka", "Vollebekk", "Linderud", "Veitvet", "Rødtvet", "Kalbakken", "Ammerud", "Grorud", "Romsås", "Rommen", "Stovner", "Vestli"
"6", "Bekkestua", "Ringstabekk", "Jar", "Bjørnsletta", "Åsjordet", "Ullernåsen", "Montebello", "Smestad", "Borgen", "Majorstuen", "Nationaltheatret", "Stortinget", "Jernbanetorget (Oslo S)", "Grønland", "Tøyen", "Carl Berners plass", "Sinsen", "Storo"
Here's what the official map looks like, for some inspiration:
TODO: more challenge inputs coming!
Sandbox Notes
• I've had this one sitting around for a while. I've wanted to try and make a popularity-contest that wasn't just about making the prettiest image. So instead it's about making the most functional image, which I'm not sure is any better than the art challenges...
• Another thing I've been considering is changing it to a code-golf. I'd have minimum requirements for the final output (a key, coloured lines, distinguished interchange stations, etc.) and the shortest code that implemented all the requirements would win. I'd like your thoughts on whether this challenge would work better as code golf or popularity contest.
• I like this concept, but I somewhat like it better as a popcon. As a golf, you will get the bare minimum requirements for sure, and it might not be very functional/readable at all. As a popcon, you'll probably get at least a couple platform-ready posters. I don't normally recommend popcon for things that could be golfed, but this feels like one of them. – Geobits May 19 '15 at 16:37
• For test cases, I think Tokyo would make for something nice and complicated. – Geobits May 19 '15 at 16:41
• @Geobits That is a fantastically complicated network, but it's got all sorts of things that the input format can't handle -- if I'm reading it right, when the Yokosuka Line reaches Chiba and Soga it splits off into different paths with different terminus stations. – absinthe May 19 '15 at 22:18
• Good point, I'm guessing many large cities have split lines like that. I suppose if you want a simple input format it'll have to be some boring old map ;) – Geobits May 20 '15 at 14:01
• There doesn't seem to be an obvious reason for not using an input format which allows lines to fork and loops to encompass only part of a line. Why not take input in a subset of the .dot format? – Peter Taylor May 21 '15 at 19:23
# How to Gossip Appropriately
We all know how important it is to get social arrangements right:
You have a group of friends who love to gossip. However, gossip is notorious for changing as it gets spread from friend to friend, and if somebody hears two versions of the same gossip, it just ruins it for them.
Hence, you and all of your friends have agreed to gossip in an orderly manner, and it is your job to define who will gossip with who and for how long. Ideal gossiping must follow the following rules:
1. Each friend must gossip for a specific amount of time. This time is different for each person.
2. Any pair of friends will only spend so long gossiping. Any longer, and it will become dull. We will refer to this time as L. This time is the same for all friends.
3. Gossipping only comes in minute increments. We have no idea why this is, but it's true.
4. Gossip must eventually reach everybody. If any given friend has new gossip, then all of your friends must eventually get that gossip.
5. Proper gossipping never includes circles. If A gossips to B and C, and then B gossips to C, then C will hear the news from two different people, and therefore, two different stories.
As an example say you are given the following as input:
Let's start by looking at B. She prefers to gossip for only 1 minute, so she will only be able to gossip with one friend.
We know that she can't gossip with D, as that breaks rule #4
If we have B gossip with C, then C will have 1 minute of gossipping left, and A won't be able to fill his 2 minutes of gossipping needs.
Therefore, we know that B must gossip with A for 1 minute, and A must gossip for 1 minute with C. C and D each have 1 minute of gossipping remaining, so they must both gossip with E.
E needs 2 more minutes of gossip.
If E gossips with F for 2 minutes, then gossip can't ever reach G.
If E gossips with F for 1 minute and G for 1 minute, then F must gossip with H for 1 minute, and H will then gossip with G for 2 minutes. This will create a circle, breaking rule #5.
Therefore, we know that E gossips with G for 2 minutes, G gossips with H for 1 minute, and H gossips with F for 2 minutes.
Our final gossipping tree looks like:
Input will be in the following format, and will be passed to your program via STDIN (or closest alternative):
Max_Gossip_Time [Node0_Ideal_Gossip_Time, Node1_Ideal_Gossip_Time, ...] [[Node0, Node1], [Node0, Node1], ...]
The second array passed is the friend list; its entries are pairs of integers that refer to positions in the Ideal_Gossip_Time array.
The example above would be input as follows:
2 [2, 1, 2, 1, 4, 2, 3, 3] [[0, 1], [0, 2], [1, 2], [1, 3], [2, 3], [2, 4], [3, 4], [4, 5], [4, 6], [5, 7], [6, 7]]
Output should be to STDOUT (or closest alternative) in the following format:
[[Node0, Node1, Gossip_Time], [Node0, Node2, Gossip_Time], ...]
On the above example, the output should be similar to:
[[0, 1, 1], [0, 2, 1], [2, 4, 1], [3, 4, 1], [4, 6, 2], [5, 7, 2], [6, 7, 1]]
On both input and output, the friend list can be in any order.
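To pin down the I/O format, here is a minimal Python sketch that parses the example input and checks a candidate output against the rules; the parsing approach, names, and the tree check are illustrative assumptions:

```python
import ast

line = ("2 [2, 1, 2, 1, 4, 2, 3, 3] [[0, 1], [0, 2], [1, 2], [1, 3], [2, 3], "
        "[2, 4], [3, 4], [4, 5], [4, 6], [5, 7], [6, 7]]")
i, j = line.index("["), line.index("[[")
L = int(line[:i])                                   # max time per pair
ideal = ast.literal_eval(line[i:j].strip())         # per-friend gossip times
friends = {tuple(sorted(p)) for p in ast.literal_eval(line[j:])}

output = [[0, 1, 1], [0, 2, 1], [2, 4, 1], [3, 4, 1], [4, 6, 2], [5, 7, 2], [6, 7, 1]]
n = len(ideal)

assert all(tuple(sorted((a, b))) in friends for a, b, t in output)
assert all(0 < t <= L for a, b, t in output)        # rules 2 and 3
totals = [0] * n
adj = {k: [] for k in range(n)}
for a, b, t in output:
    totals[a] += t; totals[b] += t
    adj[a].append(b); adj[b].append(a)
assert totals == ideal                              # rule 1
seen, stack = {0}, [0]                              # rules 4 and 5: the n-1
while stack:                                        # edges must form one tree
    for v in adj[stack.pop()]:
        if v not in seen:
            seen.add(v); stack.append(v)
assert len(output) == n - 1 and len(seen) == n
print("valid gossip tree")
```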
• I didn't notice that you state anywhere that the weights must be integers. Some more test cases would be good. Do you know anything about the complexity class of this problem? – Peter Taylor May 17 '15 at 16:07
• @PeterTaylor I don't think the complexity is too crazy, but I don't know it. The hardest part of this challenge is actually ensuring that there are no cycles as min-maxing the edges will solve everything else. – Nathan Merrill May 17 '15 at 18:51
• I can see a research paper coming out of this! – Optimizer May 27 '15 at 13:23
# If it floats, it boats!
The goal of this challenge is to determine whether or not an ASCII-art shape will float. Like any other boat, ASCII boats obey the law of buoyancy: it will float if it displaces an equal mass of water.
ASCII boats are made out of O characters arranged in some contiguous shape (diagonals are connected). There may be trailing spaces, but the whole input is a rectangle (trailing newline optional). Example boat:
O       O
O       O
OOOOOOOOO
The material of the boat has twice the density of water. When a boat is floating, the number of displaced water characters is at least twice the number of O characters in the boat. Here is an artist's impression of the boat while floating.
    O       O
~~~~O       O~~~~
~~~~OOOOOOOOO~~~~
~~~~~~~~~~~~~~~~~
This boat has 13 O characters, but displaces 19 water characters, so it floats.
The key to floating is the creation of an air pocket. Air pockets can be formed in two ways: either the water cannot reach the pocket (because the boat has walls keeping it out), or air is trapped in the pocket and cannot escape. Here's an example of a capsized boat which can still float (warning: do not attempt at home).
    OOOOOOOOO
~~~~O       O~~~~
~~~~O       O~~~~
~~~~~~~~~~~~~~~~~
The following shapes aren't boats because they can't float:
OOO
 O
 O
 O
OOO

O   O
O   O
OOOOOOOOO

    O
OOOOOOOOO
OOOOOOOOO
OOOOOOOOO
# The Goal
Write a program that, when given an ASCII-art shape, outputs a truthy value if it boats, and a falsey value if it doesn't boat. This is code golf, fewest bytes wins.
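A rough reference sketch of one reading of the rules (try each waterline in turn, flood-fill water from outside the underwater region, and count everything below the line that water cannot reach as displaced); the density handling and flood-fill details are assumptions pending the open questions in the comments below:

```python
def floats(shape, density=2):
    grid = shape.splitlines()
    width = max(len(r) for r in grid)
    grid = [r.ljust(width) for r in grid]
    h = len(grid)
    mass = sum(r.count("O") for r in grid)
    for waterline in range(h + 1):        # rows y >= waterline are underwater
        # Flood-fill water inward from a virtual border around the
        # underwater region; 'O' cells block the water.
        seen, stack = set(), []
        stack += [(y, -1) for y in range(waterline, h)]
        stack += [(y, width) for y in range(waterline, h)]
        stack += [(h, x) for x in range(width)]
        while stack:
            y, x = stack.pop()
            for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                if (waterline <= ny < h and 0 <= nx < width
                        and (ny, nx) not in seen and grid[ny][nx] != "O"):
                    seen.add((ny, nx))
                    stack.append((ny, nx))
        displaced = (h - waterline) * width - len(seen)   # hull + trapped air
        if displaced >= density * mass:
            return True
    return False

print(floats("O       O\nO       O\nOOOOOOOOO"))  # True under this reading
```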
• Do you want this to be the simple version (if it displaces enough water it floats) or a more complex version (some of the air spaces may be filled as it sinks, but remaining air spaces are sufficient to keep it afloat). I'm thinking of examples with multiple air spaces with different height walls, so at a certain depth the water can only fill some of them. That is, the weight of the boat is too great to keep all of the air spaces empty, but the remaining ones are sufficient to keep it from sinking any further. – trichoplax Jul 24 '15 at 21:58
• 1. Bearing in mind the example of the capsized boat: are we supposed to test all possible rotations of the input, or just the orientation in which it's supplied? 2. It would be good to have test cases which are right on the edge (one which floats by displacing exactly its mass, and one which is one unit too heavy). 3. Another corner case which isn't mentioned is discontiguous boats. Should we assume that the input is fully connected? – Peter Taylor Jul 24 '15 at 22:04
• @trichoplax I am leaning towards the more complex version, where the boat is "lowered into" the water, which may float various parts until the equilibrium is reached. @PeterTaylor I'll say that only the given orientation should be tested. Also, boats will always be contiguous. – PhiNotPi Jul 24 '15 at 22:08
• Why are the non-boat example not boats? Or do you mean they are boats that don't float? – xnor Jul 26 '15 at 7:41
• @xnor They don't float. It was supposed to be an extension of "if it floats, it boats" so the ones that don't float aren't boats. – PhiNotPi Jul 26 '15 at 21:30
• The boat has 13 O characters, and the material is twice the density of water. Doesn't it therefore need to displace 26 characters in order to float? Sure, this shape will float but it will sit lower in the water than you have drawn it. – Level River St Aug 4 '15 at 15:35
# Multiples - a wrap battle
## Overview
Change cells in multiples to wipe out your opponent, while avoiding being wiped out yourself.
This is a 2 player game, played on a linear string of cells of length L that wrap in a loop. Counting along the loop eventually brings you back to where you started (after L steps). L will be fixed across all battles, and will be a reasonably large prime.
Each cell is controlled by player 1, player 2, or is neutral. These will be indicated as 1, 2 and 0 respectively.
### Starting position
Player 2 starts with a cell at position 0 (since all are equivalent).
Player 1 starts with a randomly chosen cell from 1 to floor(L/2).
Player 1 moves first, reflecting the fact that player 1 has further to go to catch player 2.
### Taking a turn
Each player begins with a stockpile of 0, and at the start of each turn the player's stockpile is increased by the number of cells that they currently control. The player then takes their turn. They choose any cell they control and specify a number N, which can be any non-negative integer up to and including the size of the stockpile. The stockpile is reduced by this number, and N loop cells are affected as follows:
Starting with the chosen cell as cell 0, each of the cells N, 2N, 3N, ... N*N are changed.
• Choosing 0 means nothing happens, at zero cost.
• Choosing 1 means the cell immediately after the chosen cell is changed, at a cost of 1.
• Choosing 2 means the cell 2 cells on and the cell 4 cells on are changed, at a cost of 2.
• Choosing 3 means the cells 3, 6 and 9 cells on are changed, at a cost of 3.
• In general, choosing N changes N cells at a cost of N.
When a cell is changed it follows the following rules (a short sketch of one move follows the list):
• A neutral cell becomes the player's.
• An enemy cell becomes neutral.
• A cell already owned by the player becomes the enemy's.
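A minimal sketch of applying one move under these rules; the list representation and names are illustrative assumptions:

```python
# Cells are a list of 0/1/2; `player` is 1 or 2; positions wrap mod L.
def apply_move(cells, player, chosen, n):
    L = len(cells)
    enemy = 3 - player
    for k in range(1, n + 1):
        i = (chosen + k * n) % L         # cells N, 2N, ..., N*N past `chosen`
        if cells[i] == 0:                # neutral -> player's
            cells[i] = player
        elif cells[i] == enemy:          # enemy -> neutral
            cells[i] = 0
        else:                            # player's own -> enemy's
            cells[i] = enemy
    return cells

print(apply_move([0] * 11, 1, chosen=0, n=3))
# cells 3, 6 and 9 become player 1's: [0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```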
### Large N
I expect most moves will choose N considerably smaller than L, but saving up would in theory allow choosing N considerably larger than L.
Choosing N=L means that all of the changed cells will be the same - the chosen cell, and it will be changed L times.
Choosing N=L-1 means that the L-1 consecutive cells before the chosen cell will all be changed (that is, every cell except the chosen one will be changed).
### Winning
If a move leaves no enemy cells remaining, that player wins.
After 1000 moves, a player wins if they have more cells than their enemy at the start of 2 consecutive turns (one their own, one their enemy's, in either order).
After 2000 moves the game is a draw (tie).
## Input and output
### Input
At the start of a game the player's code will be called with a command line argument of 1 or 2 indicating which player they are (player 1 moves first and is represented by 1s in the loop string).
Each turn the player will be supplied with:
• A string of 0s, 1s and 2s representing the loop.
• An integer S representing the size of their stockpile.
• An integer R representing the size of their opponent's stockpile.
• An integer representing the number of turns taken so far (this will always be an even number for player 1).
### Output
The player should output 2 integers:
• The cell C to play from, in the range 0 <= C < L.
• The number of cells to change N, in the range 0 <= N <= S (their current stockpile size).
# Sandbox questions
• I like the idea of this being a 1 dimensional game, but I can also see it working on a 2d grid, where each move is applied both horizontally and vertically (either on a square L by L, or with 2 distinct large primes as side lengths). Does anyone have anything for or against either 1d or 2d?
• Any recommendations on what input to provide? I was thinking at least the values of all the cells, but would a history also be good, or better to make the players decide what history to track for themselves rather than providing it? Alternatively they could be memoryless and decide purely based on the current cell formation.
• Is the random starting position a good idea? Would it be better to fix the starting position at floor(L/2), ensuring this number is prime, and let the players taking turns to be player 1 balance out any bias?
# Help Indiana Jones and his crew cross the bridge!
This codegolf will solve the Bridge and Torch problem. In this problem, there are multiple people (I'm thinking four) who must all cross a weak bridge to escape an evil dragon as quickly as possible. Because the bridge is weak, only two people can cross the bridge at a time. The whole crew is armed with one torch, which is necessary for 1 or 2 people to cross the bridge. Furthermore, each person takes a certain, integral amount of time to cross the bridge. When two people cross together, they must run at the rate of the slower person. The whole crew needs to quickly figure out how to get all the people across the bridge in the least amount of time to maximize their chances of survival.
Input A list of names of the crew (one word, a-zA-Z) and how long they take to cross the bridge alone.
Output An explanation of who crosses the bridge in which order so that the total time is minimized, and the total time.
Example
Input: Indiana 5 Jones 10
Output:
Indiana, Jones 10

Input: A 1 B 2 C 5 D 8
Output:
A, B
A
C, D
B
A, B
15
I'm thinking of either just solving this problem with any number of people, or another version in which anyone not at the end (i.e. still on the first side or on the bridge) after the time limit dies, and the goal is to minimize the number of deaths.
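For reference, the minimal total time can be computed with the standard greedy over sorted crossing times; this sketch gives the time only, not the move-by-move narration the challenge asks for:

```python
def bridge_time(times):
    t = sorted(times)
    total = 0
    while len(t) > 3:
        # Two classic ways to move the two slowest across:
        a = 2 * t[1] + t[0] + t[-1]      # pair the two slowest together
        b = 2 * t[0] + t[-1] + t[-2]     # fastest escorts each slowest
        total += min(a, b)
        t = t[:-2]
    if len(t) == 3:
        total += t[0] + t[1] + t[2]
    elif len(t) == 2:
        total += t[1]
    else:
        total += t[0]
    return total

print(bridge_time([5, 10]))        # 10
print(bridge_time([1, 2, 5, 8]))   # 15
```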
• If it's code-golf it's easier for you as the question poster. If you put a time limit, then different computers will achieve different amounts in that time limit, so you would need to run all the answers on your own computer to give them an official score. – trichoplax Jul 3 '15 at 16:45
http://clay6.com/qa/olympiad-math
### A man is 7 times as old as his son. If he is 56 years old, what is the age of his son?
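The arithmetic, worked for reference (the page itself gives no solution):

$$7x = 56 \;\Rightarrow\; x = \frac{56}{7} = 8 \text{ years}.$$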
https://consteelsoftware.com/de/script/simple-column-creation-2/
# Simple column creation
### Description
This script creates a pinned-pinned, 3-meter-high column. The section is HEA 200. A point load is also applied. Analysis can be run right away.
Current version: 1.0 (release date: 2021.10.20.)
To DOWNLOAD this script, navigate to the MyDescript interface within Consteel, then click the "Edit" button next to the name of the script in the list. From there, the script can be saved to the computer. The availability of the "Edit" button depends on your membership level.
https://consteelsoftware.com/products/offers-licensing/#Membership
https://calculus.subwiki.org/wiki/Delay_differential_equation
# Delay differential equation
## Definition
The notion of delay differential equation (abbreviated DDE) is a variant of the notion of differential equation (in other words, delay differential equations are not (ordinary) differential equations).
### First-order first-degree case
If we denote the dependent variable by $x$ and the independent variable by $t$ (which we think of as time), the first-order first-degree case is:
$\frac{dx(t)}{dt} = f(t,x(t),\mbox{the entire trajectory of } x \mbox{ prior to time } t)$
### General case
The general case of a delay differential equation is of the form:
$F(t,x(t), \mbox{derivatives of the function } x(t) \mbox{ at the point } t, \mbox{the entire trajectory of } x \mbox{ prior to time } t) = 0$
### Note on autonomous case
The delay differential equations that we study are typically autonomous delay differential equations: an equation in the general form above is autonomous if, for any $\tau \in \mathbb{R}$, the function $F(t,x(t), \mbox{derivatives of the function } x(t) \mbox{ at the point } t, \mbox{the entire trajectory of } x \mbox{ prior to time } t)$ is invariant under replacing $x(t)$ by the function $t \mapsto x(t - \tau)$. Intuitively, what this means is that $t$ does not appear explicitly in $F$, and all the behavior at previous points is specified in terms of how much earlier they were than $t$.
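As a concrete illustration, here is a minimal fixed-step Euler integrator for the autonomous first-order DDE $\frac{dx(t)}{dt} = -x(t-1)$ with constant history $x(t)=1$ for $t \le 0$; the scheme and step size are illustrative choices, not part of the definition:

```python
import numpy as np

def integrate_dde(f, history, delay, t_end, dt=1e-3):
    """Euler steps for dx/dt = f(x(t), x(t - delay)) with constant history."""
    n_hist = int(round(delay / dt))
    n_steps = int(round(t_end / dt))
    x = np.empty(n_hist + n_steps + 1)
    x[: n_hist + 1] = history            # the trajectory on [-delay, 0]
    for i in range(n_hist, n_hist + n_steps):
        # x[i - n_hist] is exactly the "trajectory prior to time t" lookup.
        x[i + 1] = x[i] + dt * f(x[i], x[i - n_hist])
    t = np.linspace(-delay, t_end, n_hist + n_steps + 1)
    return t, x

t, x = integrate_dde(lambda xt, xd: -xd, history=1.0, delay=1.0, t_end=20.0)
print(x[-1])   # the solution oscillates with decaying amplitude
```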
https://socratic.org/questions/what-is-the-slope-of-the-tangent-line-of-sqrt-y-e-x-y-c-where-c-is-an-arbitrary-
# What is the slope of the tangent line of sqrt(y-e^(x-y))= C , where C is an arbitrary constant, at (-2,1)?
Equation of tangent is $y - 1 = \frac{1}{{e}^{3} + 1} \left(x + 2\right)$
#### Explanation:
As the tangent is sought at the point $\left(- 2 , 1\right)$, the point lies on the curve $\sqrt{y - {e}^{x - y}} = C$, so we have
$\sqrt{1 - {e}^{- 2 - 1}} = C$
i.e. $C = \sqrt{1 - \frac{1}{e} ^ 3}$
Hence function is $\sqrt{y - {e}^{x - y}} = \sqrt{1 - \frac{1}{e} ^ 3}$
or $y - {e}^{x - y} = 1 - \frac{1}{e} ^ 3$
As slope of tangent is the value of first derivative at the point, here $\left(- 2 , 1\right)$, let us find first derivative by implicit differentiation. Differentiating $y - {e}^{x - y} = 1 - \frac{1}{e} ^ 3$, we have
$\frac{\mathrm{dy}}{\mathrm{dx}} - {e}^{x - y} \left(1 - \frac{\mathrm{dy}}{\mathrm{dx}}\right) = 0$
or $\frac{\mathrm{dy}}{\mathrm{dx}} \left(1 + {e}^{x - y}\right) = {e}^{x - y}$
i.e. $\frac{\mathrm{dy}}{\mathrm{dx}} = {e}^{x - y} / \left(1 + {e}^{x - y}\right)$
Hence, slope of tangent is ${e}^{- 3} / \left(1 + {e}^{- 3}\right)$ or $\frac{1}{{e}^{3} + 1}$
and equation of tangent is $y - 1 = \frac{1}{{e}^{3} + 1} \left(x + 2\right)$
graph{(sqrt(y-e^(x-y))-sqrt(1-1/e^3))(x-(e^3+1)y+3+e^3)=0 [-11.29, 8.71, -4.52, 5.48]}
https://www.esaral.com/sequence-series-jee-advanced-previous-year-questions-with-solutions/
Sequence-series – JEE Advanced Previous Year Questions with Solutions
JEE Advanced Previous Year Questions of Math with Solutions are available at eSaral. Practicing JEE Advanced Previous Year Papers Questions of mathematics will help the JEE aspirants in realizing the question pattern as well as help in analyzing weak & strong areas.
Q. If the sum of first $n$ terms of an A.P. is $\mathrm{c} n^{2},$ then the sum of squares of these $n$ terms is (A) $\frac{\mathrm{n}\left(4 \mathrm{n}^{2}-1\right) \mathrm{c}^{2}}{6}$ (B) $\frac{\mathrm{n}\left(4 \mathrm{n}^{2}+1\right) \mathrm{c}^{2}}{3}$ (C) $\frac{\mathrm{n}\left(4 \mathrm{n}^{2}-1\right) \mathrm{c}^{2}}{3}$ (D) $\frac{n\left(4 n^{2}+1\right) c^{2}}{6}$ [JEE 2009, 3 (–1)]
Download eSaral App for Video Lectures, Complete Revision, Study Material and much more...
Sol. (C) $S_n = cn^2$ and $S_{n-1} = c(n-1)^2$, so $T_n = S_n - S_{n-1} = c(2n-1)$ and $T_n^2 = c^2(4n^2 - 4n + 1)$. Summing, $\sum_{k=1}^{n} T_k^2 = \frac{n(4n^2-1)c^2}{3}$.
Q. Let $a_{1}, a_{2}, a_{3}, \ldots \ldots \ldots a_{11}$ be real numbers satisfying $\mathrm{a}_{1}=15,27-2 \mathrm{a}_{2}>0$ and $\mathrm{a}_{\mathrm{k}}=2 \mathrm{a}_{\mathrm{k}-1}-\mathrm{a}_{\mathrm{k}-2}$ for $\mathrm{k}=3,4 \ldots \ldots \ldots 11$ If $\frac{a_{1}^{2}+a_{2}^{2}+\ldots \ldots+a_{11}^{2}}{11}=90,$ then the value of $\frac{a_{1}+a_{2}+\ldots \ldots+a_{11}}{11}$ is equal to [JEE 2010,3+3]
Download eSaral App for Video Lectures, Complete Revision, Study Material and much more...
Sol. 0
Q. The minimum value of the sum of real numbers $a^{-5}, a^{-4}, 3 a^{-3}, 1, a^{8}$ and $a^{10}$ with $a>0$ is [JEE 2011, 4]
Download eSaral App for Video Lectures, Complete Revision, Study Material and much more...
Sol. 8. As $a>0$ and all the given terms are positive, apply A.M. $\geq$ G.M. to the seven numbers $a^{-5}, a^{-4}, a^{-3}, a^{-3}, a^{-3}, a^{8}, a^{10}$: $$\frac{a^{-5}+a^{-4}+a^{-3}+a^{-3}+a^{-3}+a^{8}+a^{10}}{7} \geq \left(a^{-5} \cdot a^{-4} \cdot a^{-3} \cdot a^{-3} \cdot a^{-3} \cdot a^{8} \cdot a^{10}\right)^{\frac{1}{7}} = 1,$$ so $a^{-5}+a^{-4}+3a^{-3}+a^{8}+a^{10} \geq 7$, with equality when $a^{-5}=a^{-4}=a^{-3}=a^{8}=a^{10}$, i.e. $a=1$. Adding the remaining term 1 gives $\left(a^{-5}+a^{-4}+3a^{-3}+1+a^{8}+a^{10}\right)_{\min} = 8$ at $a=1$.
Q. Let $a_{1}, a_{2}, a_{3}, \ldots \ldots \ldots, a_{100}$ be an arithmetic progression with $a_{1}=3$ and $S_{p}=\sum_{i=1}^{p} a_{i}, 1 \leq p \leq 100 .$ For any integer $n$ with $1 \leq n \leq 20,$ let $m=5 n .$ If $\frac{S_{m}}{S_{n}}$ does not depend on n, then $a_{2}$ is [JEE 2011, 4]
Download eSaral App for Video Lectures, Complete Revision, Study Material and much more...
Sol. 9 or 3. (Comment: the information about the common difference, i.e. zero or non-zero, is not given in the question, hence there are two possible answers.) Let $a_1, a_2, \ldots, a_{100}$ be the A.P. with $a_1 = 3$, and let $m = 5n$. Then $$\frac{S_m}{S_n}=\frac{\frac{m}{2}\left[2a_1+(m-1)d\right]}{\frac{n}{2}\left[2a_1+(n-1)d\right]}=\frac{5\left[(2a_1-d)+5nd\right]}{(2a_1-d)+nd}.$$ For $\frac{S_m}{S_n}$ to be independent of $n$, either $2a_1-d=0$, so $d=2a_1=6$ and $a_2=9$; or $d=0$, so $a_2=a_1=3$.
Q. Let $a_{1}, a_{2}, a_{3}, \ldots .$ be in harmonic progression with $a_{1}=5$ and $a_{20}=25 .$ The least positive integer n for which $a_{n}<0$ is (A) 22 (B) 23 (C) 24 (D) 25 [JEE 2012, 3 (–1)]
Download eSaral App for Video Lectures, Complete Revision, Study Material and much more...
Sol. (D) If $a_1, a_2, a_3, \ldots$ are in H.P., then $\frac{1}{a_1}, \frac{1}{a_2}, \frac{1}{a_3}, \ldots$ are in A.P. with $T_1=\frac{1}{a_1}=\frac{1}{5}$ and $T_{20}=\frac{1}{a_{20}}=\frac{1}{25}$. From $T_{20}=T_1+19d$: $\frac{1}{25}=\frac{1}{5}+19d \Rightarrow d=-\frac{4}{19 \times 25}$. We need $T_n=T_1+(n-1)d<0$, i.e. $\frac{1}{5}<\frac{4(n-1)}{19 \times 25}$, i.e. $n>\frac{5 \times 19}{4}+1=\frac{99}{4}$, so the least positive integer $n$ is 25.
Q. Let $\mathrm{S}_{\mathrm{n}}=\sum_{\mathrm{k}=1}^{4 \mathrm{n}}(-1)^{\frac{\mathrm{k}(\mathrm{k}+1)}{2}} \mathrm{k}^{2} .$ Then $\mathrm{S}_{\mathrm{n}}$ can take value(s) (A) 1056 (B) 1088 (C) 1120 (D) 1332 [JEE-Advanced 2013, 4, (–1)]
Download eSaral App for Video Lectures, Complete Revision, Study Material and much more...
Sol. (A,D) $S_n = -1^2-2^2+3^2+4^2-5^2-6^2+7^2+8^2-\ldots = (3^2-1^2)+(4^2-2^2)+\ldots = 2(1+2+3+\ldots+4n) = \frac{2(4n)(4n+1)}{2} = 4n(4n+1)$. Now $4n(4n+1)=1056$ is possible ($n=8$) and $4n(4n+1)=1332$ is possible ($n=9$), while $4n(4n+1)=1088$ and $4n(4n+1)=1120$ are not possible.
Q. A pack contains n cards numbered from 1 to n. Two consecutive numbered cards are removed from the pack and the sum of the numbers on the remaining cards is 1224. If the smaller to the numbers on the removed cards is k, then k – 20 = [JEE-Advanced 2013, 4, (–1)]
Download eSaral App for Video Lectures, Complete Revision, Study Material and much more...
Sol. 5. The sum of the numbers 1 to $n$ is $\frac{n(n+1)}{2}$, so removing the consecutive pair $k, k+1$ leaves $\frac{n(n+1)}{2}-(2k+1)=1224$. The remaining sum is largest when 1 and 2 are removed and smallest when $n-1$ and $n$ are removed, so $\frac{n(n+1)}{2}-(2n-1) \leq 1224 \leq \frac{n(n+1)}{2}-3$, which forces $n=50$. Then $\frac{50 \cdot 51}{2}-(2k+1)=1224$ gives $2k+1=51$, so $k=25$ and $k-20=5$.
Q. Let a, b, c be positive integers such that $\frac{b}{a}$ is an integer. If $a, b, c$ are in geometric progression and the arithmetic mean of $a, b, c$ is $b+2,$ then the value of $\frac{a^{2}+a-14}{a+1}$ is [JEE(Advanced)-2014, 3]
Download eSaral App for Video Lectures, Complete Revision, Study Material and much more...
Sol. 4
Q. Let $b_i>1$ for $i=1,2,\ldots,101$. Suppose $\log_e b_1, \log_e b_2, \ldots, \log_e b_{101}$ are in Arithmetic Progression (A.P.) with the common difference $\log_e 2$. Suppose $a_1, a_2, \ldots, a_{101}$ are in A.P. such that $a_1=b_1$ and $a_{51}=b_{51}$. If $t=b_1+b_2+\ldots+b_{51}$ and $s=a_1+a_2+\ldots+a_{51}$ then (A) $s>t$ and $a_{101}>b_{101}$ (B) $s>t$ and $a_{101}<b_{101}$ (C) $s<t$ and $a_{101}>b_{101}$ (D) $s<t$ and $a_{101}<b_{101}$ [JEE(Advanced)-2016]
Download eSaral App for Video Lectures, Complete Revision, Study Material and much more...
Sol. B
Q. The sides of the right angled triangle are in arithmetic progression. If the triangle has area 24, then what is the length of its smallest side ? [JEE(Advanced)-2017]
Download eSaral App for Video Lectures, Complete Revision, Study Material and much more...
Sol. 6
Q. Let $X$ be the set consisting of the first 2018 terms of the arithmetic progression $1,6,11, \ldots .$ , and $Y$ be the set consisting of the first 2018 terms of the arithmetic progression $9,16,23,$ $\ldots \ldots$ Then, the number of elements in the set $X \cup Y$ is [JEE(Advanced)-2018]
Download eSaral App for Video Lectures, Complete Revision, Study Material and much more...
Sol. 3748
https://indico.uis.no/event/12/contributions/137/
# A Virtual Tribute to Quark Confinement and the Hadron Spectrum 2021
Aug 2 – 6, 2021
online
Europe/Brussels timezone
## Dispersive study of $K\pi$ scattering and determination of threshold parameters and of $\kappa/K_0^*(700)$ resonance properties
Aug 5, 2021, 10:50 AM
20m
online
#### online
Parallel contribution B: Light quarks
### Speaker
Jose R. Pelaez Sagredo (Universidad Complutense)
### Description
We have recently completed the coupled dispersive analysis of $\pi K \to \pi K$ and $\pi\pi \to K\bar{K}$ scattering data.
We show that just fitting data fails to satisfy the dispersive representation and leads to inconsistencies with threshold sum-rules as well as unreliable resonance parameterizations.
Our main result is a set of constrained fits to data that satisfy 16 dispersion relations of different kinds and allow for a reliable extraction of threshold and subthreshold parameters to be compared with Chiral Perturbation Theory and lattice QCD. In addition, we obtain a precise dispersive determination of the controversial light kappa-meson parameters, which has therefore become a "confirmed" state in the PDG, completing the light scalar meson nonet. The constrained fits to data are easy to implement and ready for further theoretical and experimental studies.
### Primary authors
Jose R. Pelaez Sagredo (Universidad Complutense) Dr Arkaitz Rodas (College of William and Mary)
https://math.stackexchange.com/questions/726138/for-a-finite-group-of-order-2n-does-there-exist-x-such-that-x-ast-x-e/726150
# For a finite group of order $2n$ does there exist $x$ such that $x\ast x=e$? [duplicate]
Let $(G,\ast)$ be a group with identity $e$ and cardinality $2n$ for some $n\in\omega$. Then, does there exist $x\in G$ such that $x\ast x=e$ and $x\neq e$?
## marked as duplicate by Jyrki Lahtonen Jun 20 '15 at 8:55
• How many fixed points does the map $x\mapsto x^{-1}$ have? – Daniel Fischer Mar 25 '14 at 13:32
• @Daniel Is that a hint or are you really asking? The question in the post is exactly an exercise in my textbook.. – John. p Mar 25 '14 at 13:35
• A hint. You don't need the exact number, by the way. – Daniel Fischer Mar 25 '14 at 13:36
• @luli Is that a hint? I don't get it.. Would you give me some more details? – John. p Mar 25 '14 at 13:36
• @Daniel Would you give me more details? I understand that map is an automorphism, but then next i dunno what to do – John. p Mar 25 '14 at 13:38
To expand on Daniel Fischer's hint:
Let $f : G \to G$ be defined by $f(x) = x^{-1}$. This is a bijection. An element $x \in G$ verifies $x * x = e$ iff $f(x) = x$, so you want to find the fixed points of $f$, or at least find one that is not $e$. Note that $f(e) = e$ already, so that's one fixed point.
Now $f$ is an involution ($f(f(x)) = x$), and $|G|$ is even, so its number of fixed points has to be even too: let $F \subset G$ be the fixed points, then every element $x \in G \setminus F$ can be paired with $f(x)$, thus $|G \setminus F|$ is even, so $|F|$ is even too. And since we already have one fixed point ($e$) then there is at least one other (because $1$ is odd), say $x \neq e$. Then $x$ has order $2$ and we're done.
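A quick brute-force illustration of the parity argument on concrete even-order groups; the choice of the cyclic groups $\mathbb{Z}_{2n}$ under addition is just an example:

```python
# For each even order 2n, list the self-inverse elements of Z_{2n}:
# the x with x + x = 0 (mod 2n). The identity 0 always appears, and the
# parity argument guarantees at least one more such element.
def self_inverse_elements(n):
    order = 2 * n
    return [x for x in range(order) if (x + x) % order == 0]

for n in range(1, 6):
    print(2 * n, self_inverse_elements(n))
# 2 [0, 1]
# 4 [0, 2]
# 6 [0, 3]
# 8 [0, 4]
# 10 [0, 5]
```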
• the very question points towards the plausibility that the OP's level in group theory is pretty elementary. What are the odds he has studied groups actions on sets, fixed points and etc.? – DonAntonio Mar 25 '14 at 13:48
• @DonAntonio There's nothing with group actions here. Just the very elementary fact that the number of fixed points of an involution has the same parity as the cardinality of the set. – Daniel Fischer Mar 25 '14 at 13:51
• Even so, @DanielFischer (though with group actions it would also be pretty straightforward): involutions, fixed points and etc. are notions that are not usually covered, as far as I can tell, at the beginning of group theory courses. They belong more towards subjects like automorphisms groups, actions and stuff. And the "very elementary" fact you mention may not seem elementary at all to a beginner in these matters. – DonAntonio Mar 25 '14 at 13:53
• @DanielFischer the only other basic stuff in the general mathematics curriculum that includes fixed points I can think of is calculus...and not very basic one: Brouwer's or Banach's fixed point theorems aren't basic, imo. Other things could include numerical approximations (Newton's and etc.), but I'd say "fixed points of involutions" is a neat, almost pure notion belonging to abstract algebra (or even a little to linear algebra with linear maps and eigenvalues equal to one...), and in my expererience none is very-very elementary at all. – DonAntonio Mar 25 '14 at 14:01
• @DonAntonio Brouwer's isn't basic (except in dimension $1$), but Banach's is pretty basic. Nothing but metric spaces and completeness involved, used to prove the inverse and implicit function theorems, second semester stuff. Algebra comes fourth semester hereabouts. – Daniel Fischer Mar 25 '14 at 14:09
Depending on what you know, the answer can be almost painfully trivial: yes, as any group of even order has an element of order two by Cauchy's Theorem.
If you haven't yet covered the above theorem things are going to be lengthier: pair up as many as possible of the $2n$ elements of the group by the rule $(x, x^{-1})$. Taking into account that the unit element gets paired with itself, and since the number of elements of the group is even, there must be another, non-unit!, element in the group which is paired with itself, and we're done.
(This is a variation on Daniel Fischer's hint in the comments.)
Let's partition the elements of $G$ into subsets: For each $x\in G$, there will be a subset $S_x$ consisting of exactly $\{x, x^{-1}\}$. Note that when $x =x^{-1}$, this subset will have one element, not 2. For example, $S_e$, the subset containing $e$ is $\{e\}$.
Now prove the following:
1. This division into subsets is a partition: each element of $G$ is in exactly one of the subsets.
2. If you add up the sizes of the subsets, you get $2n$.
3. Some of the subsets have 2 elements, some have 1. But there is at least one subset with only 1 element.
By (3), there is at least one subset with only one element. Is it possible that there are no others?
• Why is it impossible that every partition is a singleton? – John. p Mar 25 '14 at 13:48
• It's not impossible, @John.p. But then you're also done, all you need is at least one singleton in the partition besides $\{e\}$. – Daniel Fischer Mar 25 '14 at 13:52
• @John.p It is quite possible that they are all singletons. My question is “Is it possible that there is only one singleton?” – MJD Mar 25 '14 at 13:55
https://www.nature.com/articles/s42005-022-00989-x
## Introduction
Strengthening of laser technology’s position in scientific, industrial and daily life applications relies on the system performance, specifically on how flexible it can adapt to the broad range of requirements. In this perspective, Thulium-doped fibres enable the exploration of a highly desirable broad wavelength range, spanning over ~350 nm around 2 μm. Such unique operation bandwidth has placed these fibre laser systems at the forefront of diverse applications. Environmental monitoring of greenhouse gases1, polymer or semiconductor machining2, optical coherence tomography3, nonlinear microscopy4, and optical communication5—these are just a handful of the ultrafast laser tasks which have been enabled by or enhanced with the Tm-doped fibre systems development. Versatile, tuneable, and highly integrated turnkey ultrafast fibre laser solutions are of high demand to empower emerging technological platforms and reinforce their rapid progress towards hand-held instruments. Yet, the development of such instruments is mainly obstructed by the imperfections and limitations of specific essential components that exhibit saturable absorption and wavelength tuneability behaviour.
The most notable saturable absorbers, which ensure phase-locking of longitudinal cavity modes, are SESAMs (e.g. GaSb)6, carbon nanomaterials7, transition metal dichalcogenides8, or MXene9. However, besides poor long-term reliability and power-handling, their performance in the short-wave infra-red (SWIR) wavelength range lags behind their operation at Near-IR. At the same time, widely used all-fibre artificial modulators based on the optical Kerr effect, such as nonlinear polarisation evolution (NPE), nonlinear loop mirrors or nonlinear multimode interference, suffer from the diminished nonlinearity of optical fibres at the SWIR. This limitation leads to a high self-starting threshold and the necessity to design complex cavity configurations10.
In such context, the visionary idea of mode-locked fibre lasers involving no apparent saturable absorber in the cavity has been validated through nonlinear coupling between cores in multi-core fibres11, inter-mode beating12, or self-absorption in a gain fibre. Conventionally, the latter has been considered undesirable instability in continuous-wave fibre lasers13 and was only later studied as an ultrashort pulse formation mechanism. The saturable absorption behaviour of unexcited rare-earth-doped fibres has been mainly exploited at bandwidths where their absorption cross-section covers the emission cross-section of separate gain fibre14,15. Due to the slow recovery time of such a saturable absorption mechanism, the reciprocity of self-phase modulation (SPM) and anomalous group-delay dispersion became imperative for soliton shaping16. So far, only a few works have reported the gain fibre saturable absorption dynamics to enable the formation of few-picosecond mode-locked pulses as long as was assisted by SPM accumulated over tens of metres long laser cavities17,18,19.
At the same time, a broad range of laser generation wavelength tuneability up to 300 nm (1733–2033 nm), demonstrated in Tm-doped fibre lasers20, is generally implemented using bulk acousto-optic tunable filters21, diffraction gratings22, volume Bragg gratings23, or planar semiconductor chips24. For this reason, the benefits of laser operation tuneability come at the cost of generation stability and substantial insertion losses. Thus, the possibilities of shaping and manipulating the laser generation through the interplay of intrinsic intracavity phenomena have recently become a topic of intensive experimental and theoretical investigation25,26,27. It is worth mentioning that bending of active fibre presents another technique for the operation wavelength variation, demonstrating 152 nm wavelength tuneability from 1740 to 1892 nm28. However, the fundamental principle of such tuneability relies on wavelength-selective losses, suppressing spontaneous emission at the longer edge of the gain spectrum. Such tuneability mechanism has been ensured by the proper design of a bend-sensitive W-type active fibre, in which cladding comprises a depressed ring with a lower refractive index. Importantly, like any other technique relying on loss introduction, these results came at the price of reduced laser efficiency. With up to 1.3 W pump power, the maximum average output power reached only 4.5 mW. Additionally, the realisation of continuous sweeping of the central laser wavelength by changing the fibre coiling radius introduces another level of complexity to the system.
Specifically, in Tm-doped fibre as quasi-three-level systems, the equilibrium between emission and absorption and, thus, the profile of the gain spectrum g(λ) is governed by the lower N1 and excited laser level N2 population fractions (Supplementary Note 1). Supplementary Fig. S1b predicts spectral gain profiles of Tm-doped fibre (used in further investigations) to red-shift at higher excitation level29,30. The control of the laser cavity Q-factor allows efficient governing of an excitation level of the gain, leveraging output power and laser generation wavelength. The control of the Q-factor can be implemented, for example, by changing the active fibre length or its doping concentration31, or by altering cavity loss through implemented attenuation32,33 or controlling laser feedback34. The cavity feedback control has been investigated mainly for the relatively narrow band of the Erbium gain34,35,36. Active fibres with broader gain spectra hold the true potential since the actual tuning range is built by the shape and overlap of the emission and absorption cross-sections. Nevertheless, the presented attempts to control wavelength-independent losses have not demonstrated remarkable tuneability ranges in Tm-doped fibre lasers. While theoretical work predicted the possibility of achieving a 105 nm tuneability in a continuous-wave generation, only 3632 and 48 nm33 tuneability has been achieved in the experiment. Still, continuous wavelength tuneability in ultrafast Tm-doped fibre lasers remains largely unexplored.
In this work, we explore the phenomenon of self-mode-locking and enable tuneable ultrashort pulse generation by manipulating the gain excitation level, omitting the application of intracavity filters or saturable absorbers. The proposed concept employs a single Tm-doped fibre as the gain medium, saturable absorber and element responsible for filter-less wavelength tuneability simultaneously. Experimental studies show that a ring fibre oscillator comprising such fibre and only a few conventional non-polarisation-maintaining components can support the generation of near transform-limited 350-fs soliton pulses with ~1.2 nJ energy at ~45 MHz repetition rate. Importantly, we explore the impact of Tm3+ ion-pair clusters on the saturable absorption properties of doped silica fibres. Moreover, the ultrafast fibre laser system demonstrates effective continuous wavelength tuneability within the range spanning from 1873 to 1962 nm by controlling the cavity feedback and maintaining high quantum efficiency. For this, we suggest employing a variable output coupler, which determines the intracavity energy and, therefore, the excitation level of fibre gain. The rigorous theoretical investigation confirms experimental observations of the pulse formation and clarifies the origin of the tuneability effects.
Overall, the elegance and low complexity of the laser system, together with one order of magnitude shorter pulse duration and more than a twofold increase of the tuneability range are the advancements where we focus the key claims of the current work. Another advantage of the demonstrated technique is that it can be translated to other laser configurations and wavelength ranges, which currently lack an extensive selection of components to develop robust fibre-based systems. The demonstrated technique holds high potential in exploring the Mid-IR range, where rare-earth fibres demonstrate broad gain spectra (e.g. Dy3+ ions allow nearly 600 nm tuneability around 3.1 μm37).
## Results
### Investigations of saturable absorption origins in Tm-doped gain fibre
The origin of the saturable absorption in rare-earth-doped fibres is associated with ion-pair excited-state absorption38,39,40 and upconversion interaction41,42. In the context of Tm-doped fibres, the work of Jackson et al. has concluded that 3F4, 3F4 → 3H6, 3H4 (Supplementary Fig. S1a) energy transfer upconversion in unpumped ions plays a key role in establishing the saturable absorption behaviour43. The energy difference of this transition is relatively high, ~1500 cm−1, and, therefore, a quite high degree of clustering is required for the Tm-doped silica fibres to achieve ion separations that are short enough to provide a sufficient level of saturable absorption.
To reveal the nature of saturable absorption, we examined a non-polarisation-maintaining highly Tm-doped fibre with a 2.9 μm core radius, 0.18 numerical aperture and a cut-off wavelength of 1389 nm. The glass matrix of the fibre core comprises an aluminosilicate host with 2.85 mol.% Al2O3 and 0.33 mol.% Tm2O3, thus providing an 8.6:1 Al:Tm doping ratio. It is worth mentioning that our measurements provided only indicative values of the ion concentration due to the limited resolution of the electron microscope and the small size of the fibre core. Nevertheless, these estimations suggest that the Al concentration is not sufficient to prevent the clustering of Tm ions44. A transverse refractive index measurement can be seen together with scanning electron microscope images of the fibre facet in Fig. 1a. Judging from these measurements, the fibre exhibits a pronounced core circularity and concentricity, with a characteristic dip in the core centre caused by the decrease of Al3+ concentration through the formation of volatile AlF345. Furthermore, the refractive index profile features a depressed-index cladding region formed by F-doping.
It is worth noting that we also investigated another fibre sample with a germanoaluminosilicate matrix of the core with ~13 mol.% GeO2, 1.58 mol.% Al2O3 and 0.54 mol.% Tm2O3, resulting in a Ge:Al:Tm ion concentration ratio of 12:3:1 (Supplementary Note 2). However, despite high doping concentration and potential ion clustering, this fibre did not enable self-mode-locking generation in the above-discussed laser configuration. Therefore, further measurements of its properties are presented in the Supplementary Materials (Supplementary Fig. S2).
The time-resolved luminescence properties, especially the decay time-power dependency, can provide important information about the Tm3+ ion incorporation in the optical fibre matrix and the formation of clusters and ion pairs. The fluorescence lifetimes of the 3F4 and 3H4 levels were measured by direct in-band pumping using 1618 and 792 nm diodes. The emission was collected at around 2 μm and 800 nm using InGaAs and Si photodiode detectors, as discussed by Kamrádek et al.46. In order to suppress the influence of light travelling along the waveguide, namely the amplified spontaneous emission and reabsorption, the measured fibre was only 1.5 mm long, and the emission was detected transversely. The decay time, τ, was obtained from the e−1 value on the normalised decay curve. The fluorescence lifetime, τ0, was retrieved by extrapolation of the decay time to zero excitation power, assuming the behaviour according to ref. 47:
$$\tau = \frac{\tau_{0}}{1+\left(\frac{\tau_{0}}{\tau_{\mathrm{sat}}}-1\right)\left(\frac{P}{P+P_{\mathrm{crit}}}\right)^{2}},$$
(1)
wherein τsat is the saturated lifetime, extrapolated to infinite excitation power, P is the excitation power, and Pcrit is the critical power of the energy-transfer processes.
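For illustration, the extrapolation in Eq. (1) amounts to a three-parameter least-squares fit of the measured decay times. Below is a minimal Julia sketch using the LsqFit package; the data points, initial guesses and package choice are our own assumptions, not the authors' procedure.

```julia
# A minimal sketch of fitting Eq. (1) to power-dependent decay times.
# All data values below are hypothetical placeholders, not measured data.
using LsqFit

# Eq. (1) with parameters p = [τ0, τsat, Pcrit]
decay_model(P, p) = p[1] ./ (1 .+ (p[1] / p[2] - 1) .* (P ./ (P .+ p[3])) .^ 2)

P_mW = [10.0, 50.0, 100.0, 200.0, 400.0]    # excitation powers (hypothetical)
τ_μs = [410.0, 350.0, 300.0, 240.0, 190.0]  # decay times (hypothetical)

fit = curve_fit(decay_model, P_mW, τ_μs, [425.0, 150.0, 100.0])
τ0, τsat, Pcrit = coef(fit)
println("fluorescence lifetime τ0 ≈ ", round(τ0; digits = 1), " μs")
```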
Figure 1c shows the lifetime values of the Tm-doped fibre under study. The 3F4 level lifetime of 425 μs is characteristic of highly doped Tm fibres48. Importantly, the steep decrease in decay time with increasing excitation power suggests a high rate of energy-transfer upconversion. The experimentally obtained 3H4 lifetime is around 21.4 μs, which is also typical for highly doped Tm fibres. Conventionally, the 3H4 decay time of highly doped fibres with homogeneously distributed Tm3+ ions should increase with excitation power due to the triggering of the cross-relaxation effect 3H4, 3H6 → 3F4, 3F4. The decrease of the decay time with excitation power observed in our fibre sample suggests suppressed cross-relaxation and, therefore, further supports the hypothesis of clustering and pair-induced quenching of Tm ions.
#### Pump saturable absorption
To estimate the concentration of Tm3+ pairs in our active fibres, we used the approach proposed by Myslinski et al.49 of measuring the saturable loss of the pump absorption. Starting from the measurement of the pump absorption coefficient as a function of the irradiated pump power (Fig. 1b), we model the non-saturable loss, which grows with the fraction of Tm pairs, using the RP Fiber Power software.
An 18-cm long section of the studied Tm-doped fibre demonstrated a small-signal absorption, α0, of ~15.3 dB. Towards high irradiation, the experimentally recorded pump absorption coefficients saturate at 2.5 dB, following the trend described as:
$$\alpha_{\mathrm{sat}} = mk\alpha_{0}\left\{1-\frac{\sigma_{\mathrm{a}}+\sigma_{\mathrm{e}}}{m\sigma_{\mathrm{a}}+\sigma_{\mathrm{e}}}\right\},$$
(2)
where k is the proportion of Tm3+ clusters in the total Tm3+ concentration and m is the number of ions per cluster50. To define the model, we used the Tm-doped fibre parameters recorded during the fluorescence lifetime measurements and the upconversion coefficients from Kamradek et al.46. The fibre indicated a significant contribution of upconversion processes to the 1G4 level, corresponding to 480 nm. Since the pump at 1550 nm could not provide single-photon excitation to the 1G4 level, our simulation accounted for the multiphoton or cooperative process as a third-order quenching term, which is the limit of the RP Fiber Power software used. We assumed a cubic quenching coefficient of 2.8 × 10−73 m3 s−1 in the simulations. This third-order quenching coefficient ensured considerably better agreement with the experimentally measured data, preventing the numerical absorption curves from saturating far too quickly at lower launched pump powers (Fig. 1b). The detailed numerical simulation allowed an estimate of ~20% Tm3+ pairs in the studied gain fibre.
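For orientation, Eq. (2) can be inverted to give a rough estimate of the cluster fraction k from the measured small-signal and saturated absorption. The sketch below assumes m = 2 (ion pairs) and placeholder cross-sections; the ~20% figure quoted above comes from the full RP Fiber Power model, not from this shortcut.

```julia
# Rough estimate of the clustered-ion fraction k by inverting Eq. (2):
# α_sat = m k α0 {1 − (σa + σe)/(m σa + σe)}. Cross-sections are placeholders.
pair_fraction(αsat, α0, σa, σe; m = 2) =
    αsat / (m * α0 * (1 - (σa + σe) / (m * σa + σe)))

σa, σe = 4.0e-25, 2.0e-26            # pump cross-sections in m² (hypothetical)
k = pair_fraction(2.5, 15.3, σa, σe) # α_sat = 2.5 dB, α0 = 15.3 dB (from text)
println("estimated pair fraction k ≈ ", round(k; digits = 2))
```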
#### Nonlinear saturable absorption
To measure the saturable absorption of the active fibre via the twin-detector approach, we used a self-built Tm-doped fibre laser generating 500-fs pulses at 1890 nm. The laser output power, controlled via an external variable optical attenuator, was split into two arms and launched into the unpumped fibre under test and a reference detector. Since the saturable absorption behaviour is considered to occur in unexcited rare-earth-doped optical fibre, we investigated the nonlinear dynamics of 1, 3, 5 and 10-cm long fibre sections. Figure 1d shows the normalised intensity-dependent absorption measurement data with the approximation according to the two-level energy model51:
$$\alpha(I) = \frac{\alpha_{0}}{1+\frac{I}{I_{\mathrm{sat}}}}+\alpha_{\mathrm{ns}},$$
(3)
where α0 and αns are the modulation depth and non-saturable losses, respectively, I is the launched pulse intensity, and Isat is the saturation intensity, which corresponds to a twofold reduction of the absorption of the test sample. As seen in Fig. 1d, the saturation intensity of the Tm-doped fibre drops from 250 to 72 MW cm−2 with the reduction of the active fibre length. The active fibre demonstrates a high modulation depth spanning from 34 to 9.5% when trimmed from 10 to 3 cm. These values are comparable with the parameters of conventional material saturable absorbers operating in the SWIR wavelength range52. However, the saturation intensity of the Tm-doped fibre is significantly higher, which can be explained by the high energy difference between the initial and final upconversion energy states. Moreover, the peak power of the available ultrafast fibre laser was not sufficient to fully saturate the 10-cm long Tm-doped fibre section. Therefore, we anticipate that the longer section would not act as an efficient saturable absorber, or would require considerable laser gain to compensate for the high saturation threshold. Similarly, decreasing the length to ~1 cm cannot benefit self-mode-locking initiation, as the modulation depth drops to nearly 1% with a saturation intensity of only ~40 MW cm−2.
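To make the role of Isat concrete: in Eq. (3) the saturable part of the loss is exactly halved at I = Isat. A minimal sketch, with α0 and Isat loosely following the 5-cm sample and αns chosen arbitrarily:

```julia
# Eq. (3): intensity-dependent absorption of the unpumped fibre section.
# α0 and Isat loosely follow the 5-cm sample; αns is an arbitrary placeholder.
α(I; α0 = 0.15, Isat = 150.0, αns = 0.05) = α0 / (1 + I / Isat) + αns

# At I = Isat the saturable part of the absorption is exactly halved:
@assert α(150.0) - 0.05 ≈ 0.15 / 2
println("α(0) = ", α(0.0), ", α(Isat) = ", α(150.0))
```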
Further, to assess the influence of the laser operation wavelength on the saturable absorption of the Tm-doped gain fibre, we investigated the power-dependent absorption at various wavelengths within the tuneability range from 1880 to 1947 nm. A summary of the saturable absorption parameters of a 5-cm long Tm-doped fibre section at 1880, 1900, and 1920 nm is presented in Supplementary Fig. S3. It is worth noting that the trend of the measured saturable loss coincides well with the relative absorption slope of Tm3+ ions. The saturation intensity depends on the upper-state lifetime τ and the absorption cross-section σabs as $$I_{\mathrm{sat}} = hc(2\lambda\tau\sigma_{\mathrm{abs}})^{-1}$$, such that the modulation depth is proportional to the absorption cross-section, taking Supplementary Eq. (S1) into account.
### Experimental configuration
To examine the self-mode-locking and tuneability dynamics of ultrashort pulse generation in a laser cavity with a variable Q-factor, we assembled the Tm-doped fibre laser setup presented schematically in Fig. 2. The ring fibre laser cavity employed a 1550-nm pump laser with 1-W power (HPFL-300, BKtel), a 1550/2000 wavelength division multiplexer, an isotropic isolator, and a squeezing polarisation controller to restore the polarisation state after each round trip. A 0.5-m section of the non-polarisation-maintaining Tm-doped silica fibre discussed in the previous section provided both active gain and saturable absorption to enable self-mode-locking. The fibre small-signal gain was estimated as ~30 dB (for 0.95 W pumping and 65 μW seeding of a 44-cm piece). The active fibre group velocity dispersion, third-order dispersion and nonlinearity were estimated as β2 = −20 ps2/km, β3 = 0.27 ps3/km and γ = 2 W−1 km−1, respectively, around the 1.9 μm wavelength range. The rest of the cavity comprises 4.2-m long standard single-mode fibre ports of the fiberised laser components with the following parameters at the Tm-doped fibre laser operation wavelength: β2 = −59 ps2/km, β3 = 0.28 ps3/km, γ = 1.3 W−1 km−1. It is important to stress that the oscillator consists entirely of polarisation-insensitive elements and concentric, cylindrical silica fibres. Therefore, any other mode-locking mechanism, particularly NPE, can be ruled out due to the short cavity length and negligible polarisation-dependent loss53.
The tuneability of the laser generation was established without introducing any filtering elements into the cavity, solely by including an in-line variable fibre-optic coupler (Evanescent Optics) based on D-shaped polished fibres interacting via the evanescent field. The separation between these fibres is changed by rotating a knob controller, thereby modifying the coupling efficiency from one waveguide into the other and tuning the overall laser cavity feedback from 8 to 93% with only 0.1–0.3 dB excess loss. The performance of the variable coupler was characterised by reference laser transmission measurements at 1.95 μm and is shown in the inset of Fig. 2.
### Numerical simulations
To validate the possibility of self-mode-locking and broadband tuneability in the presented Tm-doped fibre laser, we developed a numerical model describing consecutive pulse propagation through the different cavity elements. The pulse propagation along the passive fibre is governed by the nonlinear Schrödinger equation. The following system of coupled equations for the continuous-wave pump and the pulsed signal generation, taking into account the effects of dispersion and nonlinearity, was considered to describe the signal amplification54,55:
$$\frac{\partial A_{\mathrm{s}}(z,t)}{\partial z} = -i\frac{\beta_{2}}{2}\frac{\partial^{2}A_{\mathrm{s}}(z,t)}{\partial t^{2}}+i\gamma |A_{\mathrm{s}}(z,t)|^{2}A_{\mathrm{s}}(z,t) + \int\nolimits_{-\infty}^{\infty}\frac{g_{s}(\omega,z)}{2}\tilde{A}_{\mathrm{s}}(z,\omega)\exp(-i\omega t)\,d\omega,$$
(4)
$$\frac{\partial P_{\mathrm{p}}(z)}{\partial z} = g_{\mathrm{p}}(z)P_{\mathrm{p}}(z),$$
(5)
where As(z, t) is the slowly varying envelope of the signal, Pp(z) is the average power of the continuous-wave pump, β2 is the group velocity dispersion, γ is the Kerr nonlinearity, and gs and gp are the signal and pump gain/loss coefficients, respectively. Equation (4) was numerically solved by the split-step Fourier method. The spectral window considered in the model extended from 1300 to 2800 nm. The temporal window was equal to 140 ps.
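For readers who want to reproduce the propagation step, here is a minimal symmetric split-step Fourier integrator for the dispersive and Kerr terms of Eq. (4). The frequency-domain gain integral is omitted, and the seed pulse parameters are arbitrary; only β2, γ, the fibre length and the 140-ps window follow the passive-fibre values quoted in the text.

```julia
# Minimal split-step Fourier sketch for the dispersion + Kerr part of Eq. (4).
# The gain term is omitted. Units: time in ps, length in m, power in W.
using FFTW

function split_step(A, L; nz = 1000, T = 140.0, β2 = -0.059, γ = 0.0013)
    N  = length(A)
    dz = L / nz
    ω  = 2π .* fftfreq(N, N / T)                       # angular frequency grid
    Dhalf = exp.(1im * (β2 / 2) .* ω .^ 2 .* (dz / 2)) # half dispersion step
    for _ in 1:nz
        A = ifft(Dhalf .* fft(A))                # dispersion, half step
        A = A .* exp.(1im * γ * dz .* abs2.(A))  # Kerr nonlinearity, full step
        A = ifft(Dhalf .* fft(A))                # dispersion, half step
    end
    return A
end

t  = range(-70, 70; length = 2^12)               # 140-ps window, as in the model
A0 = ComplexF64.(sqrt(1600) .* sech.(t ./ 0.6))  # arbitrary ~1-ps sech seed
A  = split_step(A0, 4.2)                         # 4.2 m of single-mode fibre
```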
The wavelength dependence of the gain is considered in the frequency domain, where the optical field $$\tilde{A}(z,\omega)$$ is multiplied by the gain profile gs(ω, z). Each spectral component of the gain gs(λi, z) (i = 1, …, Nω, where Nω is the number of discrete frequencies in the simulations) and the pump gain/loss coefficient at each step along the fibre were found from the rate equations in the stationary case dN2,3/dt = 0:
$$g_{s}(\lambda_{i},z) = \left(\sigma_{21}^{s}(\lambda_{i})-\sigma_{23}^{s}(\lambda_{i})\right)\rho_{s}(\lambda_{i})N_{2}(z)-\sigma_{12}^{s}(\lambda_{i})\rho_{s}(\lambda_{i})N_{1}(z) +\sigma_{32}^{s}(\lambda_{i})\rho_{s}(\lambda_{i})N_{3}(z),\quad i=1,\ldots,N_{\omega}$$
(6)
$$g_{p}(z) = \sigma_{21}^{p}\rho_{p}N_{2}(z)-\sigma_{12}^{p}\rho_{p}N_{1}(z),$$
(7)
$$\frac{dN_{2}(z)}{dt} = \left(\sigma_{12}^{p}\rho_{p}\frac{P_{p}(z)}{h\nu_{p}}+\sum_{k=1}^{N_{\omega}}\sigma_{12}^{s}(\lambda_{k})\rho_{s}(\lambda_{k})\frac{P_{s}(\lambda_{k},z)}{h\nu_{k}}\right)N_{1}(z) +\left(\sum_{k=1}^{N_{\omega}}\sigma_{32}^{s}(\lambda_{k})\rho_{s}(\lambda_{k})\frac{P_{s}(\lambda_{k},z)}{h\nu_{k}}\right)N_{3}(z) -\left(\sigma_{21}^{p}\rho_{p}\frac{P_{p}(z)}{h\nu_{p}}+\sum_{k=1}^{N_{\omega}}\sigma_{21}^{s}(\lambda_{k})\rho_{s}(\lambda_{k})\frac{P_{s}(\lambda_{k},z)}{h\nu_{k}} +\sum_{k=1}^{N_{\omega}}\sigma_{23}^{s}(\lambda_{k})\rho_{s}(\lambda_{k})\frac{P_{s}(\lambda_{k},z)}{h\nu_{k}}+\frac{1}{T_{2}}\right)N_{2}(z)-2k_{2231}N_{2}^{2}+2k_{3122}N_{1}N_{3},$$
$$\frac{dN_{3}(z)}{dt} = \left(\sum_{k=1}^{N_{\omega}}\sigma_{23}^{s}(\lambda_{k})\rho_{s}(\lambda_{k})\frac{P_{s}(\lambda_{k},z)}{h\nu_{k}}\right)N_{2}(z) -\left(\sum_{k=1}^{N_{\omega}}\sigma_{32}^{s}(\lambda_{k})\rho_{s}(\lambda_{k})\frac{P_{s}(\lambda_{k},z)}{h\nu_{k}}+\frac{1}{T_{3}}\right)N_{3}(z) +k_{2231}N_{2}^{2}-k_{3122}N_{1}N_{3},$$
(8)
$$N_{1}(z) = N-N_{2}(z)-N_{3}(z),$$
(9)
where N1,2,3 are the population densities of the energy levels 3H6, 3F4 and 3H4, respectively, N = 4.05938 × 1015 m−1 is the total number of Tm ions integrated over the fibre mode cross-section, $$P_{s}(\omega_{k},z) = |\tilde{A}(z,\omega_{k})|^{2}$$ is the signal power at frequency ωk and position z along the fibre, and T2 = 425 μs and T3 = 21.4 μs are the fluorescence lifetimes. The effective pump absorption and emission cross-sections at the pump wavelength are $$\sigma_{12}^{p} = 1.5630\times 10^{-25}$$ m2 and $$\sigma_{21}^{p} = 5.1005\times 10^{-27}$$ m2. The absorption and emission cross-section spectra in the considered spectral window (shown in Supplementary Fig. S4) are described by $$\sigma_{12}^{s}(\lambda_{i})$$, $$\sigma_{23}^{s}(\lambda_{i})$$, $$\sigma_{21}^{s}(\lambda_{i})$$ and $$\sigma_{32}^{s}(\lambda_{i})$$. The normalised pump and signal power distributions through the fibre cross-section are denoted ρp,s = Γp,s/πa2, where a = 2.65 μm is the core radius of the single-mode fibre and Γp (Γs) is the modal overlap factor between the pump (signal) mode and the ion distribution; Γp = 1 for core pumping, and Γs = 1 − exp(−2a2/w2), where w is the 1/e electric-field radius of the equivalent Gaussian spot. The energy-transfer coefficients k3122 = 2.52 × 10−22 m3/s and k2231 = 3.44 × 10−24 m3/s describe the cross-relaxation process 3H4, 3H6 → 3F4, 3F4 and the energy-transfer upconversion 3F4, 3F4 → 3H4, 3H6, respectively48.
The approach dNi/dt = 0 implies that this evolution proceeds slowly over the round trips, i.e., during one round trip the optical field does not change the populations N1, N2, N3 significantly. At the end of the active fibre, the pump is depleted, and the last fibre segment plays the role of a saturable absorber. Equations (4)–(9) are also applicable for describing the optical losses (α = −gs) experienced by the signal beam in the absence of pump power. In this case, the condition dNi/dt = 0 sets an instantaneous response of the saturable absorber to the pulse power within each round trip. It results in the non-saturable losses αns, determined by the ion quenching and upconversion processes, and the saturated absorption fraction (Eq. (3)), expressed through the modulation depth α0, the saturation intensity Isat and I(t) = |A(t)|2. Specifically, for our numerical simulations, we simplified the model by considering the active fibre as a combination of an amplifier, described by the wavelength-dependent gain profile gs(λi, z) distributed over the whole fibre length, and an absorber, described as time-dependent optical losses α(t, z) = −gs(t, z) localised in the rear end of the fibre section. This consideration has allowed us to separate the contributions of the amplification and the saturable absorption to the laser dynamics and to use in the simulations the characteristics of the Tm-doped fibre directly measured in the experiment (Fig. 1d). In fact, an analogous approach based on a combination of wavelength-dependent and time-dependent gain has recently been applied for accurate modelling of optical pulse propagation in Yb-doped fibres56. However, there the representation of the gain as a product of wavelength-dependent and time-dependent factors was a rather artificial approximation of a more complicated mathematical function.
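A sketch of the lumped saturable-absorber step implied by this description follows: the rear fibre section responds instantaneously to the pulse power via Eq. (3). The parameter values are those quoted for the simulations below; replacing intensity with the instantaneous power P(t) = |A(t)|² and applying the loss as a field transmission are our own modelling assumptions.

```julia
# Lumped saturable-absorber step: instantaneous, power-dependent loss per
# Eq. (3), applied once per round trip at the rear of the active fibre.
# α0, αns and Psat are the values quoted for the simulations below.
function apply_absorber(A; α0 = 0.15, αns = 0.85, Psat = 100.0)
    P = abs2.(A)                          # instantaneous power |A(t)|², W
    α = α0 ./ (1 .+ P ./ Psat) .+ αns     # time-dependent loss, Eq. (3)
    return A .* sqrt.(max.(1 .- α, 0.0))  # field transmission √(1 − α)
end
```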
The initial field at the first round trip was modelled as "white" Gaussian noise. As Fig. 3a illustrates, a lasing self-start could not be obtained using a saturable absorber with a saturation intensity of Isat = 46 MW/cm2, corresponding to the shortest, 1-cm long fibre samples: the optical field tends to zero in this case. The optimal value of the saturation intensity is Isat = 150 MW/cm2, which matches the parameters of the 5-cm section of Tm-fibre used in the experiments. Here, we observed the formation of stable pulse generation (Fig. 3b). A further increase of the saturation intensity leads to distortion of the pulse, significant growth of the Kelly sidebands (Fig. 3c) and pulse break-up (Fig. 3d). Therefore, in further simulations, we fixed the saturable absorber parameters at the values corresponding to the 5-cm long Tm-doped fibre saturable absorber, i.e. α0 = 0.15, αns = 0.85 and Psat = 100 W.
Figure 4a shows the simulated output spectra at different gain excitation levels. The increase in cavity feedback leads to a red-shift of the pulse central wavelength from 1872 to 1961 nm, resulting in a wavelength tuneability of about 90 nm. The output spectra have pronounced Kelly sidebands characteristic of periodically amplified average solitons.
We also investigated the field evolution towards the steady-state mode-locking regime using initial seed pulses with different parameters to confirm the reproducibility and uniqueness of the solution. First, a hyperbolic secant pulse with 1.13 pJ energy, 1 W peak power, 1 ps duration and varying wavelength was used as a seed. The simulation results show that the output pulse wavelength and energy in the stable regime depend on neither the wavelength nor the character of the energy evolution of the seed pulse. Figure 4b shows an example simulation with a feedback ratio of 30%, where four initial pulses with different wavelengths converge to the same attractor as the number of cavity round trips increases. The different line colours in Fig. 4b correspond to different seed pulse wavelengths at a fixed initial energy of ~1 pJ. Such dynamics verify the uniqueness of the steady-state solution. Overall, this study confirms that the tuneability of the central operation wavelength in the filter-less laser cavity under investigation is governed solely by the wavelength-dependent gain distributed over the fibre length.
To better understand the nature of the tuneability dynamics, we considered the evolution of the gain gs(λ) along the active fibre for cavity feedback values of R = 5% and R = 90% (Fig. 4c). The steady-state pulse spectrum is depicted by white lines in Fig. 4c. A feedback increase from 5 to 90% leads to higher pulse energy inside the cavity and faster gain saturation. The gain maximum shifts towards longer wavelengths, causing the pulse spectrum to shift in order to find an energy balance. Thus, absorption and amplification actively reshape the gain spectrum along the fibre, driving the pulse wavelength. The gain spectra at the points "1", "2", and "3" along the fibre (Fig. 4d) qualitatively agree with the spectra shown in Supplementary Fig. S1.
The system of coupled Eqs. (4)–(9) allows us to describe the amplification of a narrow sub-picosecond pulse accompanied by a dynamically evolving gain spectrum. Note that in the conventional approach to modelling Tm-doped amplifiers, the system of population inversion rate equations is written for a continuous-wave signal and pump and describes the evolution of the average powers43,46,48; the spectral dependence of the gain and the nonlinear pulse propagation along the fibre are not taken into account. Here we solve the population inversion rate equations simultaneously with the pulse evolution, with the spectral dependence of the gain calculated at each step along the fibre. Therefore, the wavelength tuneability of the output pulse can be demonstrated.
### Experimental laser characterisation
We next examined the self-mode-locked Tm-doped fibre laser experimentally. Using 50 cm of Tm-doped fibre in the ring laser cavity allowed us to achieve stable self-starting ultrashort pulse generation. Notably, self-mode-locked operation was observed within the entire cavity feedback tuneability range from 8 to 93%, yet with different thresholds. In general, the mode-locking threshold was higher than that of schemes with conventional material saturable absorbers. This can be explained by the high saturation intensity (Fig. 1d) of the Tm-doped fibre when it acts as a saturable absorber. In contrast to conventional ultrafast lasers, the efficiency of the mode-locked regime is lower than that of continuous-wave generation. We attribute this to the nature of the saturable absorption mechanism in Tm-doped fibre, associated with reabsorption of the laser signal. The intracavity polarisation controller required proper initial adjustment, while later, after several switch-on/off cycles, the self-mode-locking regime could self-start without polarisation tuning.
Continuous central wavelength tuneability of the self-mode-locked generation was observed in the range from 1873 to 1962 nm (Fig. 5a) without further alignment. As the numerical simulation predicted, the tuneability was highly reproducible with feedback variation, even after several switch-on/off cycles. Here, we stress that in order to investigate the tuneability dynamics of single-pulse generation under identical conditions, the pump power and the polarisation state in the cavity were fixed, while only the feedback was altered. Naturally, at higher feedback and, hence, higher intracavity energy, the fundamental soliton generation regime tended to break into multi-pulsing as the pump power increased. Therefore, we limited the pump power to 0.7 W. Supplementary Movie 1 demonstrates smooth wavelength tuneability of the self-mode-locked generation with no pulse break-up or instabilities. The variation of the output average power at 0.7 W pump power is presented in Fig. 5b (blue scatter points). Its trend is in good agreement with the laser efficiency dynamics predicted numerically using the Rigrod model57 (Supplementary Fig. S5), which showed the highest efficiency at 20% cavity feedback. While the efficiency decreases at longer laser operation wavelengths, this deterioration can be efficiently mitigated by appropriate optimisation of the background losses according to the Rigrod analysis.
Figure 5b, c demonstrates the self-mode-locked pulse parameters at the laser output for different outcoupling ratios. The pulse duration broadened from 550 to 860 fs with increasing feedback. At the same time, the spectral full width at half-maximum ranges only from 7.4 to 7.7 nm with increasing cavity feedback. Consequently, the pulses evolve from nearly transform-limited below 12% cavity feedback to slightly chirped at lower outcoupling. At high feedback values (over 85%) and, therefore, high intracavity intensities, the time-bandwidth product rises to 1.172, indicating that nonlinear effects do not balance the cavity dispersion. With increasing feedback, the pump power threshold for achieving self-mode-locking decreased from 538 to 289 mW (Fig. 5b). At the same time, the upper threshold for stable single-pulse operation reduces at higher feedback due to gain saturation.
With careful adjustment of the polarisation controller, a maximum average output power of 68 mW could be obtained in the single-pulse generation regime at 25% feedback and the maximum available pump power of 1240 mW, resulting in 5.5% optical conversion efficiency. It is important to note that neither saturation of the output power nor degradation of any components was observed; the only limitation on further power scaling was the availability of a pump source. Figure 6 demonstrates the output pulse parameters at the highest average power. With its central wavelength at 1889 nm, the optical spectrum displays a bandwidth of 6.8 nm with pronounced Kelly sidebands, as depicted in Fig. 6a. Figure 6b demonstrates an autocorrelation trace with a sech2 approximation and a full width at half-maximum of ~870 fs, corresponding to a soliton pulse duration of ~600 fs. The fundamental repetition rate complies with the fibre laser cavity length at around 44 MHz (Fig. 6c). Note that the RF spectrum was retrieved numerically via the fast Fourier transform of the pulse train recorded by the oscilloscope with a dynamic range of 5.5 bits, which limited the signal-to-noise ratio to ~34 dB. Overall, this yields a peak power of 1.6 kW, corresponding to 1.0 nJ pulse energy, bearing in mind that nearly 26% of the total power belongs to the sidebands.
Furthermore, the self-mode-locked generation regime can be adapted via feedback variation. Supplementary Fig. S6 shows the map of possible generation regimes with a fixed polarisation controller and with its adjustment ensuring single-pulse generation when possible. The laser regime can be switched from the generation of fundamental solitons to soliton complexes, stable high-harmonic generation (up to the fourth order), and chaotic behaviour.
#### Pulse duration and laser optimisation
As mentioned before, the self-mode-locking dynamics rely on the fibre cavity nonlinearity, assisted by SPM. To validate this, we replaced the tuneable coupler in the laser cavity with a fused one providing 20% cavity feedback, as suggested by the Rigrod analysis (Supplementary Fig. S5). In this case, the maximum output power reached 83 mW at 903 mW pump power, resulting in a laser slope efficiency of 14.5%. The output autocorrelation trace, spectrum and pulse train are shown in Supplementary Fig. S7. In brief, the output optical spectrum is centred at 1888 nm. The intensity autocorrelation features a symmetric sech2 peak without secondary signals or a substantial pedestal, giving an estimated pulse duration of 350 fs. We tested the self-mode-locked Tm-doped fibre laser at maximum output performance over 49 h of continuous operation to confirm its stability. Supplementary Fig. S7d confirms that no pulse break-up occurred, merely a negligible variance in the sideband intensity and a ~1 nm blue-shift of the central wavelength due to temperature fluctuations in the laboratory.
To gain deeper insight into the saturable absorption performance of the Tm-doped fibre and separate this role from that of the gain medium, we studied the self-mode-locking regime while gradually cutting back the length of the active fibre. Our findings demonstrate that the mode-locking dynamics are sensitive to the active fibre length. In the experiments, ultrashort pulse generation could still be achieved with the Tm-doped fibre length reduced from 50 down to 47.8 cm with fine-tuning of the polarisation controller. However, Q-switch intensity modulation became more pronounced and affected the quality of the generation. The cut-back studies demonstrated a reduction of the spectral bandwidth of the generated solitons down to 0.95 nm, as shown in Supplementary Fig. S8. They also reveal a rise of the mode-locking threshold from 610 mW pump power at the original length to 690–810 mW. In addition, the laser efficiency increased with the reduction of the active fibre length, which can be explained by reduced losses due to signal reabsorption in the unexcited rear part of the active fibre. Overall, the experimental studies, supported by the numerical simulations presented in Fig. 3, allow us to conclude that only a rear unexcited section of the Tm-doped fibre (with a length of around 5–7 cm) efficiently operates as a saturable absorber.
## Discussion
Our comprehensive investigation has confirmed that highly Tm-doped fibre can efficiently operate as both laser gain medium and saturable absorber and provide generation wavelength tuneability. Comparing two fibre samples with different core glass matrices and comparable Tm concentrations, we concluded that the refractive index profile is not a decisive criterion for fibre performance as a saturable absorber. In contrast, the important aspects for saturable absorption are a strongly quenched 3F4 level lifetime, a rather low cross-relaxation action for the 3H4 level and a certain concentration of Tm3+ pairs. These features indicate reinforced interionic energy-transfer mechanisms between the 3F4 → 3H6 : 3H6 → 3F4 levels. Our study confirms that the estimated Tm3+ ion pair concentration of 20% equips the aluminosilicate gain fibre with a 23% modulation depth and a high saturation intensity (95 MW cm−2). These parameters proved sufficient for establishing effective self-mode-locked generation. On the other hand, the germanoaluminosilicate Tm-doped fibre demonstrated a more complicated quenching of Tm ions, and our measurements did not confirm the presence of ion pairs. This fibre featured an excessively high saturation intensity and could not initiate mode-locked generation in our laser configuration. Nevertheless, a detailed spectroscopic and pump-probe investigation of various Thulium-doped fibre compositions would be beneficial for an in-depth evaluation of the impact of clustering and quenching processes on the relaxation and saturable absorption behaviour.
Furthermore, the presented experimental and numerical studies of the self-mode-locked Tm-doped fibre laser with variable feedback demonstrated broadband central wavelength tuneability. The results of the numerical simulations agree well with the experiment, demonstrating the same tuneability range. Both investigations confirmed the variation of the gain excitation level through alteration of the cavity feedback to be the primary mechanism of the tuneability of the ultrashort pulse spectrum. Our numerical simulations also confirmed that the range of the observed gain-controlled tuneability through laser feedback variation is determined by the emission cross-section and bandwidth of the used Tm-doped fibre (see Supplementary Fig. S4). It is important to note that the amplification and absorption, responsible for different aspects of the laser dynamics, are localised in different parts of the fibre: wavelength tuneability is provided by the wavelength-dependent gain distributed over the fibre length, whereas the saturable absorption localised at the fibre rear end is responsible for pulse formation.
In conclusion, the experimental and theoretical investigations reported in the current work extend the understanding of gain dynamics in ultrafast fibre lasers. The self-mode-locked laser setup enabled the generation of stable self-starting ultrashort pulses with a duration of 350 fs at a 45 MHz repetition rate, delivering ~1.2 nJ of energy in the main pulse peak. To the best of our knowledge, this is the first demonstration of femtosecond pulse generation directly from a self-mode-locked Thulium-doped fibre laser. Moreover, through the alteration of the cavity feedback in the ultrafast Tm-doped fibre laser, we recorded a central wavelength shift within the range from 1873 to 1962 nm. The control of the excitation level and gain through variation of the out-coupled power proved more advantageous than loss management; together with the broad emission spectrum and high cross-section, these became the enabling aspects for the more than doubled tuneability range. From an instrument development viewpoint, our results provide a further example of the remarkable flexibility of ultrafast fibre lasers. The next step towards higher stability and the avoidance of polarisation instabilities could be the realisation of an all-polarisation-maintaining laser cavity employing the concept of feedback-controlled self-mode-locking. Yet, this would first require the development of a Tm-doped fibre with properties, particularly the glass matrix, similar to the one studied here. Although the current studies focused on Tm-doped fibres as the gain medium, the key phenomenon underlying the suggested tuneability method is versatile. It could be translated to other wavelengths, where the majority of fiberised laser components, including filters, are currently unavailable. In particular, this refers to exploring the Mid-IR wavelength range, where Dy- and Er-doped fluoride-based fibres also offer exceptionally broad gain spectra.
## Methods
### Instrumentation
The measuring equipment comprises a 10 GHz extended InGaAs photodetector (ET-5000, EOT) together with a 25 GS/s oscilloscope (DPO 70604C, Tektronix) for electrical characterisation. For optical characterisation, an optical spectrum analyser with resolution down to 50 pm and 1.1–2.5 μm spectral coverage was used (AQ6375, Yokogawa). The temporal characteristics were examined with a frequency-doubling autocorrelator (PulseCheck, APE). For quantifying the mean optical power, an integrating-sphere sensor with an InGaAs photodiode and 1 nW resolution was employed (S148C, Thorlabs).
|
2023-01-31 21:30:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5976589918136597, "perplexity": 2485.418890376606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499890.39/warc/CC-MAIN-20230131190543-20230131220543-00205.warc.gz"}
|
https://www.albert.io/ie/ap-statistics/single-proportion-conditions
|
# Single Proportion Conditions
APSTAT-1VJ6WK
When using normal calculations involving sampling proportions, which of the following conditions must be met?
I. The population must be at least $10$ times the sample.
II. $np$ and $n (1-p)$ must both be greater than $10$, where $n$ is the sample size and $p$ is the proportion of success.
III. The sample size must be over $30$.
A
I only.
B
II only.
C
III only.
D
I and II only.
E
II and III only.
|
2016-12-10 01:04:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8547892570495605, "perplexity": 902.0506849802459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542932.99/warc/CC-MAIN-20161202170902-00021-ip-10-31-129-80.ec2.internal.warc.gz"}
|
http://denizyuret.github.io/Knet.jl/latest/backprop/
|
Backpropagation and SGD
Prerequisites
basic Julia, linear algebra, calculus
Concepts
supervised learning, training data, loss function, prediction function, squared error, gradient, backpropagation, stochastic gradient descent
Supervised learning
Arthur Samuel, the author of the first self-learning checkers program, defined machine learning as a "field of study that gives computers the ability to learn without being explicitly programmed". This leaves the definition of learning a bit circular. Tom M. Mitchell provided a more formal definition: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E," where the task, the experience, and the performance measure are to be specified based on the problem.
We will start with supervised learning, where the task is to predict the output of an unknown process given its input, the experience consists of training data, a set of example input-output pairs, and the performance measure is given by a loss function which tells us how far the predictions are from actual outputs.
We model the unknown process using a prediction function, a parametric function that predicts the output of the process given its input. Here is an example:
$\hat{y} = W x + b$
Here $x$ is the model input, $\hat{y}$ is the model output, $W$ is a matrix of weights, and $b$ is a vector of biases. By adjusting the parameters of this model, i.e. the weights and the biases, we can make it compute any linear function of $x$.
"All models are wrong, but some models are useful." George Box famously said. We do not necessarily know that the system whose output we are trying to predict is governed by a linear relationship. All we know is a finite number of input-output examples in the training data:
$\mathcal{D}=\{(x_1,y_1),\ldots,(x_N,y_N)\}$
It is just that we have to start model building somewhere and the set of all linear functions is a good place to start for now.
A commonly used loss function in problems with numeric outputs is the squared error, i.e. the average squared difference between the actual output values and the ones predicted by the model. So our goal is to find model parameters that minimize the squared error:
$\arg\min_{W,b} \frac{1}{N} \sum_{n=1}^N \| \hat{y}_n - y_n \|^2$
Where $\hat{y}_n = W x_n + b$ denotes the output predicted by the model for the $n$ th example.
There are several methods to find the solution to the problem of minimizing squared error. Here we will present the stochastic gradient descent (SGD) method because it generalizes well to more complex models. In SGD, we take the training examples (individually or in groups), compute the gradient of the error for the current example(s) with respect to the parameters using the backpropagation algorithm, and move the parameters a small step in the direction that will decrease the error. First some notes on the math.
Partial derivatives
When we have a function with a scalar output, we can look at how its value changes in response to a small change in one of its inputs or parameters, holding the rest fixed. This is called a partial derivative. Let us consider the squared error for the $n$ th input as an example:
$J = \| W x_n + b - y_n \|^2$
So the partial derivative $\partial J / \partial w_{ij}$ would tell us how many units $J$ would move if we moved $w_{ij}$ in $W$ one unit (at least for small enough units). Here is a more graphical representation:
In this figure, it is easier to see that the machinery that generates $J$ has many "inputs". In particular we can talk about how $J$ is affected by changing parameters $W$ and $b$, as well as changing the input $x$, the model output $\hat{y}$, the desired output $y$, or intermediate values like $z$ or $r$. So partial derivatives like $\partial J / \partial x_i$ or $\partial J / \partial \hat{y}_j$ are fair game and tell us how $J$ would react in response to small changes in those quantities.
Chain rule and backpropagation
The chain rule allows us to calculate partial derivatives in terms of other partial derivatives, simplifying the overall computation. We will go over it in some detail as it forms the basis of the backpropagation algorithm. For now let us assume that each of the variables in the above example is a scalar. We will start by looking at the effect of $r$ on $J$ and move backward from there. Basic calculus tells us that:
\begin{aligned} J &= r^2 \\ {\partial J}/{\partial r} &= 2r \end{aligned}
Thus, if $r=5$ and we decrease $r$ by a small $\epsilon$, the squared error $J$ will go down by $10\epsilon$. Now let's move back a step and look at $\hat{y}$:
\begin{aligned} r &= \hat{y} - y \\ {\partial r}/{\partial \hat{y}} &= 1 \end{aligned}
So how much effect will a small $\epsilon$ decrease in $\hat{y}$ have on $J$ when $r=5$? Well, when $\hat{y}$ goes down by $\epsilon$, so will $r$, which means $J$ will go down by $10\epsilon$ again. The chain rule expresses this idea:
$\frac{\partial J}{\partial\hat{y}} = \frac{\partial J}{\partial r} \frac{\partial r}{\partial\hat{y}} = 2r$
Going back further, we have:
\begin{aligned} \hat{y} &= z + b \\ {\partial \hat{y}}/{\partial b} &= 1 \\ {\partial \hat{y}}/{\partial z} &= 1 \end{aligned}
Which means $b$ and $z$ have the same effect on $J$ as $\hat{y}$ and $r$, i.e. decreasing them by $\epsilon$ will decrease $J$ by $2r\epsilon$ as well. Finally:
\begin{aligned} z &= w x \\ {\partial z}/{\partial x} &= w \\ {\partial z}/{\partial w} &= x \end{aligned}
This allows us to compute the effect of $w$ on $J$ in several steps: moving $w$ by $\epsilon$ will move $z$ by $x\epsilon$, $\hat{y}$ and $r$ will move exactly the same amount because their partials with $z$ are 1, and finally since $r$ moves by $x\epsilon$, $J$ will move by $2rx\epsilon$.
$\frac{\partial J}{\partial w} = \frac{\partial J}{\partial r} \frac{\partial r}{\partial \hat{y}} \frac{\partial \hat{y}}{\partial z} \frac{\partial z}{\partial w} = 2rx$
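Here is a quick numerical sanity check of this result in Julia; all values are arbitrary:

```julia
# Finite-difference check of ∂J/∂w = 2rx for the scalar model.
w, b, x, y = 2.0, 1.0, 3.0, 4.0
J(w) = (w * x + b - y)^2           # forward pass: z = wx, ŷ = z + b, J = r²
r = w * x + b - y                  # r = 3
ϵ = 1e-6
fd = (J(w + ϵ) - J(w - ϵ)) / (2ϵ)  # central finite difference
println("2rx = ", 2r * x, ", finite difference ≈ ", fd)  # both ≈ 18
```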
We can represent this process of computing partial derivatives as follows:
Note that we have the same number of boxes and operations, but all the arrows are reversed. Let us call this the backward pass, and the original computation in the previous picture the forward pass. Each box in this backward-pass picture represents the partial derivative for the corresponding box in the previous forward-pass picture. Most importantly, each computation is local: each operation takes the partial derivative of its output, and multiplies it with a factor that only depends on the original input/output values to compute the partial derivative of its input(s). In fact we can implement the forward and backward passes for the linear regression model using the following local operations:
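The corresponding figure is not reproduced here, but the same local operations can be written out directly. A minimal Julia sketch of those forward and backward passes (our own illustration, with scalar variables, not Knet code):

```julia
# Forward and backward passes for the scalar linear model as local operations.
# Each backward step uses only values saved during the forward pass.
function forward(w, b, x, y)
    z = w * x            # multiply
    ŷ = z + b            # add bias
    r = ŷ - y            # residual
    J = r^2              # squared error
    return (; z, ŷ, r, J)
end

function backward(w, x, f)        # f holds the saved forward values
    dr = 2 * f.r                  # ∂J/∂r = 2r
    dŷ = dr                       # subtraction passes the gradient through
    dz, db = dŷ, dŷ               # addition passes the gradient through
    dw, dx = dz * x, dz * w       # multiplication swaps in the other input
    return (; dw, db, dx)
end
```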
This is basically the backpropagation algorithm in a nutshell, i.e. backpropagation can be viewed as the application of Leibniz's chain rule from the 1660s to machine learning in the 1980s.
Multiple dimensions
Let's look at the case where the input and output are not scalars but vectors. In particular assume that $x \in \mathbb{R}^D$ and $y \in \mathbb{R}^C$. This makes $W \in \mathbb{R}^{C\times D}$ a matrix and $z,b,\hat{y},r$ vectors in $\mathbb{R}^C$. During the forward pass, the $z=Wx$ operation is now a matrix-vector product, and the additions and subtractions are elementwise operations. The squared error $J=\|r\|^2=\sum r_i^2$ is still a scalar. For the backward pass we ask how much each element of these vectors or matrices affects $J$. Starting with $r$:
\begin{aligned} J &= \sum r_i^2 \\ {\partial J}/{\partial r_i} &= 2r_i \end{aligned}
We see that when $r$ is a vector, the partial derivative of each component is equal to twice that component. If we put these partial derivatives together in a vector, we obtain a gradient vector:
$\nabla_r J \equiv \langle \frac{\partial J}{\partial r_1}, \cdots, \frac{\partial J}{\partial r_C} \rangle = \langle 2 r_1, \ldots, 2 r_C \rangle = 2\vec{r}$
The addition, subtraction, and square norm operations work the same way as before except they act on each element. Moving back through the elementwise operations we see that:
$\nabla_r J = \nabla_\hat{y} J = \nabla_b J = \nabla_z J = 2\vec{r}$
For the operation $z=Wx$, a little algebra will show you that:
\begin{aligned} \nabla_W J &= \nabla_z J \cdot x^T \\ \nabla_x J &= W^T \cdot \nabla_z J \end{aligned}
Note that the gradient of a variable has the same shape as the variable itself. In particular $\nabla_W J$ is a $C\times D$ matrix. Here is the graphical representation for matrix multiplication:
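The figure is omitted here; as a numeric substitute, a small check (our own, with arbitrary dimensions) confirms both the shape and the value of $\nabla_W J = (\nabla_z J)\,x^T$:

```julia
# Gradient check for the matrix case: ∇_W J = (∇_z J) xᵀ. Arbitrary C, D.
using LinearAlgebra, Random
Random.seed!(0)
C, D = 2, 3
W, b = randn(C, D), randn(C)
x, y = randn(D), randn(C)

J(W) = sum(abs2, W * x + b - y)     # J = ‖r‖²
∇z = 2 .* (W * x + b - y)           # elementwise ops pass 2r straight back to z
∇W = ∇z * x'                        # C×D, same shape as W

ϵ = 1e-6
Wp = copy(W); Wp[1, 2] += ϵ
Wm = copy(W); Wm[1, 2] -= ϵ
println("analytic: ", ∇W[1, 2], ", finite diff: ", (J(Wp) - J(Wm)) / (2ϵ))
```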
Multiple instances
We will typically process multiple data instances at a time for efficiency. Thus, the input $x$ will be a $D\times N$ matrix, and the output $y$ will be a $C\times N$ matrix, with the $N$ columns representing $N$ different instances. Please verify for yourself that the forward and backward operations described above handle this case without much change: the elementwise operations act on the elements of the matrices just like vectors, and the matrix multiplication and its gradient remain the same. Here is a picture of the forward and backward passes:
The only complication is at the addition of the bias vector. In the batch setting, we are adding $b\in\mathbb{R}^{C\times 1}$ to $z\in\mathbb{R}^{C\times N}$. This will be a broadcasting operation, i.e. the vector $b$ will be added to each column of the matrix $z$ to get $\hat{y}$. In the backward pass, we'll need to add the columns of $\nabla_\hat{y} J$ to get the gradient $\nabla_b J$.
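In code, the broadcast and its backward pass look like this (a sketch with arbitrary shapes):

```julia
# Backward pass through the broadcast bias addition: the bias gradient is the
# column-sum of the upstream gradient. Shapes C = 2, N = 5 are arbitrary.
C, N = 2, 5
b  = randn(C)
z  = randn(C, N)
ŷ  = z .+ b                 # broadcast: b is added to every column of z
∇ŷ = randn(C, N)            # upstream gradient, one column per instance
∇b = vec(sum(∇ŷ; dims = 2)) # sum over columns, matching the shape of b
```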
The gradients calculated by backprop, $\nabla_w J$ and $\nabla_b J$, tell us how much small changes in the corresponding entries of $w$ and $b$ will affect the error (for the current example(s)). Small steps in the gradient direction will increase the error; steps in the opposite direction will decrease the error.
In fact, we can show that the gradient is the direction of steepest ascent. Consider a unit vector $v$ pointing in some arbitrary direction. The rate of change in this direction, $\nabla_v J$ (directional derivative), is given by the projection of $v$ onto the gradient, $\nabla J$, i.e. their dot product $\nabla J \cdot v$:
$\nabla_v J = \frac{\partial J}{\partial x_1} v_1 + \frac{\partial J}{\partial x_2} v_2 + \cdots = \nabla J \cdot v$
What direction maximizes this dot product? Recall that:
$\nabla J \cdot v = | \nabla J |\,\, | v | \cos(\theta)$
where $\theta$ is the angle between $v$ and the gradient vector. $\cos(\theta)$ is maximized when the two vectors point in the same direction. So if you are going to move a fixed (small) size step, the gradient direction gives you the biggest bang for the buck.
This suggests the following update rule:
$w \leftarrow w - \nabla_w J$
This is the basic idea behind Stochastic Gradient Descent (SGD): Go over the training set instance by instance (or minibatch by minibatch). Run the backpropagation algorithm to calculate the error gradients. Update the weights and biases in the opposite direction of these gradients. Rinse and repeat...
Housing Example
We will use the Boston Housing dataset from the UCI Machine Learning Repository to train a linear regression model using backprop and SGD. The dataset has housing-related information for 506 neighborhoods in Boston from 1978. Each neighborhood has 14 attributes; the goal is to use the first 13, such as the average number of rooms per house or the distance to employment centers, to predict the 14th attribute: the median dollar value of the houses.
First, we download and convert the data into Julia arrays. The Knet package provides some utilities for this:
using Knet
include(Knet.dir("data","housing.jl"))
x,y = housing() # x is (13,506); y is (1,506)
Second, we implement our loss calculation in Julia. Personally I think callable objects are the most natural way to represent parametric functions. But you can use any coding style you wish as long as you can calculate a scalar loss from parameters and data.
struct Linear; w; b; end # new type that can be used as a function
(f::Linear)(x) = f.w * x .+ f.b # prediction function if one argument
(f::Linear)(x,y) = mean(abs2, f(x) - y) # loss function if two arguments
Now we can initialize a model, make some predictions, and calculate loss:
julia> model = Linear(zeros(1,13), zeros(1))
Linear([0.0 0.0 … 0.0 0.0], [0.0])
julia> pred = model(x) # predictions for 506 instances
1×506 Array{Float64,2}:
0.0 0.0 0.0 … 0.0 0.0 0.0
julia> y # not too close to real outputs
1×506 Array{Float64,2}:
24.0 21.6 34.7 … 23.9 22.0 11.9
julia> loss = model(x,y) # average loss for 506 instances
592.1469169960474
The loss gradients with respect to the model parameters can be computed manually as described above:
julia> r = (model(x) - y) / length(y)
1×506 Array{Float64,2}:
-0.0474308 -0.0426877 -0.0685771 … -0.0472332 -0.0434783 -0.0235178
julia> ∇w = 2r * x'
1×13 Array{Float64,2}:
7.12844 -6.617 8.88016 -3.2174 … 8.60132 9.32187 -6.12163 13.5419
julia> ∇b = sum(2r)
-45.06561264822134
For larger models manual gradient calculation becomes impractical. The Knet package can calculate gradients automatically for us: (1) mark the parameters with the Param type, (2) apply the @diff macro to the loss calculation, (3) the grad function calculates the gradients:
julia> model = Linear(Param(zeros(1,13)), Param(zeros(1)))
Linear(P(Array{Float64,2}(1,13)), P(Array{Float64,1}(1)))
julia> loss = @diff model(x,y)
T(592.1469169960474)
julia> ∇w = grad(loss, model.w)
1×13 Array{Float64,2}:
7.12844 -6.617 8.88016 -3.2174 … 8.60132 9.32187 -6.12163 13.5419
julia> ∇b = grad(loss, model.b)
1-element Array{Float64,1}:
-45.06561264822134
We can use the gradients to train our model:
function sgdupdate(model, x, y)
    loss = @diff model(x, y)       # record a differentiable loss for this batch
    for p in params(model)         # iterate over the Param-marked parameters
        p .-= 0.1 * grad(loss, p)  # SGD step with learning rate 0.1
    end
    return value(loss)
end
Here is a plot of the loss value vs the number of updates:
julia> using Plots
julia> plot([sgdupdate(model,x,y) for i in 1:20])
The new predictions are a lot closer to the actual outputs:
julia> [ model(x); y ]
2×506 Array{Float64,2}:
30.4126 24.8121 30.7946 29.2931 … 27.6193 26.1515 21.9643
24.0 21.6 34.7 33.4 23.9 22.0 11.9
Problems with SGD
Over the years, people have noted many subtle problems with the SGD algorithm and suggested improvements:
Step size: If the step sizes are too small, the SGD algorithm will take too long to converge. If they are too big it will overshoot the optimum and start to oscillate. So we scale the gradients with an adjustable parameter called the learning rate $\eta$:
$w \leftarrow w - \eta \nabla_w J$
Step direction: More importantly, it turns out the gradient (or its opposite) is often NOT the direction you want to go in order to minimize error. Let us illustrate with a simple picture:
The figure on the left shows what would happen if you stood on one side of the long narrow valley and took the direction of steepest descent: this would point to the other side of the valley and you would end up moving back and forth between the two sides, instead of taking the gentle incline down as in the figure on the right. The direction across the valley has a high gradient but also a high curvature (second derivative) which means the descent will be sharp but short lived. On the other hand the direction following the bottom of the valley has a smaller gradient and low curvature, the descent will be slow but it will continue for a longer distance. Newton's method adjusts the direction taking into account the second derivative:
In this figure, the two axes are w1 and w2, two parameters of our network, and the contour plot represents the error with a minimum at x. If we start at x0, the Newton direction (in red) points almost towards the minimum, whereas the gradient (in green), perpendicular to the contours, points to the right.
Unfortunately Newton's direction is expensive to compute. However, it is also probably unnecessary for several reasons: (1) Newton gives us the ideal direction for second degree objective functions, which our objective function almost certainly is not, (2) The error function whose gradient backprop calculated is the error for the last minibatch/instance only, which at best is a very noisy approximation of the real error function, thus we shouldn't spend too much effort trying to get the direction exactly right.
So people have come up with various approximate methods to improve the step direction. Instead of multiplying each component of the gradient with the same learning rate, these methods scale them separately using their running average (momentum, Nesterov), or RMS (Adagrad, Rmsprop). Some even cap the gradients at an arbitrary upper limit (gradient clipping) to prevent instabilities.
You may wonder whether these methods still give us directions that consistently increase/decrease the objective function. If we do not insist on the maximum increase, any direction whose components have the same signs as the gradient vector is guaranteed to increase the function (for short enough steps). The reason is again given by the dot product $\nabla J \cdot v$. As long as these two vectors carry the same signs in the same components, the dot product, i.e. the rate of change along $v$, is guaranteed to be positive.
Minimize what? The final problem with gradient descent, other than not telling us the ideal step size or direction, is that it is not even minimizing the right objective! We want small error on never before seen test data, not just on the training data. The truth is, a sufficiently large model with a good optimization algorithm can get arbitrarily low error (down to the noise limit) on any finite training data (e.g. by just memorizing the answers). And it can typically do so in many different ways (typically many different local minima for training error in weight space exist). Some of those ways will generalize well to unseen data, some won't. And unseen data is (by definition) not seen, so how will we ever know which weight settings will do well on it?
There are at least three ways people deal with this problem: (1) Bayes tells us that we should use all possible models and weigh their answers by how well they do on training data (see Radford Neal's fbm), (2) New methods like dropout that add distortions and noise to inputs, activations, or weights during training seem to help generalization, (3) Pressuring the optimization to stay in one corner of the weight space (e.g. L1, L2, maxnorm regularization) helps generalization.
Notes
• Supervised learning is also known as regression if the outputs are numeric and classification if they are discrete.
• Linear regression is a regression model with a linear prediction function. Linear regression with a scalar input and output is called simple linear regression, if the input is a vector we have multiple linear regression, and if the output is a vector we have multivariate linear regression.
|
2020-06-04 02:28:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9599847793579102, "perplexity": 486.432649105397}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347436828.65/warc/CC-MAIN-20200604001115-20200604031115-00322.warc.gz"}
|
http://xarray.pydata.org/en/stable/generated/xarray.core.resample.DataArrayResample.html
|
# xarray.core.resample.DataArrayResample
class xarray.core.resample.DataArrayResample(*args, dim=None, resample_dim=None, **kwargs)
DataArrayGroupBy object specialized to time resampling operations over a specified dimension
__init__(self, *args, dim=None, resample_dim=None, **kwargs)
Create a GroupBy object
Parameters
• obj (Dataset or DataArray) – Object to group.
• group (DataArray) – Array with the group values.
• squeeze (boolean, optional) – If “group” is a coordinate of object, squeeze controls whether the subarrays have a dimension of length 1 along that coordinate or if the dimension is squeezed out.
• grouper (pd.Grouper, optional) – Used for grouping values along the group array.
• bins (array-like, optional) – If bins is specified, the groups will be discretized into the specified bins by pandas.cut.
• restore_coord_dims (bool, optional) – If True, also restore the dimension order of multi-dimensional coordinates.
• cut_kwargs (dict, optional) – Extra keyword arguments to pass to pandas.cut
Methods
• `__init__(self, *args[, dim, resample_dim])`: Create a GroupBy object.
• `all(self[, dim, axis])`: Reduce this DataArrayResample's data by applying `all` along some dimension(s).
• `any(self[, dim, axis])`: Reduce this DataArrayResample's data by applying `any` along some dimension(s).
• `apply(self, func[, shortcut, args])`: Apply a function over each array in the group and concatenate them together into a new array.
• `argmax(self[, dim, axis, skipna])`: Reduce this DataArrayResample's data by applying `argmax` along some dimension(s).
• `argmin(self[, dim, axis, skipna])`: Reduce this DataArrayResample's data by applying `argmin` along some dimension(s).
• `asfreq(self)`: Return values of the original object at the new up-sampling frequency; essentially a re-index with new times set to NaN.
• `assign_coords(self[, coords])`: Assign coordinates by group.
• `backfill(self[, tolerance])`: Backward fill new values at the up-sampled frequency.
• `bfill(self[, tolerance])`: Backward fill new values at the up-sampled frequency.
• `count(self[, dim, axis])`: Reduce this DataArrayResample's data by applying `count` along some dimension(s).
• `ffill(self[, tolerance])`: Forward fill new values at the up-sampled frequency.
• `fillna(self, value)`: Fill missing values in this object by group.
• `first(self[, skipna, keep_attrs])`: Return the first element of each group along the group dimension.
• `interpolate(self[, kind])`: Interpolate up-sampled data using the original data as knots.
• `last(self[, skipna, keep_attrs])`: Return the last element of each group along the group dimension.
• `max(self[, dim, axis, skipna])`: Reduce this DataArrayResample's data by applying `max` along some dimension(s).
• `mean(self[, dim, axis, skipna])`: Reduce this DataArrayResample's data by applying `mean` along some dimension(s).
• `median(self[, dim, axis, skipna])`: Reduce this DataArrayResample's data by applying `median` along some dimension(s).
• `min(self[, dim, axis, skipna])`: Reduce this DataArrayResample's data by applying `min` along some dimension(s).
• `nearest(self[, tolerance])`: Take new values from the nearest original coordinate to the up-sampled frequency coordinates.
• `pad(self[, tolerance])`: Forward fill new values at the up-sampled frequency.
• `prod(self[, dim, axis, skipna])`: Reduce this DataArrayResample's data by applying `prod` along some dimension(s).
• `quantile(self, q[, dim, interpolation, …])`: Compute the qth quantile over each array in the groups and concatenate them together into a new array.
• `reduce(self, func[, dim, axis, keep_attrs, …])`: Reduce the items in this group by applying `func` along some dimension(s).
• `std(self[, dim, axis, skipna])`: Reduce this DataArrayResample's data by applying `std` along some dimension(s).
• `sum(self[, dim, axis, skipna])`: Reduce this DataArrayResample's data by applying `sum` along some dimension(s).
• `var(self[, dim, axis, skipna])`: Reduce this DataArrayResample's data by applying `var` along some dimension(s).
• `where(self, cond[, other])`: Return elements from self or other depending on cond.
Attributes
groups
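A minimal sketch of how these methods are used in practice (the series, frequencies, and names below are hypothetical, not taken from the documentation):

```python
import numpy as np
import pandas as pd
import xarray as xr

# Hypothetical daily series to exercise the DataArrayResample methods above.
times = pd.date_range("2019-01-01", periods=90, freq="D")
da = xr.DataArray(np.random.rand(90), coords={"time": times}, dims="time")

monthly = da.resample(time="1M").mean()    # down-sample: reduce each group
halfday = da.resample(time="12H").ffill()  # up-sample: forward fill new times
```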
|
2019-10-21 22:30:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3249472379684448, "perplexity": 4440.731474759217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987795253.70/warc/CC-MAIN-20191021221245-20191022004745-00084.warc.gz"}
|
https://math.stackexchange.com/questions/3073830/question-about-integral-notation-in-a-markov-process-how-to-evaluate-said-inte
|
# Question about integral notation in a Markov process + how to evaluate said integral
I'm reading Chapter 11 of Puterman's book on Markov Decision Processes (in particular, about continuous-time Markov processes). There's a lot of notation involved, but I've tried to distill the question. Puterman defines a function $$Q(t,j|s,a)$$, which, as a simple example, might equal $$Q(t,j|s,a)=\frac{1}{4}(1-e^{-\mu{t}})$$ for some $$\mu>0$$. The function $$Q$$ is a joint probability distribution in $$t\geq0$$ and $$j\in{S}$$ for finite $$S$$ (in the example above, the product of the CDF of an exponential random variable with a constant). He then writes down the integral $$\int_0^\infty e^{-\alpha{t}}Q(dt,j|s,a),$$ and asserts that the value of this integral is $$<1$$. Puterman states "[w]e use $$Q(dt,j|s,a)$$ to represent a time-differential", but I don't know what this means in the context of integration.
Question 1 What kind of integral is this? Seems like Riemann-Stieltjes or Lebesgue, but I can't tell. I thought it might be strange notation for $$\int_0^\infty e^{-\alpha{t}}Q(t,j|s,a)dt,$$ but it seems that's not the case (as then the integral can easily be $$\geq1$$).
Question 2 How do you evaluate such an integral? Is there e.g. a closed-form for the $$Q$$ defined above?
• The integral would be $$\int_0^\infty e^{-\alpha t}\frac14 \mu e^{-\mu t}\ \mathsf dt = \frac\mu{4(\alpha+\mu)}.$$ – Math1000 Jan 14 at 23:07
• @Math1000 Weird notation. Post as an answer (maybe with a small explanation?) and I’ll accept! – David M. Jan 14 at 23:18
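To spell out the computation: interpreting $$Q(dt,j|s,a)$$ as the Lebesgue–Stieltjes measure induced by $$t \mapsto Q(t,j|s,a)$$, the example $$Q(t,j|s,a)=\frac{1}{4}(1-e^{-\mu t})$$ has density $$\frac{1}{4}\mu e^{-\mu t}$$ with respect to $$dt$$, so $$\int_0^\infty e^{-\alpha t}Q(dt,j|s,a)=\frac{\mu}{4}\int_0^\infty e^{-(\alpha+\mu)t}\,dt=\frac{\mu}{4(\alpha+\mu)},$$ which is indeed strictly less than 1.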
|
2019-10-20 19:42:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9560828804969788, "perplexity": 258.1183240608965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986718918.77/warc/CC-MAIN-20191020183709-20191020211209-00499.warc.gz"}
|
https://itprospt.com/num/7020395/chbjaua-i0-1-re0-1-3-u-3-1
|
Question
~chbjaua I0 [ 1 RE0 1 3 U 3 1
Similar Solved Questions
Solve the system of linear equations by row operations: Xt Ym{#l, x − 2y + 3z = 4, 3x + 2y + 7z = 0
11. In cellular respiration, glucose (C6H12O6) reacts with oxygen to produce energy, carbon dioxide, and water. (Hint: There's an extra conversion factor!) If .08 mg of glucose react with .05 mg of oxygen, determine how much water can be produced based on the limiting reagent (3 pts). Determine the amount of excess (2 pts). If .002 grams of water is actually produced, determine the percent yield (2 pts).
20. In the following oxidation-reduction reaction, SnCl6²⁻(aq) + 4NO2(g) + 4H⁺ → Sn(s) + 6Cl⁻(aq) + ..., what is the oxidizing agent? a. Sn b. NO3⁻ c. H⁺ d. Cl⁻ e. SnCl6²⁻
Exercise 6.47, Part A: A small glider is placed against a compressed spring at the bottom of an air track that slopes upward at an angle of 42.0° above the horizontal. The glider has mass 00x10-2 kg. The spring has force constant 640 N/m and negligible mass. When the spring is released, the glider travels a maximum distance of 1.40 m along the air track before sliding back down. Before reaching this maximum distance, the glider loses contact with the spring. What distance was the spring originally compressed?
2 64142 ntpboyostra P4p5:K14ue3Ju0njap? JAWWIPPV h43 fo Kh W1X044 Jo '19 :K9A!IBR:U0J1JJ1? J8uBI-Suo?19vt5 240u dou
Evaluate the indefinite integral (use C for the constant of integration): ∫ x√(x+31) dx
Results for this submission: the entered answer (1/9)*sin(7*(t-9)*u(t-9)) was marked incorrect. (1 point) Use the Laplace transform to solve the following initial value problem: y'' + 14y' + 53y = f(t − 9), y(0) = 0, y'(0) = 0; find y(t). (Notation: write u(t-c) for the Heaviside step function uc(t) with step at t = c.)
What are the fundamental spectra astronomers take advantage of when studying astronomical objects? Absorption spectrum; continuous spectrum; absorption and continuous spectra; Zeeman spectrum; none of these.
Let the sample space be the set of all positive integers. Is it possible to have a 'uniform' probability law, that is, a probability law that assigns the same probability to each positive integer? (Yes / No)
Find the dot product of ⟨10, 16⟩ and 4i − 4j.
Dr: HuM Cyia iaganl This &o, ra nas darekeec Man 7-n49in7" onentrg Ther (50303} d2lcedes ech have # dangtar * 5 C0 cr ard ? unformly distnbrcd crarge Toneindnbdon etonoa 00n5r0t Omlorydqi 420x10 & other tng have 50 tnat the clccn fxdarte Bit 6204d [o Ho# mla cage TJst IraL.00 cmn
How much work is done by a person lifting a 2.0-kg object from the bottom of a well at a constant speed of 6.0 m/s for 5.0 s?
b) Take x0 = 0; compute x1 and x2 using the Jacobi method, the infinity norm of the Jacobi iteration matrix, and the exact solution.
Determine whether or not the function is one-to-one and, if so, find the inverse. If the function has an inverse, give the domain of the inverse. $$f(x)=3 x + 5$$
Make a sketch of the region and its bounding curves. Find the area of the region: the region inside the circle $r=8 \sin \theta$. (A worked evaluation is sketched after this list.)
1. How is turgor pressure thought to drive plant cell growth? 2. When in solution, NaCl will dissociate into Na+ and Cl- ions. What concentration would you need to add a NaCl solution at to observe the same rates of plasmolysis as with a 0.8 M sucrose solution? 3. If you placed your onion cells in a slightly acidic environment, would you expect there to be any change in the pH of the cytosol due to the open water channels? Why? 4. Why are changes in turgor pressure important with respect to gas exchange in t...
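For the $r=8\sin\theta$ item above, the standard polar-area formula gives $$A=\frac{1}{2}\int_0^{\pi}(8\sin\theta)^2\,d\theta=32\int_0^{\pi}\sin^2\theta\,d\theta=32\cdot\frac{\pi}{2}=16\pi,$$ consistent with $r=8\sin\theta$ being a circle of radius 4 (area $\pi\cdot 4^2=16\pi$).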
|
2022-08-17 14:25:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.77190762758255, "perplexity": 11243.317978456056}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572908.71/warc/CC-MAIN-20220817122626-20220817152626-00091.warc.gz"}
|
http://academic.research.microsoft.com/Publication/47365877
|
## Keywords (2)
Publications
# On the $\gamma$-Variation of Processes with Stationary Independent Increments
Itrel Monroe, The Annals of Mathematical Statistics, DOI: 10.1214/aoms/1177692473
Let $\{X_t; t \geqq 0\}$ be a stochastic process in $R^N$ defined on the probability space $(\Omega, \mathscr{F}, \mathbf{P})$ which has stationary independent increments. Let $\nu$ be the Lévy measure for $X_t$ and let $\beta = \inf\{\alpha > 0: \int_{|x| < 1}|x|^\alpha\nu(dx) < \infty\}$. For each $\omega \in \Omega$, let $V_\gamma(X(\bullet, \omega); a, b) = \sup \sum^m_{j=1} |X(t_j, \omega) - X(t_{j-1}, \omega)|^\gamma$ where the supremum is over all finite subdivisions $a = t_0 < t_1 < \cdots < t_m = b$. Then if $\gamma > \beta$, $\mathbf{P}\{V_\gamma(X(\bullet, \omega); a, b) < \infty\} = 1$.
Journal: The Annals of Mathematical Statistics , vol. 43, no. 1972, pp. 1213-1220, 1972
Full text: projecteuclid.org
## Citation Context (5)
• ...The results of Blumenthal and Getoor [5, Theorems 4.1, 4.2] and Monroe [23, Theorem 2] combine to provide a similar result for normalized Lévy processes...
• ...Theorem 6.4 [5, 23] Let X ={ Xt ,t ≥ 0} be a normalized Lévy process in R d . Then...
### Jamison Wolf. Random Fractals Determined by Lévy Processes
• ...Theorem 1.5 [15, Theorem 2] Let (Xt)t≥0 be a Lévy process in Rn without a...
• ...[15, Theorem 1] shows that the index of the process τ(s) is half that of the...
### David R. E. Williams. Path-wise solutions of SDE's driven by Levy processes
• ...Remark 4.1 In [13] the characterisation of the sample path p-variation of all Lévy processes is completed...
### David R. E. Williams. Diffeomorphic flows driven by Levy processes
• ...ments of a stochastically continuous process with stationary independent increments, the behavior of V~(g, S) has been extensively studied in [3, 8, 9, 11, 12, 16] and [17]...
### Patrick L. Brockett. Variational sums of infinitesimal systems
• ...Getoor [3] and Monroe [19] on "strong" variation generalize in an obvious way to ich...
## Citations (15)
### Random Fractals Determined by Lévy Processes
Journal: Journal of Theoretical Probability - J THEOR PROBABILITY , vol. 23, no. 4, pp. 1182-1203, 2010
### First order p -variations and Besov spaces(Citations: 2)
Journal: Statistics & Probability Letters - STAT PROBAB LETT , vol. 79, no. 1, pp. 55-62, 2009
### Generalized fractional Ornstein-Uhlenbeck processes
Published in 2008.
### Rough functions: p Variation, calculus, and index estimation(Citations: 2)
Journal: Lithuanian Mathematical Journal - LITH MATH J , vol. 46, no. 1, pp. 102-128, 2006
### Quadratic variation, p-variation and integration with applications to stock price modelling
Published in 2001.
|
2014-03-14 16:53:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7631410956382751, "perplexity": 2803.8992265845923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678694108/warc/CC-MAIN-20140313024454-00083-ip-10-183-142-35.ec2.internal.warc.gz"}
|
https://tug.org/pipermail/texhax/2009-February/011748.html
|
# [texhax] Changing \skip\footins locally
tom sgouros tomfool at as220.org
Wed Feb 11 17:28:13 CET 2009
Here's a try at a solution. It appends a \changefoot command to the end
of the \output routine. By redefining \changefoot, you can sneak
changes into the page measurements, and have them reflected in the pages
you're outputting.
Warning: this isn't what you'd call kosher, since messing with \output
often has unexpected consequences, but in the context of trying to muck
around with the final copy of a particular book, you may find it useful.
Note that I think you'll need to change the \changefoot definition
somewhere on the previous page to the one you're trying to fidget with.
But mess around with this, and I bet you'll see how to make it work for
you.
Hope this helps,
-tom
\documentclass{article}
\def\changefoot{\global\skip\footins=5\bigskipamount}
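% Save the original \output routine in a token register, then redefine
% \output so that each page runs the saved routine followed by \changefoot: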
\newtoks\newoutput%
\global\newoutput\expandafter{\the\output}%
\global\output{%
\expandafter\the\newoutput%
\changefoot}%
\AtBeginDocument{\changefoot}
\newcommand{\lotsoftext}{Now is the time for all good men to change
their documents to Unicode with the help of Yannis Haralambous's
excellent book. }
\begin{document}
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext\footnote{I'm not sure}
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext\footnote{What do we have here?}
\lotsoftext\def\changefoot{\global\skip\footins=10\bigskipamount}
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
Hi there.
\lotsoftext
\lotsoftext
\lotsoftext\footnote{Did it change?}
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\lotsoftext
\end{document}
--
------------------------
tomfool at as220 dot org
http://sgouros.com
http://whatcheer.net
|
2019-04-20 16:38:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8758208155632019, "perplexity": 5213.152931562326}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578529898.48/warc/CC-MAIN-20190420160858-20190420181954-00007.warc.gz"}
|
https://quant.stackexchange.com/tags/portfolio-management/hot
|
# Tag Info
26
This question goes to whether the historical returns to factors represent: Spurious results, overfitting, data mining... Mispricing Unexploitable effects Compensation for risk Case 1: Spurious results etc... If someone constructs a "stock tickers that begin with AAP or GOO" factor, the highly above average returns would almost certainly reflect a fishing ...
11
In my experience, a VaR or CVaR portfolio optimization problem is usually best specified as minimizing the VaR or CVaR and then using a constraint for the expected return. As noted by Alexey, it is much better to use CVaR than VaR. The main benefit of a CVaR optimization is that it can be implemented as a linear programming problem. Another option I have ...
11
Have a look at this classic paper: Honey, I Shrunk the Sample Covariance Matrix by O. Ledoit and M. Wolf The abstract answers your question already: The central message of this article is that no one should use the sample covariance matrix for portfolio optimization. It is subject to estimation error of the kind most likely to perturb a mean-...
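As a rough illustration of the shrinkage idea, using scikit-learn's LedoitWolf estimator on simulated returns (the dimensions below are arbitrary, not from the paper):

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(1)
returns = rng.normal(size=(120, 50))   # hypothetical: T=120 months, N=50 assets

lw = LedoitWolf().fit(returns)
Sigma_shrunk = lw.covariance_          # shrunk covariance estimate
print(lw.shrinkage_)                   # estimated shrinkage intensity in [0, 1]
```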
11
After having done a lot of research on the topic I found the following excellent research piece on ETF.com: Wealthfront modifies historic asset-class returns with current market implied expected returns (Black-Litterman) as well as with the in-house views of Chief Investment Officer Burton Malkiel’s team. In addition, Wealthfront sets minimum and ...
11
Alphas from a time-series regression are error terms in the cross-sectional, linear relationship between expected returns and factor betas. If a factor model were correct those error terms (the alphas) would be zero. Discussion A carefully written version of a standard time-series regression of returns in excess of the risk free rate on market excess ...
11
Define excess return $r^x_{it} = r_{it} - r^f_{t}$ as the return $i$ minus the risk free rate, and $f_{jt}$ similarly denotes the excess return of factor $j$ at time $t$. Let's say we have some factor model of returns where: $$r^x_{it} = \alpha_i + \sum_j \beta_{i,j} f_{jt} + \epsilon_{it}$$ F-test / GRS Test If we assume the error terms $\epsilon_{it}$ ...
11
The underlying problem: your ACTR constraints aren't convex The $i$th constraint on your risk contribution can be written: $$w_i \sum_j \sigma_{ij} w_j \leq c_i s$$ And this isn't a convex constraint because of the $w_j w_i$ terms (a function $g(x,y)=xy$ isn't convex in $x$ and $y$). They're not convex constraints, so you won't be able to write them as ...
10
This is indeed an interesting question. According to this website, a paper by Goldman Sachs [Tierens and Anadu (2004)] proposes three alternative methods for estimating average stock correlations: Calculate a full correlation matrix, weighting its elements in line with the weight of the corresponding stocks in the portfolio/index, and excluding ...
9
If you measure risk by the standard deviation of the portfolio return $$\sigma = \sqrt{w^T \Sigma w},$$ then it is usual to define risk contributions for each asset by $$\sigma_i = w_i (\Sigma w)_i/\sigma,$$ then diversified could mean that these $\sigma_i$ are evenly spread over the assets in the portfolio. You find this approach and more in this paper ...
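A small numerical sketch of this decomposition (the weights and covariance matrix below are made up for illustration):

```python
import numpy as np

w = np.array([0.5, 0.3, 0.2])                  # portfolio weights
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])         # asset covariance matrix

sigma = np.sqrt(w @ Sigma @ w)                 # portfolio volatility
rc = w * (Sigma @ w) / sigma                   # per-asset risk contributions
assert np.isclose(rc.sum(), sigma)             # contributions sum to total risk
```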
9
There's no easy answer to your question, as noob2 pointed out. You can look online for info from Universa. That fund does exactly what you are asking: https://www.universa.net/riskmitigation.html Of course, post a crash, such as the one we just experienced, the cost of hedges is larger than it is prior to such events. Understand that you aren't going ...
8
Bernd Scherer has done exactly this test in his text "Portfolio Construction and Risk Budgeting 4th Edition". There is an SSRN paper by Scherer called "Resampled Efficiency and Portfolio Choice (2004)" you can take a look at as well. I would suggest you skip re-sampling (especially if you have a long-only portfolio) and take a look at Meucci's Robot ...
8
The VaR constraint is convex and quadratic and can be handled with any solver that supports quadratic constraints, like Gurobi, CPLEX (from IBM) or Xpress (from FICO). The CVaR can be formulated as a linear program if you are able to perform Monte Carlo simulations on the returns. Briefly, the LP model is \begin{eqnarray*} c &\ge& \alpha + {1 \over (...
8
It appears that you are re-running the regression with each new data point. Instead, you should use an update/online formula (see an excellent answer by the famous Dr. Huber at stats.se). You can find an implementation in the R package biglm. If it doesn't have all the features you need (no windowing out of old data) you can at least adapt it and use it ...
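For concreteness, a generic recursive least-squares update of the kind alluded to above might look like this sketch (it is not the biglm implementation, and the function name is made up):

```python
import numpy as np

def rls_update(beta, P, x, y):
    """One recursive least-squares step for the model y = x @ beta + noise.

    beta: current coefficients; P: current approximation of (X'X)^{-1};
    x, y: the newly arrived observation.
    """
    Px = P @ x
    k = Px / (1.0 + x @ Px)            # gain vector
    beta = beta + k * (y - x @ beta)   # fold the new point into the fit
    P = P - np.outer(k, Px)            # rank-one downdate of (X'X)^{-1}
    return beta, P

# Typical start: beta = np.zeros(p), P = 1e6 * np.eye(p); then call
# rls_update once per new observation instead of refitting the regression.
```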
8
To clarify notation, you have an universe of $n=2000 \space$ stocks and two portfolio vectors $\mathbf{a},\mathbf{b}\in\mathbb{R}^{n}$ with $\left\|\mathbf{a}\right\|_{1}=\left\|\mathbf{b}\right\|_{1}=1$. Further, you have Estimators for the true Variance $\operatorname{Var}\left[\mathbf{a}\right]$ resp. $\operatorname{Var}\left[\mathbf{b}\right]$ and the ...
8
The estimation of a covariance matrix is unstable unless the number of historical observations $T$ is greater than the number of securities $N$ (5000 in your example). Consider that 10 years of data represents only 120 monthly observations and about 2500 daily observations. Depending on the application, using data dating farther back than 10 years may be ...
8
Generally speaking, let us consider a problem where you have a series of simple payoffs $f_{K_i}(S_T)$ of strike $K_i$, $i \in I$, that depend on the value of $S_T$ at time $T$, as well as a more complex, laddered payoff $P_L(T)$ which pays a quantity $g_i(S_T)$ on regions of the form $\{K_i \leq S_T < K_{i+1}\}$ $-$ regions are delimited by the strikes ...
7
Step 1: Get your data from SQL into R -> http://www.r-bloggers.com/?s=SQL Step 2: Run your analysis/optimizations like -> http://www.r-bloggers.com/portfolio-optimization-in-r-part-1/ or http://blog.streeteye.com/blog/2012/01/portfolio-optimization-and-efficient-frontiers-in-r/ or via RMetrics: http://www.statistik.wiso.uni-erlangen.de/lehre/bachelor/...
7
The term in sample and out of sample are commonly used in any kind of optimization or fitting methods (MVO is just a particular case). When you make the optimization, you compute optimal parameters (usually the weights of the optimal portfolio in asset allocation) over a given data sample, for example, the returns of the securities of the portfolio for the ...
7
This is indeed a subtle point. What is generally meant with this statement is that correlation is going up in bear markets, so it is not so much the "turmoil" part (i.e. volatility per se) but the "trend" (i.e. negative in this case) part. Putting it another way is that when you control for volatility not the correlation but the covariance (which is the part ...
7
CAPM states that the expected return of any given asset should equal $ER_i=R_f+β_i (R_m-R_f)$, with α being the error term of the previous equation. Now, as α has an expected value of zero, the only way to achieve higher expected returns is taking on more β (given that $E[(R_m-R_f )]>0$). Every individual stock has some idiosyncratic risk in addition to ...
7
A lot has happened since Markowitz and Sharpe. While their work is still considered foundational, the empirical/practical relevance of their models has been questioned by later work. Here are a few more recent articles about portfolio theory, in no particular order (all accessible online): Jorion: Bayes-Stein Estimation for Portfolio Analysis, JFQA, 1986 ...
7
I will be glad to help, but let me first advise you away from working on this topic until you have an academic position. This topic has been poison for me, but I am slogging on anyways. Before you use anything I do, get permission from your academic advisor. I have an unpublished article on options pricing, and I am proposing a new branch of stochastic ...
6
Strictly speaking, this is a proxy hedging problem. You have to hedge one currency with another. The one-period covariance matrix is assumed to be $$\Sigma_{1}=\left[\begin{array}{cc} 0.03 & 0\\ 0 & 0.025 \end{array}\right]\left[\begin{array}{cc} 1 & 0.8\\ 0.8 & 1 \end{array}\right]\left[\begin{array}{cc} 0.03 & 0\\ 0 & 0.025 \end{array}\right]$$
6
Transaction costs - even for banks, funds etc, every trade has an associated cost, so if you would be buying a small number of shares, it's probably cheaper to carry the risk and not make those small trades. The source data is imperfect, and contains noise. A lot of the smaller components are simply artefacts of that noise so it would be both an unnecessary ...
6
Mean-variance (MV) is a framework rather than a prescription. This framework allows one to make, discuss, and defend his investment decision. In practice, there are many ways to make adjustments to this framework, if you believe they will improve performance. E.g. you can adjust the framework by stating "I will MV-optimize weights subject to "0" if the ...
6
There are more ways to approach this but the method I propose should work reasonably well in practice, especially if you increase the number of assets you hold. Calculate the beta of the stocks you're holding with respect to an index. Buy N_f (sell when N_f is negative) futures contracts on that index. N_f can be calculated as $$N_f = \frac{\beta_T - \...
6
Seems like a small mistake in the last equation. It should read $\Delta^* = A^{-1} \left[\mu-\gamma \Sigma \omega_c - \frac{1}{\iota'A^{-1}\iota} \iota' A^{-1}(\mu-\gamma \Sigma \omega_c )\iota\right]$, which is not equivalent to your result.
6
I just want to add to vonjd's answer some info on the comparison of the 3 methods. This is too big for a comment so I'm posting as a separate answer but please upvote his answer, not mine. Do the differences in methodologies matter in practice? To gauge the practical importance of the biases in methods 2 and 3, we calculate the weighted stock correlation ...
6
To add another perspective see this current and very relevant article with many unique and original insights (Kritzman is one of my favorite authors anyway): Cocoma, Paula and Czasonis, Megan and Kritzman, Mark and Turkington, David, Facts About Factors (April 6, 2015). MIT Sloan Research Paper No. 5128-15. Available at SSRN: https://ssrn.com/abstract=...
|
2020-10-23 00:40:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6992352604866028, "perplexity": 1076.5458571689683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880401.35/warc/CC-MAIN-20201022225046-20201023015046-00064.warc.gz"}
|
https://electronics.stackexchange.com/questions/236758/understanding-why-voltage-and-current-sources-are-to-be-divided-by-s-in-the-freq
|
# Understanding why voltage and current sources are to be divided by s in the frequency domain
I have been told that whenever I'm drawing a circuit in the frequency (s) domain, the voltage source is to be replaced by V/s and the current source by I/s.
But what exactly is the reason?
When I'm plotting this circuit (replacing s with rad/s), I get that the voltage across R decreases as the frequency increases. Intuitively, I would say that the voltage or current source should only have a frequency component at s=0, and 0 everywhere else; instead, we have that as s approaches 0, the value of the voltage or current source approaches $\infty$.
simulate this circuit – Schematic created using CircuitLab
It's because the Laplace transform of a constant source of amplitude A, switched on at t=0, is: $$A \to \dfrac{A}{s}$$
You assume that the source is applied at t=0: $$\int_{0}^{\infty} A e^{-st} \, dt = \frac{A}{s}$$
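A quick symbolic check of this transform (a sketch, assuming the source is the constant A switched on at t = 0):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s, A = sp.symbols('s A', positive=True)

# Laplace transform of a constant A applied at t = 0, computed directly
F = sp.integrate(A * sp.exp(-s * t), (t, 0, sp.oo))
print(sp.simplify(F))  # -> A/s
```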
|
2019-07-19 16:49:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7447237372398376, "perplexity": 496.9162338671856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526324.57/warc/CC-MAIN-20190719161034-20190719183034-00427.warc.gz"}
|
https://www.springerprofessional.de/en/do-innovation-and-financial-constraints-affect-the-profit-effici/23534070
|
## Swipe to navigate through the articles of this issue
Open Access 23-09-2022 | Regular Article
# Do innovation and financial constraints affect the profit efficiency of European enterprises?
Authors: Graziella Bonanno, Annalisa Ferrando, Stefania Patrizia Sonia Rossi
print
PRINT
insite
SEARCH
## Abstract
This paper investigates the relationship between profit efficiency, finance and innovation. By adopting stochastic frontiers, we pioneer the use of a novel dataset merging firm level survey data with balance sheet information for a large sample of European companies. We find that firms having difficulties in access to finance as well as firms introducing product innovation display an incentive to improve their efficiency. While innovation produces benefit for firms’ profitability, financial constraints impose a discipline to the firms forcing them to cut unproductive costs that reduce the profitability. We document nuanced differences between firms in industry and services, while they are more pronounced when we look at disaggregation across High-Tech and Low-Tech companies. From a policy perspective, our results enrich the understanding on the link between innovation, financial constraints and efficiency, which goes beyond the idea that easier access to finance is the panacea to get higher performance.
Notes
## Supplementary Information
The online version contains supplementary material available at https://doi.org/10.1007/s40821-022-00226-z.
## Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## 1 Introduction
This study assesses how firms’ performances can stem from innovation efforts and financial friction in credit access by using the perspective of the economic efficiency approach.
From a broader perspective, the relation between performance, innovation growth and their links with finance has been well documented in the literature (Acemoglu et al., 2006; Aghion et al., 2010; Bartelsman et al., 2013; Love & Roper, 2015). Technological innovation is indeed a critical element in enhancing and fostering firm performance and therefore is considered a conductor of economic development (Acemoglu et al., 2006; Aghion & Howitt, 1998; Archibugi & Coco, 2004; Goedhuys & Veugelers, 2012; Grossman & Helpman, 1991; Romer, 1990). Through research and development (R&D) activities, firms are able to launch new products and services for the market, which allow them to attain a strategic advantage over competitors (see among others, Dosi et al., 2015; Love & Roper, 2015; Ferreira & Dionísio, 2016).
The literature has also underlined that the growth ambitions of firms are often compromised or weakened by the presence of financial constraints, which are particularly binding for small- and medium-sized enterprises (SMEs) that often suffer from a lack of transparency in their credit records, a lack of own capital and a limited ability to provide collateral (Acharya & Xu, 2017; Becker, 2015; Cowan et al., 2015; Pigini et al., 2016). When financial frictions are strong, such as during the recent great financial crisis (Agénor & Pereira da Silva, 2017; Carbo-Valverde et al., 2015), enterprises tend to counteract their adverse impact on profitability by reducing investment activities and, in particular, by abandoning innovation projects (García-Quevedo et al., 2018).
However, looking closely at the complex links of firm efficiency with innovation and financial constraints, we discover that they remained largely unexplored in the literature and they deserve additional scrutiny. More specifically, while few studies—closely related with our research target—have investigated the interplay between efficiency and financial constraints (Bhaumik et al., 2012; Maietta & Sena, 2010; Sena, 2006; Wang, 2003), the academic research directly focusing on the relation between innovation and firm profit efficiency is relatively scant. In fact, the literature has rather concentrated on the link between innovation and firm profitability or firm performance from one side (i.e., Koellinger, 2008; Lööf & Heshmati, 2006; Shao & Lin, 2016), and on the effect of R&D activities on innovation efficiency, from the other side (i.e., Yang et al., 2020; Zhang et al., 2018). Furthermore, by focusing on productivity, several studies have explored the effects that financial constraints will exert on firm productivity—without tackling the perspective of economic efficiency (Butler & Cornaggia, 2011; Ferrando & Ruggieri, 2018; Jin et al., 2019; Midrigan & Xu, 2014). Others have focused on the link between innovation efforts and firm productivity (Calza et al., 2018; Dabla-Norris et al., 2012; Dai & Sun, 2021; Kumbhakar et al., 2012), showing that enterprises, particularly European ones, display a low ability to translate R&D activities into productivity gains (Castellani et al., 2019; Ortega-Argilés et al., 2015).
Building on these research inputs, our paper aims at filling the gap in the literature by providing, in a unified framework, novel evidence on whether innovations efforts and obstacles in access to finance affect firms’ profit efficiency. The empirical analysis relies on a unique firm-level dataset comprising a large sample of European SMEs and large enterprises over the period 2012–2017. The investigation compares firm profit performance across sectors taking into account also the technological and knowledge-intensive content of their activities.
Our main contributions to the literature move along the following three dimensions.
First, to the best of our knowledge, there are no other studies that test in a unified framework whether innovation efforts and financial constraints exert a significant effect on firms’ performance. In particular, we enrich our understanding on firms’ performance by adopting the stochastic frontier approach (SFA) to estimate profits functions and to obtain efficiency scores for a sample of European firms.
As well known in the literature, profit efficiency measures the distance between the current profit of a firm and the efficient profit frontier (Berger & Mester, 1997). Compared to cost/revenue efficiency and other measures based on financial ratios, profit efficiency is able to account for the overall firm performance (Arbelo et al., 2021; Chen et al., 2015; Pilar et al., 2018).
To estimate profit frontiers, we prefer to use the SFA, proposed by Battese and Coelli (1995), which offers several theoretical and empirical advantages. First, it allows to formulate a model for inefficiency in terms of observable variables (Coelli et al., 2005; Kumbhakar & Lovell, 2000) and second, by exploiting the panel dimension of the data, it allows to overcome the shortcomings of time-invariant firm-level inefficiency, while benefitting from easier identification and smaller bias (Cornwell & Smith, 2008; Greene, 2005, among others).
Second, we pioneer the use of a novel dataset that merges firms’ survey-based replies derived from the European Central Bank Survey on access to finance for enterprises (ECB SAFE) with their financial statements—taken from AMADEUS by Bureau van Dijk (BvD). From the survey data, we retrieve harmonized and homogeneous information on several aspects of financial constraints and innovation for a large set of European countries. From the financial statements, we make use of output and input variables to be included in the production frontier as well as of other financial information useful to define firms’ characteristics. In fact, this unique dataset allows us to rely on various indicators of financial constraints and innovation activities. The literature has underlined the difficulties in directly measuring the financial constraints and have relied on indirect proxies (Farre-Mensa & Ljungqvist, 2016). By contrast, we use both perceived and objective indicators of financial constraints based on the qualitative responses of surveyed firms and we complement them with a quantitative measure based on the cash flow available to firms (Fazzari et al., 1988).
Third, to consider the different technologies and production functions across sectors, we complement our dataset with the Eurostat classification on high-technology/knowledge-intensive sectors. We then estimate different frontiers for two main productive sectors—Industry and Services—and for two main technological sectors—High-technology and Low-technology sectors- as well as for their subsectors, up to 5 distinct macro sectors.
This approach allows us to exploit an alternative sectoral heterogeneity—in a similar fashion to Baum et al. (2017) and to Pellegrino and Piva (2020)—which might bring novel evidence on the topic. In fact, high-technology/knowledge-intensive companies in industry and services turn out to be more similar to each other than high-technology/knowledge-intensive and low-technology/knowledge-intensive companies are within the two sectors.
Based on a variety of model specifications, we document that innovation has an important impact on firms’ profit efficiency. Additionally, bearing in mind that policymakers and economists generally agree that well-functioning financial institutions and markets contribute to economic growth (Urbano & Alvarez, 2014), we provide evidence that, in the presence of market failure, financial constraints induce firms to improve efficiency. We also contribute to the literature by providing novel evidence on the effects of debt maturity and the cost of debt for the efficiency decisions of firms. We find that firms with long-term debt tend to increase efficiency. As for the cost of debt, the impact is different depending on the sector in which a firm operates. More precisely, firms operating in Services seem to have more stringent cost burden on the debt side impacting negatively on profit efficiency. An opposite picture emerges for firms in the manufacturing sector. This supposedly occurs because of a better bank-firm relation that seems to favor firms operating in the traditional manufacturing sector. Finally, we document the presence of heterogeneity across technology and knowledge-based sectors. Specifically, for high-technology and high-knowledge-intensive companies product innovation has a strong positive impact on profit efficiency, while for low-technology and low-knowledge-intensive companies product innovations negatively affect firms’ efficiency.
The rest of the paper is organized as follows. The next section highlights the theoretical background and the research hypotheses. In Sect. 3 we present the methodological issues, and we describe the firm-level database as well as the empirical models. The estimated results are presented in Sect. 4, while the last section concludes.
## 2 Theoretical background and research hypotheses
Efficiency in production function focuses on the relationship between inputs and outputs, and a production plan is called efficient if it is not possible to produce more using the same inputs, or to reduce these inputs leaving the output unchanged (Farrell, 1957). The presence of frictions either in terms of agency problems, lags between the choice of the plan and its implementation or inertia in human behavior and bad management, can drive observable data away from the optimum production plan (Leibenstein, 1978) and create instances of technical inefficiency.
Focusing on the economic efficiency perspective to assess firms’ performance, in this section we shortly refer to the theoretical and empirical evidence related to our research. From one side we consider the link between performance, profit efficiency and financial constraints; on the other side the link between performance, profit efficiency and innovation. Building on those links we develop our research hypotheses that provide the backbone of our empirical model.
### 2.1 Performance, technical efficiency and financial constraints
It is well known that financial frictions influence firm performance (Farre-Mensa & Ljungqvist, 2016) and the issue of financial constraints has been investigated in literature using different theoretical perspectives, such as monetary policy (Bernanke et al., 1996), corporate finance (Hanousek et al., 2015) and entrepreneurship (Kerr & Nanda, 2009, Kerr & Nanda, 2015).
Looking at the link between finance and productivity, some studies argue that lower financial constraints exert a positive effect on productivity and growth (Aghion et al., 2010, and Aghion et al., 2012, for France, and Manaresi & Pierri, 2017, for Italy) as firms exposed to higher financial constraints lower their investment, in particular on assets that have a strong impact on productivity. By contrast, the strand of the literature focusing on the cleansing “Schumpeterian” effect of financial constraints points to the fact that the most productive firms crowd out the least efficient ones. In the environment of low real interest rates and low financial constraints which characterized the period just before the financial crisis, the cleansing mechanism was weakened, with a detrimental impact on average productivity growth (Cette et al., 2016; Gopinath et al., 2017). Interestingly, Jin et al. (2019) show that financial constraints might have two opposite effects on firm productivity: on one side, financial constraints increase productivity because they force firms to clean out “sub-optimal investment”; on the other side, they harm productivity because the scarcity of financial resources reduces “productivity-enhancing investment”.
If we look at the effect that financial constraints exert on firm efficiency, it is indisputable that the increase in the cost of borrowing has a negative impact on firms’ performance in terms of investment activity. Due to the presence of information asymmetries, borrowing external funds turns out to be more expensive for firms than using internal finance (Nickell & Nicolitsas, 1999). In the presence of binding financial frictions, enterprises tend to counteract their adverse impact on profitability by reducing investment activities and improving their efficiency in order to reduce the risk of failure (Bhaumik et al., 2012; Maietta & Sena, 2010; Sena, 2006).
The preceding arguments lead us to test whether being financially constrained has an impact on firms profit efficiency, by formulating the following hypothesis:
H1
Binding finance constraints exert a positive effect on firms’ efficiency, as debt constrained firms try to reduce their risk of failure.
While this hypothesis has already been researched in the literature, our empirical testing based on survey and balance sheet data is quite novel and could contribute to the actual debate.
### 2.2 Performance, technical efficiency and innovation
Our second working hypothesis relates to the innovative activity of companies. While it is easily recognized the central role of innovation as engine of economic growth (Grossman & Helpman, 1991; Romer, 1990; Solow, 1957), the relationship between innovation and efficiency in production is more complex and contingent to several factors.
Starting from the seminal work of Griliches (1979), the literature has investigated the impact of R&D activities on productivity. Innovation can cause shifts outwards in the production frontier and this in turn might reduce the inefficiency of firms that do not lie on the technical efficient frontier (Aghion et al., 2005; Nickell, 1996). More recently, a stream of the literature has documented that R&D activities contribute—with different nuances—in improving firm productivity (Heshmati & Kim, 2011; Janz et al., 2004; Klette & Kortum, 2004; Lööf & Heshmati, 2006; Ugur et al., 2016), especially for high-tech sector (Castellani et al., 2019; Kumbhakar et al., 2012; Ortega-Argilés et al., 2015).
However, if innovation efforts are just inputs and do not generate innovation outputs they shall be considered as sunk costs that do not necessarily increase firm performance (Koellinger, 2008). Additionally, some scholars have also underlined the limited ability of firms, particularly European ones, in translating R&D efforts into productivity gains (Castellani et al., 2019; Ortega-Argilés et al., 2015).
Looking more closely at the stream of literature on the impact of innovation on profitability, many papers provide support to a positive relationship (Cefis & Ciccarelli, 2005; Geroski et al., 1993; Leiponen, 2000; Lööf & Heshmati, 2006), which may arise because either innovative firms are able to shield their new products from competition, or they display higher internal capabilities, compared to non-innovators (Love et al., 2009).
By contrast, other studies do not find a clear relationship between technological innovation and firm performance (Díaz-Díaz et al., 2008), when analyzing either short term effects (Deeds, 2001; George et al., 2002; Le et al., 2006) or long term indirect effects (Schroeder et al., 2002). Furthermore, the empirical evidence on the linkages between innovation and efficiency is mixed and, in some cases, it even documents a trade-off between them (Zorzo et al., 2017).
The link between innovation and performance is crucial also for the productivity literature that recognizes the strategic role of profit-seeking entrepreneurs investing on R&D activities for higher productivity, and, consequently, for higher economic growth (Bravo-Ortega & García-Marín, 2011; Castellani et al., 2019; Foster et al., 2008; Kumbhakar et al., 2012; O’Mahony & Vecchi, 2009; Ortega-Argilés et al., 2015).
We bring new evidence to the extant contributions by employing the economic efficiency approach to measure firm performance. As we focus on the output of innovation efforts (product innovation introduced by firms) rather than the inputs of innovations efforts (R&D investments), in our investigation we look at the subsequent efficiency gains stemming from increasing revenues.
Consequently, we would like to investigate whether undertaking innovation exerts an effect on firms’ profit efficiency by formulating our second working hypothesis as follows:
H2
Innovation output improves profit efficiency, as the developments of new products or services are aimed at attaining a strategic advantage over competitors and leveraging revenues.
## 3 Empirical setting
### 3.1 Stochastic frontier approach
In the previous section we introduced the theoretical reasoning of why the presence of financing constraints and the innovation efforts of firms may have a positive impact on profit efficiency. To test these hypotheses, we estimate the profit functions by employing the SFA, which is a stochastic method that allows companies to be distant from the frontier also for randomness (Aigner et al., 1977; Meeusen & van den Broeck, 1977).1 The SFA is a parametric method, which means that it assigns a distribution function to the stochastic component of the model and, thus, allows making inference. In our analysis we make use of the specification introduced by Battese and Coelli (1995), which permits the simultaneous estimation of the stochastic frontier and the inefficiency model, given appropriate distributional assumptions associated with panel data. This approach improves, in terms of consistency, previous modeling based on two-step approaches.2
We estimate a two-inputs-one-output model described by the following Translog profit frontier.3
$$\mathrm{log}\left(\frac{{Profit}_{it}}{{w}_{kit}}\right)= {\beta }_{0}+{\beta }_{1}\mathrm{log}{y}_{it}+{\delta }_{1}\mathrm{log}\frac{{w}_{lit}}{{w}_{kit}}+\frac{1}{2}\left[{\beta }_{2}{\left(\mathrm{log}{y}_{it}\right)}^{2}+{\delta }_{2}{\left(\mathrm{log}\frac{{w}_{lit}}{{w}_{kit}}\right)}^{2}\right]+\alpha \left(\mathrm{log}{y}_{it}\right)\left(\mathrm{log}\frac{{w}_{lit}}{{w}_{kit}}\right)+\gamma {controls}_{cst}+{v}_{it}-{u}_{it}$$
(1)
where the dependent variable is the natural logarithm of value added over the cost of fixed assets, calculated for each firm i at time t.4 Additionally, y represents the output and is equal to the operating revenues; wl is the cost of labor (measured as the ratio between the personnel expenses and the number of employees), wk is the cost of fixed assets (measured as the ratio between the depreciation and total amount of fixed assets); α, β, δ and γ are the parameters to be estimated; v is the random error; u is the inefficiency. In the profit frontier, the inefficiency tends to reduce the profit, thus the composite error is equal to (v-u).
Equation (1) is an alternative profit function since it depends on inputs and output, whereas actual profits depend on the prices of outputs. It uses the same variables as for a cost function, implying that output-prices are free to vary (Huizinga et al., 2001).5,6
The specification includes also a set of control dummies (controls) to guarantee that the efficiency scores are net of additional heterogeneity: at country level, c, to exclude any geographical and institutional fixed effect; at firm size level s, to take care of the possibility of different shifts in the frontier for different group of firms (Micro, Small and Medium)7 and at each survey round t to control for the dynamics over time.
We also take into account the different technologies and production functions across sectors by estimating different frontiers for several productive sectors: Industry and Services and also two technological sectors—High-tech and Low-tech sectors- using the Eurostat classification.8
From the Eq. (1), profit efficiency (PE) is the ratio between the observed firms’ profit and the maximum level of profit achievable in case of full efficiency:
$${PE}_{it}=\frac{{F}_{p}\left({y}_{it},{w}_{it}\right){e}^{{v}_{itp}}{e}^{{-u}_{itp}}}{{F}_{p}\left({y}_{it},{w}_{it}\right){e}^{{v}_{itp}}}={e}^{-{u}_{itp}}$$
(2)
where Fp(.) indicates a generic profit function in which the profit is obtainable from producing y at input price w.
Finally, we assume that vit is normally distributed with mean zero and uit is distributed as a truncated Normal, as proposed by Battese and Coelli (1995), we estimate the following inefficiency equation:
$${u}_{it}={\eta }_{1} {Finance \, constraints}_{it}+{\eta }_{2}{ Product \, innovation}_{it}+{\eta }_{3} {Firm \, controls}_{it}+ {\eta }_{4} {macroeconomic \, controls}_{jt}+{e}_{it}$$
(3)
where i indicates the ith firm, j the country, t is time and eit the random component.
Efficiency is time-varying, ensuring a change in the relative ranking among enterprises, which accommodates the case where an initially inefficient firm becomes more efficient over time and vice versa.
To test the effect of the determinants of firms’ efficiency, we simultaneously estimate Eqs. (1) and (3), by employing the following covariates for the Eq. (3):
(i) Finance constraintsit includes a set of variables able to capture firms’ experience in their access to finance. We consider three different alternative proxies of financial constraints. First, we use the ratio between cash flow and total assets (Cash flow/Total assetsit), as the dependence to internal finance represents a particularly binding constraint for firms to finance investment (Fazzari et al., 1988; Guariglia & Liu, 2014; Sasidharan et al., 2015). We are aware of the criticism of the subsequent literature on the use of this indicator (starting by Kaplan & Zingales, 1995, and recently summarized in Farre-Mensa & Ljungqvist, 2016). For this reason, we turn to the information derived from the survey to define financial constrained firms.
Our second proxy of financial constraint is derived directly from the survey information. Problem of Financeit captures firms' perception of potential financing constraints. It is a dummy equal to one if the firm reported that access to finance represents the most relevant problem among a set of other problems (competition, finding customers, costs of production or labor, availability of skilled staff and business regulation), and 0 otherwise.
Our third financial friction indicator—Finance obstaclesit—is an “objective” measure of credit constraints, also derived from the survey. This dummy variable classifies firms as financially constrained if they report that: (1) their loan applications were rejected; (2) only a limited amount of credit was granted; (3) they themselves rejected the loan offer because the borrowing costs were too high; or (4) they did not apply for a loan for fear of rejection (i.e. discouraged borrowers). The indicator is equal to one if at least one of conditions (1)–(4) is verified, and 0 otherwise.
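The construction of the three proxies could be sketched as follows; all column names below are hypothetical placeholders (the actual SAFE item codes are not reproduced here), and the condition flags are assumed to be boolean columns:

```python
import pandas as pd

def finance_constraint_proxies(df: pd.DataFrame) -> pd.DataFrame:
    """Build the three alternative financial-constraint proxies
    described in the text from a firm-wave DataFrame."""
    out = pd.DataFrame(index=df.index)
    # (i) balance-sheet proxy: dependence on internal finance
    out["cash_flow_ratio"] = df["cash_flow"] / df["total_assets"]
    # (ii) perception-based dummy: finance reported as the most
    # relevant problem among the listed alternatives
    out["problem_of_finance"] = (
        df["most_relevant_problem"] == "access_to_finance"
    ).astype(int)
    # (iii) "objective" dummy: any of the four conditions (1)-(4)
    out["finance_obstacles"] = (
        df["loan_rejected"] | df["loan_reduced"]
        | df["refused_cost_too_high"] | df["discouraged"]
    ).astype(int)
    return out
```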
As shown in Ferrando and Mulier (2015), firms that self-report finance as the largest obstacle for their business activity often display different characteristics compared with actually financially constrained firms. For instance, the authors find that more profitable firms are less likely to face actual financing constraints, while firms are more likely to perceive access to finance as problematic when they have more debt with short-term maturity. For this reason, we consider both indicators in our analysis.
(ii) Product innovationit: is a dummy equal to one if a firm declares in the survey that it has undertaken product innovation, and 0 otherwise.9 It is worth noting that this variable has the advantage of providing direct information on the innovation undertaken by the firm, rather than information on R&D investment, which does not necessarily translate into a product innovation outcome. This is relevant for us, as we need to assess how innovation output affects revenues and therefore profit efficiency.
(iii) In addition, we use some firm-varying covariates describing firms' market and debt conditions, Firm controlsit. To capture the change in profitability—in a similar fashion to Srairi (2010) and Luo et al. (2016)—we rely on two alternative measures. The first is Profit marginit, defined as net income divided by sales; the second is Profit upit, a dummy equal to one if the firm has experienced an increase in profit in the past six months, and 0 otherwise.10 We proxy firms' debt conditions using the dummy Leverage upit, which is equal to one if the firm has experienced an increase in the debt-to-assets ratio in the past six months, and 0 otherwise. In some specifications, we also consider the maturity structure of indebtedness and the debt burden in terms of interest expenses paid on total debt. Both variables are derived from the financial statements, as explained in detail in the next section.
Finally, we use the real GDP growth rate as an additional time- and country-varying control for the business cycle, named macroeconomic controls (Ferrando et al., 2017).
### 3.2 Data
In order to test our hypotheses, we rely on a novel dataset that merges survey-based data derived from the ECB SAFE with detailed balance sheet and profit and loss information gathered from BvD AMADEUS. We also augmented the firm-level data with the firms' technological intensity as provided by the Eurostat classification on high-tech industry and knowledge-intensive services.
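A minimal sketch of this dataset construction is given below; the file names and key columns are hypothetical placeholders, since the actual identifiers used to link SAFE respondents to AMADEUS records are not reproduced here:

```python
import pandas as pd

# Survey responses (waves 8-17) and balance sheet / P&L data.
safe = pd.read_csv("safe_waves_8_17.csv")
amadeus = pd.read_csv("amadeus_financials.csv")

# Keep only surveyed firms with matched financial statements.
panel = safe.merge(amadeus, on=["firm_id", "year"], how="inner")

# Attach the Eurostat technology/knowledge-intensity class by
# 2-digit NACE code (high-tech, ..., less knowledge-intensive).
eurostat = pd.read_csv("eurostat_tech_intensity.csv")
panel = panel.merge(eurostat, on="nace_2digit", how="left")
```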
SAFE gathers information about access to finance for non-financial enterprises in the European Union. It is an on-going survey conducted on behalf of the European Commission and the European Central Bank every six months since 2009. The sample of interviewed firms is randomly selected from the Dun and Bradstreet database and is stratified by firm-size class, economic activity and country. The firms' selection guarantees satisfactory representation at the country level.
The combined dataset has several advantages. First, we retrieve harmonized and homogeneous information on several aspects of financial constraints and innovation from the survey dimension of the dataset. Second, we can use the output and input variables to be included in the production frontier, as well as other firm-level information useful for our research trajectory (e.g. leverage composition and profitability measures). Third, we are also able to disentangle the different technological characteristics of the firms: high-, medium- and low-technology industries, and knowledge-intensive and less knowledge-intensive services.
Our investigation is based on firms belonging to the following eight European countries (Austria, Belgium, France, Finland, Germany, Italy, Portugal and Spain),11 observed from the 8th wave (second half of 2012)12 to the 17th wave (first half of 2017).
Moreover, we focus our analysis only on firms belonging to Industry and Services.13 This choice is driven by the following considerations: (i) they are the largest sectors, with Industry accounting for about 19% of European GDP and Services for about two thirds of European value added (Eurostat Data, 2016); (ii) they have displayed divergent trends in recent years in terms of shares of value added in GDP, with a declining trend for Industry and an increasing one for Services (Stehrer et al., 2015); (iii) the two sectors also differ in the efficiency of resource allocation, as shown by the allocative efficiency index, which is particularly low for Services (European Commission, 2013).
Our starting sample includes almost 30,000 observations, of which 53% are from the Industry sector and 47% from the Services sector. Once we take into consideration the variable Product innovation, the sample shrinks by more than half, to 7279 observations for Industry and 5864 for Services. Table 1 displays some descriptive statistics of the variables used in defining the frontiers (Panel A) and the determinants of efficiency (Panel B) for both sectors of activity. All balance sheet data are deflated using the HICP index.
Table 1 Descriptive statistics for profit functions and inefficiency equations

| Variable | Industry: N. obs | Industry: Mean | Industry: Std. dev. | Services: N. obs | Services: Mean | Services: Std. dev. |
|---|---|---|---|---|---|---|
| **Panel A: Profit frontier** | | | | | | |
| Value addedb | 7279 | 12,376 | 50,333 | 5864 | 8171 | 75,842 |
| Operating revenuesb | 7279 | 45,179 | 160,000 | 5864 | 25,462 | 370,000 |
| Labor costb | 7279 | 43.606 | 59.337 | 5864 | 50.714 | 293.230 |
| Capital costb | 7279 | 0.236 | 0.631 | 5864 | 0.374 | 3.236 |
| Log (value added/capital cost)b | 7279 | 10.042 | 2.030 | 5864 | 9.073 | 2.299 |
| Log (operating revenues)b | 7279 | 9.155 | 1.827 | 5864 | 7.815 | 1.944 |
| Log (labor cost)b | 7279 | 5.599 | 1.198 | 5864 | 5.501 | 1.427 |
| Number of employeesa | 7279 | 205 | 2402 | 5864 | 169 | 1495 |
| **Panel B: Inefficiency equation** | | | | | | |
| Cash flow to total assetsb | 7279 | 0.067 | 0.082 | 5864 | 0.078 | 0.115 |
| Problem of financea | 7090 | 0.136 | 0.343 | 5687 | 0.143 | 0.350 |
| Finance obstaclesa | 5327 | 0.111 | 0.314 | 4091 | 0.108 | 0.310 |
| Product innovationa | 7279 | 0.458 | 0.498 | 5864 | 0.314 | 0.464 |
| Profit upa | 7279 | 0.320 | 0.466 | 5864 | 0.288 | 0.453 |
| Profit marginb | 7090 | 0.017 | 0.085 | 5687 | 0.017 | 0.109 |
| Leverage upa | 7279 | 0.190 | 0.392 | 5864 | 0.186 | 0.389 |
| ROEb | 7264 | 3.148 | 15.095 | 5842 | 3.336 | 14.351 |
| Financial leverageb | 6936 | 0.216 | 0.191 | 5349 | 0.220 | 0.223 |
| Use bank loansa | 7279 | 0.367 | 0.482 | 5864 | 0.277 | 0.448 |
| Use credit linesa | 7279 | 0.468 | 0.499 | 5864 | 0.384 | 0.486 |
| Long-term debtb | 5830 | 0.496 | 0.345 | 4343 | 0.600 | 0.360 |
| Interest ratiob | 5830 | 0.075 | 0.103 | 4343 | 0.079 | 0.112 |
| Real GDP growth | 7279 | 0.692 | 1.464 | 5864 | 0.795 | 1.652 |
| Microa | 7279 | 0.096 | 0.294 | 5864 | 0.325 | 0.468 |
| Smalla | 7279 | 0.299 | 0.458 | 5864 | 0.293 | 0.455 |
| Mediuma | 7279 | 0.448 | 0.497 | 5864 | 0.272 | 0.445 |
| Largea | 7279 | 0.157 | 0.364 | 5864 | 0.110 | 0.312 |

Source: our elaboration on data from ECB/EC SAFE & BvD AMADEUS
Regarding the production factors in Panel A, firms in Industry report on average higher operating revenues with lower labor and capital costs than firms in Services. Moreover, they are also on average bigger than firms in Services in terms of number of employees. From Panel B, our sample is mostly composed of SMEs, by construction of the survey.14 Around 13.6% of firms in Industry and 14.3% in Services perceived access to finance as a major problem. A slightly lower percentage of firms (around 11%) are financially constrained according to the objective indicator Finance obstacles. Regarding innovative activity, around 46% and 31% of firms indicated that they introduced product innovation in the previous six months in Industry and Services, respectively.
Turning to firms’ financial position, the average firm in our sample is profitable, with a profit margin of 1.7% in both sectors, although looking at the distribution we see that at least 10% of all firms in our sample report losses. On average, firms in our sample can generate internal funds (6.6% and 7.5% of total assets for Industry and Services, respectively). At the same time, at least 19% of companies report increasing debt to total assets in the previous six months (Leverage up). In the table we also report some additional financial ratios to better quantify the financial conditions of firms in the sample: financial leverage and return on equity (ROE).15 On average, firms have financial debt of around 22% of total assets, while the return on equity ratios are just above 3%. Although relatively low, this latter ratio indicates some efficiency by firms in using their equity. It also emerges from the survey that the use of bank loans and credit lines is more relevant for firms operating in Industry (37% and 47%) than in Services (28% and 38%).16 Looking at the debt maturity structure, firms in the manufacturing sector tend to use on average the same amount of short- and long-term debt (50%), while those in the service sector report a slightly higher percentage of long-term debt (60%).17 As for the interest burden, this is slightly higher for firms in the service sector (7.9%) than for those in the manufacturing sector (7.5%).
Looking at size classes, our sample has a total of 2876 (3626) wave-firm observations of micro/small companies (up to 49 employees) in the Industry (Services) sector. The remaining observations belong to medium and large companies (from 50 employees): 4420 for Industry and 2238 for Services. As in other studies based on matched databases like ours, the sample composition in terms of size classes might not reflect the general distribution of the population of firms within the different countries. This is a caveat for our empirical results and, for this reason, we perform some additional robustness checks based on firm size classes in Sect. 4.4.
Finally, in the Supplementary material, Figure S1 reports the sample composition by country and by industry. Most firms are from Italy, France and Spain, covering almost three quarters of all observations, which mostly reflects the better coverage of financial statements in BvD AMADEUS for those countries.
## 4 Econometric results
### 4.1 The impact of innovation and finance constraints in industry and services sectors
In this section, we discuss the results of the maximum likelihood estimations of the profit functions for both Industry and Services. Following the approach proposed by Battese and Coelli (1995), the coefficients are obtained by simultaneously estimating the profit efficiency frontier (Eq. 1) and the inefficiency term, expressed as a function of a set of explanatory variables (Eq. 3). We point out that, in the framework of Battese and Coelli (1995), only the sign and the significance of the estimated coefficients can be interpreted.
Before presenting the estimation results, we report some information on model diagnostics. All estimations in Table 2 support the appropriateness of the Translog specification used in the analysis: most of the parameter estimates of the second-order terms of the profit function are significant. In addition, the high estimated value of the γ parameter, which reflects the importance of the inefficiency effects, strongly advocates the use of the stochastic frontier production function rather than the standard OLS method.18 Finally, the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) provide further model diagnostics (Burnham & Anderson, 2004).19
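The AIC/BIC formulas of footnote 19 can be checked directly against the reported statistics. In the snippet below, the parameter count k is backed out from the table rather than reported by the authors:

```python
import math

def aic(k, loglik):
    # AIC = 2k - 2 * log-likelihood (footnote 19)
    return 2 * k - 2 * loglik

def bic(k, loglik, n_obs):
    # BIC = ln(N. obs) * k - 2 * log-likelihood (footnote 19)
    return math.log(n_obs) * k - 2 * loglik

# Table 2, column (1): log-likelihood = -7172, N. obs = 7279.
# With k = 32 estimated parameters (inferred, not reported),
# aic(32, -7172) = 14408 and bic(32, -7172, 7279) ~ 14629,
# matching the AIC and BIC rows of the table.
```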
Table 2 Estimation of profit functions and inefficiency equations for Industry and Services sectors

| | Industry (1) | Industry (2) | Industry (3) | Services (4) | Services (5) | Services (6) |
|---|---|---|---|---|---|---|
| **Profit frontier** | | | | | | |
| Intercept | 3.1480*** | 1.6476*** | 1.7311*** | 0.2460 | −0.8539*** | −1.0440*** |
| Log (operating revenues) | 0.0355 | 0.4388*** | 0.4331*** | 0.6354*** | 0.8207*** | 0.8174*** |
| Log (wl/wk) | 0.6889*** | 0.5198*** | 0.5186*** | 0.9137*** | 0.8685*** | 0.9489*** |
| Log (operating revenues)² | 0.0665*** | 0.0222*** | 0.0251*** | 0.0024 | −0.0117** | −0.0038 |
| Log (wl/wk)² | 0.0655*** | 0.0631*** | 0.0704*** | 0.0026 | 0.0067 | 0.0042 |
| Log (operating revenues × wl/wk) | −0.0218*** | −0.0033 | −0.0093** | −0.0175*** | −0.0171*** | −0.0264*** |
| Country/size/time effects | Yes | Yes | Yes | Yes | Yes | Yes |
| **Inefficiency equation** | | | | | | |
| Cash flow/total assets | −7.7532*** | | | −4.1044** | | |
| Problem of finance | | −0.1168 | | | −0.3546*** | |
| Finance obstacles | | | −0.1291 | | | −0.3112** |
| Product innovation | −0.1279* | −0.1284*** | −0.1175** | −0.0363 | −0.0760 | −0.2288** |
| Profit up | −0.0999* | | | −0.2817*** | | |
| Profit margin | | −3.5652*** | −3.4140*** | | −2.7278*** | −2.2365*** |
| Leverage up | −0.3989*** | −0.1690*** | −0.1426** | −0.4186*** | −0.3508*** | −0.1613* |
| σ² | 0.8896*** | 0.6974*** | 0.6621*** | 1.0777*** | 1.0902*** | 0.9751*** |
| γ | 0.6815*** | 0.6222*** | 0.5935*** | 0.5090*** | 0.6137*** | 0.5807*** |
| Macroeconomic controls | Yes | Yes | Yes | Yes | Yes | Yes |
| N. obs | 7279 | 7090 | 5327 | 5864 | 5687 | 4091 |
| Log-likelihood | −7172 | −6797 | −5077 | −7213 | −6805 | −4743 |
| Mean PE | 0.6771 | 0.6579 | 0.6658 | 0.6503 | 0.5934 | 0.6155 |
| AIC | 14,408 | 13,658 | 10,218 | 14,489 | 13,675 | 9549 |
| BIC | 14,629 | 13,878 | 10,428 | 14,703 | 13,888 | 9752 |

Source: Our elaboration on data from ECB/EC SAFE & BvD AMADEUS. Significance levels: ‘***’ 0.01, ‘**’ 0.05, ‘*’ 0.10
Table 2 displays several model specifications (columns 1–3 for Industry, and 4–6 for Services), which differ in the financial constraints indicator included as a z-variable in the inefficiency equation.20
In the various specifications we account for the likely high correlation between the cash flow ratio and the profit margin. Hence, when we employ the continuous variable Cash flow ratio, we use the dummy Profit up as our preferred profitability measure. When the financial constraints indicators are those derived from the survey (Problem of finance and Finance obstacles), we use the Profit margin, retrieved from balance sheet data.
Starting with Industry, all three measures of firms’ external financial constraints display a negative and significant coefficient, signaling, in the context of the Battese and Coelli (1995) model, lower inefficiency scores. Most likely, when the availability of external finance decreases, financially constrained firms are forced to be more efficient in order to counter the potential adverse impact of financial frictions on their profitability. These findings provide support to our prediction (H1) and are in line with previous studies based on different countries and sample periods (Bhaumik et al., 2012; Maietta & Sena, 2010; Nickell & Nicolitsas, 1999; Sena, 2006).
Interestingly, we also find a negative and significant coefficient for the variable Product innovation. This indicates that firms' efforts to develop new products or services—in order to attain a strategic advantage over competitors—exert some leverage on revenues. This evidence corroborates the prediction of our hypothesis (H2).21 Our analysis also shows the relevance of the performance indicators. In all specifications, the two alternative measures of profit (Profit up and Profit margin) and of leverage (Leverage up) display negative and significant coefficients, suggesting that increases in profitability and leverage have a positive effect on efficiency.
Turning to the Services sector, results are displayed in columns 4–6 (Table 2). Remarkably, some similarities with the Industry analysis emerge: the financial obstacles and the firm performance indicators show a negative sign, indicating a positive effect of those variables on efficiency. By contrast, and differently from the Industry case, the variable Product innovation is not statistically significant in most specifications, except the last one (Column 6). A tentative interpretation of this outcome is that firms operating in the Services sector, which is traditionally considered non-tradeable, are less exposed to international competition. For this reason, the pressure on these firms to develop and launch new products and services might be less compelling. However, our results also show that, for firms reporting access to finance as an acute obstacle to their activity, innovation positively affects profit efficiency, as indicated by the negative sign in Column 6.
To address potential endogeneity issues related to the link between efficiency and innovation, we implemented the instrumental variable approach proposed by Karakaplan and Kutlu (2017).22 Using R&D expenses as a percentage of GDP by sector of activity as an instrument for innovation, this approach allows us to test for endogeneity bias in the stochastic frontier estimation in both the frontier and the efficiency determinants equations. The results rule out any evidence of endogeneity related to Product innovation.23
### 4.2 Sectoral heterogeneity: high- and low-tech sectors
In this section we exploit sectoral heterogeneity further by aggregating firms according to the technological and knowledge-intensive content of their activities, in a similar fashion to Baum et al. (2017) and Pellegrino and Piva (2020).
Starting from the Eurostat classification on technological and knowledge intensity, we collapse the sectors into two main groups: High-Tech and Low-Tech. In the first group we include high-technology industries and knowledge-intensive services (HT and KIS, respectively), and in the second medium-low and low-technology industries and less knowledge-intensive services (MT, LT and less-KIS, respectively).
Our assumption is that HT companies in Industry and KIS companies in Services are more alike than are high- and low-technology (or knowledge-intensity) firms within each of the two sectors.
The results of the simultaneous estimations of Eq. (1) and several specifications of Eq. (3) for High-Tech and Low-Tech sectors are displayed in Table 3, in columns 1–3, and columns 4–6, respectively.
Table 3 Estimation of profit functions and inefficiency equations for High-tech and Low-tech sectors

| | High-tech (1) | High-tech (2) | High-tech (3) | Low-tech (4) | Low-tech (5) | Low-tech (6) |
|---|---|---|---|---|---|---|
| **Profit frontier** | | | | | | |
| Intercept | 1.9161*** | 0.9359*** | 0.8191** | 0.7363*** | 0.0242 | −0.1556 |
| Log (operating revenues) | 0.3114*** | 0.6146*** | 0.5789*** | 0.4879*** | 0.6887*** | 0.7062*** |
| Log (wl/wk) | 0.7095*** | 0.5507*** | 0.6813*** | 0.8017*** | 0.6969*** | 0.7421*** |
| Log (operating revenues)² | 0.0514*** | 0.0195*** | 0.0317*** | 0.0078* | −0.0101** | −0.0117** |
| Log (wl/wk)² | 0.0681*** | 0.0734*** | 0.0717*** | 0.0206*** | 0.0245*** | 0.0176*** |
| Log (operating revenues × wl/wk) | −0.0343*** | −0.0218*** | −0.0379*** | −0.0113*** | −0.0026 | −0.0046 |
| Country/size/time effects | Yes | Yes | Yes | Yes | Yes | Yes |
| **Inefficiency equation** | | | | | | |
| Cash flow/total assets | −5.6028*** | | | −5.4167*** | | |
| Problem of finance | | −0.0230 | | | −0.2632*** | |
| Finance obstacles | | | −0.1096 | | | −0.4125*** |
| Product innovation | −0.2411*** | −0.2160*** | −0.1584** | 0.1252** | 0.0784* | 0.0816* |
| Profit up | −0.1947** | | | −0.1326** | | |
| Profit margin | | −3.5680*** | −3.2530*** | | −3.0247*** | −2.6378*** |
| Leverage up | −0.3527*** | −0.2589*** | −0.2005** | −0.4743*** | −0.3424*** | −0.2224*** |
| σ² | 0.9815*** | 0.8569*** | 0.7771*** | 1.0687*** | 0.9433*** | 0.8105*** |
| γ | 0.6612*** | 0.6585*** | 0.6258*** | 0.5509*** | 0.5378*** | 0.4303*** |
| Macroeconomic controls | Yes | Yes | Yes | Yes | Yes | Yes |
| N. obs | 4675 | 4556 | 3348 | 10,009 | 9717 | 7128 |
| Log-likelihood | −4992 | −4728 | −3401 | −11,985 | −11,358 | −8175 |
| Mean PE | 0.6510 | 0.6272 | 0.6385 | 0.6444 | 0.6233 | 0.6667 |
| AIC | 10,048 | 9520 | 6866 | 24,034 | 22,780 | 16,414 |
| BIC | 10,254 | 9726 | 7062 | 24,265 | 23,010 | 16,634 |

Source: our elaboration on data from ECB/EC SAFE & BvD AMADEUS. Significance levels: ‘***’ 0.01, ‘**’ 0.05, ‘*’ 0.10

Note: High-tech includes high-technology industries and knowledge-intensive services; Low-tech includes medium–low and low-technology industries and less knowledge-intensive services
Results are noteworthy. First, the variables accounting for financial constraints turn out to be significant with a negative sign in all specifications for LT firms. Conversely, this effect disappears for HT firms; the only exception is the negative coefficient on the cash flow ratio, which confirms that firms relying on internal sources are forced to be more efficient. In other words, when financial constraints are binding, LT firms are induced to be more efficient to enhance their profitability than HT firms. Second, our evidence shows that for the HT sector Product innovation displays a negative sign, indicating a positive effect on profit efficiency. By contrast, for LT firms we find a positive sign, signaling a reduction in profit efficiency. This might indicate that investments in product innovation by HT companies involve complementary tasks such as information technologies, which in turn produce efficiency gains. For Low-Tech companies, instead, the business activities needed to introduce new products might divert funds and effort that could otherwise be used more efficiently. Noticeably, the signs of all the other inefficiency determinants are stable across the High-Tech and Low-Tech disaggregation.
In a second step, we estimate our model by disaggregating into five groups: HT, MT and LT for Industry, and KIS and less-KIS for Services. For the sake of brevity, Table S1 in the Supplementary material reports only the results of the specification including the Cash flow ratio as the financial constraints indicator.24 Noticeably, the variable Product innovation displays the expected negative sign for HT Industry and KIS Services, indicating that innovation output reduces profit inefficiency. By contrast, in the case of less-KIS companies, innovation efforts seem to increase profit inefficiency. For MT and LT, results are inconclusive (the coefficients are not significant). Overall, this evidence reinforces once more our assumption that the two groups (HT Industry and KIS Services) share more common characteristics, in terms of the impact of innovation on efficiency, than the other subgroups.
### 4.3 Further analysis: the impact of firm indebtedness
So far our analysis has not explicitly considered the role of firms' indebtedness in their performance (Maietta & Sena, 2004, 2010; Sena, 2006; Vermoesen et al., 2013). It is known that companies choose between short-term and long-term debt depending on their productive needs. While they typically use short-term financing for working capital, they turn to long-term debt to better align their capital structure with long-term strategic goals, including innovation plans. Long-term financing thus affords companies more time to realize a return on investment and innovation activities, and reduces the refinancing risks that come with shorter debt maturities. In this respect, we should expect a positive impact of long-term debt on profit efficiency.
To address the issue, we re-estimate our model including two additional variables in the inefficiency equation: (i) Long-term debt and (ii) Interest ratio.25 If our predictions are corroborated, we should find, ceteris paribus, a negative coefficient for the variable Long-term debt. For similar reasons, we expect a positive sign for the variable Interest ratio, as an increase in the debt burden affects the firm's cost structure and, in turn, deteriorates profit efficiency. Panel A of Table 4 displays the results of the new specifications for both Industry (columns 1–3) and Services (columns 4–6), while Panel B reports the same specifications for the HT (columns 1–3) and LT (columns 4–6) sectors.
Table 4 The impact of indebtedness: estimation of inefficiency equations for Industry and Services sectors (Panel A) and High-tech and Low-tech sectors (Panel B)

**Panel A**

| | Industry (1) | Industry (2) | Industry (3) | Services (4) | Services (5) | Services (6) |
|---|---|---|---|---|---|---|
| Cash flow/total assets | −11.9818*** | | | −5.0654*** | | |
| Problem of finance | | −0.0564 | | | −0.3423*** | |
| Finance obstacles | | | −0.1836* | | | −0.5890*** |
| Product innovation | −0.1883** | −0.1535** | −0.1655** | −0.1431 | −0.1766* | −0.2598** |
| Long-term debt | −2.3497*** | −1.7898*** | −2.1420*** | −1.1711*** | −1.1065*** | −1.5191*** |
| Interest ratio | −0.8729 | −0.2519 | −0.1463 | 0.9708*** | 1.0188*** | 1.3128*** |
| Profit up | −0.0282 | | | −0.3141** | | |
| Profit margin | | −5.3137** | −5.4731*** | | −2.8135*** | −2.6137*** |
| Leverage up | −0.6156*** | −0.2055*** | −0.2255*** | −0.1537 | −0.1246 | −0.0160 |
| σ² | 1.3176*** | 0.9414*** | 0.9455*** | 1.3395*** | 1.1995*** | 1.1317*** |
| γ | 0.8191*** | 0.7503*** | 0.7490*** | 0.7128*** | 0.6993*** | 0.6737*** |
| Macroeconomic controls | Yes | Yes | Yes | Yes | Yes | Yes |
| N. obs | 5830 | 5700 | 4603 | 4343 | 4242 | 3329 |
| Log-likelihood | −5383 | −5179 | −4120 | −4941 | −4754 | −3630 |
| Mean PE | 0.7021 | 0.6955 | 0.7107 | 0.6480 | 0.6363 | 0.6737 |
| AIC | 10,834 | 10,426 | 8309 | 9950 | 9575 | 7328 |
| BIC | 11,061 | 10,652 | 8528 | 10,166 | 9791 | 7536 |

**Panel B**

| | High-tech (1) | High-tech (2) | High-tech (3) | Low-tech (4) | Low-tech (5) | Low-tech (6) |
|---|---|---|---|---|---|---|
| Cash flow/total assets | −8.4411*** | | | −8.0934*** | | |
| Problem of finance | | 0.0506 | | | −0.2349*** | |
| Finance obstacles | | | −0.3137** | | | −0.5375*** |
| Product innovation | −0.3015*** | −0.2843*** | −0.2268* | 0.2227*** | 0.1922*** | 0.1784*** |
| Long-term debt | −2.0750*** | −1.8668*** | −2.5426*** | −2.0654*** | −1.6611*** | −1.8566*** |
| Interest ratio | 0.0065 | 0.3792 | 0.7459* | −0.6097 | 0.0558 | 0.3076 |
| Profit up | −0.1183 | | | −0.0918 | | |
| Profit margin | | −5.2921*** | −5.5280*** | | −4.0641*** | −3.8414*** |
| Leverage up | −0.4564*** | −0.2963*** | −0.4438*** | −0.4452*** | −0.2968*** | −0.1752** |
| σ² | 1.3058*** | 1.0759*** | 1.0607*** | 1.5739*** | 1.1980*** | 1.1047*** |
| γ | 0.7933*** | 0.7573*** | 0.7446*** | 0.7461*** | 0.6685*** | 0.6284*** |
| Macroeconomic controls | Yes | Yes | Yes | Yes | Yes | Yes |
| N. obs | 3715 | 3644 | 2885 | 7569 | 7382 | 5929 |
| Log-likelihood | −3659 | −3544 | −2744 | −8686 | −8293 | −6559 |
| Mean PE | 0.6834 | 0.6760 | 0.7062 | 0.6588 | 0.6660 | 0.6965 |
| AIC | 7386 | 7156 | 5556 | 17,440 | 16,654 | 13,186 |
| BIC | 7597 | 7367 | 5759 | 17,676 | 16,889 | 13,413 |

Source: Our elaboration on data from ECB/EC SAFE & BvD AMADEUS. Significance levels: ‘***’ 0.01, ‘**’ 0.05, ‘*’ 0.10. For the sake of brevity, we report in this table only the z-variable coefficients. Note: High-tech includes high-technology industries and knowledge-intensive services; Low-tech includes medium–low and low-technology industries and less knowledge-intensive services
Starting with Panel A (Table 4), results can be summarized as follows. First, the inclusion of the two new variables in our specifications does not affect the baseline results for the key variables of the inefficiency equation (displayed in Table 2), thus supporting the robustness of our analysis in terms of innovation and financial constraints.
Second, the estimated coefficients of the variable Long-term debt are always negative and statistically significant in both sectors, confirming the positive effect of the debt maturity structure on profit efficiency, as shown also by Sena (2006) and Maietta and Sena (2010). Interestingly, some differences between sectors arise for the estimated coefficients of the Interest ratio variable. Specifically, they turn out to be positive and significant across the three specifications (columns 4–6) for Services, while they are never statistically significant for Industry. This evidence may suggest that firms operating in the Industry sector are less affected by their debt burden as a consequence of better bargaining power with banks.26
The main takeaway from this additional analysis is that, while the effect of debt maturity does not differ between sectors, the debt burden appears to damage firms in Services, regardless of the macroeconomic context. However, we are cautious in interpreting this result, as we acknowledge that further investigation might be required to confirm its consistency.
Turning to Panel B, where we present the new specifications (including firm indebtedness indicators) for the HT and LT sectors, results show that the impact of innovation and financial constraints on efficiency is consistent with the estimates displayed in Table 3. The coefficients of Long-term debt are always negative and significant, indicating a positive impact on profit efficiency independently of the sectoral disaggregation. This evidence is consistent with the results presented in Panel A. As an additional check, we estimate models (1)–(6) of Table 4 (for both Panels A and B) with lagged innovation and finance constraints. Although this reduces the number of observations by roughly two thirds, the results corroborate the main findings reported in Table 4.27
A different picture emerges when we look at the estimated coefficients of the Interest ratio. The coefficient is statistically significant only in the subgroup of HT firms and when firms face objective difficulties in their access to finance. The sign is positive indicating that increases of the interest ratio reduce profit efficiency for firms in that group (Column 3).
### 4.4 Robustness analysis: a focus on micro-small firms
By looking at our sample composition, the summary statistics show that close to 40% of the industrial companies and almost 60% of the service companies are classified as micro and small (see Table 1). Previous studies have shown that firm size matters for the decision to innovate (see, among others, Leal-Rodríguez et al., 2015). Two opposite perspectives are recalled here. According to the Schumpeterian point of view (Karlsson & Olsson, 1998; Schumpeter, 1942), large firms have an advantage in innovating vis-à-vis smaller companies, as innovation requires effort, long-term investment, know-how and resources that small firms often cannot afford. By contrast, it is also argued that smaller firms tend to display more innovative and efficient efforts than large firms in order to survive (Baumann & Kritikos, 2016; Cohen & Klepper, 1996; De Jong & Marsili, 2006; Laforet, 2008, 2013).
To address the issue, we re-estimate our model specifications for the sub-sample of micro and small firms (up to 49 employees). Results are displayed in Table S2 of the Supplementary material, where we report only the z-variables of the several inefficiency term specifications. As far as Industry is concerned, while the sign and the significance of the financial constraints covariates are consistent with the previous analysis, the variable Product innovation turns out not to be significant. We read these inconclusive results as a signal that the impact of innovation on small-sized firms should be investigated by focusing more closely on their innovative characteristics. By contrast, no relevant differences emerge for micro-small firms compared with the full sample in the Services sector.
Indeed, the sector disaggregation based on technology and knowledge intensity brings more clear-cut findings (see Table S3 of the Supplementary material). In detail, we observe that for micro-small HT enterprises product innovation matters in reducing firm inefficiency, while for LT firms innovation efforts are negligible or even counter-productive, as they induce an increase in inefficiency. Once again, these results are largely consistent with the full-sample findings on the similarity between technological and knowledge-intensive companies, independently of firm size.
## 5 Discussion and conclusions
This paper contributes to the literature investigating the interplay between firms' efficiency, innovation and access to finance. Despite the policy relevance of this topic, the fundamental assumptions underlying it have remained largely unexplored. Indeed, the related literature has mainly focused separately on the roles of financial constraints and of innovation in productivity; these studies do not address the direct link between innovation and profit efficiency, nor the effect that limited access to credit may exert on profit efficiency.
We fill this gap through the lens of the economic efficiency perspective. To the best of our knowledge, no previous work has considered the effects that both innovation and credit access limitations may have on profit efficiency. To accomplish this task, we pioneer the use of a novel dataset that merges survey-based data with balance sheet information. This allows us to exploit the heterogeneity across firms' financial and financing positions.
Furthermore, we consider heterogeneous production functions across sectors by estimating different frontiers: first for the two main productive sectors (Industry and Services), and second for an alternative sectoral distribution based on technological and knowledge intensity. The empirical analysis confirms the hypothesis that technological and knowledge-intensive companies in the manufacturing and service sectors are more alike than are high knowledge-intensive and low technology/knowledge-intensive firms within Industry and Services.
Our main findings support the prediction (H1) that firms which perceive difficulties in accessing external finance, or which are objectively financially constrained, tend to improve efficiency to reduce their risk of failure and to maintain profits, independently of the macro-sector disaggregation. This outcome is consistent with previous literature (Sena, 2006). Our analysis also documents that, when financial constraints are binding, LT firms are induced to be more efficient to enhance their profitability than HT firms. Moreover, we find that debt maturity matters as well. Firms making more use of long-term debt are more efficient, as they have more time to realize a return on their business activities to cover their debt, independently of the sectoral disaggregation employed in our investigation. As for the cost of debt, the impact differs depending on the sector in which a firm operates. Enterprises operating in Services seem to face a more stringent cost burden on the debt side, resulting in an increase in inefficiency. In the case of HT firms, an increase in the interest ratio reduces their efficiency, but only when they face objective difficulties in their access to finance.
Consistently with our second hypothesis (H2), we show that firms which stated in the survey that they introduced product innovation are more likely to improve efficiency. This evidence is robust for firms in the manufacturing sector, and only weakly present for firms belonging to Services. We also find that product innovation is important for the profit efficiency of high-technology and knowledge-intensive companies, while it tends to diminish it for LT and less-KIS firms.
Finally, we also consider the role of firm size within sectors. Specifically, we show that micro and small HT firms are able to turn innovation into revenue gains, while this is not the case for LT and less-KIS companies, broadly confirming the full-sample findings on the relevance of the sectoral composition. This supports the idea that the different sectoral aggregations provided in our analysis are indeed relevant for detecting additional firm heterogeneity, and that this does not depend on firm size.
The implications of our results are not negligible. Fostering innovation and growth opportunities for enterprises is particularly relevant in times of economic slowdown and financial distress. Another outcome of our investigation is a set of recommendations for public policy to encourage long-term investment in innovation and to reduce practices that particularly penalize enterprises investing in R&D. Hence, our results support firm-level policy interventions directly aimed at mitigating the underinvestment in R&D in Europe, while considering firm heterogeneity across technology and knowledge-intensity sectors. Though not explicitly analyzed in this paper, the line of reasoning goes beyond the idea that easier access to finance is a panacea for higher profit efficiency. What seems more important is the support provided to businesses to be competitive, by encouraging them to adopt new business models and innovative practices.
Despite its numerous contributions to the literature, we acknowledge some limitations of our study which can provide input for further research. First, while the results of our analysis seem robust to different econometric approaches and to endogeneity issues, the introduction of additional variables controlling, for instance, for different degrees of entrepreneurship could help to better understand firms' different efficiency scores. Beyond size and innovation, several studies have focused on age, ownership structure, skills and competencies (Binnui & Cowling, 2016; Falk & Hagsten, 2021).
Second, while the instrument-based approach proposed by Karakaplan and Kutlu (2017) can remove some potential sources of reverse causality, it does not eliminate other possible sources of endogeneity. In fact, identifying valid instruments has proved a difficult task for our financial constraints indicators, also because the majority of observable firm characteristics are already included in the inefficiency equation.
Third, one of the main advantages of our investigation is the use of a unique dataset that allows us to employ not only qualitative survey-based information to measure financial constraints, but also balance sheet data to estimate production functions at the firm level. However, it could be argued that, by merging two different data sources, firms in our sample might not reflect the composition of the population of firms by size and sector of activity within the different countries. We are aware of this limitation, and a further step in our analysis would be to apply appropriate weights in order to obtain representative results. This is of particular importance for the novel results related to HT and LT companies. Fourth, we also recognise that differences in institutional settings might play a role in innovation policy at the national level, so we control for the country macroeconomic context in our estimates. However, we leave a more in-depth country-level analysis of this issue to future research.
Finally, though our analysis starts just after the great financial crisis due to the availability of our survey database, we expect that our results are not specific to the period under study. Once more data become available, further research should be devoted to substantiating our assessment over the business cycle, in particular considering the long-term impact of the Covid-19 pandemic on firms' efficiency.
## Acknowledgements
Previous versions of this paper were presented at the World Finance Banking Symposium 2019 (Delhi), at the XVIII SIEPI Annual Conference, University Cà Foscari, Venezia, 30–31 January 2020, at the 27th Annual MFS conference, 28 June 2020, and at the Virtual North American Productivity Workshop (vNAPW XI), 8–12 June 2020. We thank the discussants and participants of these conferences for useful suggestions and comments. Furthermore, we thank the participants of the seminars—where the paper was presented—for helpful comments. This paper contains views of the authors and not necessarily those of the European Central Bank. Usual disclaimer applies. Grateful acknowledgements are due to the European Central Bank for making available the Survey on the Access to Finance of Enterprises (SAFE) data set. Graziella Bonanno gratefully acknowledges research grant [300399FRB20BONAN] from the University of Salerno. Stefania P.S. Rossi gratefully acknowledges research grant [FRA 2018] from the University of Trieste.
## Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Supplementary Information
## Footnotes

1. Econometric methods used to estimate economic efficiency are widely employed in several strands of the literature (among others, Aiello and Bonanno, 2018, and Zhang et al., 2021, for the banking sector; Le et al., 2018, Hanousek et al., 2015, and Bonanno, 2016, for manufacturing and non-manufacturing firms).
2. In the two-step approach, inefficiency is first estimated using a frontier and, in the second step, the estimated efficiency scores are the dependent variable in a subsequent regression (Greene, 1993). As shown by Lensink and Meesters (2014) and Wang and Schmidt (2002), the two-step approach suffers from the fact that inefficiency is assumed to be identically and independently distributed in the main frontier equation, while it depends on other variables in the inefficiency equation.
3. We use the Translog function to model the frontiers (expressed in log-linear form), which satisfies the assumptions of non-negativity, concavity and linear homogeneity (Kumbhakar and Lovell, 2000). We impose the homogeneity constraint with respect to input prices (which requires the input price elasticities to sum to one).
4. We choose value added to proxy profit as it includes information on both revenues and costs. Moreover, this choice overcomes the practical problem of having too many negative values for the profit variable. Empirically, the correlation between value added and profit is very high.
5. Exhaustive discussions of alternative versus traditional profit efficiency are in Berger and Mester (1997) and Vander-Vennet (2002).
6. As in Berger and Mester (1997), Fitzpatrick and McQuinn (2008) and Huizinga et al. (2001), we transform profits by adding the absolute value of the minimum profit plus one to actual profits. This ensures that log(Profit) = log[π + |π^min| + 1] is defined in [0, +∞).
7. Micro and Small firms register a turnover of less than €2 million and between €2 and €10 million, respectively; Medium firms have a turnover between €10 and €50 million; Large enterprises have a turnover above €50 million. In our analysis, Micro firms are the reference group.
8. See Eurostat: https://ec.europa.eu/eurostat/cache/metadata/Annexes/htec_esms_an3.pdf. Based on NACE Rev. 2 at the 2-digit level, Eurostat has compiled a classification of 12 sectors and subsectors according to their degree of technology and knowledge intensity. In the paper, we use the main 5 sectors. The Supplementary material also reports the estimations for those 5 sectors.
9. The information on this variable (question Q1 in the survey) is provided by SAFE every second wave and refers to the previous 12 months, i.e. two waves. As the SAFE survey is conducted every six months, in order to restore this information at the wave level, we replicate these data for firms present in consecutive waves.
10. In the empirical analysis we use this proxy when the measure of finance constraints is the Cash flow/Total assets ratio, because of its high correlation with the profit margin ratio.
11. The selection of these countries is driven by the availability of the data after merging the surveyed firms in SAFE with the financial statements in the BvD AMADEUS dataset.
12. This is due to the availability of the variable Product innovation from the 8th wave onwards in SAFE.
13. We use the 2-digit NACE classification used in the survey to define the two sectors. Industry includes manufacturing, mining, and electricity, gas and water supply, while Services include construction, wholesale and retail trade, transport, accommodation, food services and other services to businesses or persons. We exclude public administration, financial and insurance services.
14. Albeit the focus of SAFE is on SMEs, the survey also provides information on large firms: 12% of our Services sample and 16% of our Industry sample are large enterprises.
15. Financial leverage is defined as the ratio of short- and long-term debt, excluding trade credit and provisions, to total assets, while return on equity (ROE) is the amount of net profit earned as a percentage of shareholders' equity.
16. The variables Use bank loans, Use credit lines and Use bank credit capture firms' use of banking products as reported by firms in the survey. They are dummy variables equal to 1 if the financing source is used by the firm and 0 otherwise.
17. Long-term debt is defined as the ratio between long-term debt and total financial debt. Interest ratio is the ratio between the interest payable on short- and long-term debt, accrued during the period covered by the financial statements, and total financial debt.
18. The variance σ² is equal to the sum of the variances of the two error components, $$\sigma_u^2$$ and $$\sigma_v^2$$. γ is equal to $$\frac{\sigma_u^2}{\sigma^2}$$; a zero value of this parameter indicates that deviations from the frontier are due only to random error, while values close to one indicate that the distance from the frontier is due to inefficiency.
19. AIC is equal to 2k − 2 log-likelihood, where k is the number of estimated parameters; BIC is equal to ln(N. obs) k − 2 log-likelihood.
20. In order to exclude the possibility that our findings are driven by the contemporaneous presence of financial constraints and innovation, we run different specifications for Industry and Services, in which we introduce one by one the three proxies for financial constraints (without Product innovation)—Cash flow ratio, Problem of finance, Finance obstacles—and the variable Product innovation (without the variables accounting for financial constraints). The main results on the impact of the different variables on efficiency are confirmed and are available upon request.
21. In order to disentangle possible combined effects of financial frictions and innovation efforts, we also estimated model specifications including interaction terms between Product innovation and each indicator of financial constraints. Estimates (available upon request) on the interactions did not provide conclusive results.
22. We are aware that our approach is far from conclusive in eliminating other possible sources of endogeneity which might affect the relationship under scrutiny. Other approaches have been used for addressing the endogeneity of inputs and output in the SFA (Lien et al., 2018). We acknowledge this potential limitation of our study in the conclusions of this paper.
23. Untabulated results are available upon request.
24. Similar results are obtained when we use the other two indicators of financial frictions. The results are available upon request.
25. The variable Long-term debt is the ratio between long-term debt and total financial debt. Interest ratio is measured as the ratio between the interest payable on short- and long-term debt, accrued during the period covered by the financial statements, and total financial debt.
26. We find analogous results, available on request, when we use alternative variables for measuring the interest burden, such as the interest coverage ratio, defined as the ratio between earnings before interest and taxes and interest payments due within the same period.
27. Untabulated regressions are available upon request.
## Literature

Acemoglu, D., Aghion, P., & Zilibotti, F. (2006). Distance to frontier, selection, and economic growth. Journal of the European Economic Association, 4(1), 37–74.
Acharya, V., & Xu, Z. (2017). Financial dependence and innovation: The case of public versus private firms. Journal of Financial Economics, 124(2), 223–243.
Agénor, P.-R., & Pereira da Silva, L. (2017). Cyclically adjusted provisions and financial stability. Journal of Financial Stability, 28, 143–162.
Aghion, P., Angeletos, G.-M., Banerjee, A., & Manova, K. (2010). Volatility and growth: Credit constraints and the composition of investment. Journal of Monetary Economics, 57, 246–265.
Aghion, P., Askenazy, P., Berman, N., Cette, G., & Eymard, L. (2012). Credit constraints and the cyclicality of R&D investment: Evidence from France. Journal of the European Economic Association, 10, 1001–1024.
Aghion, P., Bloom, N., Blundell, R., Griffith, R., & Howitt, P. (2005). Competition and innovation: An inverted-U relationship. Quarterly Journal of Economics, 120, 701–728.
Aghion, P., & Howitt, P. (1998). Endogenous growth theory. MIT Press.
Aiello, F., & Bonanno, G. (2018). On the sources of heterogeneity in banking efficiency literature. Journal of Economic Surveys, 32(1), 194–225.
Aigner, D., Lovell, C. K., & Schmidt, P. (1977). Formulation and estimation of stochastic frontier production function models. Journal of Econometrics, 6(1), 21–37.
Arbelo, A., Arbelo-Pérez, M., & Pérez-Gómez, P. (2021). Profit efficiency as a measure of performance and frontier models: A resource-based view. BRQ Business Research Quarterly, 24(2), 143–159.
Archibugi, D., & Coco, A. (2004). A new indicator of technological capabilities for developed and developing countries (ArCr). World Development, 32(4), 629–654.
Bartelsman, E., Haltiwanger, J., & Scarpetta, S. (2013). Cross-country differences in productivity: The role of allocation and selection. American Economic Review, 103(1), 305–334.
Battese, G. E., & Coelli, T. J. (1995). A model for technical inefficiency effects in a stochastic frontier production function for panel data. Empirical Economics, 20(2), 325–332.
Baum, C. F., Lööf, H., Nabavi, P., & Stephan, A. (2017). A new approach to estimation of the R&D–innovation–productivity relationship. Economics of Innovation and New Technology, 26(1–2), 121–133.
Baumann, J., & Kritikos, A. S. (2016). The link between R&D, innovation and productivity: Are micro firms different? Research Policy, 45(6), 1263–1274.
Becker, B. (2015). Public R&D policies and private R&D investment: A survey of the empirical evidence. Journal of Economic Surveys, 29(5), 917–942.
Berger, A. N., & Mester, L. J. (1997). Inside the black box: What explains differences in the efficiencies of financial institutions? Journal of Banking & Finance, 21(7), 895–947.
Bernanke, B., Gertler, M., & Gilchrist, S. (1996). The financial accelerator and flight to quality. Review of Economics and Statistics, 78, 1–15.
Bhaumik, S. K., Das, P. K., & Kumbhakar, S. C. (2012). A stochastic frontier approach to modelling financial constraints in firms: An application to India. Journal of Banking & Finance, 36(5), 1311–1319.
Binnui, A., & Cowling, M. (2016). A conceptual framework for measuring entrepreneurship and innovation of young hi-technology firms. GSTF Journal on Business Review, 4(3), 32–47.
Bonanno, G. (2016). ICT and R&D as inputs or efficiency determinants? Analysing Italian manufacturing firms (2007–2009). Eurasian Business Review, 6(3), 383–404.
Bravo-Ortega, C., & García-Marín, A. (2011). R&D and productivity: A two way avenue? World Development, 39(7), 1090–1107.
Burnham, K. P., & Anderson, D. R. (2004). Multimodel inference: Understanding AIC and BIC in model selection. Sociological Methods & Research, 33(2), 261–304.
Butler, A. W., & Cornaggia, J. (2011). Does access to external finance improve productivity? Evidence from a natural experiment. Journal of Financial Economics, 99(1), 184–203.
Calza, E., Goedhuys, M., & Trifković, N. (2018). Drivers of productivity in Vietnamese SMEs: The role of management standards and innovation. Economics of Innovation and New Technology, 28(1), 1–22.
Carbo-Valverde, S., Degryse, H., & Rodríguez-Fernández, F. (2015). The impact of securitization on credit rationing: Empirical evidence. Journal of Financial Stability, 20, 36–50.
Castellani, D., Piva, M., Schubert, T., & Vivarelli, M. (2019). R&D and productivity in the US and the EU: Sectoral specificities and differences in the crisis. Technological Forecasting and Social Change, 138, 279–291.
Cefis, E., & Ciccarelli, M. (2005). Profit differentials and innovation. Economics of Innovation and New Technology, 14(1–2), 43–61.
Cette, G., Fernald, J., & Mojon, B. (2016). The pre-Great Recession slowdown in productivity. European Economic Review, 88, 3–20.
Chen, C. M., Delmas, M. A., & Lieberman, M. B. (2015). Production frontier methodologies and efficiency as a performance measure in strategic management research. Strategic Management Journal, 36(1), 19–36.
Coelli, T. J., Rao, D. S. P., O’Donnell, C. J., & Battese, G. E. (2005). An introduction to efficiency and productivity analysis. Springer Science & Business Media.
Cohen, W. M., & Klepper, S. (1996). A reprise of size and R & D. The Economic Journal, 106, 925–951.
Cornwell, C. M., & Smith, P. C. (2008). Stochastic frontier analysis and efficiency estimation. In P. Sevestre & L. Mátyás (Eds.), The econometrics of panel data (pp. 697–726). Springer Verlag.
Cowan, K., Drexler, A., & Yanez, Á. (2015). The effect of credit guarantees on credit availability and delinquency rates. Journal of Banking & Finance, 59, 98–110.
Dabla-Norris, E., Kersting, E. K., & Verdier, G. (2012). Firm productivity, innovation, and financial development. Southern Economic Journal, 79(2), 422–449.
Dai, X., & Sun, Z. (2021). Does firm innovation improve aggregate industry productivity? Evidence from Chinese manufacturing firms. Structural Change and Economic Dynamics, 56, 1–9.
European Commission. (2013). Product market review 2013: Financing the real economy. European Economy 8, 2013. ISBN 978-92-79-33667-6. https://doi.org/10.2765/58867.
de Jong, J. P., & Marsili, O. (2006). The fruit flies of innovations: A taxonomy of innovative small firms. Research Policy, 35(2), 213–229.
Deeds, D. L. (2001). The role of R&D intensity, technical development and absorptive capacity in creating entrepreneurial wealth in high technology start-ups. Journal of Engineering and Technology Management, 18(1), 29–47.
Díaz-Díaz, N. L., Aguiar-Díaz, I., & De Saá-Pérez, P. (2008). The effect of technological knowledge assets on performance: The innovative choice in Spanish firms. Research Policy, 37(9), 1515–1529.
Dosi, G., Grazzi, M., & Moschella, D. (2015). Technology and costs in international competitiveness: From countries and sectors to firms. Research Policy, 44(10), 1795–1814.
Falk, M., & Hagsten, E. (2021). Innovation intensity and skills in firms across five European countries. Eurasian Business Review, 11(3), 371–394.
Farrell, M. J. (1957). The measurement of productive efficiency. Journal of the Royal Statistical Society Series A (General), 120(3), 253–290.
Farre-Mensa, J., & Ljungqvist, A. (2016). Do measures of financial constraints measure financial constraints? The Review of Financial Studies, 29(2), 271–308.
Fazzari, S. M., Hubbard, R. G., Petersen, B. C., Blinder, A. S., & Poterba, J. M. (1988). Financing constraints and corporate investment. Brookings Papers on Economic Activity, 1988(1), 141–206.
Ferrando, A., & Mulier, K. (2015). Financial obstacles and financial conditions of firms: Do perceptions match the actual conditions? The Economic and Social Review, 46(1), 87–118.
Ferrando, A., Popov, A., & Udell, G. F. (2017). Sovereign stress and SMEs’ access to finance: Evidence from the ECB’s SAFE survey. Journal of Banking & Finance, 81, 65–80.
Ferrando, A., & Ruggieri, A. (2018). Financial constraints and productivity: Evidence from euro area companies. International Journal of Finance & Economics, 23(3), 257–282.
Ferreira, P. J. S., & Dionísio, A. T. M. (2016). What are the conditions for good innovation results? A fuzzy-set approach for European Union. Journal of Business Research, 69(11), 5396–5400.
Fitzpatrick, T., & McQuinn, K. (2008). Measuring bank profit efficiency. Applied Financial Economics, 18(1), 1–8.
Foster, L., Haltiwanger, J., & Syverson, C. (2008). Reallocation, firm turnover, and efficiency: Selection on productivity or profitability? American Economic Review, 98(1), 394–425.
García-Quevedo, J., Segarra-Blasco, A., & Teruel, M. (2018). Financial constraints and the failure of innovation projects. Technological Forecasting and Social Change, 127, 127–140.
George, G., Zahra, S. A., & Wood, D. R., Jr. (2002). The effects of business–university alliances on innovative output and financial performance: A study of publicly traded biotechnology companies. Journal of Business Venturing, 17(6), 577–609.
Geroski, P., Machin, S., & Van Reenen, J. (1993). The profitability of innovating firms. The RAND Journal of Economics, 24, 198–211.
Goedhuys, M., & Veugelers, R. (2012). Innovation strategies, process and product innovations and growth: Firm-level evidence from Brazil. Structural Change and Economic Dynamics, 23(4), 516–529.
Gopinath, G., Kalemli-Özcan, Ş, Karabarbounis, L., & Villegas-Sanchez, C. (2017). Capital allocation and productivity in South Europe. The Quarterly Journal of Economics, 132(4), 1915–1967.
Greene, W. H. (1993). The econometric approach to efficiency analysis. The measurement of productivity efficiency: Techniques and applications (pp. 92–250). Oxford University Press.
Greene, W. (2005). Fixed and random effects in stochastic frontier models. Journal of Productivity Analysis, 23(1), 7–32.
Griliches, Z. (1979). Issues in assessing the contribution of research and development to productivity growth. The Bell Journal of Economics, 10, 92–116.
Grossman, G., & Helpman, E. (1991). Innovation and growth in the global economy. MIT Press.
Guariglia, A., & Liu, P. (2014). To what extent do financing constraints affect Chinese firms’ innovation activities? International Review of Financial Analysis, 36, 223–240.
Hanousek, J., Kočenda, E., & Shamshur, A. (2015). Corporate efficiency in Europe. Journal of Corporate Finance, 32, 24–40.
Heshmati, A., & Kim, H. (2011). The R&D and productivity relationship of Korean listed firms. Journal of Productivity Analysis, 36(2), 125–142.
Huizinga, H. P., Nelissen, J. H. M., & Vander Vennet, R. (2001). Efficiency effects of bank mergers and acquisitions in Europe. Working paper no. 088/3. Tinbergen Institute.
Janz, N., Lööf, H., & Peters, B. (2004). Firm level innovation and productivity: Is there a common story across countries? Problems and Perspectives in Management, 2, 184–204.
Jin, M., Zhao, S., & Kumbhakar, S. C. (2019). Financial constraints and firm productivity: Evidence from Chinese manufacturing. European Journal of Operational Research, 275(3), 1139–1156.
Kaplan, S. N., & Zingales, L. (1995). Do financing constraints explain why investment is correlated with cash flow? (No. w5267). National Bureau of Economic Research.
Karakaplan, M. U., & Kutlu, L. (2017). Endogeneity in panel stochastic frontier models: An application to the Japanese cotton spinning industry. Applied Economics, 49(59), 5935–5939.
Karlsson, C., & Olsson, O. (1998). Product innovation in small and large enterprises. Small Business Economics, 10(1), 31–46.
Kerr, W. R., & Nanda, R. (2009). Democratizing entry: Banking deregulations, financing constraints, and entrepreneurship. Journal of Financial Economics, 94(1), 124–149.
Kerr, W. R., & Nanda, R. (2015). Financing innovation. Annual Review of Financial Economics, 7, 445–462.
Klette, T. J., & Kortum, S. (2004). Innovating firms and aggregate innovation. Journal of Political Economy, 112(5), 986–1018.
Koellinger, P. (2008). The relationship between technology, innovation, and firm performance: Empirical evidence from e-business in Europe. Research Policy, 37(6), 1317–1328.
Kumbhakar, S. C., & Lovell, C. A. K. (2000). Stochastic frontier analysis. Cambridge University Press.
Kumbhakar, S. C., Ortega-Argilés, R., Potters, L., Vivarelli, M., & Voigt, P. (2012). Corporate R&D and firm efficiency: Evidence from Europe’s top R&D investors. Journal of Productivity Analysis, 37(2), 125–140.
Laforet, S. (2008). Size, strategic, and market orientation affects on innovation. Journal of Business Research, 61(7), 753–764.
Laforet, S. (2013). Organizational innovation outcomes in SMEs: Effects of age, size, and sector. Journal of World Business, 48(4), 490–502. CrossRef
Le, S. A., Walters, B., & Kroll, M. (2006). The moderating effects of external monitors on the relationship between R&D spending and firm performance. Journal of Business Research, 59(2), 278–287. CrossRef
Le, V., Vu, X. B. B., & Nghiem, S. (2018). Technical efficiency of small and medium manufacturing firms in Vietnam: A stochastic meta-frontier analysis. Economic Analysis and Policy, 59, 84–91. CrossRef
Leal-Rodríguez, A. L., Eldridge, S., Roldán, J. L., Leal-Millán, A. G., & Ortega-Gutiérrez, J. (2015). Organizational unlearning, innovation outcomes, and performance: The moderating effect of firm size. Journal of Business Research, 68(4), 803–809. CrossRef
Leibenstein, H. (1978). General X-efficiency theory and economic development. Oxford University Press.
Leiponen, A. (2000). Competencies, innovation and profitability of firms. Economics of Innovation and New Technology, 9(1), 1–24. CrossRef
Lensink, R., & Meesters, A. (2014). Institutions and bank performance: A stochastic frontier analysis. Oxford Bulletin of Economics and Statistics, 76(1), 67–92. CrossRef
Lien, G., Kumbhakar, S. C., & Alem, H. (2018). Endogeneity, heterogeneity, and determinants of inefficiency in Norwegian crop-producing farms. International Journal of Production Economics, 201, 53–61. CrossRef
Lööf, H., & Heshmati, A. (2006). On the relationship between innovation and performance: A sensitivity analysis. Economics of Innovation and New Technology, 15(4–5), 317–344. CrossRef
Love, J. H., & Roper, S. (2015). SME innovation, exporting and growth: A review of existing evidence. International Small Business Journal, 33(1), 28–48. CrossRef
Love, J. H., Roper, S., & Du, J. (2009). Innovation, ownership and profitability. International Journal of Industrial Organization, 27(3), 424–434. CrossRef
Luo, Y., Tanna, S., & De Vita, G. (2016). Financial openness, risk and bank efficiency: Cross-country evidence. Journal of Financial Stability, 24, 132–148. CrossRef
Maietta O. W., & Sena V. (2004), Profit sharing, technical efficiency change and finance constraints. In Employee participation, firm performance and survival (pp. 149–167). Emerald Group Publishing Limited.
Maietta, O. W., & Sena, V. (2010). Financial constraints and technical efficiency: Some empirical evidence for Italian producers’ cooperatives. Annals of Public and Cooperative Economics, 81(1), 21–38. CrossRef
Manaresi, F., & Pierri, N. (2017). Credit constraints and firm productivity: Evidence from Italy. Mo. Fi. R. Working Papers, 137.
Meeusen, W., & van Den Broeck, J. (1977). Efficiency estimation from Cobb–Douglas production functions with composed error. International Economic Review, 18(2), 435–444. CrossRef
Midrigan, V., & Xu, D. Y. (2014). Finance and misallocation: Evidence from plant-level data. American Economic Review, 104(2), 422–458. CrossRef
Nickell, S. (1996). Competition and corporate performance. Journal of Political Economy, 104, 724–746. CrossRef
Nickell, S., & Nicolitsas, D. (1999). How does financial pressure affect firms? European Economic Review, 43(8), 1435–1456. CrossRef
O’Mahony, M., & Vecchi, M. (2009). R&D, knowledge spillovers and company productivity performance. Research Policy, 38(1), 35–44. CrossRef
Ortega-Argilés, R., Piva, M., & Vivarelli, M. (2015). The productivity impact of R&D investment: Are high-tech sectors still ahead? Economics of Innovation and New Technology, 24(3), 204–222. CrossRef
Pellegrino, G., & Piva, M. (2020). Innovation, industry and firm age: Are there new knowledge production functions? Eurasian Business Review, 10(1), 65–95. CrossRef
Pigini, C., Presbitero, A. F., & Zazzaro, A. (2016). State dependence in access to credit. Journal of Financial Stability, 27, 17–34. CrossRef
Pilar, P. G., Marta, A. P., & Antonio, A. (2018). Profit efficiency and its determinants in small and medium-sized enterprises in Spain. BRQ Business Research Quarterly, 21(4), 238–250. CrossRef
Romer, P. (1990). Endogenous technological change. Journal of Political Economy, 98, 72–102. CrossRef
Sasidharan, S., Lukose, P. J., & Komera, S. (2015). Financing constraints and investments in R&D: Evidence from Indian manufacturing firms. The Quarterly Review of Economics and Finance, 55, 28–39. CrossRef
Schroeder, R. G., Bates, K. A., & Junttila, M. A. (2002). A resource-based view of manufacturing strategy and the relationship to manufacturing performance. Strategic Management Journal, 23(2), 105–117. CrossRef
Schumpeter, J. A. (1942). Socialism, capitalism and democracy. Harper and Brothers.
Sena, V. (2006). The determinants of firms’ performance: Can finance constraints improve technical efficiency? European Journal of Operational Research, 172(1), 311–325. CrossRef
Shao, B. B., & Lin, W. T. (2016). Assessing output performance of information technology service industries: Productivity, innovation and catch-up. International Journal of Production Economics, 172, 43–53. CrossRef
Solow, R. (1957). A contribution to the theory of economic growth. Quarterly Journal of Economics, 70, 65–94. CrossRef
Srairi, S. A. (2010). Cost and profit efficiency of conventional and Islamic banks in GCC countries. Journal of Productivity Analysis, 34(1), 45–62. CrossRef
Stehrer, R., Baker, P., Foster, N., McGregor, N., Koenen, J., Leitner, S., Schricker, J., Strobel, T., Vieweg, H. G., Vermeulen, J., & Yagafarova, A. (2015). The relation between industry and services in terms of productivity and value creation. Wiiw Research Report 404.
Ugur, M., Trushin, E., Solomon, E., & Guidi, F. (2016). R&D and productivity in OECD firms and industries: A hierarchical meta-regression analysis. Research Policy, 45(10), 2069–2086. CrossRef
Urbano, D., & Alvarez, C. (2014). Institutional dimensions and entrepreneurial activity: An international study. Small Business Economics, 42(4), 703–716. CrossRef
Vander-Vennet, R. (2002). Cost and profit efficiency of financial conglomerates and universal banks in Europe. Journal of Money, Credit, and Banking, 34(1), 254–282. CrossRef
Vermoesen, V., Deloof, M., & Laveren, E. (2013). Long-term debt maturity and financing constraints of SMEs during the global financial crisis. Small Business Economics, 41(2), 433–448. CrossRef
Wang, H.-J. (2003). A stochastic frontier analysis of financing constraints on investment: The case of financial liberalization in Taiwan. Journal of Business and Economic Statistics, 21, 406–419. CrossRef
Wang, H. J., & Schmidt, P. (2002). One-step and two-step estimation of the effects of exogenous variables on technical efficiency levels. Journal of Productivity Analysis, 18(2), 129–144. CrossRef
Yang, Z., Shao, S., Li, C., & Yang, L. (2020). Alleviating the misallocation of R&D inputs in China’s manufacturing sector: From the perspectives of factor-biased technological innovation and substitution elasticity. Technological Forecasting and Social Change, 151, 119878. CrossRef
Zhang, D., Zheng, W., & Ning, L. (2018). Does innovation facilitate firm survival? Evidence from Chinese high-tech firms. Economic Modelling, 75, 458–468. CrossRef
Zhang, J., Naseem, M. A., Ahmed, M. I., & Ali, R. (2021). Board independence and Chinese banking efficiency: A moderating role of ownership restructuring. Eurasian Business Review, 11(3), 517–536. CrossRef
Zorzo, L. S., Diehl, C. A., Venturini, J. C., & Zambon, E. P. (2017). The relationship between the focus on innovation and economic efficiency: A study on Brazilian electric power distribution companies. RAI Revista De Administração e Inovação, 14(3), 235–249. CrossRef
# Expected number of dice rolls for a sequence of dice rolls ending at snake eyes
If I roll a pair of dice repeatedly and stop only when I get snake eyes (both dice show 1), what is the expected number of dice rolls that will occur? I know the answer is 36, but I'm having trouble understanding why that is the answer.
• Take a look at the 'geometric distribution'. The wikipedia page has more than enough information on it for your needs. – Marc Jun 7 '15 at 22:09
• The moral is that since the probability of it happening on a single roll is $1/36$, you're expected to use on average $36$ throws to get to one. – Arthur Jun 7 '15 at 22:09
That happens because the mean of a geometric distribution with $p=\frac{1}{36}$ is exactly $\frac{1}{p}=36$.
The probability that a double one occurs at the $k$-th throw is given by: $$\mathbb{P}[X=k] = \frac{1}{36}\left(1-\frac{1}{36}\right)^{k-1},\tag{1}$$ hence: $$\mathbb{E}[X]=\sum_{k\geq 1}k\cdot\mathbb{P}[X=k]=\frac{1}{36}\sum_{k=1}^{+\infty}k\left(\frac{35}{36}\right)^{k-1},\tag{2}$$ but since for any $|x|<1$ we have: $$\sum_{k\geq 0}x^k = \frac{1}{1-x},\tag{3}$$ by differentiating both sides of $(3)$ with respect to $x$ we have: $$\sum_{k\geq 1}k x^{k-1} = \frac{1}{(1-x)^2}\tag{4}$$ so the claim follows by evaluating $(4)$ at $x=\frac{35}{36}$, which gives $\mathbb{E}[X]=\frac{1}{36}\cdot\frac{1}{\left(1-\frac{35}{36}\right)^2}=\frac{1}{36}\cdot 36^2=36$.
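As a quick empirical check, here is a small Monte Carlo sketch in Python (the sample mean fluctuates around $36$):

import random

def rolls_until_snake_eyes():
    # Count rolls of a pair of dice until both show 1.
    n = 0
    while True:
        n += 1
        if random.randint(1, 6) == 1 and random.randint(1, 6) == 1:
            return n

trials = 100000
print(sum(rolls_until_snake_eyes() for _ in range(trials)) / trials)  # ~36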
# LaTeX Error: Emergency Stop
You will see this error when LaTeX cannot find the \end{document} line, for example because the file was truncated or never saved before compiling. With certainty your problem is NOT that LaTeX requires more than one \end{document}. One user's guess was that their editor hadn't flushed the buffer one last time (which would have included the \end{document}) before attempting to compile, or that the file being in use affected the write. Even empty input crashes: echo "" | latex also stops, with a different error message. Also check whether you switched your compiler: are you using plain latex where xelatex is needed? pdflatex may additionally report "Fatal error occurred, no output PDF file produced!", which is a generic message; the real cause is the first error above it in the log.

The actual error may be well before the point where the problem is recognized. A useful general trick is to comment out half of the document and recompile; if the error is in that half, it will not show up, so you can halve repeatedly to narrow it down. When a compile stops at an interactive error prompt, try entering x a couple of times to abort cleanly. Note that you may have to run latex on any .ins or .dtx file you find to get the .sty file a package needs.

Related errors and their usual causes:

- ! LaTeX Error: Too many unprocessed floats. Use the H placement option of the float package to put a float down whether latex likes it or not.
- ! LaTeX Error: No such counter. A counter is undefined; do not use \newcounter after \begin{document}.
- Extra }, or forgotten $. There is some problem with figuring out what begins or ends what; check for typos in the array, tabular, or multicolumn environment parameters.
- Missing \endcsname inserted. Typically this occurs when you put a \ in front of an environment name.
- Misplaced alignment tab character &. Remember the special characters. To get a paragraph break, in TeX you use a blank line; the inside of an \mbox or \hbox is not in math mode unless you enclose it between $ signs; and things like \usepackage can only be used before the \begin{document} line.
- ! LaTeX Error: \verb ended by end of line. A \verb must be of the form \verb!...! and close on the same line. A missing semicolon in a tikz environment produces a similar scanning error.
- (end occurred inside a group at level nnn). Something was not properly closed when the end of the file was reached; LaTeX reached the end of file without finding the closing bracket. Fix it and run latex once more. A stray blank line inside such a group can produce several errors at once; remove the blank line and all three error messages should disappear.
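As a minimal sketch of the most common cause, compiling a file like the following, which never reaches \end{document}, aborts with the emergency stop (the commented lines show what latex prints in non-interactive mode):

\documentclass{article}
\begin{document}
This file has no end-of-document line, so latex aborts with:
% ! Emergency stop.
% *** (job aborted, no legal \end found)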
# What is the difference between mist and fog?
Fog is a form of cloud, and clouds are said to be visible moisture. We all know that mist ("baby rain") is also moisture, so I am curious: when will a METAR say BR (mist) and when FG (fog)?
I always think of BR as "barely rain" but it actually comes from the French word "Brume" meaning mist. cfidarren.com/r-metarmystery.htm – Mike Sowsun Mar 25 at 16:47
Or it could be the French word "brouillard" => fog – Antzi Mar 25 at 19:39
@Antzi Related – Pondlife Mar 26 at 15:57
how would this be off-topic? O_o – Federico Apr 11 at 12:23
I could see the question being off-topic if it stopped at the title question, but the body clearly ties in aviation – CGCampbell Apr 11 at 13:29
Quoting from the METAR decoder:
BR Mist (Foggy conditions with visibilities greater than 5/8 statute mile)
FG Fog (visibility 5/8 statute mile or less)
You may ask "Why 5/8th of a statute mile?" That's because 5/8ths of a mile is 1,006 meters, or about 1 kilometer.
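A quick arithmetic check of that conversion, as a Python one-liner:

print(5 / 8 * 1609.344)  # metres in 5/8 statute mile: 1005.84, about 1 km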
Great answer thanks. I found this: ofcm.gov/fmh-1/pdf/FMH1.pdf page 8-1 states max mist visibility is 7SM. – wbeard52 Mar 25 at 21:06
The main difference between fog and mist is their density and, as a consequence, the visibility. From the UK Met Office:
... they are two distinct terms for a similar phenomenon. Visibility less than 1,000 metres we call 'fog' and obscurity with visibility greater than 1,000 metres we call 'mist'.
This is the same definition given in the FAA Aeronautical Information Manual (page 7-1-62 in the Dec 2015 version). For visibility below 5/8 statute miles (~1000 m) the term fog is used, while for values above that, mist is used.
Image from FAA AIM, page 7-1-62
Visibility less than 1000 metres is classified as fog; visibility more than 1000 metres is classified as mist. These observations may have to be made at sea or ground level. It would be interesting to hear from aviators how visibility is classified at altitude - in particular: is a 'fog ceiling' a useful and recorded statistic? – Arif Burhan Mar 26 at 2:13
What the heck is Snow Grains? – Burhan Khalid Mar 26 at 13:06
@Burhan Khalid. Snow Grains are precipitation of very small, white, and opaque grains of ice. – wbeard52 Mar 26 at 16:55
# Glowing path in tikz
I want to draw in tikz a line that diffuses, and a dashed path that is basically the contour of this diffused area. I think that in Photoshop the diffusing effect can be done with a blurred stroke.
Is this possible?
Here is a very badly hand-drawn example:
## EDIT
Thanks to the help of @percusse and @Alenanno I was able to produce almost exactly what I wanted. The only thing missing is a dashed line around the shaded area instead of the full gray line that I hacked in.
Code here:
\documentclass[margin=10pt]{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}[>=latex]
\def\lc{0.6} %Transition: 5-linear %0-infinitely sharp
\def\psz{1.0} %Potato size
\def\smss{0.92} %Smoothness: number of colors=1/(1-\smss)
%Potato
\newcommand{\potato}[1]{(0*#1,0*#1)(10*#1,-1*#1)(12*#1,10*#1)(4*#1,11*#1)(-1*#1,7*#1)}
%Crack path
\def\Ax{0} \def\Ay{0}
\def\Bx{1.5} \def\By{2}
\def\Cx{4.5} \def\Cy{4}
\def\Dx{6} \def\Dy{6}
\def\Ex{6} \def\Ey{4}
\def\Fx{8} \def\Fy{3}
\def\linepath{(\Ax,\Ay)(\Bx,\By)(\Cx,\Cy)(\Dx,\Dy)}
\def\linepathnb{(\Cx,\Cy)(\Ex,\Ey)(\Fx,\Fy)}
\begin{scope}
\clip plot [smooth cycle, tension =0.8] coordinates{ \potato{\psz}};
\draw[line cap=round,line width=105pt, black!20] plot [smooth,tension=1] coordinates{ \linepath};
\draw[line cap=round,line width=105pt, black!20] plot [smooth,tension=1] coordinates{ \linepathnb};
\foreach \x[evaluate={\xc=90*(exp(-\x/\lc)-0.9*exp(-1/\lc)*\x);}] in {1,\smss,...,0}{
\draw[line cap=round,line width=\x*100pt,black,draw=black!\xc] plot [smooth,tension=1] coordinates{ \linepath};
\draw[line cap=round,line width=\x*100pt,black,draw=black!\xc] plot [smooth,tension=1] coordinates{ \linepathnb};
}
\draw[line cap=round,line width=2pt, black] plot [smooth,tension=1] coordinates{ \linepath};
\draw[line cap=round,line width=2pt, black] plot [smooth,tension=1] coordinates{ \linepathnb};
\end{scope}
\draw [line width=5pt] plot [smooth cycle, tension =0.8] coordinates{ \potato{\psz}};
\end{tikzpicture}
\end{document}
• Take a look at the fadings and shadings libraries. What do you have so far? If you post your own attempt, you are more likely to get answers. I don't think you can blur a line in TikZ. You most probably need to use a fill. – cfr May 13 '15 at 2:03
• I think cfr means: tell us what kind of approach you have, then we could find the answer. For example, is the fading changing with data, or with a function? Furthermore, in tikz, lines are solid, and only an area filled with color could fade. – selwyndd21 May 13 '15 at 3:28
• Glowing path = Laser beam ? tex.stackexchange.com/a/80207/14500 ... – Paul Gaborit May 15 '15 at 23:14
• @PaulGaborit Ah, I completely missed your answer. Sorry about that. – percusse May 16 '15 at 13:55
You can use the idea behind the pgf-blur package (which originated from the question Reuse of soft path in fading declaration? Transformation of fadings?) that it uses for smooth shadows: draw over and over while changing the color (or opacity, as you wish) and the shape size (or line width for paths). The step size can be changed to make it smoother.
I cooked up a formula that looked OK to my eyes but you can go more rigorous for the decay rate and so on.
\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
\foreach \x[evaluate={\xc=0.5*100*ln(10/\x);}] in {10,9.9,...,1}{
\draw[line cap=round,line width=\x*1pt,draw=black!\xc]
(0,0) arc (0:30:1 and 2) to[bend left] (3,2) arc (0:250:1 and 1) -- cycle;
}
\end{tikzpicture}
\end{document}
• Oooh, so there is a package. I was trying to adapt another solution but it was harder than expected. – Alenanno May 15 '15 at 21:03
As far as I know, you cannot shade a path the way you have it in your question. You could use a path to clip a shaded shape beneath it, but it wouldn't achieve your result because it's curved and the shading follows the curve.
However, you can blur the line elsewhere and include it as an external image. If it's your only instance of such a graphic, then it's not that time-consuming; but if you have multiple types throughout a document, then you might do the dashed line in an external program (it's easy to select a blurred shape and apply a contour).
If you did the dashed curve using a graphics editing program along the contour of a blurred stroke, then it would be much more precise than the example below. After that, you would only need to include it in a simple shape in Tikz.
This solution uses a tweaked version of JLDiaz's solution.
## Code
\documentclass[margin=10pt]{standalone}
\usepackage{tikz}
\usepackage{graphicx}
\usetikzlibrary{shapes,calc}
\newcommand\fillshape[3]{ % #1 = shape, #2 = filename of texture, #3 = includegraphics options
\begin{scope}
\clip #1;
\node[yshift=-2.6em] {\includegraphics[#3]{#2}};
\draw[densely dashed,line width=.2pt] (.43,-1.5)
to[out=77,in=255,looseness=1.6,yshift=1.3] (.1,-.2)
to[out=65,in=69,looseness=1.7] (-.45,.05)
to[out=249,in=95,looseness=.7] (-.55,-.6)
to[out=275,in=70,looseness=1.1,yshift=1.3] (-.15,-1.6);
\end{scope}
\draw[line width=.5pt] #1;
}
\begin{document}
\begin{tikzpicture}
\fillshape{(0,0) circle (1.5cm)}{line}{width=2cm};
\end{tikzpicture}
\end{document}
• Thank you. I agree that using an external program will probably be the easiest way. I just wanted to try to have something vectorized because I'm going to include this in my thesis. If no one figures out a way of doing this fully on tikz I'll accept your answer. Thanks again. – Miguel May 14 '15 at 15:22
## Problem 2-47 from Thermodynamics: Concepts and Applications
Air cools an electronics compartment by entering at 60 F and leaving at 105 F. The pressure is essentially constant at 14.7 psia. Determine (a) the change in internal energy of the air as it flows through the compartment and (b) the change in specific volume. Also provide a sketch of the system and any formulas you may use.
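Here is a minimal sketch of the requested calculation in Python, assuming air behaves as an ideal gas with constant specific heats; the property values below (cv and R for air) are standard approximations, not taken from the textbook's tables:

# (a) du = cv*dT for an ideal gas; (b) dv from v = R*T/P at constant pressure.
CV = 0.171            # Btu/(lbm*R), approximate cv of air near room temperature
R_AIR = 53.35         # ft*lbf/(lbm*R), gas constant of air
P = 14.7 * 144.0      # 14.7 psia converted to lbf/ft^2
T1 = 60.0 + 459.67    # entering temperature, Rankine
T2 = 105.0 + 459.67   # leaving temperature, Rankine
du = CV * (T2 - T1)         # about 7.7 Btu/lbm
dv = R_AIR * (T2 - T1) / P  # about 1.13 ft^3/lbm
print(du, dv)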
# Tag Info
It is a high pressure seal and bearing unit. The metal assembly to the right with the yellow parts is the drive coupling. After Sam’s comment and looking at that part again, there are 3 distinct metal parts - the middle one with the holes is aluminium, while the outer two are the outer races of the two bearings. Two bearings are often used due to either ...
The image shows what is often known as a zerk fitting, aka a grease fitting. The ball seals the passage from the outside to the internals which require grease. A grease gun with a properly sized connector will snap onto the zerk fitting. The gun is used to pump grease under pressure to the area described in the image as "no space, grinds," likely ...
The name Rolamite was given to the concept of a flexible metal strip wrapped around rollers, to form a linear bearing. Not pictured in your image, there would be an outer casing which fixes the ends of the strip and prevents the rollers from moving perpendicularly to the length of the strip. And, if one or more of the rollers are to be attached to anything ...
The impulse causes lateral acceleration of the rod and rotation of the rod. $$P=m\frac{dv}{dt} \quad \text{for lateral acceleration}$$ $$P=I\frac{d\omega}{dt} \quad \text{for rotational acceleration}$$ We calculate the net acceleration for each point along the length of the rod and after multiplying that by the density of the rod we get the forces' acting ...
Combined thrust bearing and shaft seal / stuffing box. Yes, it's really called that. The nipple on top is to inject lubricant which has dual function of preventing water ingress to engine compartment. The thrust unit forward loads cannot be transferred to drive shaft / engine without: damage or serious design compromise The thrust bearing transfers these ...
You have to show two reference lines for tolerance measurements. One is the longitudinal axis of the linked parts. The other one is the line through the center of the pin as shown below. Does this answer your question?
You have to remember that you can control independently (or with a fixed gear ratio), the rotational velocity for each roll. So, in order to decrease the tension, what you do is you decrease the rolling speed progressively in the output zone. A very similar concept which is under the same restrictions, is the belt pulley system. Below it shows the tension ...
Kevin Reid answered the question, and I've accepted it-thanks, Kevin! This answer is to post a copy of relevant parts of the original article that had become rather blurry in my memory. I'll probably post other related info as I dig it up in order to catch the best selection of search keywords possible in case anyone else ever looks for sliding mechanical ...
Gimbals are the classic solution for this: https://en.wikipedia.org/wiki/Gimbal And if you don't need any roll, you can omit one of the gimbal rings, in which case a pan/tilt mechanism is sufficient (often sold for photography.) You might look for a gimballed gyroscope toy and start with that. (Flight quality gimbals are expensive.)
Perhaps a "turntable bearing" also called a "lazy susan" bearing. Something like this one: https://www.amazon.com/Square-Inch-Susan-Turntable-Bearing/dp/B00ZSQSWTM
Look for overrunning clutch or power transmission elements. You may find a lot of info in Ringspann site: Ringspann power transmission section Or here: Ringspann catalogue
# obspy.segy - SEG Y and SU read and write support for ObsPy
The obspy.segy package contains methods in order to read and write files in the SEG Y (rev. 1) and SU (Seismic Unix) format.
Copyright: The ObsPy Development Team (devs@obspy.org). License: GNU Lesser General Public License, Version 3 (http://www.gnu.org/copyleft/lesser.html).
Note
The module can currently read files that are in accordance to the SEG Y rev. 1 specification but has been designed to be able to handle custom headers as well. This functionality is not yet exposed because the developers have no files to test it with. If you have access to some files with custom headers please consider sending them to devs@obspy.org.
The SEG Y and Seismic Unix (SU) file formats are quite different from the file formats usually used in observatories (GSE2, MiniSEED, ...). The Stream/Trace structures of ObsPy are therefore not fully suited to handle them. Nonetheless they work well enough if some potential problems are kept in mind.
SEG Y files can be read in three different ways that have different advantages/disadvantages. Most of the following also applies to SU files with some changes (keep in mind that SU files have no file wide headers).
1. Using the standard read() function.
2. Using the obspy.segy specific obspy.segy.core.readSEGY() function.
3. Using the internal obspy.segy.segy.readSEGY() function.
### Reading using methods 1 and 2
The first two methods will return a Stream object and they are identical except that the file wide SEGY headers are only accessible if method 2 is used. These headers are stored in Stream.stats.
The obvious advantage of these methods is that the returned Stream object interfaces very well with other functionality provided by ObsPy (file format conversion, filtering, ...).
Due to the fact that a single SEG Y file can contain several tens of thousands of traces and each trace will be a Trace instance which in turn will contain other objects these methods are quite slow and memory intensive.
To somewhat rectify this issue all SEG Y specific trace header attributes are only unpacked on demand by default.
>>> from obspy.segy.core import readSEGY
>>> from obspy.core.util import getExampleFile
>>> # or 'from obspy import read' if file wide headers are of no interest
>>> filename = getExampleFile("00001034.sgy_first_trace")
>>> st = readSEGY(filename)
>>> st
<obspy.core.stream.Stream object at 0x...>
>>> print(st)
1 Trace(s) in Stream:
Seq. No. in line: 1 | 2009-06-22T14:47:37.000000Z - 2009-06-22T14:47:41...
SEG Y files contain a large amount of additional trace header fields which are not unpacked by default. However these values can be accessed by calling the header key directly or by using the unpack_trace_headers keyword with the read()/ readSEGY() functions to unpack all header fields.
>>> st1 = readSEGY(filename)
>>> len(st1[0].stats.segy.trace_header) # Headers are unpacked on demand.
8
>>> st1[0].stats.segy.trace_header.data_use # Unpacking a value on the fly.
1
>>> len(st1[0].stats.segy.trace_header) # This value will remain unpacked.
9
>>> st2 = readSEGY(filename, unpack_trace_headers=True)
>>> len(st2[0].stats.segy.trace_header) # All header values are unpacked.
92
Reading SEG Y files with unpack_trace_headers=True will be very slow and memory intensive for a large number of traces due to the huge number of objects created.
### Reading using method 3
The internal reading method is much faster and less of a memory hog but does not return a Stream object. Instead it returns a SEGYFile object which is somewhat similar to the Stream object used in ObsPy but specific to segy.
>>> from obspy.segy.segy import readSEGY
>>> segy = readSEGY(filename)
>>> segy
<obspy.segy.segy.SEGYFile object at 0x...>
>>> print(segy)
1 traces in the SEG Y structure.
The traces are a list of SEGYTrace objects stored in segy.traces. The trace header values are stored in trace.header as a SEGYTraceHeader object.
By default these header values will not be unpacked and thus will not show up in ipython’s tab completion. See obspy.segy.header.TRACE_HEADER_FORMAT (source) for a list of all available trace header attributes. They will be unpacked on the fly if they are accessed as class attributes.
By default trace data are read into memory, but this may be impractical for very large datasets. To skip loading data into memory, read SEG Y files with headonly=True. The data class attribute will not show up in ipython’s tab completion, but data are read directly from the disk when it is accessed:
>>> from obspy.segy.segy import readSEGY
>>> segy = readSEGY(filename, headonly=True)
>>> print(len(segy.traces[0].data)) # Data are read from disk on access.
2001
## Writing
### Writing ObsPy Stream objects
Writing Stream objects is done in the usual way.
>>> st.write('file.segy', format='SEGY')
or
>>> st.write('file.su', format='SU')
It is possible to control the data encoding, the byte order and the textual header encoding of the final file either via the file wide stats object (see sample code below) or directly via the write method. Possible values and their meaning are documented here: writeSEGY()
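For example, directly via the write method (a sketch; here data_encoding=1 selects 4-byte IBM floating point and '>' forces big-endian byte order - check writeSEGY() for the full list of supported values):

>>> st.write('file.segy', format='SEGY', data_encoding=1, byteorder='>')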
### Writing SEGYFile objects
SEGYFile objects are written using its write() method. Optional kwargs are able to enforce the data encoding and the byte order.
>>> segy.write('file.segy')
### Converting other file formats to SEG Y
SEGY files are sensitive to their headers and wrong headers might break them.
If some or all headers are missing, obspy.segy will attempt to autogenerate them and fill them with somehow meaningful values. It is a wise idea to manually check the headers because some other programs might use them and misinterpret the data. Most header values will be 0 nonetheless.
One possibility to get valid headers for files to be converted is to read one correct SEG Y file and use its headers.
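A minimal sketch of that approach (the file names here are hypothetical):

>>> from obspy.segy.core import readSEGY
>>> template = readSEGY('file_with_good_headers.sgy')
>>> target = readSEGY('file_needing_headers.sgy')
>>> target.stats = template.stats  # reuse the file wide headers
>>> for tr_target, tr_template in zip(target, template):
...     tr_target.stats.segy = tr_template.stats.segy  # reuse trace headers
>>> target.write('fixed.sgy', format='SEGY')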
The other possibility is to autogenerate the headers with the help of ObsPy and a potential manual review of them which is demonstrated in the following script:
# (Python 2 syntax, matching the original ObsPy example.)
from obspy import read, Trace, Stream, UTCDateTime
from obspy.core import AttribDict
from obspy.segy.segy import SEGYTraceHeader, SEGYBinaryFileHeader
from obspy.segy.core import readSEGY
import numpy as np
import sys

stream = Stream()

for _i in xrange(3):
    # Create some random data.
    data = np.random.ranf(1000)
    data = np.require(data, dtype=np.float32)
    trace = Trace(data=data)

    # Attributes in trace.stats will overwrite everything in
    # trace.stats.segy.trace_header.
    trace.stats.delta = 0.01
    # SEGY does not support microsecond precision! Any microseconds will
    # be discarded.
    trace.stats.starttime = UTCDateTime(2011, 11, 11, 11, 11, 11)

    # If you want to set some additional attributes in the trace header,
    # add one and only set the attributes you want to be set. Otherwise the
    # header will be created for you with default values.
    if not hasattr(trace.stats, 'segy.trace_header'):
        trace.stats.segy = {}
    trace.stats.segy.trace_header = SEGYTraceHeader()
    trace.stats.segy.trace_header.trace_sequence_number_within_line = _i + 1

    # Add trace to stream.
    stream.append(trace)

# A SEGY file has file wide headers. This can be attached to the stream
# object. If these are not set, they will be autocreated with default
# values.
stream.stats = AttribDict()
stream.stats.textual_file_header = 'Textual Header!'
stream.stats.binary_file_header = SEGYBinaryFileHeader()
stream.stats.binary_file_header.trace_sorting_code = 5

print "Stream object before writing..."
print stream

stream.write("TEST.sgy", format="SEGY", data_encoding=1,
             byteorder=sys.byteorder)

print "Stream object after writing. Will have some segy attributes..."
print stream

print "Reading using obspy.segy..."
st1 = readSEGY("TEST.sgy")
print st1

print "Reading using obspy.core..."
st2 = read("TEST.sgy")
print st2
# LaTeX/Print version
Permission is granted to copy, distribute, and/or modify this document under the terms of the Creative Commons Attribution-ShareAlike 3.0 Unported License.
# Contents
If you have saved this file to your computer, click on a link in the contents to go to that section.
Getting Started
Common Elements
Mechanics
Technical Texts
Special Pages
Special Documents
Creating Graphics
Programming
Miscellaneous
Help and Recommendations
Appendices
# Introduction
## What is TeX?
TeX is a language created by Donald Knuth to typeset documents attractively and consistently. Knuth started writing the TeX typesetting engine in 1977 to explore the potential of the digital printing equipment that was beginning to infiltrate the publishing industry at that time, in the hope that he could reverse the trend of deteriorating typographical quality that he saw affecting his own books and articles. While TeX is a programming language in the sense that it is Turing complete, its main job is to serve as a markup language for describing how your document should look. The fine control TeX offers over document structure and formatting makes it a powerful and formidable tool. TeX is renowned for being extremely stable, for running on many different kinds of computers, and for being virtually bug free. The version numbers of TeX are converging toward the mathematical constant $\pi$ , with the current version number being 3.1415926.
The name TeX is intended by its developer to be /'tɛx/, /x/ being the velar fricative, the final consonant of loch and Bach. (Donald E. Knuth, The TeXbook) The letters of the name are meant to represent the capital Greek letters tau, epsilon, and chi, as TeX is an abbreviation of τέχνη (ΤΕΧΝΗ – technē), Greek for both "art" and "craft", which is also the root word of technical. English speakers often pronounce it /'tɛk/, like the first syllable of technical.
The tools TeX offers "out of the box" are relatively primitive, and learning how to perform common tasks can require a significant time investment. Fortunately, document preparation systems based on TeX, consisting of collections of pre-built commands and macros, do exist. These systems save time by automating certain repetitive tasks; however, this convenience comes at the cost of complete design flexibility. One of the most popular macro packages is called LaTeX.
## What is LaTeX?
LaTeX (pronounced either "Lah-tech" or "Lay-tech") is a set of macros for TeX created by Leslie Lamport. Its purpose is to simplify TeX typesetting, especially for documents containing mathematical formulae. Within the typesetting system, its name is formatted as LaTeX.
TeX is both a typographical and a logical markup language, and a writer has to take account of both aspects when preparing a TeX document. Lamport's aim in creating LaTeX was to separate those two concerns: a typesetter can prepare a template, and writers can then focus purely on the logical LaTeX markup without needing to know anything about typesetting.
In addition to the commands and options LaTeX offers, many other authors have contributed extensions, called packages or styles, which you can use for your documents. Many of these are bundled with most TeX/LaTeX software distributions; more can be found in the Comprehensive TeX Archive Network (CTAN).
## Why should I use LaTeX?
Most readers will be familiar with WYSIWYG (What You See Is What You Get) typesetting systems such as LibreOffice Writer, Microsoft Word, or Google Docs. Using LaTeX is fundamentally different from using these other programs—instead of seeing your document as it comes together, you describe how you want it to look using commands in a text file, then run that file through the LaTeX program to build the result. While this has the disadvantage of needing to pause your work and take multiple steps to see what your document looks like, there are many advantages to using LaTeX:
• You can concentrate purely on the structure and contents of the document. LaTeX will automatically ensure that the typography of your document—fonts, text sizes, line heights, and other layout considerations—are consistent according to the rules you set.
• In LaTeX, the document structure is visible to the user, and can be easily copied to another document. In WYSIWYG applications it is often not obvious how a certain formatting was produced, and it might be impossible to copy it directly for use in another document.
• Indexes, footnotes, citations and references are generated easily and automatically.
• Mathematical formulae can be easily typeset. (Quality mathematics was one of the original motivations of TeX.)
• Since the document source is plain text:
• Document sources can be read and understood with any text editor, unlike the complex binary and XML formats used with WYSIWYG programs.
• Tables, figures, equations, etc. can be generated programmatically with any language.
• Changes can be easily tracked with version control software.
• Some academic journals only accept or strongly recommend submissions in the form of LaTeX documents. Publishers offer LaTeX templates.
When the source file is processed by the LaTeX program, or engine, it can produce documents in several formats. LaTeX natively supports DVI and PDF, but by using other software you can easily create PostScript, PNG, JPEG, etc.
## Terms regarding TeX
Document preparation systems
LaTeX is a document preparation system based on TeX. So the system is the combination of the language and the macros.
Distributions
TeX distributions are collections of packages and programs (compilers, fonts, and macro packages) that enable you to typeset without having to manually fetch files and configure things.
Engines
An engine is an executable that can turn your source code into a printable output format. The engine by itself only handles the syntax. It also needs to load fonts and macros to fully understand the source code and generate output properly. The engine will determine what kind of source code it can read, and what format it can output (usually DVI or PDF).
All in all, distributions are an easy way to install what you need to use the engines and the systems you want. Distributions usually target specific operating systems. You can use different systems on different engines, but sometimes there are restrictions. Code written for TeX, LaTeX or ConTeXt is (mostly) not compatible between systems. Additionally, engine-specific code (like font loading for XeTeX) may not compile on every engine.
When searching for information on LaTeX, you might also stumble upon XeTeX, ConTeXt, LuaTeX or other names with a -TeX suffix. Let's recap most of the terms in this table.
Systems Descriptions
AMSTeX A legacy TeX macro-based document preparation system used by the American Mathematical Society (AMS) from 1982 to 1985. It evolved into the AMS-LaTeX collection, which includes the amsmath package used in nearly every LaTeX document as well as multiple AMS publication layout standards (document classes).
ConTeXt A TeX macro-based document preparation system designed by Hans Hagen and Ton Otten of Pragma ADE in the Netherlands around 1991. It is compatible with the pdfTeX, XeTeX and LuaTeX engines.
ConTeXt assumes the content author (writer of the document’s text) and the style author (designer of the document’s layout and appearance) are the same. It has a consistent and easy to understand syntax that provides the author with the tools and freedom necessary to produce a document with any desired layout. In cases where there are no standards to follow, ConTeXt provides creative freedom at the expense of required additional effort. ConTeXt excels at producing high-quality works with creative flair, such as textbooks and literature with artistically distinctive layouts.
LaTeX A TeX macro-based document preparation system designed by Leslie Lamport.
LaTeX assumes the content author and style author are different people. This allows authors (researchers, students, etc.) to concentrate on content and forget about design while allowing publishers (journals, graduate departments, etc.) to enforce institutional standards. Separation of content and design comes with the costs of package management, a less consistent syntax, and added complexity (compared to ConTeXt) if an author wishes to deviate from the layout designer's specification (documentclass). LaTeX excels at producing high-quality academic documents that conform to publication requirements, such as journal articles and theses.
MetaFont A high-quality font system designed by Donald Knuth along with TeX.
MetaPost A descriptive vector graphics language based on MetaFont.
TeX The original language designed by Donald Knuth.
Texinfo A TeX macro-based document preparation system designed by Richard Stallman that specializes in producing technical documentation (software manuals).
Engines Descriptions
xetex, xelatex A TeX engine which supports Unicode input and .ttf and .otf fonts. See Fonts.
luatex, lualatex A TeX engine with embedded Lua support, aiming at making TeX internals more flexible. Like XeTeX, supports Unicode input and modern font files.
pdftex, pdflatex Generates PDF output.
tex, latex The "original" TeX engine. Generates DVI output.
TeX Distributions Descriptions
MacTeX A TeX Live based distribution targeting Mac OS X.
MiKTeX A TeX distribution for Windows.
TeX Live A cross-platform TeX distribution.
## What's next?
In the next chapter we discuss installing LaTeX on your system. Then we will typeset our first LaTeX file.
## Learning more
One of the most frustrating things beginners and even advanced users might encounter using LaTeX is the difficulty of changing the look of your documents. While WYSIWYG programs make it trivial to change fonts and layouts, LaTeX requires you to learn new commands and packages to do so. Subsequent chapters will cover many common use cases, but know that this book is only scratching the surface.
Since they come from a community of typography enthusiasts, most LaTeX packages ship with excellent documentation. This should be your first step if you have questions—if a package's manual has not been installed on your machine as part of your TeX distribution, it can be found on CTAN.
Other useful resources include:
# Installation
If this is the first time you are trying out LaTeX, you don't even need to install anything. For quick testing purposes you may just create a user account with an online LaTeX editor such as Overleaf, and continue this tutorial in the next chapter. These websites offer collaborative editing capabilities while allowing you to experiment with LaTeX syntax — without having to bother with installing and configuring a distribution and an editor. When you later feel that you would benefit from having a standalone LaTeX installation, you can return to this chapter and follow the instructions below.
LaTeX is not a program by itself; it is a document preparation system along with a language. Using LaTeX requires a series of tools. Acquiring them manually would mean downloading and installing multiple programs in order to have a computer system suitable for creating LaTeX output, such as PDFs. TeX distributions help here: a single installation process provides (almost) everything you need.
At a minimum, you'll need a TeX distribution, a good text editor and a DVI or PDF viewer. More specifically, the basic requirement is to have a TeX compiler (which is used to generate output files from source), fonts, and the LaTeX macro set. Optional, and recommended installations include an attractive editor to write LaTeX source documents (this is probably where you will spend most of your time), and a bibliographic management program to manage references if you use them a lot.
## Distributions
TeX and LaTeX are available for most computer platforms, since they were programmed to be very portable. They are most commonly installed using a distribution, such as TeX Live, MiKTeX, or MacTeX. TeX distributions are collections of packages and programs (compilers, fonts, and macro packages) that enable you to typeset without having to manually fetch files and configure things. LaTeX is just a set of macro packages built for TeX.
The recommended distributions for each of the major operating systems are:
• TeX Live is a major TeX distribution for *BSD, GNU/Linux, Mac OS X and Windows.
• MiKTeX is a Windows-specific distribution.
• MacTeX is a Mac OS-specific distribution based on TeX Live.
These, however, do not necessarily include an editor. You might be interested in other programs that are not part of the distribution, which will help you in writing and preparing TeX and LaTeX files.
### *BSD and GNU/Linux
In the past, the most common distribution used to be teTeX. As of May 2006 teTeX is no longer actively maintained and its former maintainer Thomas Esser recommended TeX Live as the replacement.[1]
The easy way to get TeX Live is to use the package manager or portage tree coming with your operating system. Usually it comes as several packages, with some of them being essential and others optional. The core TeX Live packages should be around 200-300 MB.
If your *BSD or GNU/Linux distribution does not have the TeX Live packages, you should report a wish to the bug tracking system. In that case you will need to download TeX Live yourself and run the installer by hand.
You may wish to install the content of TeX Live more selectively. See below.
### Mac OS X
Mac OS X users may use MacTeX, a TeX Live-based distribution supporting TeX, LaTeX, AMSTeX, ConTeXt, XeTeX and many other core packages. Download MacTeX.pkg on the MacTeX page, unzip it and follow the instructions. Further information for Mac OS X users can be found on the TeX on Mac OS X Wiki.
Since Mac OS X is also a Unix-based system, TeX Live is naturally available through MacPorts and Fink. Homebrew users should use the official MacTeX installer because of the unique directory structure used by TeX Live.
### Microsoft Windows
Microsoft Windows users can install MiKTeX onto their computer. It has an easy installer that takes care of setting up the environment and downloading core packages. Both the basic and the complete LaTeX systems are provided, with the distribution offering advanced features such as automatic installation of packages and simple interfaces to modify settings (e.g., default paper sizes).[2]
There is also a port of TeX Live available for Windows. For more, see TeX Live on Windows.
## Custom installation with TeX Live
This section targets users who want fine-grained control over their TeX distribution, like an installation with a minimum of disk space usage. If not needed, the user may feel free to jump to the next section.
Picky users may wish to have more control over their installation. Common distributions might be tedious for the user who cares about disk space. In fact, MiKTeX, MacTeX, and packaged TeX Live feature hundreds of LaTeX packages, most of which you will never use. Most Unix systems with a package manager will offer TeX Live as a set of several big packages, and you often have to install 300–400 MB for a functional system.
TeX Live features a manual installation with a lot of possible customizations. You can get the network installer at tug.org. This installer allows you to select precisely the packages you want to install. As a result, you may have everything you need for less than 100 MB. TeX Live is then managed through its own package manager, tlmgr. It will let you configure the distributions, install or remove extra packages and so on.
You will need a Unix-based operating system for the following. Mac OS X, GNU/Linux or *BSD are fine. It may work on Windows, but the process will be quite different.
TeX Live groups features and packages into different concepts:
• Collections are groups of packages that can always be installed individually, except for the Essential programs and files collection. You can install collections at any time.
• Installation Schemes group collections and packages. Schemes can only be used at installation time. You can select only one scheme at a time.
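For instance, once the base system is installed you can add a whole collection in one go with tlmgr (the collection name below is just one example; tlmgr info collections lists the available ones):

tlmgr install collection-latexrecommended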
### Minimal installation
We will give you general guidelines to install a minimal TeX distribution (i.e., only for plain TeX).
1. Download the installer at http://mirror.ctan.org/systems/texlive/tlnet/install-tl-unx.tar.gz and extract it to a temporary folder.
2. Open a terminal in the extracted folder and log in as root.
# umask 022
3. Launch install-tl.
4. Select the minimal scheme (plain only).
5. You may want to change the directory options. For example you may want to hide your personal macro folder, which is located at TEXMFHOME. It is ~/texmf by default. Replace it by ~/.texmf to hide it.
6. Now the options:
1. use letter size instead of A4 by default: mostly for users from the USA.
2. allow execution of restricted list of programs via \write18: it is recommended to select it for security reasons. Otherwise it allows the TeX engines to call any external program. You may still configure the list afterwards.
3. create all format files: when targeting minimal disk space, the best choice depends on whether there is only one user on the system; if so, deselecting it is better, otherwise select it. From the help menu: "If this option is set, format files are created for system-wide use by the installer. Otherwise they will be created automatically when needed. In the latter case format files are stored in user's directory trees and in some cases have to be re-created when new packages are installed."
4. install macro/font doc tree: useful if you are a developer, but very space consuming. Turn it off if you want to save space.
5. install macro/font source tree: same as above.
6. create symlinks to standard directories: symlinks are fine by default, change it if you know what you are doing.
7. Select portable installation if you install the distribution to an optical disc, or any kind of external media. Leave the default for a traditional installation on the system hard drive.
At this point it should display
1 collections out of 85, disk space required: 40 MB
or a similar space usage.
You can now proceed to installation: start installation to hard disk.
Don't forget to add the binaries to your PATH, as noted at the end of the installation procedure.
### First test
In a terminal write
$ tex '\empty Hello world!\bye'
$ pdftex '\empty Hello world!\bye'
You should get a DVI or a PDF file accordingly.
### Configuration
Formerly, TeX distributions used to be configured with the texconfig tool from the teTeX distribution. TeX Live still features this tool, but recommends using its own tool instead: tlmgr. Tlmgr’s functionality completely subsumes texconfig.[1]
List current installation options:
tlmgr option
You can change the install options:
tlmgr option srcfiles 1
tlmgr option docfiles 0
tlmgr paper letter
See the TLMGR(1) man page for more details on its usage. If you did not install the documents as told previously, you can still access the tlmgr man page with
tlmgr help
### Installing LaTeX
Now that we have a running plain TeX environment, let's install the base packages for LaTeX.
# tlmgr install latex latex-bin latexconfig latex-fonts
In fact you can omit latexconfig and latex-fonts, as they are dependencies of latex and will be resolved automatically. Note that tlmgr resolves some dependencies, but not all; you may need to install other dependencies manually. Thankfully this is rarely too cumbersome.
Other interesting packages:
# tlmgr install amsmath babel carlisle ec geometry graphics hyperref lm marvosym oberdiek parskip graphics-def url
amsmath The essentials for math typesetting.
babel Internationalization support.
carlisle Bundle package required for some babel features.
ec Required for T1 encoding.
geometry For page layout.
graphics The essentials to import graphics.
htlatex Includes TeX4ht, used for (La)TeX to HTML (and XML and more) conversion.
hyperref PDF bookmarks, PDF followable links, link style, TOC links, etc.
lm One of the best Computer Modern style fonts available, for several font encodings (such as T1).
marvosym Several symbols, such as the official euro.
oberdiek Bundle package required for some geometry features.
parskip Lets you configure paragraph breaks and indents properly.
graphics-def Required for some graphics features.
url Required for some hyperref features.
If you installed a package you do not need anymore, use
# tlmgr remove <package>
### Hyphenation
If you are using Babel for non-English documents, you need to install the hyphenation patterns for every language you are going to use. They are all packaged individually. For instance, use
# tlmgr install hyphen-{finnish,sanskrit}
for the Finnish and Sanskrit hyphenation patterns.
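Once a pattern is installed, it is picked up when you select the corresponding language through Babel in your document, for example:

\usepackage[finnish]{babel}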
Note that if you have been using another TeX distribution beforehand, you may still have a hyphenation cache stored in your home folder. You need to remove it so that the new packages are taken into account. The TeX Live cache is usually stored in the ~/.texliveYYYY folder (YYYY stands for the year). You may safely remove this folder as it contains only generated data. TeX compilers will re-generate the cache accordingly on the next compilation.
### Uninstallation
By default TeX Live will install in /usr/local/texlive. The distribution is self-contained: it will not write any files outside its own folder, except for the caches (font cache, hyphenation patterns, etc.). By default,
• the system cache goes in /var/lib/texmf;
• the user cache goes in ~/.texliveYYYY.
Therefore TeX Live can be installed and uninstalled safely by removing the aforementioned folders.
Still, TeX Live provides a more convenient way to do this:
# tlmgr uninstall
You may still have to wipe out the folders if you put untracked files in them.
## Editors
TeX and LaTeX source documents (and their related auxiliary files) are all plain-text files, and can be opened and modified in almost any text editor. You should use a text editor (e.g. Notepad), not a word processor (e.g., Microsoft Word, LibreOffice Writer). Dedicated LaTeX editors are more useful than generic plain-text editors, because they usually provide autocompletion for commands, spell and error checking, and other handy macros.
Note
Microsoft Word can accept LaTeX through Equation Editor, but it is not a full-fledged LaTeX editor.
### Cross-platform
#### Emacs
Emacs is a general purpose, extensible text processing system. Advanced users can program it (in elisp) to make Emacs the best LaTeX environment that will fit their needs. In turn beginners may prefer to use it in combination with AUCTeX and Reftex (extensions that may be installed into the Emacs program). Depending on the configuration, Emacs can provide a complete LaTeX editing environment with auto-completion, spell-checking, a complete set of keyboard shortcuts, view of table of contents, document preview and many other features.
#### gedit-latex-plugin
Gedit with gedit-latex-plugin is also worth trying out for users of GNOME. Gedit is a cross-platform application available for Windows, Mac, and Linux.
#### Gummi
Screenshot of Gummi.
Gummi is a LaTeX editor for Linux, which compiles the output of pdflatex in real-time and shows it on the right half of the screen[3].
#### LyX
Screenshot of LyX 1.6.3.
LyX is a popular document preparation system for Windows, Linux and Mac OS. It provides a graphical interface to LaTeX, including several popular packages. It contains formula and table editors and shows visual clues of the final document on the screen — which enables users to write LaTeX documents without worrying about the actual syntax. LyX calls this a What You See Is What You Mean (WYSIWYM) approach, since the screen only shows the structure and an approximation of the output.[4]
LyX saves a document in its own markup, from which LaTeX code can then be generated. The user is mostly isolated from the LaTeX code and is not in complete control of it, and for that reason LyX is generally not considered as a proper LaTeX editor. However, since it uses LaTeX as its underlying system, knowledge of how LaTeX works can also be useful to a LyX user. In addition, if one wants to implement a feature that is not supported in the GUI, then the use of LaTeX code may be required.
#### TeXmaker
TeXmaker is a cross-platform editor that is very similar to Kile in both features and user interface. It is also equipped with its own PDF viewer.
#### TeXstudio
TeXstudio is a cross-platform open source LaTeX editor forked from Texmaker.
#### TeXworks
Screenshot of TeXworks on Ubuntu 12.10.
TeXworks is a dedicated TeX editor that is included in MiKTeX and TeX Live. It was developed with the idea that a simple interface is better than a cluttered one, and thus to make it easier for the beginners of LaTeX to write their own documents. TeXworks originally came about precisely because a math professor wanted his students to have a better initial experience with LaTeX.
You can install TeXworks with the package manager of your Linux distribution or choose it as an install option in the Windows or Mac installer.
#### Vim
Vim is another general purpose text editor for a wide variety of platforms including UNIX, Mac OS X and Windows. A variety of extensions exist including LaTeX Box and Vim-LaTeX.
### *BSD and GNU/Linux-only
#### Kile
Screenshot of Kile.
Kile is a LaTeX editor for KDE (cross-platform), providing a powerful GUI for editing multiple documents and compiling them with many different TeX compilers. Kile is based on the Kate editor, has a quick-access toolbar for symbols, a document structure viewer, a console and customizable build options. Kile can be run in all operating systems that can run KDE.
#### GNOME-LaTeX
GNOME-LaTeX is another text editor for Linux (GNOME).
### Mac OS X-only
#### TeXShop
TeXShop, the model for the TeXworks editor and previewer, is for Mac OS and is bundled with the MacTeX distribution. It uses multiple windows, one for editing the source, one for the preview, and one as a console for error messages. It offers one-click updating of the preview and allows easy cross-navigation between the source and the preview using Cmd-click, along with many features to make editing and typesetting TeX source easier.
#### TeXnicle
TeXnicle is a free editor for Mac OS that includes the ability to perform live updates. It includes a code library for the swift insertion of code and the ability to execute detailed word counts on documents. It also performs code highlighting and the editing window is customisable, permitting the user to select the font, colour, background colour of the editing environment. It is in active development.
#### Archimedes
Archimedes is an easy-to-use LaTeX and Markdown editor designed from the ground up for Mac OS X. It includes a built-in LaTeX library, code completion support, live previews, macro support, integration with sharing services, and PDF and HTML export options. Archimedes's Magic Type feature lets users insert mathematical symbols just by drawing them on their MacBook's trackpad or Magic Trackpad.
#### Texpad
Texpad is an integrated editor and viewer for Mac OS with a companion app for iOS devices. Similar to TeXShop, Texpad requires a working MacTeX distribution to function; however, it can also support other distributions side-by-side with MacTeX. It offers numerous features including templates, outline viewing, auto-completion, spell checking, customizable syntax highlighting, to-do list integration, code snippets, Markdown integration, multi-lingual support, and a Mac OS native user interface. Although Texpad offers a free evaluation period, the unlocked version is a paid download.
### Windows-only
#### TeXnicCenter
TeXnicCenter is a popular free and open source LaTeX editor for Windows. It also has a similar user interface to TeXmaker and Kile.
#### WinEdt
WinEdt is a powerful and versatile text editor for Windows with a strong predisposition towards the creation of LaTeX/TeX documents. It has been designed and configured to integrate with TeX systems such as MiKTeX or TeX Live. Its built-in macros help compile the LaTeX source to WYSIWYG-like DVI, PDF or PS, and export the document to other markup languages such as HTML or XML.
### Online solutions
To get started without needing to install anything, you can use a web-hosted service featuring a full TeX distribution and a web LaTeX editor.
• Authorea is an integrated online framework for the creation of technical documents in collaboration. Authorea's frontend allows one to enter text in LaTeX or Markdown, as well as figures, and equations (in LaTeX or MathML). Authorea's versioning control system is entirely based on Git (as every article is a Git repository).
• CoCalc is a collaborative online workplace for computations, which also offers an editor for LaTeX documents.
• Overleaf is a secure, easy to use online LaTeX editor with integrated rapid preview - like EtherPad for LaTeX. One can start writing by creating a free account, and share the link or add collaborators to the projects before publishing it through their platform. It supports real time preview, Rich Text mode (a partial WYSIWYG mode with math expressions, ordered/unordered lists, sectional titles and figures in rendered form), bibliographies and custom styles. Since July 2017, ShareLaTeX is now part of Overleaf.[5][6]
• Verbosus is a professional online LaTeX editor that supports collaboration with other users and is free to use. Merge conflicts can easily be resolved by using a built-in merge tool that uses an implementation of the diff algorithm to generate the information required for a successful merge.
## Bibliography management
Bibliography files (*.bib) are most easily edited and modified using a management system. These graphical user interfaces all feature a database form, where information is entered for each reference item, and the resulting text file can be used directly by BibTeX.
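As a quick illustration, a single entry in a .bib file looks like the following (the citation key lamport94 and the exact set of fields are chosen by you):

@book{lamport94,
  author    = {Leslie Lamport},
  title     = {LaTeX: A Document Preparation System},
  publisher = {Addison-Wesley},
  year      = {1994}
}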
### Cross-platform
Screenshot of JabRef.
• JabRef is an open-source, cross-platform bibliography manager that works natively on BibTeX files and can import references from online databases.
### Mac OS X-only
Screenshot of BibDesk
• BibDesk is a bibliography manager based on a BibTeX file. It imports references from the internet and makes it easy to organize references using tags and categories[7].
## Viewers
Finally, you will need a viewer for the files to view LaTeX outputs. By default, LaTeX saves the final document as a .dvi (Device independent file format), but you will rarely want it to, as DVI files do not contain embedded fonts — not to mention that many document viewers are unable to open them.
In most scenarios, you will use a LaTeX compiler like pdflatex to produce a PDF file directly, or a tool like dvi2pdf to convert the DVI file to PDF format. Then you can view the result with any PDF viewer.
Practically all LaTeX distributions have a DVI viewer for viewing the default output of latex, and also tools such as dvi2pdf for converting the result automatically to PDF and PS formats.
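For example, a typical DVI-based workflow looks like this (dvips and dvipdfmx ship with the major distributions, though the exact converter names may vary on your system):

latex hello.tex
dvips hello.dvi
dvipdfmx hello.dvi

The first command produces hello.dvi, and the other two convert it to hello.ps and hello.pdf respectively.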
The following is a list of the various PDF viewers available on the web:
## Tables and graphics tools
LaTeX is a document preparation system above all else: it does not aim at being a spreadsheet tool nor a vector graphics tool.
While LaTeX can render beautiful tables in a dynamic and flexible manner, it does not provide the conveniences you get with a spreadsheet, such as dynamic cells and calculations. Other tools are better at that. The ideal solution is to combine the strengths of both tools: build your dynamic table with a spreadsheet, and export it to LaTeX to get a beautiful table seamlessly integrated into your document. See Tables for more details.
The graphics topic is a bit different, since it is possible to write procedural graphics from within your LaTeX document. Procedural graphics produce state-of-the-art results that integrate perfectly with LaTeX (e.g. no font change), but have a steep learning curve and require a lot of time to draw.
For easier and quicker drawings, you may want to use a WYSIWYG tool (e.g., Adobe Photoshop, Canva) and export the result to a vector format like PDF. The drawback is that it will contrast in style with the rest of your document (e.g., font, size, color). Some tools have the capability to export to LaTeX, which will partially solve this issue. See Importing Graphics for more details.
## References
3. Gummi
4. LyX
5. ShareLaTeX Joins Overleaf
6. The Definitive, Non-Technical Introduction to LaTeX: Overleaf
7. BibDesk
# Installing Extra Packages
Add-on features for LaTeX are known as packages. Dozens of these are pre-installed with LaTeX and can be used in your documents immediately. They should all be stored in subdirectories of texmf/tex/latex named after each package. The directory name "texmf" stands for “TEX and METAFONT”. To find out what other packages are available and what they do, you should use the CTAN search page which includes a link to Graham Williams' comprehensive package catalogue.
A package is a file or collection of files containing extra LaTeX commands and programming which add new styling features or modify those already existing. There are two main file types: class files with .cls extension, and style files with .sty extension. There may be ancillary files as well. When you try to typeset a document which requires a package which is not installed on your system, LaTeX will warn you with an error message that it is missing. You can download updates to packages you already have (both the ones that were installed along with your version of LaTeX as well as ones you added). There is no limit to the number of packages you can have installed on your computer (apart from disk space!), but there is a configurable limit to the number that can be used inside any one LaTeX document at the same time, although it depends on how big each package is. In practice there is no problem in having even a couple of dozen packages active.
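Packages are loaded in the document preamble with \usepackage. A minimal sketch (the two package names are common examples, not requirements):

\documentclass{article}
\usepackage{amsmath}                 % extra math environments and commands
\usepackage[margin=2.5cm]{geometry}  % a package loaded with an option
\begin{document}
Body text.
\end{document}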
Most LaTeX installations come with a large set of pre-installed style packages, so you can use the package manager of the TeX distribution or the one on your system to manage them. See the automatic installation. But many more are available on the net. The main place to look for style packages on the Internet is CTAN. Once you have identified a package you need that is not in your distribution, use the indexes on any CTAN server to find the package you need and the directory where it can be downloaded from. See the manual installation.
## Automatic installation
If on an operating system with a package manager or a portage tree, you can often find packages in repositories.
With MikTeX there is a package manager that allows you to pick the package you want individually. As a convenient feature, upon the compilation of a file requiring non-installed packages, MikTeX will automatically prompt to install the missing ones.
With TeX Live, it is common to have the distribution packed into a few big packages. For example, to install something related to internationalization, you might have to install a package like texlive-lang. With TeX Live manually installed, use tlmgr to manage packages individually.
tlmgr install <package1> <package2> ...
tlmgr remove <package1> <package2> ...
The use of tlmgr is covered in the Installation chapter.
If you cannot find the wanted package with any of the previous methods, see the manual installation.
### Instructions for specific operating systems
On Ubuntu, with releases such as Trusty, you can use texlive and texlive-extra packages, e.g. texlive-full, texlive-latex-extra, texlive-math-extra, texlive-plain-extra, texlive-bibtex-extra, texlive-generic-extra, and language packages, which are all available here on the Ubuntu packages site, as well as here for Trusty updates. You can install these packages with sudo apt-get install <insert package name here>.
## Manual installation
What you need to look for is usually two files, one ending in .dtx and the other in .ins. The first is a DOCTeX file, which combines the package program and its documentation in a single file. The second is the installation routine (much smaller). You must always download both files. If the two files are not there, it means one of two things:
• Either the package is part of a much larger bundle which you shouldn't normally update unless you change your version of LaTeX;
• or it's an older or relatively simple package written by an author who did not use a .dtx file.
Download the package files to a temporary directory. There will often be a readme.txt with a brief description of the package. You should of course read this file first.
### Installing a package
There are five steps to installing a LaTeX package. (These steps can also be used on the pieces of a complicated package you wrote yourself; in this case, skip straight to Step 3.)
1. Extract the files Run LaTeX on the .ins file. That is, open the file in your editor and process it as if it were a LaTeX document (which it is), or if you prefer, type latex followed by the .ins filename in a command window in your temporary directory. This will extract all the files needed from the .dtx file (which is why you must have both of them present in the temporary directory). Note down or print the names of the files created if there are a lot of them (read the log file if you want to see their names again).
2. Create the documentation Run LaTeX on the .dtx file. You might need to run it twice or more, to get the cross-references right (just like any other LaTeX document). This will create a .dvi file of documentation explaining what the package is for and how to use it. If you prefer to create PDF then run pdfLaTeX instead. If you created a .idx as well, it means that the document contains an index, too. If you want the index to be created properly, follow the steps in the indexing section. Sometimes you will see that a .glo (glossary) file has been produced. Run the following command instead:
makeindex -s gglo.ist -o name.gls name.glo
3. Install the files While the documentation is printing, move or copy the package files from your temporary directory to the right place[s] in your TeX local installation directory tree. Packages installed by hand should always be placed in your "local" directory tree, not in the directory tree containing all the pre-installed packages. This is done to a) prevent your new package accidentally overwriting files in the main TeX directories; and b) avoid your newly-installed files being overwritten when you next update your version of TeX.
For a TDS(TeX Directory Structure)-conformant system, your "local installation directory tree" is a folder and its subfolders. The outermost folder should probably be called texmf-local/ or texmf/. Its location depends on your system:
• Unix-type systems: Usually ~/texmf/. If you use TexMaker on Ubuntu 18 it may be in /usr/share/texmf/
• MikTeX: Your local directory tree can be any folder you like, as long as you then register it as a user-managed texmf directory (see http://docs.miktex.org/manual/localadditions.html#id573803)
The "right place" sometimes causes confusion, especially if your TeX installation is old or does not conform to the TeX Directory Structure(TDS). For a TDS-conformant system, the "right place" for a LaTeX .sty file is a suitably-named subdirectory of texmf/tex/latex/. "Suitably-named" means sensible and meaningful (and probably short). For a package like paralist, for example, I'd call the directory texmf/tex/latex/paralist.
Often there is just a .sty file to move, but in the case of complex packages there may be more, and they may belong in different locations. For example, new BibTeX packages or font packages will typically have several files to install. This is why it is a good idea to create a sub-directory for the package rather than dump the files into misc along with other unrelated stuff. If there are configuration or other files, read the documentation to find out if there is a special or preferred location to move them to.
Where to put files from packages
Type Directory (under texmf/ or texmf-local/) Description
.afm fonts/afm/foundry/typeface Adobe Font Metrics for Type 1 fonts
.bib bibtex/bib/bibliography BibTeX bibliography
.bst bibtex/bst/packagename BibTeX style
.cls tex/latex/base Document class file
.dvi doc package documentation
.enc fonts/enc Font encoding
.fd tex/latex/mfnfss Font Definition files for METAFONT fonts
.fd tex/latex/psnfss Font Definition files for PostScript Type 1 fonts
.map fonts/map Font mapping files
.mf fonts/source/public/typeface METAFONT outline
.pdf doc package documentation
.pfb fonts/type1/foundry/typeface PostScript Type 1 outline
.sty tex/latex/packagename Style file: the normal package content
.tex doc TeX source for package documentation
.tex tex/plain/packagename Plain TeX macro files
.tfm fonts/tfm/foundry/typeface TeX Font Metrics for METAFONT and Type 1 fonts
.ttf fonts/truetype/foundry/typeface TrueType font
.vf fonts/vf/foundry/typeface TeX virtual fonts
others tex/latex/packagename other types of file unless instructed otherwise
For most fonts on CTAN, the foundry is public.
4. Update your index Finally, run your TeX indexer program to update the package database. This program comes with every modern version of TeX and has various names depending on the LaTeX distribution you use. (Read the documentation that came with your installation to find out which it is, or consult http://www.tug.org/fonts/fontinstall.html#fndb):
• teTeX, TeX Live, fpTeX: texhash
• web2c: mktexlsr
• MacTeX: MacTeX appears to do this for you.
• MikTeX: initexmf --update-fndb (or use the GUI)
• MiKTeX 2.7 or later versions, installed on Windows XP through Windows 7: Start -> All Programs -> MikTex -> Settings. In Windows 8 use the keyword Settings and choose the option of Settings with the MiKTex logo. In Settings menu choose the first tab and click on Refresh FNDB-button (MikTex will then check the Program Files directory and update the list of File Name DataBase). After that just verify by clicking 'OK'.
5. Update font maps If your package installed any TrueType or Type 1 fonts, you need to update the font mapping files in addition to updating the index. Your package author should have included a .map file for the fonts. The map updating program is usually some variant on updmap, depending on your distribution:
• TeX Live and MacTeX: updmap --enable Map=mapfile.map (if you installed the files in a personal tree) or updmap-sys --enable Map=mapfile.map (if you installed the files in a system directory).
• MikTeX: Run initexmf --edit-config-file updmap, add the line "Map mapfile.map" to the file that opens, then run initexmf --mkmaps.
The reason this process has not been automated widely is that there are still thousands of installations which do not conform to the TDS, such as old shared Unix systems and some Microsoft Windows systems, so there is no way for an installation program to guess where to put the files: you have to know this. There are also systems where the owner, user, or installer has chosen not to follow the recommended TDS directory structure, or is unable to do so for political or security reasons (such as a shared system where the user cannot write to a protected directory).

The reason for having the texmf-local directory (called texmf.local on some systems) is to provide a place for local modifications or personal updates, especially if you are a user on a shared or managed system (Unix, Linux, VMS, Windows NT/2000/XP, etc.) where you may not have write-access to the main TeX installation directory tree. You can also have a personal texmf subdirectory in your own login directory. Your installation must be configured to look in these directories first, however, so that any updates to standard packages will be found there before the superseded copies in the main texmf tree. All modern TeX installations should do this anyway, but if not, you can edit texmf/web2c/texmf.cnf yourself.
## Checking package status
The universal way to check if a file is available to TeX compilers is the command-line tool kpsewhich.
$ kpsewhich tikz
/usr/local/texlive/2012/texmf-dist/tex/plain/pgf/frontendlayer/tikz.tex

kpsewhich will actually search for files only, not for packages. It returns the path to the file.

For more details on a specific package, use the command-line tool tlmgr (TeX Live only):

tlmgr info <package>

The tlmgr tool has a lot more options. To consult the documentation:

tlmgr help

## Package documentation

To find out what commands a package provides (and thus how to use it), you need to read the documentation. In the texmf/doc subdirectory of your installation there should be directories full of .dvi files, one for every package installed. This location is distribution-specific, but is typically found in:

Distribution Path
MacTeX /Library/TeX/Documentation/texmf-doc/latex
MiKTeX %MIKTEX_DIR%\doc\latex
TeX Live $TEXMFDIST/doc/latex
Generally, most of the packages are in the latex subdirectory, although other packages (such as BibTeX and font packages) are found in other subdirectories in doc. The documentation directories have the same name as the package (e.g. amsmath), and generally contain one or more relevant documents in a variety of formats (dvi, txt, pdf, etc.). The documents generally have the same name as the package, but there are exceptions (for example, the documentation for amsmath is found at latex/amsmath/amsdoc.dvi). If your installation procedure has not installed the documentation, the DVI files can all be downloaded from CTAN. Before using a package, you should read the documentation carefully, especially the subsection usually called "User Interface", which describes the commands the package makes available. You cannot just guess and hope it will work: you have to read it and find out.
You can usually automatically open any installed package documentation with the texdoc command:
texdoc <package-name>
## External resources
The best way to look for LaTeX packages is the already mentioned CTAN: Search. Additional resources are available from The TeX Catalogue Online.
# Basics
This tutorial is aimed at getting familiar with the bare bones of LaTeX.
Before starting, ensure you have LaTeX installed on your computer (see Installation for instructions on what you will need).
• We will first have a look at the LaTeX syntax.
• We will create our first LaTeX document.
• Then we will take you through how to feed this file through the LaTeX system to produce quality output, such as PostScript or PDF.
• Finally we will have a look at the file names and types.
## The LaTeX syntax
When using LaTeX, you write a plain text file which describes the document's structure and presentation. LaTeX converts this source text, combined with markup, into a typeset document. For the purpose of analogy, web pages work in a similar way: HTML is used to describe the document, which is then rendered into on-screen output - with different colours, fonts, sizes, etc. - by your browser.
You can create an input file for LaTeX with any text editor. A minimal example looks something like the following (the commands will be explained later):
\documentclass{article}
\begin{document}
Hello world!
\end{document}
### Spaces
LaTeX normalises spaces in its input files so that whitespace characters, such as a space or a tab, are treated uniformly as space. Several consecutive spaces are treated as one, whitespace at the start of a line is generally ignored, and a single line break is treated as a space. More line breaks (empty lines) define the end of a paragraph. An example of applying these rules is presented below: the first block shows the user's input (.tex), while the second depicts the rendered output (.dvi, .pdf, .ps).
Input:

It does not matter whether you enter one or several          spaces after a word.

An empty line starts a new paragraph.

Output:

It does not matter whether you enter one or several spaces after a word.
An empty line starts a new paragraph.
### Reserved Characters
The following symbols are reserved characters that either have a special meaning under LaTeX or are unavailable in all the fonts. If you enter them directly in your text, they will normally not print but rather make LaTeX do things you did not intend.
# $ % ^ & _ { } ~ \

As you will see, these characters can be used in your documents all the same by adding a prefix backslash:

\# \$ \% \^{} \& \_ \{ \} \~{} \textbackslash{}
In some circumstances, the square bracket characters [ ] can also be considered reserved characters, as they are used to give optional parameters to some commands. If you want to print text in square brackets directly after such a command, as in \command [text], it will fail, because [text] will be read as an option given to \command. You can achieve the correct output this way: \command{} [text].
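A concrete case is the line-break command \\, which accepts an optional spacing argument in square brackets. A small sketch of the pitfall and the fix:

First line \\ [oops]   % fails: [oops] is read as an optional argument to \\
First line \\{} [oops] % correct: prints [oops] at the start of the next line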
The backslash character \ cannot be entered by adding another backslash in front of it (\\), as this sequence is used for line breaking. To introduce a backslash in math mode, you can use \backslash instead.
The commands \~ and \^ produce respectively a tilde and a hat which is placed over the next letter. For example \~n gives ñ. That's why you need braces to specify that there is no letter as the argument. You can also use \textasciitilde and \textasciicircum to enter these characters, or other commands.
If you want to insert text that might contain several particular symbols (such as URIs), you can consider using the \verb command, which will be discussed later in the section on formatting. For source code, see Source Code Listings.
The less-than < and greater-than > characters are the only visible ASCII characters (not reserved) that will not print correctly. See Special Characters for an explanation and a workaround.
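If you only need these two characters occasionally, the standard text-mode commands \textless and \textgreater, or math mode, will print them correctly:

\textless{} and \textgreater{} % text mode
$<$ and $>$ % math mode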
Non-ASCII characters (e.g. accents, diacritics) can be typed in directly for most cases. However you must configure the document appropriately. The other symbols and many more can be printed with special commands as in mathematical formulae or as accents. We will tackle this issue in Special Characters.
### LaTeX groups
Sometimes a certain state should be kept local; in other words, its scope should be limited. This can be done by enclosing the part to be changed locally in curly braces. On certain occasions, using braces won't be possible. LaTeX provides \bgroup and \egroup to begin and end a group, respectively.
\documentclass{article}
\begin{document}
normal text {\itshape waltzing \bfseries Wombat} more normal text

normal text \bgroup\itshape waltzing \bfseries Wombat\egroup{} more normal text
\end{document}
Environments form an implicit group.
### LaTeX environments
Environments in LaTeX have a role that is quite similar to commands, but they usually have effect on a wider part of the document. Their syntax is:
\begin{environmentname}
text to be influenced
\end{environmentname}
Between the \begin and the \end you can put other commands and nested environments. The internal mechanism of environments defines a group, which makes its usage safe (no influence on the other parts of the document). In general, environments can accept arguments as well, but this feature is not commonly used and so it will be discussed in more advanced parts of the document.
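For example, the standard minipage environment takes an optional vertical-alignment argument in square brackets and a mandatory width argument in braces:

\begin{minipage}[t]{0.5\textwidth}
This text is typeset inside a box half as wide as the line.
\end{minipage}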
Anything in LaTeX can be expressed in terms of commands and environments.
### LaTeX commands
LaTeX commands are case sensitive, and take one of the following two formats:
1. They start with a backslash \ and then have a name consisting of letters only.
• Command names are terminated by a space, a number or any other non-letter.
2. They consist of a backslash \ and exactly one non-letter.
• Command names are terminated after that one non-letter.
Some commands need an argument, which has to be given between curly braces { } after the command name. Some commands support optional parameters, which are added after the command name in square brackets [ ]. The general syntax is:
\commandname[option1,option2,...]{argument1}{argument2}...
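As an illustration of this syntax, the standard \rule command takes one optional argument (how far to raise the rule) and two mandatory arguments (width and height):

\rule[0.5ex]{2cm}{1pt}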
Many LaTeX formatting commands come in pairs.
1. An argument form command, where one of the arguments is the text to be formatted.
2. A scope form command, where the formatting will be applied to all text after the command until the end of the current scope, that is, until the end of the current group or environment. This form may also be called a switch command. A scope form command might still have arguments, but the text to be formatted is not an argument. This form should almost never be called outside of any scope, otherwise it will apply to the rest of the document.
An argument form command will have one argument more than its corresponding scope form command, the extra argument being the text the command affects.
Examples:
Emphasising text: \emph is an argument form command with one argument, the text to be emphasised. \em is the corresponding scope form command with no arguments.
\emph{emphasized text}, this part is normal % Correct.
{\em emphasized text}, this part is normal % Correct.
\emph emphasized text, this part is normal % Incorrect: command without argument.
\em{emphasized text}, this part is normal % Incorrect: switch with argument.
\em emphasized text, this part is normal % Dangerous: switch outside of any environment.
Coloring text: This example requires you to \usepackage{xcolor}. \textcolor is an argument form command with two arguments, the color and the text to be colored. \color is the corresponding scope form command with only one argument, the color.
By default, this text is black. \textcolor{red}{This is red text.} Back to black.

By default, this text is black. {\color{red}This is red text.} Back to black.
### Comments

When LaTeX encounters a % character while processing an input file, it ignores the rest of the current line, the line break, and all whitespace at the beginning of the next line.
This can be used to write notes into the input file, which will not show up in the printed version.
This is an % stupid
% Better: instructive <----
example: Supercal%
ifragilist%
icexpialidocious

This produces: This is an example: Supercalifragilisticexpialidocious
Note that the % character can be used to split long input lines that do not allow whitespace or line breaks, as with Supercalifragilisticexpialidocious above.
The core LaTeX language does not have a predefined syntax for commenting out regions spanning multiple lines. Refer to multiline comments for simple workarounds.
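One such workaround is the comment environment from the standard verbatim package: everything inside the environment is skipped.

\usepackage{verbatim} % in the preamble

\begin{comment}
This whole block is ignored by LaTeX.
\end{comment}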
## Our first document
Now we can create our first document. We will produce the absolute bare minimum that is needed in order to get some output; the well known Hello World! approach will be suitable here.
• Open your favorite text-editor. vim, emacs, Notepad++, and other text editors will have syntax highlighting that will help to write your files.
• Reproduce the following text in your editor. This is the LaTeX source.
% hello.tex - Our first LaTeX example!
\documentclass{article}
\begin{document}
Hello World!
\end{document}
• Save your file as hello.tex.
When picking a name for your file, make sure it bears a .tex extension.
### What does it all mean?
% hello.tex - Our first LaTeX example!

The first line is a comment. This is because it begins with the percent symbol (%); when LaTeX sees this, it simply ignores the rest of the line. Comments are useful for people to annotate parts of the source file. For example, you could put information about the author and the date, or whatever you wish.

\documentclass{article}

This line is a command and tells LaTeX to use the article document class. A document class file defines the formatting standard to follow, which in this case is the generic article format. Journals, university departments, etc. can provide these files to ensure publication standards are met. In many instances, the same document content can be reformatted for submission to a different publisher simply by substituting the required document class file. There are numerous generic document classes available to choose from if one is not provided.

\begin{document}

This line is the beginning of the environment called document; it alerts LaTeX that content of the document is about to commence. Anything above this command is known generally to belong in the preamble.

Hello World!

This was the only actual line containing real content - the text that we wanted displayed on the page.

\end{document}

The document environment ends here. It tells LaTeX that the document source is complete; anything after this line will be ignored.
As we have said before, each LaTeX command begins with a backslash (\). This is how LaTeX knows that a command, rather than plain text, is coming up. Comments are not classed as commands, since all they tell LaTeX is to ignore the rest of the line. Comments never affect the output of the document, provided there is no white space before the percent sign.
## Building a document
We then feed our input file into a LaTeX engine, a program which generates our final document.
There are several LaTeX engines in modern use: lualatex, xelatex, and pdflatex. There are important differences between the three, but we'll discuss those elsewhere - any of them will work for building our first document.
### Generating the document
LaTeX itself does not have a GUI, though some LaTeX installations feature a graphical front-end where you can compile your input file at the click of a button. Assuming you're not using one of those:
1. Open a terminal and navigate to the directory containing your .tex file.
2. Type the command: xelatex hello.tex (The .tex extension is not required, although you can include it if you wish.)
3. Various bits of info about LaTeX and its progress will be displayed. If all went well, the last two lines displayed in the console will be:
Output written on hello.pdf (1 page).
Transcript written on hello.log.
This means that your source file has been processed and the resulting document is called hello.pdf. You can view it with any PDF viewer installed on your system.
In this instance, due to the simplicity of the file, you only need to run the LaTeX command once. However, if you begin to create complex documents, including bibliographies and cross-references, etc., LaTeX needs to be executed multiple times to resolve the references. This will be discussed in the future when it comes up.
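A typical manual sequence for a document with a BibTeX bibliography looks like the following (the file name report.tex is just an example): the first run records the citations, bibtex generates the bibliography, and the final runs resolve all cross-references.

pdflatex report.tex
bibtex report
pdflatex report.tex
pdflatex report.tex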
### Autobuild Systems
Compiling can be quite tricky as soon as you start working on more complex documents. A number of programs exist to automatically read in a LaTeX document and run the appropriate compilers the appropriate number of times. For example, latexmk can generate a PDF from most LaTeX files simply:
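latexmk -pdf hello.tex

Here the -pdf switch tells latexmk to produce a PDF (running pdflatex by default), repeating the compilation as many times as needed; hello.tex is the file from the earlier example.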
# Errors and Warnings

### Missing $ inserted

A character that can only be used in mathematics mode was inserted in normal text. If you intended to use mathematics mode, then use $...$ or \begin{math}...\end{math}, or use the 'quick math mode' \ensuremath{...}. If you did not intend to use mathematics mode, then perhaps you are trying to use a special character that needs to be entered in a different way; for example, _ will be interpreted as a subscript operator in mathematics mode, and you need \_ to get an underscore character.
This can also happen if you use the wrong character encoding, for example using utf8 without \usepackage[utf8]{inputenc} or using iso8859-1 without \usepackage[latin1]{inputenc}. There are several character encodings; make sure to pick the right one.
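A minimal way to trigger and fix this error, using the underscore mentioned above:

The file is called my_file.tex.  % error: _ is only allowed in math mode
The file is called my\_file.tex. % correct: \_ prints a literal underscore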
### Runaway argument
Runaway argument?
{December 2004 \maketitle
! Paragraph ended before \date was complete.
\par
l.8
In this error, the closing curly brace has been omitted from the date. It's the opposite of the error of too many }'s, and it results in \maketitle trying to format the title page while LaTeX is still expecting more text for the date! As \maketitle creates new paragraphs on the title page, this is detected and LaTeX complains that the previous paragraph has ended but \date is not yet finished.
### Underfull hbox
Underfull \hbox (badness 1394) in paragraph
at lines 28--30
[][]\LY1/brm/b/n/10 Bull, RJ: \LY1/brm/m/n/10
Ac-count-ing in Busi-
[94]
This is a warning that LaTeX cannot stretch the line wide enough to fit, without making the spacing bigger than its currently permitted maximum. The badness (0-10,000) indicates how severe this is (here you can probably ignore a badness of 1394). It says what lines of your file it was typesetting when it found this, and the number in square brackets is the number of the page onto which the offending line was printed. The codes separated by slashes are the typeface and font style and size used in the line. Ignore them for the moment.
This comes up if you force a linebreak, e.g. with \\, and have a return before it. Normally TeX ignores linebreaks in the source, flowing the text into full paragraphs. In this case it is necessary to pull the linebreak up one line, to the end of the previous sentence.
This warning may also appear when inserting images. It can be avoided by using the \textwidth or possibly \linewidth options, e.g. \includegraphics[width=\textwidth]{image_name}
### Overfull hbox
[101]
Overfull \hbox (9.11617pt too wide) in paragraph
at lines 860--861
[]\LY1/brm/m/n/10 Windows, \LY1/brm/m/it/10 see
\LY1/brm/m/n/10 X Win-
An overfull \hbox means that there is a hyphenation or justification problem: moving the last word on the line to the next line would make the spaces in the line wider than the current limit, while keeping the word on the line would make the spaces smaller than the current limit. The word is therefore left on the line with the minimum allowed space between words, which makes the line run over the edge.
The warning is given so that you can find the line in the code that originates the problem (in this case: 860-861) and fix it. The line on this example is too long by a shade over 9pt. The chosen hyphenation point which minimizes the error is shown at the end of the line (Win-). Line numbers and page numbers are given as before. In this case, 9pt is too much to ignore (over 3mm), and a manual correction needs making (such as a change to the hyphenation), or the flexibility settings need changing.
If the "overfull" word includes a forward slash, such as "input/output", this should be properly typeset as "input\slash output". The use of \slash has the same effect as using the "/" character, except that it can form the end of a line (with the following words appearing at the start of the next line). The "/" character is typically used in units, such as "mm/year" character, which should not be broken over multiple lines.
The warning can also be issued when the \end{document} tag was not included or was deleted.
#### Easily spotting overfull hboxes in the document
To easily find the locations of overfull hboxes in your document, you can make LaTeX add a black bar wherever a line is too wide:
\overfullrule=2cm
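Alternatively, the draft class option of the standard classes marks overfull lines the same way for the whole document:

\documentclass[draft]{article}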
### Missing package
! LaTeX Error: File `paralisy.sty' not found.
Type X to quit or <RETURN> to proceed,
or enter new name. (Default extension: sty)
Enter file name:
When you use the \usepackage command to request LaTeX to use a certain package, it will look for a file with the specified name and the filetype .sty. In this case the user has mistyped the name of the paralist package, so it's easy to fix. However, if you get the name right, but the package is not installed on your machine, you will need to download and install it before continuing. If you don't want to affect the global installation of the machine, you can simply download the necessary .sty file from the Internet and put it in the same folder as the document you are compiling.
### Package babel Warning: No hyphenation patterns were loaded for the language X
Although this is a warning from the Babel package and not from LaTeX, this error is very common and (can) give some strange hyphenation (word breaking) problems in your document. Wrong hyphenation rules can decrease the neatness of your document.
Package babel Warning: No hyphenation patterns were loaded for
(babel) the language `Latin'
This can happen after the usage of (see LaTeX/Internationalization):
\usepackage[latin]{babel}
The solution is not difficult: just install the missing language in your LaTeX distribution.
### Package babel Error: You haven't loaded the option X yet.
If you previously set the X language, and then decided to switch to Y, you will get this error. This may seem awkward, as there is obviously no error in your code if you did not change anything. The answer lies in the .aux file, where babel defined your language. If you try the compilation a second time, it should work. If not, delete the .aux file, then everything will work as usual.
### No error message, but won't compile
One common cause of (pdf)LaTeX getting stuck is forgetting to include \end{document}.
## Software that can check your .tex Code
There are several programs capable of checking LaTeX source, with the aim of finding errors or highlighting bad practice, and providing more help to (particularly novice) users than the built-in error messages.
# Counters
Counters are an essential part of LaTeX: they allow you to control the numbering mechanism of everything (sections, lists, captions, etc.). To that end each counter stores an integer value in the range of a long integer, i.e., from $-2^{31}$ to $2^{31}-1$. [19]
## Counter manipulation
In LaTeX it is fairly easy to create new counters and even counters that reset automatically when another counter is increased (think subsection in a section for example). With the command
\newcounter{NameOfTheNewCounter}
you create a new counter that is automatically set to zero. If you want the counter to be reset to zero every time another counter is increased, use:
\newcounter{NameOfTheNewCounter}[NameOfTheOtherCounter]
For example, if you want to enumerate the equations in each section independently, you can create something like an "equationschapter" counter that will be automatically reset at the beginning of each section.
\newcounter{equationschapter}[section]

\section{First Section}
I present one equation:
\stepcounter{equationschapter}
$a=b+c$ (Eq. \arabic{section}.\arabic{equationschapter})

\section{Second Section}
I present more equations:
\stepcounter{equationschapter}
$a=c+d$ (Eq. \arabic{section}.\arabic{equationschapter})
\stepcounter{equationschapter}
$d=e$ (Eq. \arabic{section}.\arabic{equationschapter})
To make an existing counter reset whenever another counter is increased, use:
\counterwithin*{NameOfTheCounter}{NameOfTheOtherCounter}
If this doesn't work it might be because of an old LaTeX version, the following should work in that case:
\makeatletter
\@addtoreset{NameOfTheCounter}{NameOfTheOtherCounter}
\makeatother
To undo this effect one can use:
\counterwithout*{NameOfTheCounter}{NameOfTheOtherCounter}
or:
\makeatletter
\@removefromreset{NameOfTheCounter}{NameOfTheOtherCounter}
\makeatother
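For example, to make the standard equation counter reset with each section (assuming a reasonably recent LaTeX kernel; older ones need the chngcntr package):

\counterwithin{equation}{section}

The unstarred form additionally redefines \theequation so that equation numbers are prefixed with the section number.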
To increase the counter, either use
\stepcounter{NameOfTheNewCounter}
or
\refstepcounter{NameOfTheNewCounter} % used for labels and cross referencing
or
\addtocounter{NameOfTheNewCounter}{number}
Here the number can also be negative. For automatic resetting you need to use \stepcounter.
To set the counter value explicitly, use
\setcounter{NameOfTheNewCounter}{number}
## Counter access
• \theNameOfTheNewCounter will print the formatted string related to the counter (note the "the" before the actual name of the counter).
• \value{NameOfTheNewCounter} will return the counter value which can be used by other counters or for calculations. It is not a formatted string, so it cannot be used in text.
• \arabic{NameOfTheNewCounter} will print the formatted counter using arabic numbers.
Note that \arabic{NameOfTheNewCounter} may be used as a value too, but not the others.
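A minimal sketch of these access commands (the counter name is illustrative):

\newcounter{mycount}                      % \themycount now prints "0"
\setcounter{mycount}{3}
\themycount                               % prints "3"
\arabic{mycount}                          % also prints "3"
\addtocounter{mycount}{\value{mycount}}   % mycount is now 6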
Strangely enough, LaTeX counter names are never written with a backslash, not even in the \the... commands, where the counter name is simply appended (e.g. \thesection). The plain TeX equivalents \count and \newcount\mycounter do abide by the backslash rule.
## Counter style
The following internal LaTeX commands will convert numeric value of specified counter into printable string and insert string into document:
\arabic
Numbers from $-2^{31}$ to $2^{31}-1$.
https://math.stackexchange.com/questions/35704/using-the-determinant-to-figure-out-the-orientation-of-a-simplex-in-1-dimension/35772
# Using the determinant to figure out the orientation of a simplex in 1-dimension gives disturbing results when vertex indices are "rotated"
Assertion: For any simplex $P$ with vertices $V_0, \ldots, V_N \subset R^N$, it is possible to compute the sign of the oriented volume using the expression $\det(V_1-V_0, V_2-V_0, ..., V_{N-1}-V_0, V_{N}-V_0)$, where the vector differences are represented as a list of column vectors.
Further, it is possible to permute the indices of the vertices, so as to "rotate" them $k$ times so that $V'_i := V_{((i + k) \mod (N+1))}$, and the determinant stays the same.
However, this does not seem to hold with simplices in $R^1$ (where the simplex is a line segment) and it prevents some simple tests from working in the general case. What am I leaving out, or how is my assertion incorrect? My gut is telling me there's something special about the first dimension that's missing from my rotation operation.
Can anyone explain this special case?
• I get only a handful of Google hits for "boolean convexity", and none of them seem to define the term. That's a good sign you should probably include a definition. Apr 28, 2011 at 21:03
• Thanks @joriki. I've updated the question. I really meant, does this winding/rotation of the vertices give us a negative or positive determinant? Apr 28, 2011 at 21:28
• In $R^2$ your simplex is a triangle, no? Remember that swapping rows/columns of a determinant changes its sign... for the example of a triangle, you want to order your vertices in anticlockwise fashion. Apr 28, 2011 at 21:29
• Thanks J.M.. However, in $R^1$ the simplex is a segment. In $R^2$ (the simplex is a triangle), rotation of the indices always causes two column swaps, which cancel each other out (in terms of negation of the determinant.) Apr 28, 2011 at 21:35
• As far as I know, every simplex is convex. What your determinant tests has nothing to do with convexity, but rather with the arrangement of the list of vertices of the simplex. Apr 28, 2011 at 23:20
I think one of the main problems is the language you are using. If the $V_i$ are the vertices of an $N$-simplex in $\mathbb{R}^N$, then $$\det (V_1-V_0,\ldots, V_N-V_0)$$ is equal to $N!$ times the oriented volume of the simplex. This is discussed on the Wikipedia page. You are using "convex" when you mean positively oriented I think. (I think you also say "rotate the vertices" when you mean permute the vertices.)
I do not see the problem you are having in $\mathbb{R}^1$, but that may be because you have edited the question in response to comments without realizing that it fixed the problem. A 1-simplex has two vertices $V_0$ and $V_1$. In this case $\det(V_1-V_0) = V_1-V_0$ since it is a $1\times 1$ matrix. It is the length of your segment, and the orientation is positive if $V_1 \gt V_0$.
A "rotation" is a cyclic permutation which is odd if and only if the number of permuted elements is even. So, in the corrected version, the rotation will preserve orientation for even $N$ and change orientation for odd $N$.
https://www.cse.wustl.edu/~cytron/cse247/Modules/0/studio.html
## Racing Arrays
Warning: This studio is too long to be finished in the studio time you have today. Please get as far as you can, but going sufficiently slowly that all group members understand what is happening.
Then, on your own, after studio is over, please continue working on the studio. This material is covered on Exam I, so be sure you get to the end of studio work on your own, if necessary, and feel free to work collaboratively on all elements of this studio.
We will propagate code as you direct us in the green box, and the propagation will be done the day after studio.
Authors:
• Jacob Frank
• Ryan Smith
• Yonatan David
• Tim Heyer
• Ron Cytron
Abstract: Arrays are a fundamental and useful data type. Once an array is allocated, it cannot change size. Nonetheless, we can build lists and other data structures using arrays if we are willing to replace an old, full array with a bigger one.
In this assignment, you study various ways to formulate a bigger array to replace one that has no room left. The point of this work is to understand that the choices you make matter greatly in terms of performance.
What you should learn through this assignment:
• You can use our course infrastructure to measure ticks and time.
• Ticks are a reliable way to measure the number of operations in a computation, but you must insert instrumentation (Java code) into our code to count ticks.
• Time is a more realistic measurement, but it varies between computers.
Also, because computers are so fast, the time of a relatively brief computation may register as 0, even though the number of ticks is positive.
• Allocation of an array of n integers (or booleans, objects, etc.) does not take constant time. It takes time proportional to n.
In studio this semester, you will be supplying answers to questions posed here. Such questions appear in boxes like this one.
You are in a group, but only one of you needs to open and complete the text file. When you demo, you can specify whose repository contains the write up, and we will propagate code and write ups to the other group members.
Ideally, the write up is on your shared monitor if you have one.
So, one of you, please find and open the studio0.txt file in the studiowriteups folder.
## A. Time and Ticks
In this course, we are interested in the time taken to execute an algorithm. We will analyze algorithms mathematically and empirically, but how do we measure time? We will be using two measurements:
Time
It seems most natural to measure time using time. However, on modern computers, most computations are so fast that their time is difficult to measure with any accuracy. But a prof can dream, so we will try to measure time.
Ticks
We can count operations in your program, using a counter that you advance as appropriate in your code. This assignment investigates how you can do that. You will be using this technique throughout this course.
Your computer is busy doing many things. When you run a program on it, that program competes with other programs for your computer's attention.
The most widely available timer that we can use measures time in milliseconds, or one thousandth of a second. If your computer's clock runs at 3 GHz (billions of cycles per second), then in one millisecond, your computer can execute 3 million instructions. Thus, as much as we would like to measure the time of relatively short programs, unless that short program does at least 3 million things, it won't even take a measurable amount of time.
Let's investigate this.
• In the coursesupport source folder, find and open the timing.examples package and find and open the Linear class.
• Take a look at the code, in particular the run() method:
public void run() {
for (int i=0; i < n; ++i) {
ticker.tick();
this.value = this.value + i;
}
}
• The statement inside the loop performs a simple addition, storing the result in the value instance variable.
• Also, the call to ticker.tick() advances the tick counter by one.
• If you run this program, you will notice that it runs quickly, using values for n that range from 10000 to 90000 in steps of 10000.
The particular values of n that are used in the run() method above were generated in the main method by:
GenSizes sizes = GenSizes.arithmetic(10000, 100000, 10000);
which generates an arithmetic sequence for n, starting at 10,000, up to but not including 100,000, in steps of 10,000. The result is then used by:
ExecuteAlgorithm.timeAlgorithm(
"linear",
"timing.examples.Linear",
new IntArrayGenerator(),
sizes
);
which runs the Linear class on the specified sizes, with values for n of 10000, 20000, 30000, 40000, 50000, 60000, 70000, 80000, and 90000.
Throughout this course, and in the text as well, you will see n as the variable we use to describe the size of an input. It's generally a better practice to name a variable such that it corresponds more closely to how it is used. So, while size would have been a better variable name, we use n so that you become accustomed to thinking of the size of an input as n.
For example, we will consider in this course:
• Sorting n numbers
• Adding to the end of a linked list that has n items in it
• Finding the shortest path from one intersection to another in a map that has n street segments
• Operations on a set containing n elements
• The output in the console window shows the size, ticks, and time taken for each sample of the size variable.
• There is an outputs folder in your repository, but eclipse does not know that the run of the Linear class wrote files into that folder.
To see what's there, you can right (control) click on the outputs folder and select Refresh.
• Once you see the files there, double-click on linear-ticks.csv, which should cause the file to open in excel. What you see might look like this:
These values were generated by your run of Linear.
• Select the data including the column headings, and use Excel to create a marked scatter chart. Get help from the TA if you need it. If you don't know how to do this in Excel, here is some advice:
1. Highlight all data including labels
2. Go to the Insert tab on the Excel toolbar
3. In the Charts header, click on Scatter Plot and select Scatter with Straight Lines and Markers
• The legend may flip the axes by default, in which case the legend names will appear as Series1-5
• To resolve this go to the Chart Tools tab and select Switch Row/Column
• You should see a plot that looks like this:
• The plot you see shows how many times ticker.tick() was called in the run of your Linear program for each value of n.
The plot depicts the running time of Linear as a function of the input parameter n. Not surprisingly, the plot shows a linear relationship between the input parameter and the number of times ticker.tick() was called.
• Examine the code and convince yourselves that the cost of executing
this.value = this.value + i;
n times is indeed a linear function of n.
This implies that each execution of the sum and assignment to this.value takes 1 tick (or, constant time).
• There is another file in the outputs folder, namely linear-time.csv. Open it and plot the values the same way we plotted the tick counts.
• What do you see?
You should see values at or near 0, because the cost of running this program is so low, it cannot be measured in milliseconds.
• What can you do to increase the runtime of the program, so that it shows up in the timings?
Modify the line we saw earlier so that much larger values are generated for n:
GenSizes sizes = GenSizes.arithmetic(1000000, 10000000, 1000000);
so that the sizes that are used are sufficiently large to generate positive times.
If you are unsure of what each parameter means, mouse over the arithmetic method name in eclipse and the JavaDoc will be shown to you.
• Are you able to find values that generate a nice linear plot of time? Discuss what you find with your TA.
You may notice in the console output that the Linear program is run several times with the same input value. Each time we run for a given size (value of n), the ticker count should be the same. However, because of other things running on your computer, the time may vary.
The code we provide for you runs your run() method several times, and then takes the fastest time found among those runs as the best indicator of the time taken by your program.
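The idea can be sketched as follows (a minimal sketch; the names here are illustrative, and the course's actual ExecuteAlgorithm code differs):

long best = Long.MAX_VALUE;
for (int trial = 0; trial < 5; trial++) {
    long start = System.currentTimeMillis();   // wall-clock time in ms
    algorithm.run();                           // the run() method under test
    long elapsed = System.currentTimeMillis() - start;
    best = Math.min(best, elapsed);            // keep the fastest observed run
}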
Let's try this same thing with a different toy algorithm:
• Find and open the Quadratic class in the timing.examples package in the coursesupport source folder.
• Be sure to refresh the eclipse view of the outputs folder, as you did above.
• Find, open, and create a plot of the data you find in both quadratic-ticks.csv and quadratic-time.csv.
Question A1: What do you see in both plots? Are there any differences between the two? What could account for those differences?
Question A2: Why do the times provided for Quadratic produce such a nice plot, while the original values of Linear did not?
• Look at the run() method of Quadratic, and notice that ticker.tick() is called in just one spot:
public void run() {
for (int i=0; i < n; ++i) {
for (int j=0; j < n; ++j) {
ticker.tick();
this.value = this.value + i;
}
}
}
• Each ticker.tick() call models the cost of one operation. In the extreme, you might have arrived at the code shown below, with a ticker.tick() call for each operation performed in your code, as shown in the comments:
public void run() {
ticker.tick(); // i = 0
for (int i=0; i < n; ++i) {
ticker.tick(); // i < n
ticker.tick(); // j = 0
for (int j=0; j < n; ++j) {
ticker.tick(); // j < n
//
// original one below
//
this.value = this.value + i;
ticker.tick(); // ++j
}
ticker.tick(); // ++i
}
}
• Experiment now by different placements of ticker.tick() throughout the run() method, and run Quadratic to see how the ticks and time change.
Question A3: From the runs you have tried so far, how does the placement of ticker.tick() calls affect the plots you see? In particular, do the changes affect the shapes of the curves, the values plotted, or both?
• Does it matter if you have extra ticker.tick() calls? In particular:
Question A4: In terms of n, how would you characterize, in the most simple terms, the time and ticks curves that have been generated so far?
Question A5: What would happen if you deleted all ticker.tick() calls in the innermost loop, while leaving other calls to ticker.tick() that you just placed in run()?
Throughout this course, you will be asked to place ticker.tick() calls in your code. Placing them everywhere seems like the right thing to do, so that every operation is counted, as shown in the box above.
You could put them everywhere, and the results will be right, but isn't that tedious? Did your added ticker.tick() statements affect the shape of the resulting curves that show ticks or time?
How do you decide where to put them most parsimoniously?
We will soon speak of this as analyzing an algorithm's asymptotic complexity:
• From a practical point of view, it's the part of your algorithm whose time dominates any curve you could generate that reflects the time or ticks taken by your algorithm.
• We will say that constant differences in time really don't matter: they could be attributed, for example, to the difference in speed between one computer and another.
• We will care about the shape of the curves that correspond to ticks and time, and we will reason about those curves in terms of their asymptotic behavior.
For now, you have to look at the code and reason about where it spends most of its time, as a function of n.
For the run() method we have been considering, most of the time must be spent in the innermost loop, where we have:
this.value = this.value + i;
That is why placing just one ticker.tick() there creates operation counts that adequately model the time spent by the entire run() method.
## B. Some statements take constant time, some don't
When we look at a line of Java code above, for example:
this.value = this.value + i;
we want to reason about how much time it takes to execute that line of code. Our model so far, in terms of ticker.tick() is that it takes one tick, or operation, for such a line of code to execute.
But is this true for all statements? Surely not. For example, the single line of code:
Arrays.sort(array);
would sort an array of values, which surely takes more than one operation. In fact, for an array of n values, it would take at least n operations to show the results of the sorted array.
Let's investigate this for allocating an array of integers:
• In the studios source folder, find and open the AddsTwoNumbers class in the studio0.allocates package.
• Run AddsTwoNumbers, refresh the outputs folder, and open and plot the time and ticks data written there.
Question B1: What do you see? How do the curves reflect the code inside AddsTwoNumbers?
Do you think the value of n matters here in terms of the time it takes to perform the operation? Why or why not?
• Now find, open, and run the Allocates class, which generates ticks and time data for allocating an array of n integers, with n varying from 0 to 10000.
• Refresh the outputs folder and then open the allocates-time.csv file. Plot the runtime against the value of n.
Question B2: What do the data and plot tell you about the time it takes to allocate an array of n integers?
Is it reasonable to say that the line of code
this.array = new int[this.n]
takes a constant amount of time, independent of the value of this.n?
• Now open and plot the data in allocates-ticks.csv.
Question B3: Do the ticks agree in shape with the time we measured in running the Allocates code?
• There is another call you can make on a ticker which is
ticker.tick(n);
The above call causes the ticker to register n ticks instead of just 1.
• Modify the call to ticker in Allocates so that it registers this.n ticks for the new int[this.n] allocation.
• Rerun Allocates and refresh the outputs folder.
• Generate plots for allocates-ticks.csv and allocates-time.csv.
Question B4: Are the plots more similar to each other than before? What does this tell you about how much time it takes to allocate an array of n integers?
So we see (hopefully) that a line of code doesn't always take a constant amount of time. While it was fair to judge
this.value = this.value + i;
as taking one tick, it is not fair to say that
this.array = new int[this.n]
takes one tick. Instead we must insist it takes n ticks.
Why does the time depend on n? Java must allocate a chunk of memory for the n integers. While that might take constant time, Java must also initialize that chunk of storage to 0. That cannot take constant time: that cost must depend on n.
Throughout this course, you are asked to reason about the time of a computation. Eventually, you must reason about how much time a given operation or statement takes, in terms of the computation's input size n.
You will be producing timing curves, and we will be looking at your code to ensure you have accounted properly for how time is spent.
• Perhaps the most important message of this course is the following:
How you achieve a particular result—how you implement an algorithm—can have a profound impact on the time necessary to obtain that result.
• To continue with this idea, perform the following exercise in your group:
• One person announces a positive integer n. There is no limit on how large n can be. For example, I might announce the integer five. However, you should pick a larger integer, say between 50 and 75.
• Divide the rest of the group into two teams: the decimal team and the tally mark team.
• The decimal team writes down the announced number in decimal form. For my announced five, they would write 5.
• The tally mark team writes down the announced number in tally mark form. The tally mark form for five is four vertical strokes with a fifth diagonal stroke through them.
Question B5: Which group do you expect to finish first?
Can you formalize, in terms of n the amount of work (ticks) that each group must do to write n in the form required for that group?
Both groups achieve the same result, namely the recording of an integer n.
• Under what circumstances is the decimal notation more efficient than tally marks?
• Under what circumstances is the tally mark notation more efficient than decimal notation? If you are not sure, take a look at this.
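For a quick worked instance of the trade-off: writing n = 60 in decimal takes 2 symbols, while tally marks take 60 strokes. In general, decimal needs roughly ⌊log₁₀ n⌋ + 1 symbols, whereas tally marks always need n.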
## C. Growing a List
Let's apply what we have learned to growing an array-based list.
• In your repository, find the Rarrays class in the studio0.growinglist package in the studios source folder.
You will see that this is an abstract class, because the method int getNewSize() is missing. This is intentional, because we want to experiment with various ways of replacing the full array, and we do that by specifying the array's new size.
We provide two extensions of Rarrays for you:
Doubling
causes the new array to be twice the old array's size. This may seem wasteful: a full 1024 cell array would grow to contain 2048 cells just to accommodate, at present, one more cell.
AddOne
causes the new array to be one item greater than the previous array's size. This is the least we can do to make room for one new element in an already full array.
Each of those classes completes the Rarrays abstract class by providing the missing method int getNewSize().
Take a look.
• Let's look at the code from reset(Ticker):
public void reset(Ticker ticker) {
this.ticker = ticker;
this.array = new int[2];
ticker.tick(2);
}
• We save ticker in an instance variable this.ticker so we can use it throughout this class and any of its extensions.
• We start our array with two elements.
• We use the ticker.tick(2) to account for allocating the 2-cell array, recalling from above we model allocating n integers by spending n ticks.
• The run() method attempts to fill the current array, calling replaceArrayWithBiggerOne when the current array no longer has sufficient room.
• We can't make an existing array bigger, but the array instance variable that had two elements can be subsequently reassigned to reference another (perhaps bigger) array.
For example, if for some reason two elements are insufficient for the array, the following would allow the instance variable to reference an 8-element array:
this.array = new int[8];
However, reassignment of the array loses the reference to the previous array. Any data in that previous array would be lost.
To preserve information as you provision for a larger array, you must do three things (sketched in code just after this list):
• hold on to both arrays,
• copy the old information into the newer one, and
• then reassign the instance variable.
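A minimal sketch of this idiom, using the Ticker accounting convention introduced above (illustrative only, not the required solution to the studio task):

int[] bigger = new int[getNewSize()];     // hold on to both arrays
ticker.tick(bigger.length);               // allocating n cells costs n ticks
for (int i = 0; i < this.array.length; i++) {
    bigger[i] = this.array[i];            // copy the old information into the newer one
    ticker.tick();                        // one tick per copied element
}
this.array = bigger;                      // then reassign the instance variable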
• Find and look at the comments in the replaceArrayWithBiggerOne method.
Your first task is to complete this method until it passes the TestRarrays unit tests.
The TestRarrays contains 3 test cases:
• testInit should work as given to you.
• testGrowPreservesData ensures that you retain information from the old array as you make the newer, larger one.
• testGrowSufficientTicks ensures that you account properly for the ticks taken for allocating and filling the newer, larger array.
• Work together to write the appropriate code below the comments.
• When your code successfully passes the testGrowPreservesData test, your code is copying data reliably from the old to new array.
• When your code successfully passes the testGrowSufficientTicks test, you are accounting properly for the allocations.
Work on this until you get the green bar, indicating all tests are passing.
## D. How much does it cost to grow the array?
OK we really don't grow the array, but we continually replace it with a larger one. Let's study how the growth rate of the arrays affects the overall cost of Rarrays.
• Run AddOne, which should run without error.
• As before, the output of the run is in your outputs folder, but you won't see anything until you refresh eclipse's view.
• Open the growbyone-ticks.csv and plot the ticks against n.
Question D1: How would you describe the curve you see?
As a team, think about possible polynomial functions that could generate such a curve.
• Repeat the above steps by running Doubling, refreshing your eclipse view, opening doubling-ticks.csv in Excel, and plotting the same data.
• Question D2: Why does your program generate the data you see?
• While the data has a certain disconnected quality to it, let's analyze the plotted data as follows:
• Using any straight edge you can find nearby, see how tightly the straight edge can bound all the points from below.
• Then see if that straight edge can also tightly bound all the points from above.
• Question D3: Describe what you were able to do with the straight edge.
• If you are successful, you see that the points generated by Doubling, while somewhat strange-looking,
• make sense, because the jumps are due to the time spent doubling
• are boundable below by one linear function
• are boundable above by one linear function
You may not be able to complete this part in studio. Take a look at it on your own if necessary.
## E. Mathematical analysis
Now let's look mathematically at how much time is spent growing the list by our two methods: add one, and doubling.
Add one
Here, we reach a total of n cells in the array, growing by one cell each time, and copying the old array into the slightly larger new array.
How much time does this take? We copy 1, then 2, then 3, and so on, until we have copied n-1 items to make room for the nth item. If we count the time for allocation, we can extend the sum to n. If T(n) represents the total time taken to achieve an array of n elements by growing one at a time, then
T(n) = 1 + 2 + ⋯ + n = $\sum_{i=1}^{n} i$
Question E1: What is the closed form solution for T(n)?
Search the web for summation formulas if you need help. You are expected to know such sums in this course.
Does it agree with what you have seen empirically?
Doubling
Here, we reach a total of n cells. Let's assume n = 2^k for some k, so that we have performed work:
T(n) = 1 + 2 + 4 + ⋯ + 2^k = $\sum_{i=0}^{k} 2^{i}$
Question E2: What is the closed-form solution for T(n)?
Recalling that n = 2^k, can you express the result in terms of n?
Based on your analysis and what you have seen, what effect does the growing strategy have on the time necessary to accommodate the array-based list that grew to n items?
## Further exploration
Based on what you have seen empirically and shown mathematically, what do you think would happen if you tried the following strategies for increasing the size of the current array as it fills:
• For one strategy, you grow the array by 10% of its previous size, making sure that the growth adds at least one extra cell to the next array.
• For another strategy, you increase the array by 20 cells each time it fills.
Returning to your code, you will see an OurGrowth1 and OurGrowth2 class. Use those to experiment with these ideas and see what results you get.
• If there is a file for you to complete in studiowriteups, please respond to the questions and supply any requested information.
• You must commit and push all of your work to your repository. It's best to do this from the top-most level of your repository, which bears your name and student ID.
https://www.toppr.com/guides/maths/application-of-derivatives/tangents-and-normals/
# Tangents and Normals
Tangents and Normals: Have you ever sat on a merry-go-round? If yes, then you would understand from your experience when I tell you that the force you experience is towards the centre of the merry-go-round but your velocity (the tendency of motion) is in the way towards which your body is pointing.
Another way of saying the same thing would be to let you know that your velocity at any point is tangential while the force at any point is normal to the circle along which you are moving. Can you draw a connection between both the ways of saying the same thing?
Don’t worry if you can’t because that’s what this branch of application of derivatives is concerned with: Finding tangents and normals to a given curve. It is a branch of great significance in finding the different maxima and minima of a function, analyzing the directions of velocity and acceleration of a moving object, finding the angles and the shortest distance between two curves and much more. Let’s jump straight into it!
## Tangent
A tangent at a point on the curve is a straight line that touches the curve at that point and whose slope is equal to the gradient/derivative of the curve at that point. From the definition, you can deduce how to find the equation of the tangent to the curve at any point. Given a function y = f(x), the equation of the tangent to this curve at x = x0 can be found in the following way:
• Find out the gradient/derivative of the curve at the point x = x0: To do this one needs to calculate $$\left.\frac{dy}{dx}\right\rvert_{x = x_0}$$. Let us call this value m, in analogy to the slope of a straight line.
• Find the equation of the straight line passing through the point (x0, y0), where y0 = y(x0), with slope m. This is quite straightforward and can be found as $$\frac{y - y_0}{x - x_0} = m$$ You have found the equation of the tangent to the curve at the given point!
## Normal
A normal at a point on the curve is a straight line that intersects the curve at that point and is perpendicular to the tangent at that point. If its slope is given by n, and the slope of the tangent at that point or the value of the gradient/derivative at that point is given by m, then we have m × n = -1. Steps for finding the normal to a given curve y = f(x) at a point x = x0:
• Find out the gradient/derivative of the curve at the point x = x0: This first step is exactly the same as in the method of finding the equation of the tangent to the curve, i.e. m = $$\left.\frac{dy}{dx}\right\rvert_{x = x_0}$$
• Find the slope n of the normal: As the normal is perpendicular to the tangent, we have $${n = } \frac{-1}{m}$$
• Now, find the equation of the straight line passing through the point (x0, y0) with slope n: The equation is given by $$\frac{y - y_0}{x - x_0} = n$$
It might be quite noticeable that both the tangents and normals to a curve go hand in hand. Both are easily derivable from one another. Now take a look at the diagram below to visualize them better, and then proceed towards the solved example to clear your doubts.
## Solved Examples on Tangents and Normals
Question 1: Consider the curve given by y = f(x) = x³ – x + 3.
1. Find the equation of the line tangent to the curve at the point (1,3)
2. Find the line normal to the curve at the point (1,3)
Answer : a) We can see that the point (1,3) satisfies the equation of the curve. Now, for the equation of the tangent, we need the gradient of the curve at that point. It can be found as,
f(x) = x³ – x + 3
f′(x) = 3x² – 1
Then, f′(1) = 3(1)² – 1 = 2 = m
The tangent would be the straight line passing through (1,3) with slope = 2.
(y – 3) = 2(x – 1)
y = 2x + 1
2x – y + 1 = 0
b) The normal would pass through the point (1,3) and its slope n would be given by,
n = -(1/m) = -(1/2) = -0.5
Equation of the normal:
(y – 3) = -0.5(x – 1)
2y = -x + 7
x + 2y – 7 = 0
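As a quick check (added for verification): both lines pass through (1, 3). The tangent gives y = 2(1) + 1 = 3, and the normal satisfies 1 + 2(3) – 7 = 0.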
Question 2: Explain the difference between a tangent and a normal?
Answer: A tangent refers to a straight line whose extension takes place from a point on a curve, with a gradient equal to the curve’s gradient existing at that particular point. A normal, in contrast, refers to a straight line whose extension takes place from a curve’s point such that it is perpendicular to the point’s tangent.
Question 3: Explain how can one find the tangent?
Answer: One can find the tangent by the following steps:
• Sketch the tangent line and the function.
• Take the first derivative in order to find the equation for the tangent line’s slope.
• Enter the x value belonging to the point under investigation.
• In point-slope form, write the equation of the tangent line.
• Finally, do confirmation of the equation on the graph.
Question 4: Can we say that the gradient is the same as a slope?
Answer: Gradient refers to the degree of a graph’s steepness at any point. Slope refers to the graph’s gradient at any point. So, one can say that both are the same.
Question 5: Name the four kinds of slopes?
Answer: The four kinds of slopes are zero, undefined, positive, and negative.
https://electronics.stackexchange.com/questions/582124/why-program-for-mi-v-bigger-than-64-kb-does-not-build-properly
# Why does a program for Mi-V bigger than 64 kB not build properly?
I am working with the Microsemi PolarFire Splash Kit evaluation board (Microchip's PolarFire MPF300T FPGA on board). My project has a Mi-V RV32 soft-core processor (RISC-V ISA) and I am writing firmware for it. I set a 256 kB TCM as the memory for the executable program. When my program's size is small, all is fine: the program compiles and executes in debug mode as expected. However, if the text + data sections of the .elf exceed 64 kB, the .elf builds but the program doesn't work. The main program doesn't even load in debug mode. I realized that the launched code has some odd arbitrary instructions at the reset vector instead of the boot code miv_rv32_entry.S from miv_rv32_hal. What's wrong?
On the Libero SoC side (Libero SoC v2021.1) I am using Mi-V RV32 3.0.100 with the IMC RISC-V extensions. TCM is on, the TCM APB slave (TAS) is off, and the TCM has the address range 0x80000000 - 0x8003ffff. The reset vector is also set to 0x80000000.
For making the firmware I am using SoftConsole v2021.1. The HAL is MIV_RV32 HAL version 3.0.109. In the Properties -> Target Processor pane the Multiply and Compressed extensions are on, and align is set to strict. -O0 optimization was chosen and "use newlib-nano" is off. For the linker script I took miv-rv32-ram.ld from miv_rv32_hal and changed it to:
OUTPUT_ARCH( "riscv" )
ENTRY(_start)
MEMORY
{
ram (rwx) : ORIGIN = 0x80000000, LENGTH = 256k
}
RAM_START_ADDRESS = 0x80000000; /* Must be the same value MEMORY region ram ORIGIN above. */
RAM_SIZE = 256k; /* Must be the same value MEMORY region ram LENGTH above. */
STACK_SIZE = 8k; /* needs to be calculated for your application */
HEAP_SIZE = 8k; /* needs to be calculated for your application */
SECTIONS
{
.entry : ALIGN(0x10)
{
KEEP (*(SORT_NONE(.entry)))
. = ALIGN(0x10);
} > ram
.text : ALIGN(0x10)
{
*(.plt)
. = ALIGN(0x10);
KEEP (*crtbegin.o(.ctors))
KEEP (*(EXCLUDE_FILE (*crtend.o) .ctors))
KEEP (*(SORT(.ctors.*)))
KEEP (*crtend.o(.ctors))
KEEP (*crtbegin.o(.dtors))
KEEP (*(EXCLUDE_FILE (*crtend.o) .dtors))
KEEP (*(SORT(.dtors.*)))
KEEP (*crtend.o(.dtors))
*(.gcc_except_table)
*(.eh_frame_hdr)
*(.eh_frame)
KEEP (*(.init))
KEEP (*(.fini))
PROVIDE_HIDDEN (__preinit_array_start = .);
KEEP (*(.preinit_array))
PROVIDE_HIDDEN (__preinit_array_end = .);
PROVIDE_HIDDEN (__init_array_start = .);
KEEP (*(SORT(.init_array.*)))
KEEP (*(.init_array))
PROVIDE_HIDDEN (__init_array_end = .);
PROVIDE_HIDDEN (__fini_array_start = .);
KEEP (*(.fini_array))
KEEP (*(SORT(.fini_array.*)))
PROVIDE_HIDDEN (__fini_array_end = .);
. = ALIGN(0x10);
} > ram
/* short/global data section */
.sdata : ALIGN(0x10)
{
__sdata_start = .;
PROVIDE( __global_pointer$ = . + 0x800);
*(.srodata.cst16) *(.srodata.cst8) *(.srodata.cst4) *(.srodata.cst2)
*(.srodata*)
. = ALIGN(0x10);
__sdata_end = .;
} > ram
/* data section */
.data : ALIGN(0x10)
{
__data_start = .;
*(.got.plt) *(.got)
*(.shdata)
. = ALIGN(0x10);
__data_end = .;
} > ram
/* sbss section */
.sbss : ALIGN(0x10)
{
__sbss_start = .;
*(.scommon)
. = ALIGN(0x10);
__sbss_end = .;
} > ram
/* sbss section */
.bss : ALIGN(0x10)
{
__bss_start = .;
*(.shbss)
*(COMMON)
. = ALIGN(0x10);
__bss_end = .;
} > ram
/* End of uninitialized data segment */
_end = .;
.heap : ALIGN(0x10)
{
__heap_start = .;
. += HEAP_SIZE;
__heap_end = .;
. = ALIGN(0x10);
_heap_end = __heap_end;
} > ram
.stack : ALIGN(0x10)
{
__stack_bottom = .;
. += STACK_SIZE;
__stack_top = .;
} > ram
}
The remaining part of the linker script was unchanged. The .map and .lst files seem OK; according to them, the boot code is in the right place.
Using SmartDebug, I discovered that the content of the TCM memory differs from the built .hex file. So I think the problem is not (only, at least) in the linker script. Then I tried to boot from external SPI flash (in case the problem was with how the binary is loaded in debug mode), but the result is the same (although the memory content differs in a different way in that case). If the firmware is small, the .hex content and the data loaded into TCM match. A very strange situation. Why could this be?
• I would start by checking the link map. Could you post the full linker script? The map file would be too big to post, but it will be needed eventually.
– jay
Aug 18, 2021 at 13:37
I have to explain this first, since I need the readers' understanding. I may not be able to answer it at once: I passed that part of the process some time ago, even for my current project. I would be very happy to yield if someone currently working at that level comes in.
You are in a serious area that EEs encounter when software people complain the hardware is not working. The linker script is always challenging because once it has been settled it does not need to be touched, and is forgotten. However, it is a critical part, especially when touching an OS/RTOS, bootloader, configurable architecture, multi-core system, etc.
First, though you did not ask: my advice to @ArchiMAD is to bite the bullet and understand the meaning of every line and every word in the linker script, and then put it under your complete control. You will come back to it, and feel at ease coming back, when you run into problems like this, which often arise in the firmware and embedded areas of EE work.
For the reasons explained above, I opened your script and tried to grapple with it, but it is going to take some time. So I will just suggest what to look at to find the ultimate answer, and quickly run down the list. You will come back with either your own answer or a question.
1. I cannot tell, looking at the script, where the core finds the very first code. Find where the reset goes. Locate the reset vector (not going to talk about other vectors) in the linker script; you may also look at the map file. You can trace the instruction/assembly/core sequence from "system reset" using a debugger header (JTAG). For example, by convention it is located at the LOAD_ADDRESS, since it is expected to be non-volatile. It could be in something like ".vector", ".reset", or just at the start of ".text".
2. Figure out where and how the binary (code) gets loaded and executed. These are sometimes called the "load address" and the "run address". The linker input (through the configurations) may supply them. Copy the linker output from the console into your favorite text editor, and analyze every word in it.
.entry : ALIGN(0x10)
{
    KEEP (*(SORT_NONE(.entry)))
    . = ALIGN(0x10);
} > ram
3. If the reset is supposed to be at the beginning, check that ".entry" reserves space for the reset (vector).
4. The ".stack" segment is too close to ".heap", in my opinion. You may want to place it at the end of ">ram" (see the sketch after this list). I would guess your core's stack grows downward ("push down, pop up"). If any part of your code requires the stack size to be known, that is either for a sanity check or for a "kernel" to detect stack usage. Otherwise, you can just make it size "1" at the top of the memory.
5. Read c_start (or similar, likely in assembly) to see how it prepares its "run" environment (.ctor, .dtor, .vtab (those of C++), .init, .const, etc.), along with the relocation of dynamic vectors and the heap, stack, and variable initialization, before main() is called.
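On the stack-placement point in item 4, one way to pin the stack to the very top of RAM is with explicit symbols; a minimal sketch, reusing the symbol names from the script above:

PROVIDE(__stack_top = ORIGIN(ram) + LENGTH(ram));
PROVIDE(__stack_bottom = __stack_top - STACK_SIZE);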
When you feel like you are about to start writing a C/C++ compiler, you are doing it right.
Sorry that I could not give you an immediate answer.
It seems that I found the solution. I customized the configuration of Mi-V: I checked the boxes "Internal MTIME" and "Internal MTIME IRQ" (even though I don't use the internal timer). After that I saw the expected boot code at the reset vector's location. I also noted that checking "Internal MTIME", "Internal MTIME IRQ" and "GPR Registers" facilitates place and route, significantly reducing its time. The linker script remained unchanged.
https://en.wiktionary.org/wiki/hyperbolic
# hyperbolic
## English
### Etymology 1
#### Adjective
hyperbolic (comparative more hyperbolic, superlative most hyperbolic)
1. of or relating to hyperbole
2. using hyperbole: exaggerated
This hyperbolical epitaph. — Fuller.
• 2012 May 20, Nathan Rabin, “TV: Review: THE SIMPSONS (CLASSIC): “Marge Gets A Job” (season 4, episode 7; originally aired 11/05/1992)”, in The Onion AV Club[1]:
At the risk of being slightly hyperbolic, the fourth season of The Simpsons is the greatest thing in the history of the universe.
### Etymology 2
#### Adjective
hyperbolic (not comparable)
1. Of or pertaining to a hyperbola.
• 1988, R. F. Leftwich, "Wide-Band Radiation Thermometers", chapter 7 of, David P. DeWitt and Gene D. Nutter, editors, Theory and Practice of Radiation Thermometry, ISBN 0471610186, page 512 [2]:
In this configuration the on-axis image is produced at the real hyperbolic focus (fs2) but off-axis performance suffers.
2. Indicates that the specified function is a hyperbolic function rather than a trigonometric function.
The hyperbolic cosine of zero is one.
3. (mathematics, of a metric space or a geometry) Having negative curvature or sectional curvature.
• 1998, Katsuhiko Matsuzaki and Masahiko Taniguchi, Hyperbolic Manifolds and Kleinian Groups, 2002 reprint, Oxford, ISBN 0198500629, page 8, proposition 0.10 [3]:
There is a universal constant $m_0 > 0$ such that every hyperbolic surface $R$ has an embedded hyperbolic disk with radius greater than $m_0$.
4. (geometry, topology, of an automorphism) Whose domain has two (possibly ideal) fixed points joined by a line mapped to itself by translation.
• 2001, A. F. Beardon, "The Geometry of Riemann Surfaces", in, E. Bujalance, A. F. Costa, and E. Martínez, editors, Topics on Riemann Surfaces and Fuchsian Groups, Cambridge, ISBN 0521003504, page 6 [4]:
A hyperbolic isometry $f$ has two (distinct) fixed points on $\partial\mathcal{H}$.
5. (topology) Of, pertaining to, or in a hyperbolic space (a space having negative curvature or sectional curvature).
• 2001, A. F. Beardon, "The Geometry of Riemann Surfaces", in, E. Bujalance, A. F. Costa, and E. Martínez, editors, Topics on Riemann Surfaces and Fuchsian Groups, Cambridge, ISBN 0521003504, page 6 [5]:
Exactly one hypercycle is a hyperbolic geodesic, and this is called the axis $A_f$ of $f$.
https://dmoj.ca/problem/dmopc21c2p4/editorial
Editorial for DMOPC '21 Contest 2 P4 - Water Mechanics
Remember to use this editorial only when stuck, and not to copy-paste code from it. Please be respectful to the problem author and editorialist.
Submitting an official solution before solving the problem yourself is a bannable offence.
Author: Riolku
For this subtask, the simplest solution is probably, for all indices $i$, to determine the endpoints $l_i$ and $r_i$ of the interval that would be filled if water were poured at cell $i$. Now run a DP where dp[i] represents the minimum cost such that the first $i$ cells have water in them. This should be a fairly classical DP: sort the intervals and transition between them. Note that intervals can overlap.
Time Complexity:
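To make the transition concrete, here is a minimal sketch of that interval-cover DP, assuming each pour has already been preprocessed into an interval with a cost; the names min_cost, l, r, and c are mine, not the editorial's, and the linear scan for the minimum could be replaced by a segment tree for a faster transition.

```python
# Sketch of the classical interval-cover DP described above.
# dp[i] = minimum cost so that cells 1..i all have water.
def min_cost(n, intervals):
    """intervals: list of (l, r, c) meaning cells l..r get wet at cost c."""
    INF = float("inf")
    dp = [INF] * (n + 1)
    dp[0] = 0
    # Process intervals by right endpoint; an interval [l, r] can extend
    # any wet prefix 1..j with j >= l - 1 (overlap is fine) to a prefix 1..r.
    for l, r, c in sorted(intervals, key=lambda t: t[1]):
        best = min(dp[l - 1 : r + 1])  # O(n) scan; a segment tree makes this O(log n)
        dp[r] = min(dp[r], best + c)
    return dp[n]

print(min_cost(5, [(1, 3, 2), (2, 5, 3), (4, 5, 1)]))  # 3: use (1,3) and (4,5)
```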
https://meta.stackoverflow.com/questions/329986/can-we-have-a-button/329989
# Can we have a button?
People really really really like abusing the <kbd> html element for quasi-buttons. The <kbd> element is supposed to be used in situations where we want to signify a keyboard key (e.g. Press Ctrl + C to copy something).
Can we have special markup to create a button, or a button-like element? Just so people stop abusing another element for it?
• Can you describe the kind of situation, or link to a post, where a button element would be useful? – Nisse Engström Jul 28 '16 at 17:59
• muh semantic web! – Ripped Off Jul 28 '16 at 18:12
• stackoverflow.com/documentation/proposed/changes/64440 stackoverflow.com/documentation/java/99/… I don't say they would be useful there. But people really want to abuse kbd there to create some kind of quasi snippet like thing. – Sumurai8 Jul 28 '16 at 18:53
• Oh, is this a big thing on that there Documentation doohickey I keep hearing about? – Nisse Engström Jul 28 '16 at 19:22
• People do it on SO too. I just don't see nearly as many answers on SO as I see edit suggestions on Docs right now. – Sumurai8 Jul 28 '16 at 19:29
• I don't think that I've ever seen it. Anyway, I downvoted both posts for not bothering to explain the issue. – Nisse Engström Jul 28 '16 at 19:37
• I guess this would make you nervous :p – Félix Gagnon-Grenier Jul 28 '16 at 20:06
• @FélixGagnon-Grenier Yes, that makes me feel like burning someone with fire. – Sumurai8 Jul 28 '16 at 21:44
Having a replacement will not stop people who want to abuse it, and regular hyperlinks work just fine. If you see someone abusing it, change it to a regular link via editing.
• Why would a hyperlink be a meaningful replacement for a button? – Nisse Engström Jul 28 '16 at 18:00
• They're still hyperlinks. No one actually wants buttons (that is, the behavior associated with buttons) to be used in these situations; button behavior is a subset of hyperlink behavior, you only lose functionality. The markup looks like a button, but still works like (and is...) a hyperlink. @Nisse. So what you get is a misleading appearance, and... Nothing else. – Shog9 Jul 28 '16 at 18:01
• A misleading appearance is how I got my job and friends. Don't dismiss it too quickly. – Bart Jul 28 '16 at 18:07
• Nobody expects <kbd> elements to behave like actual keypresses. Similarly, I would assume that a <button> element is meant to appear like a button, not actually behave like one. Both posts are still missing some basic explanation. – Nisse Engström Jul 28 '16 at 18:10
• @NisseEngström No, no-one would expect those elements to press actual keys on their keyboard. The correct usage however is to mark them like here or like here. – Sumurai8 Jul 28 '16 at 19:01
http://slideplayer.com/slide/4317710/
# Elementary Number Theory and Methods of Proof
Chapter 3: Elementary Number Theory and Methods of Proof
## 3.1 Direct Proof and Counterexample I: Integers
Mathematical Proof
A mathematical proof is a carefully reasoned argument to convince a skeptical listener. (p. 125)
Example: How would you prove that if 5x + 3 = 33 then x = 6?
5x + 3 − 3 = 33 − 3
5x = 30
5x / 5 = 30 / 5
x = 6
Properties
Equality: A = A; if A = B then B = A; if A = B and B = C then A = C.
The integers are closed under addition, subtraction, and multiplication:
integer + integer = integer
integer − integer = integer
integer * integer = integer
Even & Odd Integers
Definition: an integer n is even if, and only if, n equals twice some integer.
n is even ⇔ ∃ an integer k such that n = 2k
An integer n is odd if, and only if, n equals twice some integer plus 1.
n is odd ⇔ ∃ an integer k such that n = 2k + 1
Example
Is 0 even? Yes, 0 = 2 * 0.
Is −301 odd? Yes, −301 = 2(−151) + 1.
Is 6a²b even, if a and b are integers? Yes, 6a²b = 2(3a²b), and 3a²b is an integer (integers are closed under multiplication).
If a and b are integers, is 10a + 8b + 1 odd? Yes: 10a + 8b + 1 = 2(5a + 4b) + 1, so with s = 5a + 4b it has the form 2s + 1 and is odd by the definition of odd integers.
Prime and Composite
Definition of Prime: n is prime ⇔ ∀ positive integers r and s, if n = r * s then r = 1 or s = 1 (where n > 1).
Definition of Composite: n is composite ⇔ ∃ positive integers r and s such that n = r * s and r ≠ 1 and s ≠ 1.
Example
Is 1 prime? No: the definition requires n > 1.
Is it true that every integer greater than 1 is either prime or composite? Yes: for any integer n > 1 the two definitions are negations of each other.
The first 6 prime numbers: 2, 3, 5, 7, 11, 13.
The first 6 composite numbers: 4, 6, 8, 9, 10, 12.
Constructive Proofs of Existence
An existential statement ∃x ∈ D such that Q(x) is true when Q(x) holds for at least one x in D. A constructive proof demonstrates existence by providing a method for producing an object x for which Q(x) holds.
Example: constructive proofs of existence
Prove: There is an integer n that can be written in two ways as a sum of two prime numbers. (Note: for an ∃ proof you need only one example of truth.) Can you find an example of n that makes the statement true?
Solution: Let n = 10. Then 10 = 5 + 5 and 10 = 3 + 7.
Suppose that r and s are integers. Prove: ∃ an integer k such that 22r + 18s = 2k.
Solution: Let k = 11r + 9s. Then 2k = 2(11r + 9s) = 22r + 18s by the distributive law.
Nonconstructive Proof of Existence
A nonconstructive proof shows that the existence of a value of x making Q(x) true is guaranteed by an axiom or a prior theorem, or shows that the assumption that there is no such x leads to a contradiction.
Disproving a Universal Statement by Counterexample
A universal statement ∀x in D, if P(x) then Q(x) must be true for all x. Its negation is ∃x in D such that P(x) and ~Q(x), so to disprove it we must find an x that makes P(x) true and Q(x) false.
Disproof by Counterexample: To disprove a statement of the form "∀x in D, if P(x) then Q(x)," find a value of x in D for which P(x) is true and Q(x) is false. Such an x is called a counterexample.
Example: Disprove: ∀ real numbers a and b, if a² = b² then a = b.
Here P is a² = b² and Q is a = b; a case where P is true and Q is false disproves the universal statement. Take a = 1 and b = −1: then P is true and Q is false, hence the statement is disproved.
Proving Universal Statements
Universal: ∀x in D, if P(x) then Q(x). Proof by exhaustion can be used if D is finite and small; this proof requires testing every x in D. Exhaustion can also be applied through P(x): when only a finite set of elements satisfies P(x), exhaustion can be employed.
Example: method of exhaustion for a finite P(x)
∀n ∈ Z, if n is even and 4 ≤ n ≤ 30, then n can be written as a sum of two prime numbers.
P(n) holds for n ∈ {4, 6, 8, 10, 12, …, 28, 30}; Q: n is a sum of two prime numbers.
Solution: 4 = 2 + 2, 6 = 3 + 3, 8 = 3 + 5, 10 = 5 + 5, 12 = 5 + 7, 14 = 7 + 7, 16 = 3 + 13, 18 = 5 + 13, 20 = 7 + 13, …
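This exhaustion is easy to mechanize; a small brute-force check (my own illustration, not part of the original slides) confirms that every even n from 4 to 30 is a sum of two primes:

```python
# Brute-force version of the proof by exhaustion above.
def is_prime(k):
    return k > 1 and all(k % d != 0 for d in range(2, int(k ** 0.5) + 1))

for n in range(4, 31, 2):
    p = next(p for p in range(2, n) if is_prime(p) and is_prime(n - p))
    print(f"{n} = {p} + {n - p}")
```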
Proving Universal Statements
Method of Generalizing from the Generic Particular To show that every element of a domain satisfies a certain property, suppose x is a particular but arbitrarily chosen element of the domain, and show that x satisfies the property.
Example (math trick): Pick a number, add 5, multiply by 4, subtract 6, divide by 2, and subtract twice the original number. The result is always 7: starting from x, we get (4(x + 5) − 6)/2 − 2x = (4x + 14)/2 − 2x = 2x + 7 − 2x = 7.
x is the "particular" because it represents a single quantity; x is also "arbitrarily chosen" (generic) because it can represent any number.
Proving Universal Statements
Method of Direct Proof: an application of generalizing from the generic particular to the conditional "if P(x) then Q(x)." To prove ∀x in D, if P(x) then Q(x), suppose x is a particular but arbitrarily chosen element of D that satisfies P(x), and then show that x satisfies Q(x).
1. Suppose x ∈ D and P(x).
2. Show that the conclusion, Q(x), is true.
Example: Prove that the sum of any two even integers is even.
(Formal) ∀ integers m and n, if m and n are even then m + n is even.
Starting point: Suppose m and n are particular but arbitrarily chosen integers that are even.
To show: m + n is even.
m = 2r for some integer r, and n = 2s for some integer s.
Then m + n = 2r + 2s = 2(r + s) = 2k, where k = r + s is an integer.
Hence m + n is even by definition. (QED)
This proves the theorem "the sum of any two even integers is even."
Writing Proofs of Universal Statements
1. Copy the statement of the theorem to be proved.
2. Clearly mark the beginning of the proof with "Proof."
3. Make your proof self-contained: identify and initialize each variable.
4. Write your proof in complete sentences (shorthand is allowed, e.g. "Then m + n = 2r + 2s").
5. Give a reason for each assertion: by hypothesis, by definition of, by theorem.
6. Use words to make the logical argument clear: Therefore, It follows, Hence, Then, Thus, etc.
Getting Proofs Started
Write the first sentence of a proof (the "starting point") and the last sentence (the "conclusion to be shown").
Example: Every complete bipartite graph is connected.
(Formal) ∀ graphs G, if G is complete and bipartite, then G is connected.
Starting point: Suppose G is a graph such that G is complete and bipartite.
Conclusion to be shown: G is connected.
The proof will have the first and last sentences:
First: Suppose G is a graph such that G is complete and bipartite.
Last: Therefore G is connected.
Disproving an Existential Statement
The negation of an existential statement is a universal statement, so to prove an existential statement false you must prove its negation, a universal statement, true.
Example: There is a positive integer n such that n² + 3n + 2 is prime.
(Negation) For all positive integers n, n² + 3n + 2 is not prime.
Proof: Suppose n is any positive integer. We can factor n² + 3n + 2 to obtain n² + 3n + 2 = (n + 1)(n + 2). We know that n + 1 and n + 2 are integers (because they are sums of integers) and that n + 1 > 1 and n + 2 > 1. Thus n² + 3n + 2 is a product of two integers greater than 1, and so n² + 3n + 2 is not prime (by the definition of prime).
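The factorization argument can also be spot-checked numerically (an illustration of the disproof, not part of the original slides):

```python
# Check that n^2 + 3n + 2 = (n + 1)(n + 2) is composite for small positive n.
for n in range(1, 11):
    value = n * n + 3 * n + 2
    assert value == (n + 1) * (n + 2)   # the factorization used in the proof
    assert n + 1 > 1 and n + 2 > 1      # both factors exceed 1, so value is composite
    print(f"n = {n}: {value} = {n + 1} * {n + 2}")
```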
http://www.mathmaa.com/Proof.html
# Proof of Cauchy - Riemann equations
Theorem:
The necessary condition for a function f(z) = u(x, y) + i v(x, y) to be analytic in a domain D is that u and v satisfy the Cauchy-Riemann equations, i.e. $$\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y};\quad \frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x}$$.
Proof :
Let f(z) be analytic in D. Then f'(z) exists uniquely at every point of D, so for all z $$\in$$ D
$${f}'(z)=\lim_{\Delta z\rightarrow 0} \frac{f(z+\Delta z)-f(z)}{\Delta z}$$
exists, and is unique as $$\Delta z\rightarrow 0$$ along any path we choose.
Now we consider the following two cases:
Case I. Suppose first that in
$$\frac{f(z+\Delta z)-f(z)}{\Delta z}$$
$$\Delta z$$ approaches zero along the real axis or x-axis then $$\Delta z = \Delta x$$ and $$\Delta y = 0$$.
Thus $$\frac{f(z + \Delta z) - f(z)}{\Delta z}$$
= $$\frac{u(x +\Delta x, y) + iv(x + \Delta x, y) - u(x,y) - iv(x,y)}{\Delta x}$$
= $$\frac{u(x + \Delta x,y) - u(x,y)}{\Delta x}$$ + i $$\frac{v(x +\Delta x,y)-v(x,y)}{\Delta x}$$
Now
$$\lim_{\Delta z \rightarrow 0} \frac{f(z+\Delta z)-f(z)}{\Delta z}$$
= $$\lim_{\Delta x \rightarrow 0} \frac{u(x + \Delta x,y) - u(x,y)}{\Delta x}$$ + i $$\lim_{\Delta x \rightarrow 0} \frac{v(x + \Delta x,y) - v(x,y)}{\Delta x}$$
Thus f'(z) = $$\frac{\partial u}{\partial x} + i \frac{\partial v}{\partial x}$$ .........(1)
Case II. Next let $$\Delta z$$ approaches zero along the imaginary axis (or y-axis), then $$\Delta z$$ = i$$\Delta y, \Delta x = 0$$ and we have
$$\frac{f(z + \Delta z) - f(z)}{\Delta z}$$
= $$\frac{u(x, y+\Delta y) + iv(x, y+\Delta y) - u(x,y) - iv(x,y)}{i\Delta y}$$
= $$\frac{u(x, y+ \Delta y) - u(x,y)}{i\Delta y}$$ + $$\frac{v(x, y+\Delta y)-v(x,y)}{\Delta y}$$
Thus
f'(z) = $$\lim_{\Delta z \rightarrow 0} \frac{f(z+\Delta z)-f(z)}{\Delta z}$$
= $$\lim_{\Delta y \rightarrow 0} \frac{u(x,y + \Delta y) - u(x,y)}{i\Delta y}$$ + $$\lim_{\Delta y \rightarrow 0} \frac{v(x, y+ \Delta y) - v(x,y)}{\Delta y}$$
= $$\frac{1}{i} \frac{\partial u}{\partial y} + \frac{\partial v}{\partial y}$$
$$\therefore f'(z) = \frac{\partial v}{\partial y} - i \frac{\partial u}{\partial y}$$ .........(2)
From (1) and (2), we have
f'(z) = $$\frac{\partial u}{\partial x} + i \frac{\partial v}{\partial x}$$
= $$\frac{\partial v}{\partial y} - i \frac{\partial u}{\partial y}$$
Equating real and imaginary parts in above, we obtain
$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \quad \frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}$$ .........(3)
Equations (3), known as the Cauchy-Riemann equations, give the necessary condition for a function f(z) to be analytic.
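As an illustration (not part of the original proof), the equations can be verified numerically with central finite differences for an analytic function such as f(z) = z²; the helper names below are my own:

```python
# Numerical check of the Cauchy-Riemann equations for f(z) = z^2,
# which is analytic everywhere, using central finite differences.
def f(z):
    return z * z

def cr_residuals(f, x, y, h=1e-6):
    """Return (du/dx - dv/dy, du/dy + dv/dx) at the point x + iy."""
    du_dx = (f(complex(x + h, y)).real - f(complex(x - h, y)).real) / (2 * h)
    du_dy = (f(complex(x, y + h)).real - f(complex(x, y - h)).real) / (2 * h)
    dv_dx = (f(complex(x + h, y)).imag - f(complex(x - h, y)).imag) / (2 * h)
    dv_dy = (f(complex(x, y + h)).imag - f(complex(x, y - h)).imag) / (2 * h)
    return du_dx - dv_dy, du_dy + dv_dx

print(cr_residuals(f, 1.3, -0.7))  # both residuals should be ~0
```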
https://totallydisconnected.wordpress.com/tag/la-courbe/
## The Newton stratification is true
Let $G$ be a connected reductive group over $\mathbf{Q}_p$, and let $\mu$ be a $G$-valued (geometric) conjugacy class of minuscule cocharacters, with reflex field $E$. In their Annals paper, Caraiani and Scholze defined a very interesting stratification of the flag variety $\mathcal{F}\ell_{G,\mu}$ (regarded as an adic space over $E$) into strata $\mathcal{F}\ell_{G,\mu}^{b}$, where $b$ runs over the Kottwitz set $B(G,\mu^{-1})$. Let me roughly recall how this goes: any (geometric) point $x \to \mathcal{F}\ell_{G,\mu}$ determines a canonical modification $\mathcal{E}_x \to \mathcal{E}_{triv}$ of the trivial $G$-bundle on the Fargues-Fontaine curve, meromorphic at $\infty$ and with “meromorphy $\mu$” in the usual sense. On the other hand, Fargues proved that $G$-bundles on the curve are classified up to isomorphism by $B(G)$, and then Caraiani-Scholze and Rapoport proved that $\mu$-meromorphic modifications of the trivial bundle are exactly classified by the subset $B(G,\mu^{-1})$ (CS proved that only these elements can occur; R proved that all of these elements occur). The Newton stratification just records which element of this set parametrizes the bundle $\mathcal{E}_x$.
The individual strata are pretty weird. For example, if $G=GL_n$ and $\mu=(1,0,\dots,0)$, then $\mathcal{F}\ell_{G,\mu} \simeq \mathbf{P}^{n-1}$ and the open stratum is just the usual Drinfeld space $\Omega^{n-1}$, but the other strata are of the form $\Omega^{n-i-1} \times^{P_{n-i,i}(\mathbf{Q}_p)} GL_n(\mathbf{Q}_p)$, where $P_{n-i,i}$ is the evident parabolic in $GL_n$ and the action on $\Omega^{n-i-1}$ is via the natural map $P_{n-i,i}(\mathbf{Q}_p) \twoheadrightarrow GL_{n-i}(\mathbf{Q}_p)$. Qualitatively, this says that they’re unions of profinitely many copies of lower-dimensional Drinfeld spaces. In particular, the non-open strata are not rigid analytic spaces. There are also examples of strata which don’t have any classical rigid analytic points. However, the $\mathcal{F}\ell_{G,\mu}^{b}$‘s are always perfectly well-defined from the topological or diamond point of view.
Anyway, I’m getting to the following thing, which settles a question left open by Caraiani-Scholze.
Theorem. Topologically, the Newton stratification of $\mathcal{F}\ell_{G,\mu}$ is a true stratification: the closure of any stratum is a union of strata.
The idea is as follows. After base-changing from $E$ to the completed maximal unramified extension $E'$ (which is a harmless move), there is a canonical map $\zeta: \mathcal{F}\ell_{G,\mu,E'} \to \mathrm{Bun}_{G}$ sending $x$ to the isomorphism class of $\mathcal{E}_x$. Here $\mathrm{Bun}_{G}$ denotes the stack of $G$-bundles on the Fargues-Fontaine curve, regarded as a stack on the category of perfectoid spaces over $\overline{\mathbf{F}_p}$. This stack is stratified by locally closed substacks $\mathrm{Bun}_{G}^{b}$ defined in the obvious way, and by construction the Newton stratification is just the pullback of this stratification along $\zeta$. Now, by Fargues’s theorem we get an identification $|\mathrm{Bun}_{G}| = B(G)$, so it is completely trivial to see that the stratification of $\mathrm{Bun}_{G}$ is a true stratification (at the level of topological spaces). We then conclude by the following observation:
Proposition. The map $\zeta$ is universally open.
The idea is to observe that $\zeta$ factors as a composition of two maps $\mathcal{F}\ell_{G,\mu,E'} \to [\mathcal{F}\ell_{G,\mu,E'}/\underline{G(\mathbf{Q}_p)}] \to \mathrm{Bun}_{G}$. Here the first map is a $\underline{G(\mathbf{Q}_p)}$-torsor by construction, so it’s universally open by e.g. Lemma 10.13 here. More subtly, the second map is also universally open. Why? Because it is cohomologically smooth in the sense of Definition 23.8 here; universal openness then follows by Proposition 23.11 in the same document.
For the cohomological smoothness claim, take any affinoid perfectoid space with a map $T \to \mathrm{Bun}_{G}$, corresponding to some bundle $\mathcal{F} / \mathcal{X}_T$. After some thought, one works out the fiber product $X = T \times_{\mathrm{Bun}_{G}} [\mathcal{F}\ell_{G,\mu,E'}/\underline{G(\mathbf{Q}_p)}]$ “explicitly”: it parametrizes untilts of $T$ over $E'$ together with isomorphism classes of $\mu^{-1}$-meromorphic modifications $\mathcal{E}\to \mathcal{F}$ supported along the section $T^{\sharp} \to \mathcal{X}_T$ induced by our preferred untilt, with the property that $\mathcal{E}$ is trivial at every geometric point of $T$. Without the final condition, we get a larger functor $X'$ which etale-locally on $T$ is isomorphic to $T \times_{\mathrm{Spd}(\overline{\mathbf{F}_p})} \mathcal{F}\ell_{G,\mu^{-1},E'}^{\lozenge}$. (To get the latter description, note that etale-locally on $T$ we can trivialize $\mathcal{F}$ on the formal completion of the curve along $T^{\sharp}$, and then use Beauville-Laszlo to interpret the remaining data as a suitably restricted modification of the trivial $G$-torsor on $\mathrm{Spec} \mathbf{B}_{dR}^{+}(\mathcal{O}(T^{\sharp}))$. This is a Schubert cell in a Grassmannian. Then use Caraiani-Scholze’s results on the Bialynicki-Birula map.) Anyway anyway, after a little more fiddling around the point is basically that the projection $X' \to T$ is cohomologically smooth because it’s the base change of a smooth map of rigid spaces. By Kedlaya-Liu plus epsilon, the natural map $X \to X'$ is an open immersion, so $X \to T$ is cohomologically smooth. Since $T$ was arbitrary, this is enough.
## Riemann-Roch sur la courbe
Let $C/\mathbf{Q}_p$ be a complete algebraically closed extension, and let $X = X_{C^\flat}$ be the Fargues-Fontaine curve associated with $C^\flat$. If $\mathcal{E}$ is any vector bundle on $X$, the cohomology groups $H^i(X,\mathcal{E})$ vanish for all $i>1$ and are naturally Banach-Colmez Spaces for $i=0,1$. Recall that the latter things are roughly “finite-dimensional $C$-vector spaces up to finite-dimensional $\mathbf{Q}_p$-vector spaces”. By a hard and wonderful theorem of Colmez, these Spaces form an abelian category, and they have a well-defined Dimension valued in $\mathbf{N} \times \mathbf{Z}$ which is (componentwise-) additive in short exact sequences. The Dimension roughly records the $C$-dimension and the $\mathbf{Q}_p$-dimension, respectively. Typical examples are $H^0(X, \mathcal{O}(1)) = B_{\mathrm{crys}}^{+,\varphi = p}$, which has Dimension $(1,1)$, and $H^1(X,\mathcal{O}(-1)) = C/\mathbf{Q}_p$, which has Dimension $(1,-1)$.
Here I want to record the following beautiful Riemann-Roch formula.
Theorem. If $\mathcal{E}$ is any vector bundle on $X$, then $\mathrm{Dim}\,H^0(X,\mathcal{E}) - \mathrm{Dim}\,H^1(X,\mathcal{E}) = (\mathrm{deg}(\mathcal{E}), \mathrm{rk}(\mathcal{E}))$.
One can prove this by induction on the rank of $\mathcal{E}$, reducing to line bundles; the latter were classified by Fargues-Fontaine, and one concludes by an explicit calculation in that case. In particular, the proof doesn’t require the full classification of bundles.
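As a quick sanity check (my arithmetic, using the Dimensions quoted above): for $\mathcal{O}(1)$ we have $H^1 = 0$, so the left side is $(1,1)-(0,0)=(1,1)=(\mathrm{deg},\mathrm{rk})$; for $\mathcal{O}(-1)$ we have $H^0 = 0$, so the left side is $(0,0)-(1,-1)=(-1,1)$, again matching.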
So cool!
## What does an inadmissible locus look like?
Let $H/ \overline{\mathbf{F}_p}$ be some p-divisible group of dimension d and height h, and let $\mathcal{M}$ be the rigid generic fiber (over $\mathrm{Spa}\,\breve{\mathbf{Q}}_p$) of the associated Rapoport-Zink space. This comes with its Grothendieck-Messing period map $\pi: \mathcal{M} \to \mathrm{Gr}(d,h)$, where $\mathrm{Gr}(d,h)$ is the rigid analytic Grassmannian parametrizing rank d quotients of the (covariant) rational Dieudonne module $M(H) /\breve{\mathbf{Q}}_p$. Note that $\mathrm{Gr}(d,h)$ is a very nice space: it’s a smooth connected homogeneous rigid analytic variety, of dimension d(h-d).
The morphism $\pi$ is etale and partially proper (i.e. without boundary in Berkovich’s sense), and so the image of $\pi$ is an open and partially proper subspace* of the Grassmannian, which is usually known as the admissible locus. Let’s denote this locus by $\mathrm{Gr}(d,h)^a$. The structure of the admissible locus is understood in very few cases, and getting a handle on it more generally is a famous and difficult problem first raised by Grothendieck (cf. the Remarques on p. 435 of his 1970 ICM article). About all we know so far is the following:
• When d=1 (so $\mathrm{Gr}(d,h) = \mathbf{P}^{h-1}$) and $H$ is connected, we’re in the much-studied Lubin-Tate situation. Here, Gross and Hopkins famously proved that $\pi$ is surjective, not just on classical rigid points but on all adic points, so $\mathrm{Gr}(d,h)^a = \mathrm{Gr}(d,h)$ is the whole space. This case (along with the “dual” case where h>2,d=h-1) turns out to be the only case where $\mathrm{Gr}(d,h)^a = \mathrm{Gr}(d,h)$, cf. Rapoport’s appendix to Scholze’s paper on the Lubin-Tate tower.
• When $H \simeq \mathbf{G}_m^{d} \oplus (\mathbf{Q}_p/\mathbf{Z}_p)^{h-d}$, i.e. when $H$ has no bi-infinitesimal component, it turns out that $\mathrm{Gr}(d,h)^a = \mathbf{A}^{d(h-d)}$ is isomorphic to rigid analytic affine space of the appropriate dimension, and can be identified with the open Bruhat cell inside $\mathrm{Gr}(d,h)$. This goes back to Dwork, who proved it when d=1,h=2. (I don’t know a citation for the general result, but presumably for arbitrary d,h this is morally due to Serre-Tate/Katz?)
• In general there’s also the so-called weakly admissible locus $\mathrm{Gr}(d,h)^{wa} \subset \mathrm{Gr}(d,h)$, which contains the admissible locus and is defined in some fairly explicit way. It’s also characterized as the maximal admissible open subset of the Grassmannian with the same classical points as the admissible locus. In the classical rigid language, the map $\mathrm{Gr}(d,h)^a \to \mathrm{Gr}(d,h)^{wa}$ is etale and bijective; this is the terminology used e.g. in Rapoport-Zink’s book.
• In general, the admissible and weakly admissible loci are very different. For example, when $H$ is isoclinic and (d,h)=1 (i.e. when $M(H)$ is irreducible as a $\varphi$-module), $\mathrm{Gr}(d,h)^a$ contains every classical point, and $\mathrm{Gr}(d,h)^{wa} = \mathrm{Gr}(d,h)$, so the weakly admissible locus tells you zilch about the admissible locus in this situation (and they really are different for any $1 < d < h-1$).
That’s about it for general results.
To go further, let’s switch our perspective a little. Since $\mathrm{Gr}(d,h)^a$ is an open and partially proper subspace of $\mathrm{Gr}(d,h)$, the subset $|\mathrm{Gr}(d,h)^a| \subseteq |\mathrm{Gr}(d,h)|$ is open and specializing, so its complement is closed and generalizing. Now, according to a very general theorem of Scholze, namely Theorem 2.42 here (for future readers, in case the numbering there changes: it’s the main theorem in the section entitled “The miracle theorems”), if $\mathcal{D}$ is any diamond and $E \subset |\mathcal{D}|$ is any locally closed generalizing subset, there is a functorially associated subdiamond $\mathcal{E} \subset \mathcal{D}$ with $|\mathcal{E}| = E$ inside $|\mathcal{D}|$. More colloquially, one can “diamondize” any locally closed generalizing subset of $|\mathcal{D}|$, just as any locally closed subspace of $|X|$ for a scheme $X$ comes from a unique (reduced) subscheme of $X$.
Definition. The inadmissible/nonadmissible locus $\mathrm{Gr}(d,h)^{na}$ is the subdiamond of $\mathrm{Gr}(d,h)^{\lozenge}$ obtained by diamondizing the topological complement of the admissible locus, i.e. by diamondizing the closed generalizing subset $|\mathrm{Gr}(d,h)^a|^c \subset |\mathrm{Gr}(d,h)| \cong |\mathrm{Gr}(d,h)^{\lozenge}|$.
It turns out that one can actually get a handle on $\mathrm{Gr}(d,h)^{na}$ in a bunch of cases! This grew out of some conversations with Jared Weinstein – back in April, Jared raised the question of understanding the inadmissible locus in a certain particular period domain for $\mathrm{GL}_2$ with non-minuscule Hodge numbers, and we managed to describe it completely in that case (see link below). Last night, though, I realized we hadn’t worked out any interesting examples in the minuscule (i.e. p-divisible group) setting! Here I want to record two such examples, hot off my blackboard, one simple and one delightfully bizarre.
Example 1. Take h=4, d=2 and $H$ isoclinic. Then $|\mathrm{Gr}(d,h)^a|^c$ is a single classical point, corresponding to the unique filtration on $M(H)$ with Hodge numbers $0,0,1,1$ which is not weakly admissible. So $\mathrm{Gr}(d,h)^a = \mathrm{Gr}(d,h)^{wa}$ in this case.
Example 2. Take h=5, d=2 and $H$ isoclinic. Now things are much stranger. Are you ready?
Theorem. In this case, the locus $\mathrm{Gr}^{na}$ is naturally isomorphic to the diamond $(X \smallsetminus 0)^{\lozenge} / \underline{D^\times}$, where $X$ is an open perfectoid unit disk in one variable over $\breve{\mathbf{Q}}_p$ and $D=D_{1/3}$ is the division algebra over $\mathbf{Q}_p$ with invariant 1/3, acting freely on $X \smallsetminus 0$ in a certain natural way. Precisely, the disk $X$ arises as the universal cover of the connected p-divisible group of dimension 1 and height 15, and its natural $D$-action comes from the natural $D_{1/15}$-action on $X$ via the map $D_{1/3} \to D_{1/3} \otimes D_{-2/5} \simeq D_{-1/15} \simeq D_{1/15}^{op}$.
This explicit description is actually equivariant for the $D_{2/5}$-actions on $X$ and $Gr$. As far as diamonds go, $(X \smallsetminus 0)^{\lozenge}/\underline{D^{\times}}$ is pretty high-carat: it’s spatial (roughly, it’s qcqs with lots of qcqs open subdiamonds), and its structure morphism to $\mathrm{Spd}\,\breve{\mathbf{Q}}_p$ is separated, smooth, quasicompact, and partially proper in the appropriate senses. Smoothness, in particular, is meant in the sense of Definition 6.1 here (cf. also the discussion in Section 4.3 here). So even though this beast doesn’t have any points over any finite extension of $\breve{\mathbf{Q}}_p$, it’s still morally a diamondly version of a smooth projective curve!
The example Jared and I had originally worked out is recorded in section 5.5 here. The reader may wish to try adapting our argument from that situation to the cases mentioned above – this is a great exercise in actually using the classification of vector bundles on the Fargues-Fontaine curve in a hands-on calculation.
Anyway, here’s a picture of $(X \smallsetminus 0)^{\lozenge} / \underline{D^{\times}}$, with some other inadmissible loci in the background:
*All rigid spaces here and throughout the post are viewed as adic spaces: in the classical language, $\mathrm{Gr}(d,h)^a$ does not generally correspond to an admissible open subset of $\mathrm{Gr}(d,h)$, so one would be forced to say that there exists a rigid space $\mathrm{Gr}(d,h)^a$ together with an etale monomorphism $\mathrm{Gr}(d,h)^a \to \mathrm{Gr}(d,h)$. But in the adic world it really is a subspace.
http://issac-conference.org/2011/papers.html
# List of accepted papers
Papers are ordered according to paper IDs used during the review process.
1. Daniel Cabarcas and Jintai Ding. Linear Algebra to Compute Syzygies and Gröbner Bases.
Abstract: In this paper, we introduce a new method to avoid zero reductions in Groebner basis computation. We call this method LASyz, which stands for Linear Algebra to compute Syzygies. LASyz uses exhaustively the information of both principal syzygies and non-trivial syzygies to avoid zero reductions. All computation is done using linear algebra techniques. LASyz is easy to understand and implement. The method does not require an incremental computation and it imposes no restrictions on the reductions allowed. We provide a complete theoretical foundation for the LASyz method and we describe an algorithm for computing Groebner Bases for zero dimensional ideals based on this foundation. A qualitative comparison with similar algorithms is provided and the performance of the algorithm is illustrated with experimental data.
2. Hongbo Li, Ruiyong Sun, Shoubin Yao and Ge Li. Approximate Rational Solutions for Rational ODEs Defined on Discrete Differentiable Curves.
Abstract: In this paper, a new concept is proposed for discrete differential geometry: the discrete n-differentiable curve, which is a tangent n-jet on a sequence of space points. A complete method is proposed to solve ODEs of the form w^(m) = F(r, r′, …, r^(n), w, w′, …, w^(m−1), u) / G(r, r′, …, r^(n), w, w′, …, w^(m−1), u), where F, G are respectively vector-valued and scalar-valued polynomials, r is a discrete curve obtained by sampling along an unknown smooth curve parametrized by u, and w is the vector field to be computed along the curve. Our Maple program outputs an approximate rational solution with the highest order of approximation for given data and neighborhood size.
The method is used to compute rotation minimizing frames of space curves in CAGD. For one-step backward-forward chasing, a 6th-order approximate rational solution is found, and theorems in this paper guarantee that 6 is the highest order of approximation by rational functions. The theoretical order of approximation is also supported by numerical experiments.
3. Prabhanjan Ananth and Ambedkar Dukkipati. Border basis detection is NP-complete.
4. Jonathan Borwein and Armin Straub. Special values of generalized log-sine integrals.
Abstract: We study generalized log-sine integrals at special values. At π and multiples thereof explicit evaluations are obtained in terms of multiple polylogarithms at ±1. For general arguments we present algorithmic evaluations involving harmonic polylogarithms at related arguments. In particular, we consider log-sine integrals at π/3 which evaluate in terms of polylogarithms at the sixth root of unity. An implementation of our results for the computer algebra systems Mathematica and SAGE is provided.
5. Ernst W. Mayr and Stephan Ritscher. Space-efficient Gröbner Basis Computation without Degree Bounds.
Abstract: The computation of a Gröbner basis of a polynomial ideal is known to be exponential space complete. We revisit the algorithm by Kühnle and Mayr using recent improvements of various degree bounds. The result is an algorithm which is exponential in the ideal dimension (rather than the number of indeterminates).
Furthermore, we provide an incremental version of the algorithm which is independent of the knowledge of degree bounds. Employing a space-efficient implementation of Buchberger’s S-criterion, the algorithm can be implemented such that the space requirement depends on the representation and Gröbner basis degrees of the problem instance (instead of the worst case) and is thus much lower on average.
6. Dustin Moody. Division Polynomials for Jacobi Quartic Curves.
Abstract: In this paper we find division polynomials for Jacobi quartics. These curves are an alternate model for elliptic curves to the more common Weierstrass equation. Division polynomials for Weierstrass curves are well known, and the division polynomials we find are analogues for Jacobi quartics. Using the division polynomials, we show recursive formulas for the n-th multiple of a point on the quartic curve. As an application, we prove a type of mean-value theorem for Jacobi quartics. These results can be extended to other models of elliptic curves, namely, Jacobi intersections and Huff curves.
7. Wei Li, Xiao-Shan Gao and Chun-Ming Yuan. Sparse Differential Resultant.
Abstract: In this paper, the concept of sparse differential resultant for a differentially essential system of differential polynomials is introduced and its properties are proved. In particular, a degree bound for the sparse differential resultant is given. Based on the degree bound, an algorithm to compute the sparse differential resultant is proposed, which is single exponential in terms of the order, the number of variables, and the size of the differentially essential system.
8. Kosaku Nagasaka. Computing a Structured Gröbner Basis Approximately.
Abstract: There are several preliminary definitions for a Gröbner basis with inexact input since computing such a basis is one of the challenging problems in symbolic-numeric computations for a couple of decades. A structured Gröbner basis is such a basis defined from the data mining point of view: how to extract a meaningful result from the given inexact input when the amount of noise is not small or we do not have enough information about the input. However, the known algorithm needs suitable (unknown) information on the terms required for the Buchberger algorithm. In this paper, we introduce an improved version of the algorithm that does not need any extra information.
9. Yue Li and Gabriel Dos Reis. An Automatic Parallelization Framework for Algebraic Computation Systems.
Abstract: Concurrency brought by multicore machines has the potential of enabling efficient computations. However, the difficulty of parallel programming makes manual parallelization still a challenging task for non-experts. This paper proposes an automatic parallelization framework for an existing computer algebra system. The framework performs a semantics-based static analysis to extract reductions in library components. Reductions using associative binary operators are automatically transformed to their parallel versions. Our implementation is evaluated using algebraic library functions and a self-implemented application. Experimental results show that up to 5 times speed-up for the application is obtained. It is feasible to adapt the core of this framework to other algebraic computation systems and programming languages. The adaptation requires a type system which is able to provide semantic algebraic information from users.
10. Zhikun She, Bai Xue and Zhiming Zheng. Algebraic Analysis on Asymptotic Stability of Continuous Dynamical Systems.
Abstract: In this paper we propose a mechanisable technique for asymptotic stability analysis of continuous dynamical systems. We start from linearizing a continuous dynamical system, solving the Lyapunov matrix equation and then checking whether the solution is positive definite. For the cases where the Jacobian matrix is not a Hurwitz matrix, we first derive an algebraizable sufficient condition for the existence of a Lyapunov function in quadratic form without linearization, apply a real root classification based method step by step to formulate this derived condition as a semi-algebraic set such that the semi-algebraic set only involves the coefficients of the pre-assumed quadratic form, and then compute a sample point in the resulting semi-algebraic set for the coefficients resulting in a Lyapunov function. In this way, we avoid the use of generic quantifier elimination techniques for efficient computation. We prototypically implemented our algorithm based on DISCOVERER. The experimental results and comparisons demonstrate the feasibility and promise of our approach.
11. Shaoshi Chen, Ruyong Feng, Guofeng Fu and Ziming Li. On the Structure of Compatible Rational Functions.
Abstract: A finite number of rational functions are compatible if they satisfy the compatibility conditions of a first-order linear functional system involving differential, shift and q-shift operators. We present a theorem that describes the structure of compatible rational functions. The theorem enables us to decompose a solution of such a system as a product of a rational function, several symbolic powers, a hyperexponential function, a hypergeometric term, and a q-hypergeometric term. We outline an algorithm for computing this product, and discuss how to determine the algebraic dependence of hyperexponential-hypergeometric elements.
12. Leilei Guo and Feng Liu. An Algorithm for Computing Set-Theoretic Generators of an Algebraic Variety.
Abstract: Based on Eisenbud’s idea (see [Eisenbud, D., Evans, G., 1973. Every algebraic set in n-space is the intersection of n hypersurfaces. Invent. Math. 19, 107–112]), we present an algorithm for computing set-theoretic generators for any algebraic variety in the affine n-space, which consists of at most n polynomials. With minor modifications, this algorithm is also valid for projective algebraic varieties in projective n-space.
13. Yue Ma and Lihong Zhi. The Minimum-Rank Gram Matrix Completion via Fixed Point Continuation Method.
Abstract: The problem of computing a representation for a real polynomial as a sum of a minimum number of squares of polynomials can be cast as finding a symmetric positive semidefinite real matrix (Gram matrix) of minimum rank subject to linear equality constraints. In this paper, we propose algorithms for solving the minimum-rank Gram matrix completion problem, and show the convergence of these algorithms. Our methods are based on the modified fixed point continuation (FPC) method. We also use the Barzilai-Borwein (BB) technique and a specific linear combination of two previous iterates to accelerate the convergence of modified FPC algorithms. We demonstrate the effectiveness of our algorithms for computing approximate and exact rational sum of squares (SOS) decompositions of polynomials with rational coefficients.
14. Yao Sun and Dingkang Wang. A Generalized Criterion for Signature Related Gröbner Basis Algorithms.
Abstract: A generalized criterion for signature related algorithms to compute Gröbner bases is proposed in this paper. Signature related algorithms are a popular kind of algorithms for computing Gröbner bases, including the famous F5 algorithm, the extended F5 algorithm and the GVW algorithm. The main purpose of the current paper is to study in theory what kinds of criteria are correct in signature related algorithms and to provide a generalized method to develop new criteria. For this aim, a generalized criterion is proposed. The generalized criterion only relies on a general partial order defined on a set of polynomials. When specializing the partial order to appropriate specific orders, the generalized criterion can specialize to almost all existing criteria of signature related algorithms. For admissible partial orders, a complete proof for the correctness of the algorithm based on this generalized criterion is also presented. This proof has no demand on the computing order of critical pairs, and is also valid for non-homogeneous polynomial systems. More importantly, the partial orders implied by existing criteria are admissible. Besides, one can also check whether a new criterion is correct in signature related algorithms or even develop new criteria by using other admissible partial orders in the generalized criterion.
15. Deepak Kapur, Yao Sun and Dingkang Wang. Computing Comprehensive Gröbner Systems and Comprehensive Gröbner Bases Simultaneously.
Abstract: In Kapur et al (ISSAC, 2010), a new method for computing a comprehensive Gröbner system of a parameterized polynomial system was proposed and its efficiency over other known methods was effectively demonstrated. Based on those insights, a new approach is proposed for computing a comprehensive Gröbner basis of a parameterized polynomial system. The key new idea is not to simplify a polynomial under various specializations of its parameters, but rather keep track, in the polynomial, of the power products whose coefficients vanish; this is achieved by partitioning the polynomial into two parts—the nonzero part and the zero part for the specialization under consideration. During the computation of a comprehensive Gröbner system, for a particular branch corresponding to a specialization of parameter values, nonzero parts of the polynomials dictate the computation, i.e., for computing S-polynomials as well as for simplifying a polynomial with respect to other polynomials; but the manipulations on the whole polynomials (including their zero parts) are also performed. Gröbner basis computations on such pairs of polynomials can also be viewed as Gröbner basis computations on a module. Once a comprehensive Gröbner system is generated, both nonzero and zero parts of the polynomials are collected from every branch and the result is a comprehensive Gröbner basis; every polynomial retrieved from a comprehensive Gröbner system is faithful, in the sense that it belongs to the ideal of the original parameterized polynomial system. This technique should be applicable to other algorithms for computing a comprehensive Gröbner system as well, thus producing both a comprehensive Gröbner system as well as a faithful comprehensive Gröbner basis of a parameterized polynomial system simultaneously. The approach is exhibited by adapting the recently proposed method for computing a comprehensive Gröbner system in (ISSAC, 2010) for computing a comprehensive Gröbner basis. The timings on a collection of examples demonstrate that this new algorithm for computing comprehensive Gröbner bases has better performance than other existing algorithms.
16. André Galligo and Daniel Bembe. Virtual Roots of a Real Polynomial and Fractional Derivatives.
Abstract: After the works of Gonzales-Vega, Lombardi and Mahé (1998) and Coste, Lajous, Lombardi, and Roy (2005), we consider the virtual roots of a univariate polynomial f with real coefficients and give quick proofs to establish their main properties. Using fractional derivatives, we associate to f a bivariate polynomial Pf(x, t) depending on the choice of an origin a, then two types of plane curves we call the FD-curve and the stem of f. We show, in the generic case, how to locate the virtual roots of f on the Budan table and on each of these curves. The paper is illustrated with examples and pictures computed with the computer algebra system Maple.
17. Alessandra Bernardi, Pierre Comon, Bernard Mourrain and Jérôme Brachat. Tensor decomposition and moment matrices.
Abstract: In the paper, we address the important problem of tensor decompositions which can be seen as a generalisation of Singular Value Decomposition for matrices. We consider general multilinear and multihomogeneous tensors. We show how to reduce the problem to a truncated moment matrix problem and give a new criterion for flat extension of Quasi-Hankel matrices. We connect this criterion to the commutation characterisation of border bases. A new algorithm is described which applies for general multihomogeneous tensors, extending the approach of J.J. Sylvester on binary forms. An example illustrates the algebraic operations involved in this approach and how the decomposition can be recovered from eigenvector computation.
18. Angelos Mantzaflaris and Bernard Mourrain. Deflation and Certified Isolation of Singular Zeros of Polynomial Systems.
Abstract: We develop a new symbolic-numeric algorithm for the certification of singular isolated points, using their associated local ring structure and certified numerical computations. An improvement of an existing method to compute inverse systems is presented, which avoids redundant computations and reduces the size of the intermediate linear systems being solved. We derive a one-step deflation technique, from the description of the multiplicity structure in terms of differentials. The deflated system can be used in Newton-based iterative schemes with quadratic convergence. Starting from a polynomial system and a small-enough neighborhood, we obtain a criterion for the existence and uniqueness of a singular root of a given multiplicity structure, applying a well-chosen symbolic perturbation. Standard verification methods, based e.g. on interval arithmetic and a fixed point theorem, are employed to certify that there exists a unique perturbed system with a singular root in the domain. Applications to topological degree computation and to the analysis of real branches of an implicit curve illustrate the method.
19. Chee Yap and Michael Sagraloff. A Simple But Exact and Efficient Algorithm for Complex Root Isolation.
Abstract: We present a new exact subdivision algorithm CEVAL for isolating the complex roots of a square-free polynomial in any given box. It is a generalization of a previous real root isolation algorithm called EVAL. Under suitable conditions, our approach is applicable for general analytic functions. CEVAL is based on the simple Bolzano Principle and is easy to implement exactly. Preliminary experiments have shown its competitiveness.
We further show that, for the benchmark problem of isolating all roots of a square-free polynomial with integer coefficients, the asymptotic complexity of both algorithms EVAL and CEVAL matches (up to a logarithmic term) that of more sophisticated real root isolation methods which are based on Descartes’ Rule of Signs, Continued Fractions or Sturm sequences. In particular, we show that the tree size of EVAL matches that of other algorithms.
Our analysis is based on a novel technique called δ-clusters from which we expect to see further applications.
38. [+] Jean-Charles Faugère and Chenqi Mou. Fast Algorithm for Change of Ordering of Zero-dimensional Gröbner Bases with Sparse Multiplication Matrices.
39. Abstract: Let I be a 0-dimensional ideal of degree D in K[x1,…,xn], where K is a field. It is well-known that obtaining efficient algorithms for change of ordering of Gröbner bases of I is crucial in polynomial system solving. Through the algorithm FGLM, this task is classically tackled by linear algebra operations in K[x1,…,xn]/I. With recent progress on Gröbner bases computations, this step turns out to be the bottleneck of the whole solving process.
Our contribution is an algorithm that takes advantage of the sparsity structure of multiplication matrices appearing during the change of ordering. This sparsity structure arises even when the input polynomial system defining I is dense. As a by-product, we obtain an implementation which is able to manipulate 0-dimensional ideals over a prime field of degree greater than 30000. It outperforms the Magma/Singular/FGb implementations of FGLM.
First, we investigate the particular but important shape position case. The obtained algorithm performs the change of ordering within a complexity O(D(N1+n log(D))), where N1 is the number of nonzero entries of a multiplication matrix. This almost matches the complexity of computing the minimal polynomial of one multiplication matrix. Then, we address the general case and give corresponding complexity results. Our algorithm is dynamic in the sense that it selects automatically which strategy to use depending on the input. Its key ingredients are the Wiedemann algorithm to handle 1-dimensional linear recurrence (for the shape position case), and the Sakata algorithm from Coding Theory (which is a generalization of Berlekamp–Massey algorithm) to handle multi-dimensional linearly recurring sequences in the general case.
40. [+] Somit Gupta and Arne Storjohann. Computing Hermite forms of polynomial matrices.
41. Abstract: This paper presents a new algorithm for computing the Hermite form of a polynomial matrix. Given a nonsingular n × n matrix A filled with degree d polynomials with coefficients from a field, the algorithm computes the Hermite form of A using an expected number of (n3d)1 + o(1) field operations. This is the first algorithm that is both linear in the degree d and cubic in the dimension n. The algorithm is randomized of the Las Vegas type.
42. [+] Mark Giesbrecht and Daniel Roche. Diversification improves interpolation.
43. Abstract: We consider the problem of interpolating an unknown multivariate polynomial with coefficients taken from a finite field or as numerical approximations of complex numbers. Building on the recent work of Garg and Schost, we improve on the best-known algorithm for interpolation over large finite fields by presenting a Las Vegas randomized algorithm that uses fewer black box evaluations. Using related techniques, we also address numerical interpolation of sparse complex polynomials, and provide the first provably stable algorithm (in the sense of relative error) for this problem, at the cost of modestly more interpolation points. A key new technique is a randomization which makes all coefficients of the unknown polynomial distinguishable, producing what we call a diverse polynomial. Another departure of our algorithms from most previous approaches is that they do not rely on root finding as a subroutine. We show how these improvements affect the practical performance with trial implementations.
44. [+] Erich Kaltofen, Michael Nehring and B. David Saunders. Quadratic-time certificates in linear algebra.
45. Abstract: We present certificates for the positive semidefiniteness of an n by n matrix A, whose entries are integers of binary length log ||A||, that can be verified in O(n^(2+epsilon) (log ||A||)^(1+epsilon)) binary operations for any epsilon > 0. The question arises in Hilbert/Artin-based rational sum-of-squares certificates (proofs) for polynomial inequalities with rational coefficients. We allow certificates that are validated by Monte Carlo randomized algorithms, as in Rusins Freivalds’s famous 1979 quadratic time certification for the matrix product. Our certificates occupy O(n^(3+epsilon) (log ||A|| )^(1+epsilon)) bits, from which the verification algorithm randomly samples a quadratic amount.
In addition, we give certificates of the same space and randomized validation time complexity for the Frobenius form, which includes the characteristic and minimal polynomial. For determinant and rank we have certificates of essentially-quadratic binary space and time complexity via Storjohann’s algorithms.
46. [+] John Perry and Christian Eder. Signature-based algorithms to compute Gröbner bases.
47. Abstract: This paper describes a Buchberger-style algorithm to compute a Groebner basis of a polynomial ideal, allowing for a selection criterion based on “signatures”. We explain how three recent algorithms can be viewed as different strategies for the new algorithm, and how other selection strategies can be formulated. We describe a fourth as an example. We analyze the strategies both theoretically and empirically, leading to some surprising results.
48. [+] Curtis Bright and Arne Storjohann. Vector Rational Number Reconstruction.
49. Abstract: The final step of some algebraic algorithms is to reconstruct the common denominator d of a collection of rational numbers $(n_i/d)_{1\leq i\leq n}$ from their images $(a_i)_{1\leq i\leq n} \bmod M$, subject to a condition such as $0 < d \leq N$ and $|n_i| \leq N$ for a given magnitude bound N. Applying elementwise rational number reconstruction requires that $M \in \Omega(N^2)$. Using the gradual sub-lattice reduction algorithm of van Hoeij and Novocin (2010), we show how to perform the reconstruction efficiently even when the modulus satisfies a considerably smaller magnitude bound $M \in \Omega(N^{1+1/c})$ for c a small constant, for example $2 \leq c \leq 5$. Assuming $c \in O(1)$, the cost of the approach is $O(n(\log M)^3)$ bit operations using the original LLL lattice reduction algorithm, but is reduced to $O(n(\log M)^2)$ bit operations by incorporating the L$^2$ variant of Nguyen and Stehlé (2009). As an application, we give a robust method for reconstructing the rational solution vector of a linear system from its image, such as obtained by a solver using p-adic lifting.
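For comparison, the classical elementwise rational number reconstruction the abstract refers to runs the extended Euclidean algorithm until the remainder drops below the bound; uniqueness needs roughly $M > 2N^2$, which is exactly the $M \in \Omega(N^2)$ requirement the paper relaxes. A generic sketch (not the paper's lattice-based method):

```python
from math import gcd

def rational_reconstruction(a, M, N):
    """Recover (n, d) with n * d^-1 == a (mod M), |n| <= N, 0 < d <= N,
    by running the extended Euclidean algorithm on (M, a) until the
    remainder drops to N.  Unique provided M > 2*N*N; None on failure."""
    r0, r1 = M, a % M
    t0, t1 = 0, 1
    while r1 > N:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    n, d = r1, t1
    if d < 0:
        n, d = -n, -d
    if d == 0 or d > N or gcd(d, M) != 1:
        return None
    return n, d

# 3/7 maps to 3 * 7^-1 == 87 (mod 101); reconstruction inverts the map:
print(rational_reconstruction(87, 101, 7))   # (3, 7)
```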
50. [+] Erich Kaltofen and Michael Nehring. Supersparse black box rational function interpolation.
51. Abstract: We present a method for interpolating a supersparse blackbox rational function with rational coefficients, for example, a ratio of binomials or trinomials with very high degree. We input a blackbox rational function, as well as an upper bound on the number of non-zero terms and an upper bound on the degree. The result is found by interpolating the rational function modulo a small prime p, and then applying an effective version of Dirichlet’s Theorem on primes in an arithmetic progression to progressively lift the result to larger primes. Eventually we reach a prime number that is larger than the input degree bound and we can recover the original function exactly. In a variant, the initial prime p is large, but the exponents of the terms are known modulo larger and larger factors of p − 1.
The algorithm, as presented, is conjectured to be polylogarithmic in the degree, but exponential in the number of terms. Therefore, it is very effective for rational functions with a small number of non-zero terms, such as the ratio of binomials, but it quickly becomes ineffective for a high number of terms.
The algorithm is oblivious to whether the numerator and denominator have a common factor. The algorithm will recover the sparse form of the rational function, rather than the reduced form, which could be dense. We have experimentally tested the algorithm in the case of under 10 terms in numerator and denominator combined and observed its conjectured high efficiency.
52. [+] Soumojit Sarkar and Arne Storjohann. Normalization of row reduced polynomial matrices.
53. Abstract: This paper gives a deterministic algorithm to transform an already row reduced matrix to canonical Popov form. Let $\mathbb{K}$ be a field. Given as input a row reduced matrix R over $\mathbb{K}[x]$, our algorithm computes the Popov form in about the same time as required to multiply together over $\mathbb{K}[x]$ two matrices of the same dimension and degree as R. We also show that the problem of transforming a row reduced matrix to Popov form is harder than polynomial matrix multiplication.
54. [+] Tingting Fang and Mark van Hoeij. 2-descent for Second Order Linear Differential Equations.
55. Abstract: Let L be a second order linear ordinary differential equation with coefficients in C(x). The goal in this paper is to reduce L to an equation that is easier to solve. The starting point is an irreducible L, of order two, and the goal is to decide if L is projectively equivalent to another equation $\tilde{L}$ that is defined over a subfield C(f) of C(x).
This paper treats the case of 2-descent, which means reduction to a subfield with index [C(x):C(f)]=2. Although the mathematics has already been treated in other papers, a complete implementation could not be given because it involved a step for which we do not have a complete implementation. The contribution of this paper is to give an alternative approach that is fully implementable. Examples illustrate that this algorithm is very useful for finding closed form solutions (2-descent, if it exists, reduces the number of true singularities from n to at most n/2 + 2).
56. [+] Michael Kerber and Michael Sagraloff. Efficient Real Root Approximation.
57. Abstract: We consider the problem of approximating all real roots of a square-free polynomial f. Our algorithm assumes that corresponding isolating intervals are already provided and refines each of them to a certain width $2^{-L}$, that is, each of the roots is approximated to L bits after the binary point. Our method is the first one that gives a certified answer to this problem in the context of bitstream polynomials, that is, it is assumed that the polynomial coefficients can be approximated to any specified error bound. For the refinement of an interval, we consider a variant of the quadratic interval refinement method which uses approximate evaluations only. We derive a general bit complexity in terms of characteristic values of the polynomial such as the size of the coefficients or the separation of the polynomial; in the special case of integer polynomials, our bound improves the previously best approach by a factor of $\deg f$.
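The quadratic interval refinement the authors build on is a faster relative of plain bisection. As a baseline, here is what certified refinement of an isolating interval to width $2^{-L}$ looks like in exact rational arithmetic — a simplified stand-in, not the bitstream variant from the paper:

```python
from fractions import Fraction

def refine_root(f, a, b, L):
    """Shrink an isolating interval (a, b) of a sign change of f to width
    2**-L by bisection in exact rational arithmetic.  This is the slow,
    certified baseline that quadratic interval refinement accelerates."""
    a, b = Fraction(a), Fraction(b)
    fa = f(a)
    assert fa * f(b) < 0, "interval must isolate a sign change"
    while b - a > Fraction(1, 2**L):
        m = (a + b) / 2
        fm = f(m)
        if fm == 0:
            return m, m          # hit the root exactly
        if fa * fm < 0:
            b = m
        else:
            a, fa = m, fm
    return a, b

# Approximate sqrt(2) to 20 bits after the binary point:
lo, hi = refine_root(lambda t: t * t - 2, 1, 2, 20)
print(float(lo), float(hi))
```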
58. [+] Manuel Kauers and Carsten Schneider. A Refined Denominator Bounding Algorithm for Multivariate Linear Difference Equations.
59. Abstract: We continue to investigate which polynomials can possibly occur as factors in the denominators of rational solutions of a given partial linear difference equation. In an earlier article we had introduced the distinction between periodic and aperiodic factors in the denominator, and we gave an algorithm for predicting the aperiodic ones. Now we extend this technique towards the periodic case and present a refined algorithm which also finds most of the periodic factors.
60. [+] Alexey Pospelov. Fast Fourier Transforms over Poor Fields.
61. Abstract: We present a new algebraic algorithm for computing the Discrete Fourier Transform over arbitrary fields. It computes DFTs of infinitely many orders n in $O(n\log n)$ algebraic operations, while the complexity of the known FFT algorithms is $\Omega(n^{1.5})$ for such n. Our algorithm is a novel combination of the classical FFT algorithms, and is never slower than any of the latter.
As an application we come up with an efficient way of computing DFTs of high orders in finite field extensions which can further boost fast polynomial multiplication. We relate the complexities of the DFTs of special orders with the complexity of polynomial multiplication.
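For orientation, here is what the classical radix-2 step looks like over a finite field when a suitable root of unity does exist; fields without roots of unity of smooth order are exactly the "poor" fields the paper targets. This is a generic sketch, not the paper's algorithm:

```python
def ntt(a, omega, p):
    """Radix-2 decimation-in-time DFT over GF(p).  len(a) must be a power
    of two and omega a primitive len(a)-th root of unity modulo p."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], omega * omega % p, p)
    odd = ntt(a[1::2], omega * omega % p, p)
    out = [0] * n
    w = 1
    for k in range(n // 2):
        t = w * odd[k] % p
        out[k] = (even[k] + t) % p
        out[k + n // 2] = (even[k] - t) % p
        w = w * omega % p
    return out

# In GF(17), 4 has multiplicative order 4, so it serves as the root:
print(ntt([1, 2, 3, 4], 4, 17))   # [10, 7, 15, 6]
```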
62. [+] B. David Saunders, David H. Wood and Bryan Youse. Symbolic-numeric exact rational linear system solution.
63. Abstract: An iterative refinement approach is taken to rational linear system solution. Such methods produce, for each entry of the solution vector, a dyadic rational approximation from which the correct rational entry can be reconstructed. A dyadic rational is a rational number with denominator a power of 2. Our method is numeric-symbolic in that it uses an approximate numeric solver at each iteration together with a symbolic (exact arithmetic) residual computation and symbolic rational reconstruction. There is some possibility of failure of convergence. The rational solution may be checked symbolically. Alternatively, the algorithm may be used without the rational reconstruction to obtain an extended precision floating point approximation of any specified accuracy. In this case we cannot guarantee the result, but we give evidence (not proof) that the probability of error is extremely small.
The chief contributions of our method are (1) consistent continuation, (2) improved rational reconstruction, and (3) performance. By consistent continuation is meant that the primary evidence of convergence at each iterative step is not the size of the residual but rather the consistency of the low order portion of the previous approximant with the high order bits of the current correction term. Our improved rational reconstruction uses a new rationale. For good enough dyadic approximants it is able to be output sensitive (i.e. have early termination) with provably correct result. We also have a heuristic for even earlier, but speculative, termination. Regarding performance, experiments show that our implementation competes favorably with Dixon’s method (pure symbolic with p-adic iteration) as implemented in LinBox, and with previous numeric-symbolic iterative implementations. In many cases we achieve equivalent or better time than similar methods or succeed when they fail to converge.
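The iteration itself is easy to sketch: a numeric solve produces a correction, the correction is snapped to dyadic rationals, and the residual is recomputed exactly. The toy version below is my own simplification (with an arbitrary 2^-40 dyadic grid, and without the paper's consistency test or rational reconstruction), shown only to convey the shape of the loop:

```python
import numpy as np
from fractions import Fraction

def refine(A, b, steps=8):
    """Toy numeric-symbolic iterative refinement for an integer system Ax = b.
    The accumulated answer x is an exact dyadic rational vector whose
    residual b - A x is computed exactly and shrinks at every step."""
    n = len(b)
    An = np.array(A, dtype=float)
    x = [Fraction(0)] * n
    for _ in range(steps):
        # exact residual r = b - A x over the rationals
        r = [Fraction(b[i]) - sum(Fraction(A[i][j]) * x[j] for j in range(n))
             for i in range(n)]
        # approximate correction from a numeric solve against the residual
        c = np.linalg.solve(An, np.array([float(v) for v in r]))
        # snap the correction onto the dyadic grid and accumulate exactly
        x = [x[i] + Fraction(round(c[i] * 2**40), 2**40) for i in range(n)]
    return x

# x converges to the exact solution (4/5, 7/5) of this 2x2 system:
print([float(v) for v in refine([[2, 1], [1, 3]], [3, 5])])
```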
64. [+] Thomas Sturm and Ashish Tiwari. Verification and Synthesis Using Real Quantifier Elimination.
65. Abstract: We present the application of real quantifier elimination methods to formal verification and synthesis of continuous and switched dynamical systems. Through a series of case studies, we show how first-order formulas over the reals arise when formally analyzing models of complex control systems. We present the models of the dynamical systems and the verification methodology in detail. The formulas described in the paper can serve as benchmarks for real quantifier elimination. Existing off-the-shelf quantifier elimination procedures are not successful in eliminating quantifiers from many of our benchmarks. We therefore automatically combine three established software components: virtual substitution based quantifier elimination in Reduce/Redlog, cylindrical algebraic decomposition implemented in Qepcad, and the simplifier Slfq implemented on top of Qepcad. Corresponding computations can either be run in batch or interactively controlled from inside Reduce. We used this combination to successfully analyze models of various systems, including adaptive cruise control laws used in automobiles, the adaptive flight control system being proposed for next-generation flight control, and the classical inverted pendulum problem studied in control theory.
66. [+] Adam Strzebonski and Elias Tsigaridas. Univariate real root isolation in an extension field.
67. Abstract: We present algorithmic, complexity and implementation results for the problem of isolating the real roots of a univariate polynomial $B_\alpha \in L[y]$, where $L=\mathbb{Q}(\alpha)$ is a simple algebraic extension of the rational numbers.
We consider two approaches for tackling the problem. In the first approach, using resultant computations, we perform a reduction to a polynomial with integer coefficients. We compute separation bounds for the roots, and using them we deduce that we can isolate the real roots of $B_\alpha$ in $\widetilde{O}_B(N^{10})$ bit operations, where N is an upper bound on all the quantities (degree and bitsize) of the input polynomials.
In the second approach we isolate the real roots working directly on the polynomial of the input. We compute separation bounds for real roots and we prove that they are optimal, under mild assumptions. For isolating the roots we consider a modified Sturm’s algorithm, and a modified version of Descartes’ algorithm introduced by Sagraloff. For the former we prove a complexity bound of $\widetilde{O}_B(N^8)$ and for the latter a bound of $\widetilde{O}_B(N^7)$.
We implemented the algorithms in C as part of the core library of Mathematica and we illustrate their efficiency over various data sets.
Finally, we present complexity results for the general case of the first approach, in the case where the coefficients belong to multiple extensions.
68. [+] Jacques-Arthur Weil, Ainhoa Aparicio-Monforte, Moulay Barkatou and Sergi Simon. Formal first integrals along solutions of differential systems I.
69. Abstract: We consider an analytic vector field $\dot{x} = X(x)$ and study whether it may possess analytic first integrals via a variational approach. We assume that one solution Γ is known and study the successive variational equations along Γ. Constructions of Morales-Ramis-Simo show that coefficients of the Taylor expansions of first integrals arise as rational solutions of the dual linearized variational equation. We show that they also satisfy linear “filter” conditions. Using this, we adapt the algorithms from earlier work to design algorithms optimized for this task and demonstrate their use. Part of this work stems from the first author’s PhD thesis.
70. [+] Victor Y. Pan, Guoliang Qian and Ai-Long Zheng. GCDs and AGCDS of Univariate Polynomials by Matrix Methods.
71. Abstract: We review the known algorithms for polynomial GCDs and present and analyze our novel techniques for the exact and numerical (approximate) computation of the GCDs by matrix methods.
72. [+] Aurélien Greuet and Mohab Safey El Din. On the reachability of the infimum of an unconstrained global optimization problem and real equation solving.
73. Abstract: Let $f\in \mathbb{Q}[X_1,\ldots,X_n]$ of degree D. Algorithms for solving the unconstrained global optimization problem $f^\star=\inf_{x\in \mathbb{R}^n} f(x)$ are of first importance since this problem appears frequently in numerous applications in engineering sciences. This can be tackled either by designing appropriate quantifier elimination algorithms or by certifying lower bounds on $f^\star$ by means of sums of squares decompositions, but there is no efficient algorithm for deciding if $f^\star$ is a minimum.
This paper is dedicated to this important problem. We design an algorithm that decides if $f^\star$ is reached over $\mathbb{R}^n$ and computes a point $x^\star\in \mathbb{R}^n$ such that $f(x^\star)=f^\star$ if such a point exists. If $L$ is the length of a straight-line program evaluating $f$, a probabilistic version of the algorithm runs in time $O(n^2(L + n^2)(D(D - 1)^{n-1})^2)$. Experiments show its practical efficiency.
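A standard example shows why reachability is a genuine issue: the polynomial below has infimum 0 over $\mathbb{R}^2$ but no minimizer. A three-line numeric illustration (mine, not from the paper):

```python
# f(x, y) = x**2 + (x*y - 1)**2 is positive everywhere, yet along the
# hyperbola x = 1/y its value 1/y**2 tends to 0: the infimum 0 is not reached.
for y in [10.0, 100.0, 1000.0]:
    x = 1.0 / y
    print(y, x**2 + (x*y - 1)**2)
```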
74. [+] Andrew Novocin, Mark Van Hoeij and Jürgen Klüners. Generating Subfields.
75. Abstract: Given a field extension K/k of degree n we are interested in finding the subfields of K containing k. There can be more than polynomially many subfields. We introduce the notion of generating subfields, a set of up to n subfields whose intersections give the rest. We provide an efficient algorithm which uses linear algebra in k or lattice reduction along with factorization in any extension of K. Our implementation shows that previously difficult cases can now be handled.
76. [+] Andrew Novocin, William Hart and Mark Van Hoeij. Practical Polynomial Factorization in Polynomial Time.
77. Abstract: State of the art factoring in Q[x] is dominated in theory by a combinatorial reconstruction problem while, excluding some rare polynomials, performance tends to be dominated by Hensel lifting. We present an algorithm which gives a practical improvement (less Hensel lifting) for these more common polynomials. In addition, factoring has suffered from a 25 year complexity gap because the best implementations are much faster in practice than their complexity bounds. We illustrate that this complexity gap can be closed by providing an implementation which is comparable to the best current implementations and for which competitive complexity results can be proved.
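For orientation only (this is not the authors' algorithm), exact factorization in Q[x] is a one-liner in SymPy, whose default routine is a Zassenhaus-style combination of Hensel lifting and recombination of the kind this literature studies:

```python
from sympy import symbols, factor

x = symbols('x')
# Exact factorization over the rationals:
print(factor(x**6 - 1))
# -> (x - 1)*(x + 1)*(x**2 - x + 1)*(x**2 + x + 1), up to factor ordering
```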
78. [+] Changbo Chen and Marc Moreno Maza. Algorithms for Computing Triangular Decompositions of Polynomial Systems.
79. Abstract: Methods for computing triangular decompositions of polynomial systems can be classified into two groups. First, those computing a series of regular chains $C_1, \ldots, C_e$ such that for each irreducible component V of the variety of the input system, one of the $C_i$'s encodes a generic zero of V. Second, those computing a series of characteristic sets (in the sense of Wu Wen Tsün) $C_1, \ldots, C_f$ such that the variety of the input system is the union of the quasi-components of the $C_i$'s.
A large number of methods fall into the second family. Some methods belong to both families; this is the case for those proceeding in an incremental manner, that is, solving one equation after another. These latter methods rely on an operation for computing the intersection of a hypersurface and the quasi-component of a regular chain. This is an attractive operation since its inputs can be regarded as well-behaved geometrical objects. However, known algorithms (the one of Daniel Lazard in 1991 and the one of the second author in 2000) are quite involved and difficult to analyze.
We revisit this intersection operation. We exhibit a simpler algorithm, which also appears to be practically much more efficient. To this end, we have weakened the standard notion of a polynomial GCD modulo a regular chain, while preserving the same specifications for our intersection operation. Another central idea is to avoid repeating expensive intermediate computations. This feature is achieved by the algebraic properties of our algorithm and not as a result of any caching techniques in the implementation.
In our experimental results, realized with the RegularChains library in Maple, our new intersection outperforms the one of the second author by several orders of magnitude on sufficiently difficult problems.
80. [+] Changbo Chen, James H. Davenport, Marc Moreno Maza, Bican Xia and Rong Xiao. Computing with Semi-Algebraic Sets Represented by Triangular Decomposition.
81. Abstract: In a previous work, we introduced the notion of a triangular decomposition of a semi-algebraic system. We also proposed two algorithms for computing such decompositions, by adapting to the semi-algebraic case techniques that are standard in the algebraic one. Under genericity assumptions, one of our algorithms runs in time singly exponential in the number of variables.
Increasing the practical efficiency of these two algorithms and the utility of their output are the motivations of this new article. To this end, we propose theoretical results, new algorithms and an implementation report.
First, we establish properties of border polynomials and fingerprint polynomial sets under splitting. These results are used in our algorithms in order to recycle intermediate data when computations split in several branches.
Second, observing that triangular decomposition algorithms are essentially recursive, we propose a technique, that we call relaxation, for reducing the complexity of the arguments in those recursive calls. Experimental results confirm the effectiveness of this technique.
Third, we present practical procedures for basic set-theoretical operations on semi-algebraic sets represented by triangular decompositions, in particular for performing inclusion tests. This allows us to ensure that semi-algebraic sets can be represented by irredundant triangular decompositions.
82. [+] Li Guo, William Sit and Ronghua Zhang. On Rota’s Problem for Linear Operators in Associative Algebras.
83. Abstract: A long standing problem of G. Rota for associative algebras is the classification of all linear operators that can be defined on them. In the 1970s, there were only a few known classes of such operators, for example, the derivative operator, the difference operator, the average operator and the Rota-Baxter operator. A few more similar operators have appeared after Rota posed his problem. However, little progress has been made to solve this problem in general. In part, this is because the precise meaning of the problem is not so well understood. In this paper, we propose a formulation of the problem using the context of operated algebras and viewing an associative algebra with a linear operator as one that satisfies a certain operated polynomial identity. This approach is analogous to the study of rings with polynomial identities. To narrow our focus more on the operators that Rota was interested in, we further consider two particular classes of operators, namely, those that generalize differential or Rota-Baxter operators. With the aid of computer algebra, we are able to come up with a list of these two classes of operators, and provide some evidence that these lists may be complete. Our search has revealed quite a few new operators of these types whose properties are expected to be similar to the differential operator and Rota-Baxter operator respectively.
In recent years, a more unified approach has emerged in related areas, such as difference algebra and differential algebra, and Rota-Baxter algebra and Nijenhuis algebra. The similarities in related theories can be more efficiently explored by advances on Rota’s problem.
84. [+] Benjamin A. Burton. Detecting genus in vertex links for the fast enumeration of 3-manifold triangulations.
85. Abstract: Enumerating all 3-manifold triangulations of a given size is a difficult but increasingly important problem in computational topology. A key difficulty for enumeration algorithms is that most combinatorial triangulations must be discarded because they do not represent topological 3-manifolds. In this paper we show how to preempt bad triangulations by detecting genus in partially-constructed vertex links, allowing us to prune the enumeration tree substantially.
The key idea is to manipulate the boundary edges surrounding partial vertex links using expected logarithmic time operations. Practical testing shows the resulting enumeration algorithm to be significantly faster, with up to 249x speed-ups even for small problems where comparisons are feasible. We also discuss parallelisation, and describe new data sets that have been obtained using high-performance computing facilities.
86. [+] Jeremy-Yrmeyahu Kaminski and Yann Sepulcre. Using Discriminant Curves to Recover a Surface of P^4 From Two Generic Linear Projections.
87. Abstract: We study how an irreducible smooth and closed algebraic surface X embedded in $\mathbb{CP}^4$ can be recovered using its projections from two points onto embedded projective hyperplanes. The different embeddings are unknown. The only input is the defining equation of each projected surface. We show how both the embeddings and the surface in $\mathbb{CP}^4$ can be recovered modulo some action of the group of projective transformations of $\mathbb{CP}^4$.
We show how in a generic situation, a characteristic matrix of the pair of embeddings can be recovered. Then we use this matrix to recover the class of the pair of maps and, as a consequence, to recover the surface.
For a generic situation, two projections define a surface with two irreducible components. One component has degree d(d − 1); the other, which is the original surface, has degree d.
## Project description
Library / toolkit for creating command line programs with minimal effort.
Pycommand is essentially a fancy wrapper around getopt that consists of one simple CommandBase class that you can inherit to create executable commands for your (Python) programs with very simplistic and readable code. It has support for subcommands and also nesting commands, so you can create (multiple levels of) subcommands, with the ability to pass the values of optional arguments of a command object to its subcommand objects. Supported Python versions are 2.7 and 3.2 and later.
## Features
• Parsing of optional and positional arguments
• Minimalistic approach with a clean API
• Create scripts in a matter of minutes using the code generator
• Auto compiled usage messages
• Graceful semi-automatic handling of exit status codes
• Subcommands can have subcommands that can have subcommands (each with their own optional arguments)
• Pass values for --some-option from a parent command into child commands.
If you have pip installed, you can just do:
# pip install pycommand
## Script generator
To quickly start writing a command from a template (much like the examples below), use the script generator by running:
$ python -m pycommand init
This will ask you for an executable name, class name and template type and it will save it to an executable python script, ready to be used as a command line program.
You can have a very basic command line program that handles -v, --version and -h, --help arguments set up in less than a minute.
## Example
For full documentation and examples, visit http://pythonhosted.org/pycommand/
Here is an undocumented code example of getting automated usage text generation and parsing of optional arguments. If we name the script for which you can see the code below basic-example and execute it, the following will be the output for running basic-example -h or basic-example --help:
```
usage: basic-example [options]

An example of a basic CLI program

Options:
-h, --help                        show this help information
-f <filename>, --file=<filename>  use specified file
--version                         show version information
```
And here is the code:
```python
#!/usr/bin/env python
import pycommand
import sys


class BasicExampleCommand(pycommand.CommandBase):
    '''An example of a basic CLI program'''
    usagestr = 'usage: basic-example [options]'
    description = __doc__
    optionList = (
        ('help', ('h', False, 'show this help information')),
        ('file', ('f', '<filename>', 'use specified file')),
        ('version', ('', False, 'show version information')),
    )

    def run(self):
        if self.flags.help:
            print(self.usage)
            return 0
        elif self.flags.version:
            print('Python version ' + sys.version.split()[0])
            return 0
        elif self.flags.file:
            print('filename = ' + self.flags.file)
            return 0


if __name__ == '__main__':
    # Shortcut for reading from sys.argv[1:] and sys.exit(status)
    pycommand.run_and_exit(BasicExampleCommand)

    # The shortcut is equivalent to the following:
    # cmd = BasicExampleCommand(sys.argv[1:])
    # if cmd.error:
    #     print('error: {0}'.format(cmd.error))
    #     sys.exit(1)
    # else:
    #     sys.exit(cmd.run())
```
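A hypothetical session with the script above saved as basic-example (the output follows directly from the run() method):

```
$ basic-example -f notes.txt
filename = notes.txt
$ basic-example -h
usage: basic-example [options]
...
```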
## Why was it created?
When parsing command line program arguments, I sometimes work with argparse (a replacement for optparse). I don’t really like the API and the output it gives, which is the main reason I’ve always used getopt for parsing arguments whenever possible.
The CommandBase class was originally written for DisPass, which is a password manager/generator, as a means to easily define new subcommands and have auto-generated usage messages. Because I want to have this in other projects I’ve decided to put it in the cheeseshop in 2013. It has since been refined for more generic usage and has proven to be stable and workable throughout the years.
Copyright (c) 2013-2016, 2018 Benjamin Althues <benjamin@babab.nl>
Permission to use, copy, modify, and distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED “AS IS” AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
## Change Log
### 0.4.0 - 2018-03-27
• Full templates can now (also) be auto generated
• CI testing for Python 3.5 and 3.6
#### Changed
Note
The separate pycommand init script is removed; its functionality is now included in the pycommand package itself.
To auto generate scripts from templates, from now on use:
python -m pycommand init
• The code is split up into several modules and pycommand is now distributed as a package rather than a single module. The public API does not change however, all relevant members (CommandBase, run_and_exit) that are now placed in pycommand.pycommand are exposed through __init__ and therefore are still available as pycommand.CommandBase and pycommand.run_and_exit.
• Code generator is included in the package itself instead of using an installed script (pycommand init)
• All templates are now embedded as well
#### Removed
• Pycommand init script (installed into /usr/local/bin)
• Templates directory
• GNU info docs and manpage from distribution (they can still be generated)
• pycommand.3 (prev. installed into /share/man/man3)
• pycommand.info
### 0.3.0 - 2015-06-04
• Shortcut run_and_exit() for reading from sys.argv[1:] and exiting the interpreter via sys.exit(status)
• Package as wheel distribution to speed up installations
• Add man pycommand ability, i.e. install mandoc in /usr/share/man3/
#### Changed
• Add support for getting flags by attribute like self.flags.help. The default approach for normal dicts like self.flags['help'] remains valid.
### 0.2.0 - 2015-05-21
• Full example of a command with subcommands
• Create quick templates via pycommand script (pycommand init)
• Unit tests and automatic testing via Travis-CI
• Documentation man (.3) and info (.info) pages
### 0.1.0 - 2013-08-08
• Initial release
## Project details
### Source Distribution
pycommand-0.4.0.tar.gz (12.0 kB)
Uploaded source
### Built Distribution
pycommand-0.4.0-py2.py3-none-any.whl (24.2 kB)
Uploaded py2 py3
# How to back up only new files to onedrive from local folder and then delete?
Hi all, this is a wonderful tool and I'm glad it exists!
I have some questions; let me start with what I want to do and then what I've tried.
### What I want to do
I want to back up folders from my local machine to onedrive. The local folders have a mix of new and already backed up files, I want to backup/move/copy only the new files and delete everything locally (all files) once the backup is complete.
The items to be backed up are coming from my phone via syncthing. The flow I have in mind is the following:
1. Phone > PC (via syncthing; already set up)
2. PC > onedrive (check and only upload new files; files deleted locally shouldn't affect anything)
3. Delete from PC
4. Do this operation automatically overnight after 12am
Note that step 3 is mostly because I don't currently have enough storage to keep files on my PC but I plan to in the future, so delete for now but it will be nice to have the option to not delete later.
### What I've tried
I experimented a little and this is the "ideal" command I ended up with. Ideal meaning I thought this had everything I needed
Command:
rclone move "local\windows\path" onedrive-rclone:"onedrive/path" --dry-run --log-file="logs.txt" --exclude-from "exclude.txt" --metadata --checksum -vv
Notes:
• Using --checksum because I saw that onedrive can have issues with file sizes and I would prefer not to have renamed duplicates.
• Doing a move so local files are deleted after the operation.
• --metadata seems useful and I thought better to have than to not have.
• exclude.txt is to prevent deletion of syncthing [hidden] files.
exclude.txt:
# https://forum.rclone.org/t/exclude-hidden-files/268/5
.*
.*/**
#### Version
rclone v1.59.2
- os/version: Microsoft Windows 10 Home 21H2 (64 bit)
- os/kernel: 10.0.19044.2006 (x86_64)
- os/type: windows
- os/arch: amd64
- go/version: go1.18.6
- go/tags: cmount
#### Config file
[onedrive]
type = onedrive
token = {"access_token":"banana","token_type":"Bearer","refresh_token":"another banana","expiry":"queen's 100th birthday"}
drive_id = 69420
drive_type = personal
#### -vv log
Snippet:
2022/09/30 14:30:24 DEBUG : Screenshot_20220522-193237.png: sha1 = d026a13e370e22f79e4ef240a556e81400951072 OK
2022/09/30 14:30:24 DEBUG : Screenshot_20220522-193237.png: Size and sha1 of src and dst objects identical
2022/09/30 14:30:24 DEBUG : Screenshot_20220522-193237.png: Unchanged skipping
2022/09/30 14:30:24 NOTICE: Screenshot_20220522-193237.png: Skipped delete as --dry-run is set (size 1.294Mi)
2022/09/30 14:30:24 DEBUG : One drive root 'onedrive/path': Waiting for transfers to finish
2022/09/30 14:30:24 INFO : There was nothing to transfer
2022/09/30 14:30:24 NOTICE:
Transferred: 5.154 GiB / 5.154 GiB, 100%, 0 B/s, ETA -
Checks: 25827 / 25827, 100%
Deleted: 8768 (files), 0 (dirs)
Renamed: 8291
Elapsed time: 12m16.7s
2022/09/30 14:30:24 DEBUG : 3 go routines active
### Questions
1. Why are there so many renamed files? Is this like appending "(2)" to a file name? I tried to ctrl+f with "rename" to find specific files that were renamed but it just finds the "Renamed: 8291" portion at the end of the log.
2. What mistakes have I made?
3. Am I doing anything unnecessary?
4. Is there a better way to do this? Simpler the better!
5. What's the DEBUG : 3 go routines active all about?
You'd have to share the full log; you've asked some questions, but there isn't anything in a snippet of a log that can really explain them.
What specific need(s) made you choose not to use the built-in OneDrive client of Windows?
and not to use the photo upload of the OneDrive app for your phone?
Apologies, I tried to use pastebin but I got an error because it was over 512Kb. Do you know where I can paste the whole log?
Ah good question!
I want to upload to a specific path. Doesn't seem like I can do this on desktop.
And on mobile, everything gets uploaded to the camera roll with the automatic backup. For example, both screenshots and photos get uploaded to the same folder and I want it better organized than that.
Not sure I fully understand. How about making syncthing copy the phone content to
C:\Users\batman\OneDrive\some\folder\in\onedrive\
I think you can make it more robust by finding the OneDrive folder in an environment variable, haven't checked (yet).
Do you have other needs not supported by OneDrive on Windows?
I don't know syncthing, so really can't compare it to the phone app for OneDrive - just trying to ask some good questions to help you see any blind spots.
Perhaps, you could do the organization directly in OneDrive using rclone after the phone app has uploaded them (by a nightly script)? If so, then there will be no need for them to hit the PC.
Not sure I fully understand. How about making syncthing copy the phone content to
C:\Users\batman\OneDrive\some\folder\in\onedrive\
When I tried last, it seems I will have to add the onedrive folder locally first, meaning it will have to be downloaded to the PC
I don't know syncthing, so really can't compare it to the phone app for OneDrive - just trying to ask some good questions to help you see any blind spots.
Thank you! I appreciate the help
Syncthing is basically like rclone but doesn't connect to cloud services. I can install syncthing on two pr more devices and keep folders in sync and/or backup
Perhaps, you could do the organization directly in OneDrive using rclone after the phone app has uploaded them (by a nightly script)? If so, then there will be no need for them to hit the PC.
Good idea. This sounds appealing but might be a mess especially considering I want to back up things within various subfolders
@Animosity022 I tried with a smaller upload but that was still too big for pastebin and similar services I tried. I cannot paste it here either, I get Body is limited to 32000 characters; you entered 673410.
In the template, we offer some advice / options:
You should use 3 backticks to begin and end your paste to make it readable. Or use a service such as https://pastebin.com or https://gist.github.com/
This isn't the case anymore, now the OneDrive client for Windows is using On-demand downloads by default:
Save disk space with OneDrive Files On-Demand for Windows
Sync files with OneDrive Files on Demand
and you can optionally activate Storage Sense, to automatically free up space of unused locally downloaded cloud content:
Use OneDrive and Storage Sense in Windows 10 to manage disk space
Sounds like your need is more than the photo upload/backup offered in the OneDrive Phone app, so you probably need syncthing and the PC.
Thanks!
Here it is: rclone logs using -vv · GitHub
Wow on demand sync is nice! Didn't know about that thank you
Will read into that, sounds good.
One more thing; do you know if onedrive will only backup new items? A few years ago things got duplicated when using the android app
Very good question!
It only copies new items, but not sure exactly how the app compares to decide.
Seems like it compares all the photos on the phone (mine is iOS) to OneDrive on initial start - it takes quite some time (hours), so make sure to have the phone plugged in and OneDrive in the foreground to speed things up - I also remove the cover to maximize cooling (to reduce battery wear).
I am not sure whether it compares to everything in Pictures or Camera Roll and whether it compares on folder level, but I am sure it uploads to year (and optionally month) folders in Pictures/Camera Roll.
I would make a backup of Camera Roll before activating; to allow for easy roll back in case of unexpected behavior.
I can see that the debug log has a lot of photos from WhatsApp; I doubt the OneDrive photo upload will find and upload those. It probably only looks into the CameraRoll folder of your phone. Seems like you should stick to syncthing!
Does syncthing copy everything from the phone or just deltas?
If deltas, then sending directly into the local OneDrive folder is worth considering, to avoid the nightly upload.
If full, then you are probably better off with the move and nightly upload outlined - otherwise the OneDrive client might upload everything each time. You could test to see, I am not sure.
I doubt you will see them on upload, I only remember seeing them on downloads - and doubt --checksum will help in that case, so I would remove to keep things simple and as close to default as possible (at least until an issue arises).
OneDrive doesn't support this, so I suggest you remove to make things simple.
(https://rclone.org/overview/#features)
The "Renamed" entries at the end of your log are the files moved; try searching for "skipped move" in the log.
Low-level debug info telling how many goroutines were active at program end; only rclone programmers can use this info. 3 goroutines is fine in your situation.
Thanks this is useful. Haha yes good move trying to keep it cool!
Ah so it's deltas, but I've moved to a new device. Previously I was using an old android phone with syncthing + an app called autosync by metactrl (this uploaded deltas to onedrive and deleted locally), that phone suddenly died and I came to rclone to replace autosync.
Basically it is deltas but since I've moved to a new device (old android > PC) it will have to, for the first run, look over every file. Once this is done, it'll just be new files/deltas.
Oh my bad I thought this was what was used to check onedrive if the file existed or not. So if there was a.jpg locally and on onedrive with slightly different sizes, checksum would allow rclone to know it already exists.
Thank you!
The moved files are logged as renamed? That's a little unexpected haha. But if so, then I think my move and delete were successful! Minus --metadata and --checksum of course
Thanks
rclone by default (conceptually) performs these two commands to determine if a file needs to be transferred:
rclone lsl source:some/folder/somefile
rclone lsl target:some/folder/somefile
if they both show the same output, then the file is already up-to-date and skipped (and deleted if using move).
If you add --checksum then it also performs these commands (when source=Windows and target=OneDrive):
rclone sha1sum source:some/folder/somefile
rclone sha1sum target:some/folder/somefile
If the size from the first set of commands and the hashsum from the second set are the same, then the file is already up-to-date and skipped (and deleted if using move). Note that this option ignores the modification time of the file.
More details here: https://rclone.org/docs/#c-checksum
Haha, this is the typical Windows versus Linux terminology confusion
Windows users are used to rename filename newfilename
Linux users are used to mv filename newfilename
I hardly notice the difference anymore, except when typing
Thank you for the summary, it makes a lot more sense now! Modification times I don't think are a big deal (although do correct me if they are!), the main thing for me are that timestamps (created at or photo taken at) are preserved.
Omg this was the biggest point of confusion for me
I thought my file names were being updated lol.
Thank you so much! This is great, means the simple move and exclude-from commands are all I need!
@Ole little update: first chunk of files has backed up successfully! Thank you again for your help
What are the primitive ideals of a matrix algebra?
Consider a $C^*$ algebra $\mathcal A$. A closed two-sided hermitian ideal $I$ is called primitive if it is the kernel of an irreducible $*$ representation, alternatively if $\mathcal A/I$ admits a faithful irreducible $*$ representation. Let $\mathrm{Prim}(\mathcal A)$ be the set of primitive ideals of $\mathcal A$.
One topologises $\mathrm{Prim}(\mathcal A)$ by defining a closure operation: $$Z\subset\mathrm{Prim}(\mathcal A)\quad :\quad\overline Z:=\left\{\rho\in\mathrm{Prim}(\mathcal A)\ \middle|\ \ \rho\supseteq\bigcap_{\sigma\in Z}\sigma\right\}$$ this is called the "hull kernel topology".
If $\mathcal A$ is commutative, one can verify that $\mathrm{Prim}(\mathcal A)$ with this topology corresponds to the Gelfand transformation of $\mathcal A$ (one uses that the only commutative $C^*$ algebra that admits a faithful irreducible $*$ representation is $\Bbb C$, giving an identification of $\mathrm{Prim}(\mathcal A)$ with the character space of $\mathcal A$; definition pushing of the construction of the Gelfand transform shows that the topologies are the same).
Besides this fact I have no understanding of this space, so I'm interested in figuring out what the simplest non-commutative $\mathrm{Prim}(\mathcal A)$ spaces look like.
What is the space $\mathrm{Prim}(M_{n\times n}(\Bbb C))$?
My problem in understanding this space is that I don't know what the primitive ideals of $M_{n\times n}$ are. So I guess at this point I am treating that question to be equivalent to
What are the primitive ideals of $M_{n\times n}(\Bbb C)$?
This algebra is simple, so it doesn't have any non-trivial two-sided ideals; hence $\{0\}$ is its only primitive ideal and $\mathrm{Prim}(M_{n\times n}(\Bbb C))$ is a single point.
• Huh, I had forgotten that! Thats kind of weird and makes this construction seem a bit singular. – s.harp Jun 11 '17 at 11:57
• @s.harp It looks like your definition of "primitive" matches the classical ring theory version. $\{0\}$ is the (one and only) primitive ideal of any full matrix ring over a field. – rschwieb Jun 11 '17 at 23:36
You will not get anything interesting out of the matrices.
For a non-trivial example, consider the Toeplitz algebra, $\mathcal A=K(H)+C(\mathbb T)$ (this is the C$^*$-algebra generated by the unilateral shift). The irreducible representations are $\pi_0:k+f\longmapsto k$ and $\pi_\lambda:k+f\longmapsto f(\lambda)$ for each $\lambda\in\mathbb T$.
So $$\text{Prim}(\mathcal A)=\{0\}\cup\mathbb T,$$ where $0$ represents $\ker\pi_0=0+C(\mathbb T)$ and each $\lambda$ represents $\ker\pi_\lambda=K(H)+I_\lambda$ (where $I_\lambda=\{f:\ f(\lambda)=0\}$).
The interesting part comes when you pay attention to the topology. For any $X\subset \mathbb T$, \begin{align} \overline X&=\{\mu\in\{0\}\cup\mathbb T:\ K(H)+I_\mu\supset\bigcap_{\lambda\in X}K(H)+I_\lambda\}\\ \ \\ &=\{K(H)+I_\mu:\ \mu\in \overline X\}. \end{align} No surprise here, the closure of $X$ is the closure in the usual topology in $\mathbb T$. So the circle inherits its usual topology.
But consider the point $\{0\}$. It corresponds to the irreducible representation that is nonzero on the compacts. It is not hard to check that this representation has to be the identity representation, and so its kernel is trivial. Then $$\overline{\{0\}}=\{J\in\text{Prim}(\mathcal A):\ J\supset\{0\}\}=\text{Prim}(\mathcal A).$$ That is, the closure of the singleton $\{0\}$ is the whole uncountable space. Thus the space is not Hausdorff (for instance, the constant sequence $\{0\}$ converges to every point; not being Hausdorff is weird).
• Thank you for the details! Here $K(H)$ is the algebra of compact operators on $\ell^2(\Bbb Z)$ which is identified with $L^2(\Bbb T)$ via the fourier transform? And $C(\Bbb T)$ is identified as the commutative sub-algebra of multiplications with continuous functions? – s.harp Jun 11 '17 at 11:54
• No. The Toeplitz algebra is the C$^*$-algebra generated by the unilateral shift. $C(\mathbb T)$ is identified with the Toeplitz operators with continuous symbol. – Martin Argerami Jun 11 '17 at 14:40
# Patterns: Describing patterns using tables and solving variables
##### Intros
###### Lessons
1. Introduction to Describing Patterns using Tables and Solving Variables:
2. What is a function machine and what is a function table?
3. What are two-step rules?
4. How do we write number pattern rules as formulas with variables?
5. Solving the formula for one-step rules
6. Solving the formula for two-step rules with consecutive inputs
7. Solving the formula for two-step rules with random inputs
##### Examples
###### Lessons
1. Solve for the Function Table's Missing Variables
Use the rule to complete the function table:
2. Solving for Function Table Rules
Write the rule for the function table
• Write the one-step rule as a formula with a variable
3. Using Two-Step Rules to Complete Function Tables
Use the two-step rule to complete the function table.
4. Solving for Two-Step Rules in Function Tables
Write the two-step rule for the function table.
1. $output = (m) input \pm b$
2. $output = (m) input \pm b$
3. $output = (m) input \pm b$
4. $output = (m) input \pm b$
###### Topic Notes
In this lesson, we will learn:
• How to describe number patterns using a function table (input output table)
• How to write formulas with variables for function tables and solve for variables
• The steps for solving the rule (one-step and two-step) or formula for a function table
Notes:
• We can think of the relationship between numbers in a pattern as a machine
• The machine takes the number you give it (the “input”), applies a function (the “rule” or math operations), and gives you a resulting number (the “output”)
• The input output table (or function table) keeps track of these inputs and outputs
• Unlike the number sequence, order is not necessary for a function table
• Ex. for the number sequence/pattern “start at 1 and add 3 each time” it would be: 1, 4, 7, 10, …
• Ex. but for the function table with a rule of “add 3”, the rows can come in any order; e.g. inputs 5, 2, 9 give outputs 8, 5, 12
• It is also possible to have two-step rules for function tables
• The first step is to either multiply or divide (× or ÷)
• The second step is to either add or subtract (+ or –)
• Instead of writing “input” and “output” in the function table, variables can be written instead
• Variables are symbols (letters) that represent values that can change (“varying”)
• Variables can be used to write a formula for the function table using the format:
• (output variable) = (multiplier/divisor) × (input variable) ± (addend/subtrahend)
• Or more commonly written as $y = m x + b$
• To solve for the variables in function tables:
• If solving for an output: plug the input value into the formula
• If solving for an input: plug the output value in and solve backwards (algebra)
• If you are given a complete function table and asked to solve for the formula:
• Check horizontally across input/output for one-step rules
• If it is not a one-step rule:
• If the inputs are consecutive, the multiplier m (in formula $y = m x + b$) is the difference between outputs
• If the inputs are random, the formula can be either found by:
• (#1) trial and error
• OR (#2) using two pairs of input/output and m is the ratio of $\large\frac{\Delta y}{\Delta x}$
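For readers who like to check their work with code, the two-pair method in the last bullet can be automated. Here is a small illustrative Python helper (not part of the lesson) that uses exact fractions so $m=\frac{\Delta y}{\Delta x}$ never suffers rounding error:

```python
from fractions import Fraction

def two_step_rule(pair1, pair2):
    """Recover the rule y = m*x + b of a function table from two
    (input, output) pairs, exactly as in the notes: m = dy/dx, then b."""
    (x1, y1), (x2, y2) = pair1, pair2
    m = Fraction(y2 - y1, x2 - x1)   # the multiplier
    b = y1 - m * x1                  # the addend/subtrahend
    return m, b

# Inputs 2 and 5 with outputs 7 and 16 give the rule y = 3x + 1:
print(two_step_rule((2, 7), (5, 16)))   # (Fraction(3, 1), Fraction(1, 1))
```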
## The denominator of a rational number is greater than its numerator by 5
The denominator of a rational number is greater than its numerator by 5. If the numerator is increased by 6
and the denominator is decreased by 4, the number obtained is. Find the rational number.
# How can I access Oracle from Python?
How can I access Oracle from Python? I have downloaded a cx_Oracle msi installer, but Python can't import the library.
I get the following error:
import cx_Oracle
Traceback (most recent call last):
File "<pyshell#1>", line 1, in <module>
import cx_Oracle
ImportError: DLL load failed: The specified module could not be found.
I will be grateful for any help.
-
Which cx_Oracle did you download? There are many. Also, which version of Python, which version of Oracle, and which operating system are you using? – Bill the Lizard Aug 19 '10 at 12:36
cx_Oracle-5.0.2-10g.win32-py26 – user425194 Aug 19 '10 at 13:06
Sounds like it may not be extracted into the PATH python is using to look for modules. Have you tried installing it using easy_install rather than explicitly (it could be missing another dependency). – JulesLt Aug 19 '10 at 14:23
In addition to cx_Oracle, you need to have the Oracle client library installed and the paths set correctly in order for cx_Oracle to find it - try opening the cx_Oracle DLL in "Dependency Walker" (http://www.dependencywalker.com/) to see what the missing DLL is.
-
Here's what worked for me. My Python and Oracle versions are slightly different from yours, but the same approach should apply. Just make sure the cx_Oracle binary installer version matches your Oracle client and Python versions.
My versions:
• Python 2.7
• Oracle Instant Client 11G R2
• cx_Oracle 5.0.4 (Unicode, Python 2.7, Oracle 11G)
• Windows XP SP3
Steps:
1. Download the Oracle Instant Client package. I used instantclient-basic-win32-11.2.0.1.0.zip. Unzip it to C:\your\path\to\instantclient_11_2
2. Download and run the cx_Oracle binary installer. I used cx_Oracle-5.0.4-11g-unicode.win32-py2.7.msi. I installed it for all users and pointed it to the Python 2.7 location it found in the registry.
3. Set the ORACLE_HOME and PATH environment variables via a batch script or whatever mechanism makes sense in your app context, so that they point to the Oracle Instant Client directory. See oracle_python.bat source below. I'm sure there must be a more elegant solution for this, but I wanted to limit my system-wide changes as much as possible. Make sure you put the targeted Oracle Instant Client directory at the beginning of the PATH (or at least ahead of any other Oracle client directories). Right now, I'm only doing command-line stuff so I just run oracle_python.bat in the shell before running any programs that require cx_Oracle.
4. Run regedit and check to see if there's an NLS_LANG key set at \HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE. If so, rename the key (I changed it to NLS_LANG_OLD) or unset it. This key should only be used as the default NLS_LANG value for Oracle 7 client, so it's safe to remove it unless you happen to be using Oracle 7 client somewhere else. As always, be sure to backup your registry before making changes.
5. Now, you should be able to import cx_Oracle in your Python program. See the oracle_test.py source below. Note that I had to set the connection and SQL strings to Unicode for my version of cx_Oracle.
Source: oracle_python.bat
@echo off
set ORACLE_HOME=C:\your\path\to\instantclient_11_2
set PATH=%ORACLE_HOME%;%PATH%
Source: oracle_test.py
import cx_Oracle

# Placeholder connection string -- substitute your own user/password/host/service.
conn_str = u'your_user/your_password@your_host:1521/your_service'
conn = cx_Oracle.connect(conn_str)
c = conn.cursor()
c.execute(u'select your_col_1, your_col_2 from your_table')
for row in c:
    print row[0], "-", row[1]
conn.close()
Possible Issues:
• "ORA-12705: Cannot access NLS data files or invalid environment specified" - I ran into this before I made the NLS_LANG registry change.
• "TypeError: argument 1 must be unicode, not str" - if you need to set the connection string to Unicode.
• "TypeError: expecting None or a string" - if you need to set the SQL string to Unicode.
• "ImportError: DLL load failed: The specified procedure could not be found." - may indicate that cx_Oracle can't find the appropriate Oracle client DLL.
-
Does anyone know if this works with other versions (i.e. 3.4? and 64 bit) as long as all the version numbers and platforms are aligned? – The Red Pea Jul 10 at 7:14
In addition to the Oracle instant client, you may also need to install the Oracle ODAC components and put the path to them into your system path. cx_Oracle seems to need access to the oci.dll file that is installed with them.
Also check that you get the correct version (32-bit or 64-bit) of them, matching your Python, cx_Oracle, and Instant Client versions.
-
Ensure these two things and it should work:
1. Python, Oracle Instant Client, and cx_Oracle are all 32-bit.
2. The environment variables are set.
This fixes the issue on Windows like a charm.
-
that was not intentional, not sure how that came in though. – Venu Murthy Jun 25 '13 at 15:24
If you are using virtualenv, it is not as trivial to get the driver in via the installer. What you can do instead: install it as described by Devon, then copy cx_Oracle.pyd and the cx_Oracle-XXX.egg-info folder from Python\Lib\site-packages into the Lib\site-packages of your virtual env. Of course, here too, architecture and version are important.
-
|
2015-11-30 08:49:43
|
https://userweb.jlab.org/~ungaro/mauripage/html/projects/pi0/pi0.html
|
# $$\pi^0$$ in the first and second resonance region
## Abstract
We report the analysis of exclusive single $$\pi^0$$ electroproduction in the first and second resonance region at Jefferson Lab, in the $$Q^2$$ range 3-6 GeV$$^2$$. $$\pi^0$$ c.m. angular distributions are obtained over the entire $$4\pi$$ c.m. solid angle. The c.m. differential cross sections and beam spin asymmetries are measured.
|
2018-12-15 16:11:52
|
https://agenda.infn.it/event/28874/contributions/169705/
|
# ICHEP 2022
Jul 6 – 13, 2022
Bologna, Italy
Europe/Rome timezone
## Search for Environmentally-Induced Decoherence Effects on $\nu$-oscillation at Long-baseline Experiments
Jul 8, 2022, 7:05 PM
1h 25m
Bologna, Italy
Palazzo della Cultura e dei Congressi
Poster Neutrino Physics
### Speaker
Mr Arnab Sarker (Tezpur University, Assam, India)
### Description
In a neutrino system, the phenomenon of decoherence refers to the loss of coherence between the three neutrino mass eigenstates. The neutrino system, like any other system, is open to the environment and should be treated as such. As we know, the oscillation of neutrinos is caused by the coherent superposition of the neutrino mass eigenstates, but because the system is open, dissipative interactions between the neutrino sub-system and the environment lead to a loss of coherence with propagation distance. As a result, decoherence in the neutrino sub-system alters the neutrino oscillation probabilities. Herein, we use the Lindblad master equation to examine the temporal evolution of the neutrinos, with decoherence as an additional term accounting for the dissipative interaction with the environment. The effects of such interactions appear in the neutrino oscillation probabilities, and this is what we study in the present work. We use the general framework developed to compute the modified neutrino oscillation probabilities and analyze the changes. In particular, we investigate how different values of the decoherence parameter affect the oscillation probability, and we will present our understanding of the effect of decoherence on neutrino probabilities in long-baseline experiments.
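Not part of the abstract, but as an illustration of the idea: in the simplified two-flavor picture, decoherence is commonly modeled as an exponential damping $e^{-\gamma L}$ of the oscillatory term in the survival probability. A minimal sketch, with hypothetical parameter values:

```
import numpy as np

def survival_prob(L_km, E_GeV, theta=0.85, dm2=2.5e-3, gamma=1.0e-23):
    """Two-flavor survival probability with exp(-gamma*L) decoherence damping.

    theta in radians, dm2 in eV^2, L in km, E in GeV, gamma in GeV
    (L is converted to GeV^-1 via 1 GeV^-1 ~ 1.97e-19 km).
    """
    phase = 1.27 * dm2 * L_km / E_GeV            # standard oscillation phase, radians
    damping = np.exp(-gamma * L_km / 1.97e-19)   # hypothetical decoherence parameter
    return 1 - 0.5 * np.sin(2 * theta) ** 2 * (1 - damping * np.cos(2 * phase))

# gamma = 0 recovers the standard oscillation formula; larger gamma washes it out.
print(survival_prob(1300, 2.5, gamma=0.0), survival_prob(1300, 2.5))
```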
In-person participation No
### Primary authors
Mr Arnab Sarker (Tezpur University, Assam, India) Moon Moon Devi (Tezpur University, Assam, India)
### Presentation materials
There are no materials yet.
|
2023-03-27 00:53:18
|
https://proxies-free.com/c-can-anyone-help-make-my-code-simpler-thank-you-in-advance/
|
c++ – Can anyone help make my code simpler? Thank you in advance
Can anyone help me make my code simpler? I also want to replace INT_MIN and INT_MAX with something else, but I don't know what (without using the #include <bits/stdc++.h> header). In this program I am asked to design a 2 x 2 x 2 Rubik's cube that is filled with numbers instead of colors.
The squares on every side are initially filled with that side's index. This means that the four squares on the front side are filled with 0's, the four squares on the right side are filled with 1's, etc. You are allowed to move the cube in two directions only: Horizontal move (H or h): the first row moves one step across four sides in this order (0 -> 1 -> 2 -> 3 -> 0). Vertical move (V or v): the left column moves one step across four sides in this order (0 -> 5 -> 2 -> 4 -> 0).
The program accepts from the user the number of moves he/she wants to make and the sequence of desired moves. The code then rotates the cube accordingly, then calculates the sum of the four squares on every side of the cube. It finally prints out the maximum and minimum sums found for the six sides on the same line.
```
#include <bits/stdc++.h>
using namespace std;

// Horizontal move: the first row cycles through sides 0 -> 1 -> 2 -> 3 -> 0.
void horizontalRotation(int cube[6][2][2])
{
    for (int i = 1; i < 4; i++)
    {
        std::swap(cube[0][0][0], cube[i][0][0]);
        std::swap(cube[0][0][1], cube[i][0][1]);
    }
}

// Vertical move: the left column cycles through sides 0 -> 5 -> 2 -> 4 -> 0.
void verticalRotation(int cube[6][2][2])
{
    const int n[] = {5, 2, 4, 0};
    for (int i = 0; i < 4; i++)
    {
        std::swap(cube[0][0][0], cube[n[i]][0][0]);
        std::swap(cube[0][1][0], cube[n[i]][1][0]);
    }
}

int main()
{
    // Fill every square of side i with the value i.
    int cube[6][2][2];
    for (int i = 0; i < 6; i++)
    {
        for (int j = 0; j < 2; j++)
        {
            cube[i][j][0] = i;
            cube[i][j][1] = i;
        }
    }
    int n;
    cin >> n;
    char ch;
    while (n--)
    {
        cin >> ch;
        if (ch == 'H' || ch == 'h')
        {
            horizontalRotation(cube);
        }
        else if (ch == 'V' || ch == 'v')
        {
            verticalRotation(cube);
        }
        else
        {
            cout << "Invalid";
            return 0;
        }
    }
    // One alternative to INT_MIN/INT_MAX without <bits/stdc++.h>: include <climits>
    // (for INT_MIN/INT_MAX) or <limits> and use std::numeric_limits<int>::min()/max().
    int max_sum = INT_MIN, min_sum = INT_MAX;
    for (int i = 0; i < 6; i++)
    {
        int sum = 0;
        for (int j = 0; j < 2; j++)
            sum = sum + cube[i][j][0] + cube[i][j][1];
        if (sum > max_sum)
            max_sum = sum;
        if (sum < min_sum)
            min_sum = sum;
    }
    cout << max_sum << " " << min_sum << endl;
    return 0;
}
```
|
2021-06-24 09:06:41
|
https://proxies-free.com/tag/reference/
|
By that time, the cryptocurrency had lost about 12% in the last 24 hours, and more than 17% from its recent high of nearly $9,100, according to additional CoinDesk figures. They offer the best Bitmex bot, as you can see in the picture above: #TRX milks continuous profit. 18% profit from a bot-generated scalping trade on #TRX. Just stop and enjoy lifelong profits. These are outstanding and consistent profits – let Bitmex's automated trading strategy generate profits for you. What is the point of trading with too many coins if you are constantly profiting from the best strategy for Bitmex?
## Reference to assembly 'WindowsBase, Version=4.0.0.0…' – how to add it
I have a problem with a class: I cannot compile because I need to add a reference to an assembly. However, I have downloaded and installed that version of the framework, and it appears to have already been added, but the same error is still displayed.
## Calculation and Analysis – Can anyone suggest a reference where I can learn about the Conway base-13 function in more detail?
## 8 – I'm trying to add headers:Referer to the cache context headers, but it does not work for anonymous users
if ($header === $this->request->getHost() || $header == NULL) {
    return AccessResult::allowed()->addCacheContexts(['headers:Referer', 'user.roles']);
}
return AccessResult::forbidden()->addCacheContexts(['headers:Referer', 'user.roles']);
}
## Reference Request – Shortest Path in Dynamic Graphs
I am considering the following problem. We are given a connected undirected graph. Each edge has a length and a delay before it becomes active.
We want to find the shortest path in expectation.
I would now like to know whether related problems have been studied, i.e., the shortest-path problem in dynamically evolving graphs.
## fa.functional analysis – Reference request: Stone-Weierstrass for other topologies
Let $$X$$ be a locally compact metric space. The classical Stone-Weierstrass theorem describes dense subsets of $$C(X, \mathbb{R})$$ when it is equipped with the compact-open topology.
Are there similar results for dense subsets of $$C(X, \mathbb{R})$$ when it is equipped with other topologies?
## C# – reference number and its use for comparison with the following floating-point numbers
The project is based on an eye tracker. Let me explain the idea behind the project to better explain my problem.
I have the hardware for the Tobii C Eye Tracker. This eye tracker can output the coordinates of the X, Y point I'm looking at. The device is very sensitive: when I look at a point, the eye tracker sends many different coordinate values, but they stay within a ±100 range, I found out. Even though you are staring at one point, your eyes keep moving, and therefore it outputs a lot of data. These many values (floating-point numbers) are then stored in a text file. Now I need just one value (the X coordinate), meaning the one point I'm staring at, instead of the many values that lie within that ±100 range, and I want to move it to a new text file.
I have no idea how to program this.
These are the floating-point numbers in the text file:
200
201
198
202
250
278
310
315
360
389
500
568
579
590
When I stare at point 1, the data is 200-300, which lies within the ±100 range. I want to set 200 as the reference point, subtract the next number from it, and check whether the result is within 100; if it is, remove that number. The reference point should keep doing this with the following numbers until one falls outside the ±100 range. Once outside that range (here the number 310), that number becomes the next reference point; it does the same, subtracting the following numbers and checking whether the result is within 100. Once outside the range again, the next number is 500; that is the new reference point, and it does the same thing. That is my goal. In simpler terms, the reference points should be moved to a new file.
This is my code that retrieves the gaze coordinates and saves them in a text file.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using Tobii.Interaction;

namespace ConsoleApp1
{
    class Program
    {
        private static void ProgramIntro()
        {
            Console.WriteLine("Press any key to start");
        }

        public static void Main(string[] args)
        {
            ProgramIntro();
            double currentX = 0.0;
            double currentY = 0.0;
            double timeStampCurrent = 0.0;
            double diffX = 0.0;
            double diffY = 0.0;
            int counter = 0;
            var host = new Host();
            host.EnableConnection();
            var gazePointDataStream = host.Streams.CreateGazePointDataStream();
            gazePointDataStream.GazePoint((gazePointX, gazePointY, timestamp) =>
            {
                diffX = gazePointX - currentX;
                diffY = gazePointY - currentY;
                currentX = gazePointX;
                currentY = gazePointY;
                timeStampCurrent = timestamp;
                // Record the gaze point only when it jumps at least 100 units from the previous one.
                if (diffX > 100 || diffX <= -100 || diffY >= 100 || diffY <= -100)
                {
                    counter++;
                    using (StreamWriter writer = new StreamWriter(@"C:\user\student\Desktop\FYP 2019\ConsoleApp1\ConsoleApp1\Data\TextFile1.txt", true))
                    {
                        writer.WriteLine("Recorded Data " + counter);
                        writer.WriteLine("X: {0} Y: {1} Data collected at {2}", currentX, currentY, timeStampCurrent);
                        writer.WriteLine("==============================================");
                    }
                    Console.WriteLine("Recorded Data " + counter);
                    Console.WriteLine("X: {0} Y: {1} Data collected at {2}", currentX, currentY, timeStampCurrent);
                    Console.WriteLine("==============================================");
                }
            });
            //host.DisableConnection();
            while (true)
            {
                if (counter < 10)
                {
                    continue;
                }
                else
                {
                    Environment.Exit(0);
                }
            }
        }
    }
}
Now my question is: how do I write code that reads the text file, sets a reference number, subtracts the next number from it, checks whether the resulting value is within 100, and takes a new reference number once a value falls outside the ±100 range? These reference numbers are then stored in a new text file.
If there is a code sample, I'll create a new program and save it there and test it first.
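Not part of the original thread, but here is a minimal sketch of the filtering logic described above, in Python for brevity (file names are hypothetical; a C# version would follow the same structure):

```
def extract_reference_points(numbers, window=100):
    """Keep a number only when it differs from the current reference point by >= window."""
    refs = []
    for n in numbers:
        if not refs or abs(n - refs[-1]) >= window:
            refs.append(n)  # this number becomes the new reference point
    return refs

# Hypothetical file names: one number per line in, reference points out.
with open("TextFile1.txt") as f:
    values = [float(line) for line in f if line.strip()]
with open("TextFile2.txt", "w") as f:
    f.write("\n".join(str(r) for r in extract_reference_points(values)))
```

On the sample numbers listed above, this yields the reference points 200, 310 and 500, as the question describes.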
## Entity Framework – Set UpdatedBy reference in IdentityDbContext.SaveChanges()
My app should fill the UpdatedBy field for all entities. UpdatedBy should reference the Id from AuthUser, which extends IdentityRole.
UpdatedBy is set in IdentityDbContext.SaveChanges(). The problem is that IdentityDbContext must keep a reference to UserManager in order to resolve the Id of the current user. This leads to a circular dependency: UserManager needs IdentityDbContext, which needs UserManager. What would be the most elegant solution to this problem?
I could try different things:
• Set UserManager on IdentityDbContext after initialization
• Resolve the user Id by filtering the DbSet on fields from the ClaimsPrincipal
• Save CreatedBy to the database as a name rather than as an Id
• Make the clients of IdentityDbContext responsible for setting the CreatedBy field
The problem is that none of these solutions look elegant to me. I'm curious what the common pattern for such functionality is.
## Reference request – Comprehensive overview of computations of equivariant stable stems
Where can I find a comprehensive overview of computations of equivariant stable stems?
To my knowledge, the status is:
Classical work by Araki and Iriye, Osaka J. Math. 19 (1982): computations up to the 19th stable stem for the group $$\mathbb{Z}/2$$:
Araki and Iriye, Equivariant stable homotopy groups of spheres with involutions. I.
Osaka J. Math. 19 (1982), no. 1, 1-55. and
Iriye, Kouyemon,
Equivariant stable homotopy groups of spheres with involutions. II.
Osaka J. Math. 19 (1982), no. 4, 733-743.
Computation via the Adams spectral sequence for groups of prime order, up to degree 2p-2, by Szymik:
J. Homotopy Relat. Struct. 2 (2007).
Comparison with motivic stable stems and use of the motivic Adams spectral sequence, by Dugger and Isaksen:
ℤ/2-equivariant and ℝ-motivic stable stems.
Proc. Amer. Math. Soc. 145 (2017), no. 8, 3617-3627.
Also, the Segal-tom Dieck splitting and its immediate consequences.
Am I missing something?
## fa.functional analysis – Reference request: norm topology on $$\mathcal{M}(X)$$ vs. weak topology
Let $$(X, d)$$ be a metric space and $$\mathcal{M}(X)$$ the space of regular (e.g., Radon) measures on $$X$$. There are two standard topologies on $$\mathcal{M}(X)$$: the weak topology (the probabilist's one) and the strong norm topology, where the norm is the total variation norm.
Surprisingly, I have found very few discussions in the literature in which these two topologies are rigorously compared, apart from the oft-quoted claim that the norm topology is much stronger than the weak topology. I'm looking for a reference that discusses and compares these topologies, especially things like convergence, boundedness, open sets, projections, etc.
I am mainly concerned with probability measures $$\mathcal{P}(X) \subset \mathcal{M}(X)$$, but I'm not sure what difference that makes for topological concerns.
|
2019-06-25 03:38:37
|
https://greatroadchurch.org/messages/series/restore/
|
### You Pursue Me
June 19, 2022
RESTORE: The Lord is my Shepherd As we emerge from the pandemic, we're also rediscovering how busy our lives are. In this spring series, we'll turn to one of the…
### You Anoint My Head With Oil
June 12, 2022
RESTORE: The Lord is my Shepherd As we emerge from the pandemic, we're also rediscovering how busy our lives are. In this spring series, we'll turn to one of the…
### You Prepare a Table Before Me
June 5, 2022
RESTORE: The Lord is my Shepherd As we emerge from the pandemic, we're also rediscovering how busy our lives are. In this spring series, we'll turn to one of the…
### You Comfort Me
May 29, 2022
RESTORE: The Lord is my Shepherd As we emerge from the pandemic, we're also rediscovering how busy our lives are. In this spring series, we'll turn to one of the…
### You Are With Me
May 22, 2022
RESTORE: The Lord is my Shepherd As we emerge from the pandemic, we're also rediscovering how busy our lives are. In this spring series, we'll turn to one of the…
|
2022-10-04 10:26:47
|
http://chalkdustmagazine.com/regulars/dear-dirichlet/dear-dirichlet-issue-07/
|
# Dear Dirichlet, Issue 07
Pigs, popes and produce are among the topics of discussion in this issue’s Dear Dirichlet advice column
Moonlighting agony uncle Professor Dirichlet answers your personal problems. Want the prof’s help? Send your problems to deardirichlet@chalkdustmagazine.com.
### Dear Dirichlet,
I’ve recently had the good fortune of winning three pigs at the village fete. However, I’m not sure whether my triangular garden is big enough for them as well as my collection of metal, wooden and other deckchairs. The pigs are of substantial size and my tape measure is not long enough to measure the longest side of the garden. I’ve also heard that pigs are very intelligent and would like to hear suggestions for entertaining them.
— Pearl among swine, Lower Brailes
Dirichlet says:
It seems as though you have an issue with these pigs hogging your space. If your garden is right-angled, you can use Stythagoras’ theorem. Otherwise, I recommend pigonometric functions: the swine and coswine rules will be helpful. I shan’t boar you with the details. If they are math-ham-atically inclined, perhaps you could introduce them to Porkdust. Maybe skip the article about the ham sandwich theorem. My Bacon number is 2.
### Dear Dirichlet,
After a thrilling winter Olympics, I have been inspired to take up competitive sport. However, my previous interests lie mostly in multivariable calculus and I have no clue how to follow a sporting lifestyle. It’s completely different from anything I’ve done before. Do you have any experience in this area?
— Mr Kim, Pyongyang
Dirichlet says:
Congratulations on your change of variables. On the surface, it might just seem a bit of fun and games, but exercise is integral to a healthy life. I recommend heading down to the gym to see if you can join a combined aquatic and winter sports team. Once a member, you can expect to be $\nabla$ed on your $\nabla\times$ing and $\nabla\cdot$ing.
### Dear Dirichlet,
Thanks to your helpful advice in Chalkdust issue 06, I am now the pope! The first ever pope, in fact, to also understand finite element methods. Unfortunately, I went for a stroll the other day to purchase some badger feed and, being new to the area, I got completely lost. How can I get home?
— Benedict Cumberpope, Location unavailable
Dirichlet says:
Never fear! If you’re lost in Italy, just speak to Anna (my pal-in-Rome). She cannoli point you in the right direction. For future sojourns, however, I have one pizza advice. The Rome-bus will take you directly to St Peter’s Square. From there, it’s a short hop to the numerical-analysistine chapel. Make sure you get off at the right stop though — otherwise you’ll be pasta point of no return.
### Dear Dirichlet,
After the excesses of the festive season, I decided to participate in the trend known as Veganuary. For 31 days I forewent all animal-based products, in search of acceptance on my Instagram page. Now that the month is over, I have decided to permanently adopt a vegan lifestyle, and am looking to diversify my cooking. Would you happen to know of any good recipes?
— Paul Metcalfe, Winchester
Dirichlet says:
My dear child, it seems you are limiting yourself to s-kale-r products as you are cross with yourself. I consulted on this matter with my friend William Hamiltomatoes and my work colleague Henri Poincarrots, with whom I commute. I am afraid to report that your choice of ingredients will be limited to vegetabelian groups. Furthermore, you will no longer be able to eat duck a Lagrange (as we have realised that Lagrange is an animal). If you decide to weaken your constraints, there are stiltons of vegetarian options. I myself enjoy macaroni cheese, or for something actually Italian, ris-8. If you can’t find rennet-free parmigiano-reggiano, my briemann hypothesis is that any other hard cheese is a goudapproximation.
|
2019-04-20 04:31:39
|
https://www.gradesaver.com/textbooks/math/calculus/calculus-with-applications-10th-edition/chapter-3-the-derivative-3-1-limits-3-1-exercises-page-137/17
|
## Calculus with Applications (10th Edition)
Please see the image attached for the values that need to be entered into the table. Reading the table: as $x$ approaches 2 from the left, $k(x)$ seems to approach 10; as $x$ approaches $2$ from the right, $k(x)$ seems to approach $10$. The one-sided limits seem to exist and are equal, so we estimate that $\displaystyle \lim_{x\rightarrow 2}k(x)=10$
|
2018-06-20 08:09:26
|
http://mathhelpforum.com/calculus/162086-multiple-choice-stationary-points.html
|
Thread: multiple choice on stationary points
1. multiple choice on stationary points
x^3 -6x^2 +12x +18
Which one of these choices are correct( Choose A,B,C OR D only)
A. X=2 is a point of inflection
B. X=2 is an upward turning point and x=0 is a downward turning point
C. X=2 is a downward turning point and x=0 is a upward turning point
D. X=2 can not be characterised because f’’(x)=0
The graph are below: Showing the point at X=2
Thanks a lot for answering the question.
2. Originally Posted by firebrend
x^3 -6x^2 +12x +18
Which one of these choices are correct( Choose A,B,C OR D only)
A. X=2 is a point of inflection
B. X=2 is an upward turning point and x=0 is a downward turning point
C. X=2 is a downward turning point and x=0 is a upward turning point
D. X=2 can not be characterised because f’’(x)=0
The graph are below: Showing the point at X=2
3. My answer is A, because at x=2, f'(x)=0 and f''(x)=0, so the point is a stationary point but neither a maximum nor a minimum; therefore it is an inflection point. What about your answer? Do you agree with my reasoning? Thanks
4. Originally Posted by firebrend
my answer is A, because at x=2 f'(x)=0 and f''(x)=0==> the point is stationary point but is neither maximum nor minimum point. therefore it is an inflection point. What about your answer? do you agree with my reasoning? thanks
I agree, to a point.
I still believe that it is necessary to show that f''(x) changes sign at x = 2 to confirm a point of inflection.
I can't say for sure that there is not some function that exists where f'(a) = 0, f''(a) = 0 , has no extrema at (a,f(a)) , and that does not have an inflection point at x = a ... I just can't think of a rigorous proof of your statement or a suitable counterexample.
5. If $f(x)= x^3 -6x^2 +12x +18$ then $f'(x)= 3x^2- 12x+ 12= 3(x^2- 4x+ 4)= 3(x- 2)^2$, which is 0 at x= 2, so x= 2 is a stationary point, and is not 0 at x= 0, so x= 0 is not a turning point. $f''(x)= 6x- 12= 6(x- 2)$. Yes, f''(2)= 0 and f''(x) changes sign at x= 2 (f''(x) is negative for x< 2, positive for x> 2), so x= 2 is a point of inflection and A is the correct choice.
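A quick check of the corrected derivatives (not part of the original thread), using sympy:

```
import sympy as sp

x = sp.symbols('x')
f = x**3 - 6*x**2 + 12*x + 18
print(sp.factor(sp.diff(f, x)))     # 3*(x - 2)**2, so f'(2) = 0 and f' >= 0 everywhere
print(sp.factor(sp.diff(f, x, 2)))  # 6*(x - 2), which changes sign at x = 2
```

Since f' does not change sign at x = 2 but f'' does, x = 2 is a stationary point of inflection, confirming answer A.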
|
2016-10-27 19:12:00
|
http://www.dxhx.pku.edu.cn/article/2020/1000-8438/20200935.shtml
|
## Discussion on the Calculation of Rate Constant and Activation Energy of Second-Order Reaction
Li Qibiao, Hao Yajuan,
Abstract
In this paper, we discuss several problems in the calculation of reaction rate constants that appear in physical chemistry reference books. When calculating the rate constant k of a second-order reaction, the half-life formula must be kept consistent with the reaction's stoichiometric equation. And when calculating the activation energy with the Arrhenius formula for an ideal gas-phase reaction, attention should be paid to the difference between the rate constants kc and kp, as well as the difference between the corresponding activation energies Eac and Eap. This helps students form a correct understanding of the calculation.
Keywords: Reaction rate constant ; Half-life ; Gas-phase reaction ; Activation energy
Li Qibiao. Discussion on the Calculation of Rate Constant and Activation Energy of Second-Order Reaction. University Chemistry[J], 2020, 35(9): 205-208 doi:10.3866/PKU.DXHX201910055
## 2 Examples of Problems in Calculating the Rate Constant
${k_{p, 970{\rm{K}}}} = \frac{1}{{{p_{{{\rm{A}}_{\rm{0}}}}}{t_{1/2}}}} = \frac{1}{{39.2 \times {{10}^3}{\rm{ Pa}} \times 1529{\rm{ s}}}} = 1.7 \times {10^{ - 8}}{\rm{ P}}{{\rm{a}}^{ - {\rm{1}}}} \cdot {{\rm{s}}^{ - {\rm{1}}}}$
${k_{p, 1030{\rm{K}}}} = \frac{1}{{{p_{{{\rm{A}}_{\rm{0}}}}}{t_{1/2}}}} = \frac{1}{{48.0 \times {{10}^3}{\rm{ Pa}} \times 212{\rm{ s}}}} = 9.8 \times {10^{ - 8}}{\rm{ P}}{{\rm{a}}^{ - {\rm{1}}}} \cdot {{\rm{s}}^{ - {\rm{1}}}}$
${k_{p, 1030{\rm{K}}}} = \frac{1}{{2{p_{{{\rm{A}}_{\rm{0}}}}}{t_{1/2}}}} = 4.9 \times {10^{ - 8}}{\rm{ P}}{{\rm{a}}^{ - {\rm{1}}}} \cdot {{\rm{s}}^{ - {\rm{1}}}}$
$\ln \frac{{{k_p}_{(1030{\rm{K}})}}}{{{k_p}_{(970{\rm{K}})}}} = \frac{{{E_{\rm{a}}}}}{R}(\frac{1}{{970{\rm{ K}}}} - \frac{1}{{1030{\rm{ K}}}})$
$\ln \frac{{9.8 \times {{10}^{ - 8}}{\rm{ P}}{{\rm{a}}^{ - {\rm{1}}}} \cdot {{\rm{s}}^{ - {\rm{1}}}}}}{{1.7 \times {{10}^{ - 8}}{\rm{ P}}{{\rm{a}}^{ - {\rm{1}}}} \cdot {{\rm{s}}^{ - {\rm{1}}}}}} = \frac{{{E_{\rm{a}}}}}{{8.314{\rm{ J}} \cdot {\rm{mo}}{{\rm{l}}^{ - {\rm{1}}}} \cdot {{\rm{K}}^{ - {\rm{1}}}}}}(\frac{1}{{970{\rm{ K}}}} - \frac{1}{{1030{\rm{ K}}}})$
$\ln \frac{{{k_c}_{(1030{\rm{K}})}}}{{{k_c}_{(970{\rm{K}})}}} = \ln \frac{{{k_p}_{(1030{\rm{K}})} \cdot R \times 1030{\rm{ K}}}}{{{k_p}_{(970{\rm{K}})} \cdot R \times 970{\rm{ K}}}} = \frac{{{E_{{\rm{a}}c}}}}{R}(\frac{1}{{970{\rm{ K}}}} - \frac{1}{{1030{\rm{ K}}}})$
$\ln \frac{{{k_{c, {T_2}}}}}{{{k_{c, {T_1}}}}} = \frac{{{E_{{\rm{a}}c}}}}{R}(\frac{1}{{{T_1}}} - \frac{1}{{{T_2}}})$
$\ln \frac{{{k_{p, {T_2}}}}}{{{k_{p, {T_1}}}}} = \frac{{{E_{{\rm{a}}p}}}}{R}(\frac{1}{{{T_1}}} - \frac{1}{{{T_2}}})$
${E_{{\rm{a}}c}} - {E_{{\rm{a}}p}} = (n - 1)R\frac{{{T_2}{T_1}}}{{{T_2} - {T_1}}} \cdot \ln \frac{{{T_2}}}{{{T_1}}}$
(1) ${K_p} = \frac{{{k_{p, 1}}}}{{{k_{p, - 2}}}} = \frac{{{p_{\rm{B}}}{p_{\rm{C}}}}}{{{p_{\rm{A}}}}} = \frac{{0.21{\rm{ }}{{\rm{s}}^{ - {\rm{1}}}}}}{{5 \times {{10}^{ - 9}}{\rm{ P}}{{\rm{a}}^{ - {\rm{1}}}} \cdot {{\rm{s}}^{ - {\rm{1}}}}}} = 4.2 \times {10^7}{\rm{ Pa}}$
(2) Since ${E_{{\rm{a}}, 1}}{\rm{ = }}{E_{{\rm{a}}, - 2}}{\rm{ = }}{E_{\rm{a}}}$,
$\ln \frac{{{k_{{T_2}}}}}{{{k_{{T_1}}}}}{\rm{ = }}\ln 2 = \frac{{{E_{\rm{a}}}}}{R}(\frac{1}{{298{\rm{ K}}}} - \frac{1}{{310{\rm{ K}}}}),{E_{\rm{a}}} = 44.36{\rm{ kJ}} \cdot {\rm{mo}}{{\rm{l}}^{ - 1}}。$
(3) According to the relation between the equilibrium constant and temperature, the forward and reverse rate constants change by the same factor over the same temperature interval, so
$\frac{{{\rm{d}}\ln {K_p}}}{{{\rm{d}}T}} = \frac{{{\Delta _{\rm{r}}}{H_{\rm{m}}}}}{{R{T^2}}} = 0,{\Delta _{\rm{r}}}{H_{\rm{m}}} = {\rm{0}}$
${\Delta _{\rm{r}}}{U_{\rm{m}}} = {\Delta _{\rm{r}}}{H_{\rm{m}}} - \sum {{\nu _{\rm{B}}}RT}$
At T = 298 K, ΔrUm = −2.48 kJ·mol−1; at T = 308 K, ΔrUm = −2.56 kJ·mol−1.
$\ln \frac{{{k_{{c_1}, 310{\rm{K}}}}}}{{{k_{{c_1}, 298{\rm{K}}}}}} = \ln \frac{{{k_{{p_1}, 310{\rm{K}}}}}}{{{k_{{p_1}, 298{\rm{K}}}}}} = \ln 2 = \frac{{{E_{{\rm{a}}c, 1}}}}{R}(\frac{1}{{298{\rm{ K}}}} - \frac{1}{{310{\rm{ K}}}})$
$\ln \frac{{{k_{{c_{ - 2}}, 310{\rm{K}}}}}}{{{k_{{c_{ - 2}}, 298{\rm{K}}}}}} = \ln \frac{{{k_{{p_{ - 2}}, 310{\rm{K}}}}R \times 310{\rm{ K}}}}{{{k_{{p_{ - 2}}, 298{\rm{K}}}}R \times 298{\rm{ K}}}} = \ln 2 + \ln \frac{{310{\rm{ K}}}}{{298{\rm{ K}}}} = \frac{{{E_{{\rm{a}}c, - 2}}}}{R}(\frac{1}{{298{\rm{ K}}}} - \frac{1}{{310{\rm{ K}}}})$
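A quick numeric check of the two-temperature steps above (a sketch, not from the paper):

```
import math

R = 8.314  # J mol^-1 K^-1

def kp_from_half_life(p_A0, t_half):
    """Second-order rate constant from the half-life, t_1/2 = 1/(k_p * p_A0)."""
    return 1.0 / (p_A0 * t_half)

k1 = kp_from_half_life(39.2e3, 1529)  # ~1.7e-8 Pa^-1 s^-1 at 970 K
k2 = kp_from_half_life(48.0e3, 212)   # ~9.8e-8 Pa^-1 s^-1 at 1030 K

# Arrhenius between two temperatures: ln(k2/k1) = (Ea/R) * (1/T1 - 1/T2)
Ea = R * math.log(k2 / k1) / (1.0 / 970 - 1.0 / 1030)
print(k1, k2, Ea)  # Ea comes out near 245 kJ/mol
```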
|
2021-12-05 18:09:51
|
http://stats.stackexchange.com/help/badges/91/reviewer
|
# Help Center > Badges > Reviewer
Completed at least 250 review tasks. This badge is awarded once per review type.
Awarded 47 times
Awarded aug 25 at 20:07 to
for reviewing Suggested Edits
Awarded aug 20 at 13:55 to
for reviewing First Posts
Awarded jul 20 at 10:45 to
for reviewing First Posts
Awarded jul 4 at 19:38 to
for reviewing Close Votes
Awarded jul 1 at 9:08 to
for reviewing Low Quality Posts
Awarded jun 25 at 22:46 to
for reviewing Late Answers
Awarded jun 24 at 0:47 to
for reviewing Reopen Votes
Awarded jun 8 at 8:34 to
for reviewing Close Votes
Awarded jun 1 at 18:35 to
for reviewing Suggested Edits
Awarded apr 12 at 19:09 to
for reviewing First Posts
Awarded mar 21 at 17:20 to
for reviewing First Posts
Awarded mar 13 at 23:30 to
for reviewing First Posts
Awarded jan 28 at 7:28 to
for reviewing Suggested Edits
Awarded jan 22 at 15:04 to
for reviewing Close Votes
Awarded dec 12 at 17:47 to
for reviewing Suggested Edits
Awarded dec 6 at 23:58 to
for reviewing First Posts
Awarded nov 25 at 16:49 to
for reviewing Suggested Edits
Awarded nov 21 at 0:50 to
for reviewing Close Votes
Awarded nov 10 at 20:29 to
for reviewing Late Answers
Awarded oct 14 at 2:25 to
for reviewing First Posts
Awarded oct 7 at 6:39 to
for reviewing First Posts
Awarded oct 2 '13 at 14:48 to
for reviewing Close Votes
Awarded sep 22 '13 at 20:56 to
for reviewing Close Votes
Awarded sep 22 '13 at 12:59 to
for reviewing First Posts
Awarded sep 16 '13 at 23:23 to
for reviewing Close Votes
Awarded sep 15 '13 at 15:28 to
for reviewing Low Quality Posts
Awarded sep 14 '13 at 12:00 to
for reviewing Suggested Edits
Awarded aug 24 '13 at 14:46 to
for reviewing First Posts
Awarded jul 4 '13 at 8:50 to
for reviewing Suggested Edits
Awarded jun 14 '13 at 19:26 to
for reviewing First Posts
Awarded mar 31 '13 at 11:50 to
for reviewing Suggested Edits
Awarded feb 28 '13 at 12:38 to
for reviewing Close Votes
Awarded feb 19 '13 at 15:04 to
for reviewing First Posts
Awarded feb 17 '13 at 13:01 to
for reviewing Close Votes
Awarded dec 15 '12 at 20:47 to
for reviewing First Posts
Awarded dec 4 '12 at 19:13 to
for reviewing Suggested Edits
Awarded nov 7 '12 at 12:41 to
for reviewing First Posts
Awarded nov 5 '12 at 15:45 to
for reviewing Close Votes
Awarded oct 29 '12 at 19:35 to
for reviewing First Posts
Awarded oct 28 '12 at 14:58 to
for reviewing Close Votes
Awarded oct 24 '12 at 15:31 to
for reviewing First Posts
Awarded sep 21 '12 at 23:08 to
for reviewing Suggested Edits
Awarded sep 21 '12 at 23:08 to
for reviewing Suggested Edits
Awarded sep 21 '12 at 23:08 to
for reviewing Suggested Edits
Awarded sep 12 '12 at 14:07 to
for 1000 reviews, over 200 actioned in the old review system
Awarded aug 15 '12 at 4:09 to
for 1000 reviews, over 200 actioned in the old review system
Awarded jul 11 '12 at 16:13 to
for 1000 reviews, over 200 actioned in the old review system
|
2014-08-30 10:24:15
|
https://math.stackexchange.com/questions/1634113/davenports-q-method-finding-an-orientation-matching-a-set-of-point-samples
|
# Davenport's Q-method (Finding an orientation matching a set of point samples)
I have an initial set of 3D positions that form a shape. After letting them move independently, my goal is to find the best rotation of the original configuration to try to match the current state. This is for a soft body physics simulation, the idea being that if I can construct an optimal 'rigid' frame for the deformed shape then I can apply a shape matching constraint that removes deformation without introducing energy.
Existing solutions tend to find the optimal linear transformation representing the deformation, and then use various methods to decompose the matrix into rotation and scale/shear components. However, I found the orientations provided by such methods tended to not be very stable. After significant searching I discovered that my problem was identical to a problem solved by NASA to determine satellite orientations. When I implemented their solution my simulation was remarkably stable. I want to gain a better understanding of why it works.
Details of Davenport's Q-method are here. Somehow, after taking a bunch of outer, cross and dot products of the original and deformed samples, jamming them into a symmetric 4x4 matrix, and then computing the eigenbasis for that matrix, the eigenvector corresponding to the largest eigenvalue can be reinterpreted as a quaternion that is the best orientation to use. The author of the linked paper claims this result is easy to prove, but I guess easy is relative. Can anyone walk me through why this works?
Since nobody has answered this yet and it's been more than a year, I'll take a stab. I'll apologize for my engineery answer from the start.
# Problem Description
The Davenport Q-Method Solution is a solution to what is referred to as Wahba's Problem, which was proposed by Grace Wahba in 1965 (Wahba Paper). Wahba's problem is to find the rotation matrix that minimizes the cost function $$\min_{\mathbf{T}} J(\mathbf{T})=\frac{1}{2}\sum_{i}{w_i}\left\|\mathbf{b}_i-\mathbf{T}\mathbf{a}_i\right\|^2$$ where $\mathbf{a}_i$ are a set of unit vectors expressed in frame $A$, $\mathbf{b}_i$ are the same set of unit vectors expressed in frame $B$, $\mathbf{T}$ is the rotation matrix to transform from frame $A$ to $B$, and $w_i$ is some weight corresponding to each vector pair (usually set to be the inverse of the variance of the measurement that your vectors are generated from). Note that the $1/2$ comes from the maximum likelihood estimate (MLE) formulation of Wahba's problem. Since it is just a constant multiplier, it will have no effect on the minimization problem, so we can ignore it in future steps. This is a constrained minimization problem, with the constraint that $$\mathbf{T}^{-1}=\mathbf{T}^T,\qquad\left\|\mathbf{T}\right\|=1$$
# The Q-Method Solution
## Linear Algebra Transformations
In 1968, Davenport came up with a solution to Wahba's problem using attitude quaternions (Davenport Paper). To get to Davenport's solution we need to manipulate the cost function. First, express the vector norm as an inner product $$\min_T J(\mathbf{T}) = \sum_i{w_i(\mathbf{b}_i-\mathbf{T}\mathbf{a}_i)^T(\mathbf{b}_i-\mathbf{T}\mathbf{a}_i)}$$ Distributing the multiplication and recalling (a) that $\mathbf{a}_i$ and $\mathbf{b}_i$ are unit vectors (and thus their inner product with themselves is 1), (b) the constraint that $\mathbf{T}^T\mathbf{T}=\mathbf{I}$, and that an inner product is a scalar and thus symmetric (that is $\mathbf{a}_i^T\mathbf{b}_i=\mathbf{b}_i^T\mathbf{a}_i$) the minimization problem can be written as $$\min_{\mathbf{T}}J(\mathbf{T})=\sum_i{2w_i(1-\mathbf{b}_i^T\mathbf{T}\mathbf{a}_i})$$ Dropping the constant multiplier 2, and recognizing that $\sum{w_i}$ will have no effect on the minimization problem we can further write this as $$\min_{\mathbf{T}}J(\mathbf{T})=-\sum_i{w_i\mathbf{b}_i^T\mathbf{T}\mathbf{a}_i}$$
Now, making use of the fact that the trace operator is a linear operator, and that the trace of a scalar is the scalar, we can write $$\min_{\mathbf{T}}J(\mathbf{T})=-\text{Tr}\left[\sum_i{w_i\mathbf{b}_i^T\mathbf{T}\mathbf{a}_i}\right]$$ which, using the cyclic property of the trace, can be written as $$\min_{\mathbf{T}}J(\mathbf{T})=-\text{Tr}\left[\mathbf{T}\sum_i{w_i\mathbf{a}_i\mathbf{b}_i^T}\right]=-\text{Tr}\left[\mathbf{T}\mathbf{B}^T\right]$$ where $\mathbf{B}=\sum_i{w_i\mathbf{b}_i\mathbf{a}_i^T}$ is known as the attitude profile matrix.
We can now use the equation to convert an attitude quaternion to a rotation matrix $$\mathbf{T}=(q_s^2-\mathbf{q}_v^T\mathbf{q}_v)\mathbf{I}+2\mathbf{q}_v\mathbf{q}_v^T-2q_s\left[\mathbf{q}_v\times\right],$$ where $$\left[\mathbf{a}\times\right]=\left[\begin{array}{rrr}0 & -\mathbf{a}(3) & \mathbf{a}(2) \\ \mathbf{a}(3) & 0 & -\mathbf{a}(1) \\ -\mathbf{a}(2) & \mathbf{a}(1) & 0\end{array}\right]$$ is the skew-symmetric cross product matrix, $\mathbf{q}_v$ is the vector portion of the attitude quaternion, and $q_s$ is the scalar portion of the attitude quaternion, to substitute in for $\mathbf{T}$ $$\min_{\mathbf{q}}J(\mathbf{q})=-\text{Tr}\left[\left((q_s^2-\mathbf{q}_v^T\mathbf{q}_v)\mathbf{I}+2\mathbf{q}_v\mathbf{q}_v^T-2q_s\left[\mathbf{q}_v\times\right]\right)\mathbf{B}^T\right]$$ Distributing $\mathbf{B}^T$ and the trace operator leaves us with $$\min_{\mathbf{q}}J(\mathbf{q})=-(q_s^2-\mathbf{q}_v^T\mathbf{q}_v)\text{Tr}\left[\mathbf{B}^T\right]-2\text{Tr}\left[\mathbf{q}_v\mathbf{q}_v^T\mathbf{B}^T\right]+2q_s\text{Tr}\left[\left[\mathbf{q}_v\times\right]\mathbf{B}^T\right].$$
At this point it again becomes necessary to make use of trace properties (the cyclic property, the scalar property $\text{Tr}\left[a\right]=a$, and the transpose property $\text{Tr}\left[\mathbf{A}^T\right]=\text{Tr}\left[\mathbf{A}\right]$). Applying these properties we can simplify to $$\min_{\mathbf{q}}J(\mathbf{q})=-(q_s^2-\mathbf{q}_v^T\mathbf{q}_v)\text{Tr}\left[\mathbf{B}\right]-2\mathbf{q}_v^T\mathbf{B}\mathbf{q}_v+2q_s\text{Tr}\left[\left[\mathbf{q}_v\times\right]\mathbf{B}^T\right].$$ Further, recognizing that $2\mathbf{a}^T\mathbf{A}\mathbf{a}=\mathbf{a}^T(\mathbf{A}+\mathbf{A}^T)\mathbf{a}$ we can reduce this to $$\min_{\mathbf{q}}J(\mathbf{q})=-(q_s^2-\mathbf{q}_v^T\mathbf{q}_v)\text{Tr}\left[\mathbf{B}\right]-\mathbf{q}_v^T(\mathbf{B}+\mathbf{B}^T)\mathbf{q}_v+2q_s\text{Tr}\left[\left[\mathbf{q}_v\times\right]\mathbf{B}^T\right].$$
Examine the term $\text{Tr}\left[\left[\mathbf{q}_v\times\right]\mathbf{B}^T\right]$. Applying the operators it can be seen that $$\text{Tr}\left[\left[\mathbf{q}_v\times\right]\mathbf{B}^T\right]=\mathbf{q}_v(1)(\mathbf{B}(3, 2)-\mathbf{B}(2, 3))+\mathbf{q}_v(2)(\mathbf{B}(1, 3)-\mathbf{B}(3, 1))+\mathbf{q}_v(3)(\mathbf{B}(2, 1)-\mathbf{B}(1, 2))$$ Defining $$\mathbf{z}=\left[\begin{array}{ccc} \mathbf{B}(2, 3)-\mathbf{B}(3, 2) \\ \mathbf{B}(3, 1) - \mathbf{B}(1, 3) \\ \mathbf{B}(1, 2)-\mathbf{B}(2, 1)\end{array}\right]$$ (which implies that $\left[\mathbf{z}\times\right]=\mathbf{B}^T-\mathbf{B}$), then we can write $$\text{Tr}\left[\left[\mathbf{q}_v\times\right]\mathbf{B}^T\right]=-\mathbf{z}^T\mathbf{q}_v$$ and our minimization problem becomes $$\min_{\mathbf{q}}J(\mathbf{q})=-(q_s^2-\mathbf{q}_v^T\mathbf{q}_v)\text{Tr}\left[\mathbf{B}\right]-\mathbf{q}_v^T(\mathbf{B}+\mathbf{B}^T)\mathbf{q}_v-2q_s\mathbf{z}^T\mathbf{q}_v.$$
Defining $\mathbf{S}=\mathbf{B}+\mathbf{B}^T$ and $\mu=\text{Tr}\left[\mathbf{B}\right]$ and simplifying gives $$\min_{\mathbf{q}}J(\mathbf{q})=-\left(\mathbf{q}_v^T(\mathbf{S}-\mu\mathbf{I})\mathbf{q}_v+q_s\mathbf{z}^T\mathbf{q}_v+q_s\mathbf{q}_v^T\mathbf{z}+q_s^2\mu\right).$$ This can equivalently be written as an inner product $$\min_{\mathbf{q}}J(\mathbf{q})=-\left[\begin{array}{cc} \mathbf{q}_v^T(\mathbf{S}-\mu\mathbf{I})+q_s\mathbf{z}^T & \mathbf{q}_v^T\mathbf{z}+q_s\mu\end{array}\right]\left[\begin{array}{c} \mathbf{q}_v \\ q_s \end{array}\right].$$ Finally, we can write this as $$\min_{\mathbf{q}}J(\mathbf{q})=-\left[\begin{array}{cc} \mathbf{q}_v^T & q_s \end{array}\right]\left[\begin{array}{cc} \mathbf{S}-\mu\mathbf{I} & \mathbf{z} \\ \mathbf{z}^T & \mu\end{array}\right]\left[\begin{array}{c} \mathbf{q}_v \\ q_s \end{array}\right]=-\mathbf{q}^T\mathbf{K}\mathbf{q}$$ where $\mathbf{q}$ is the attitude quaternion (vector first), and $\mathbf{K}$ is the Davenport matrix.
## Optimization Problem
Having finally sufficiently simplified the cost function, we can now perform a constrained minimization using a Lagrange multiplier to enforce the constraint that $\mathbf{q}^T\mathbf{q}=1$ $$\min_{\mathbf{q}\text{, }\lambda} J(\mathbf{q}\text{, }\lambda) =-\mathbf{q}^T\mathbf{K}\mathbf{q}+\lambda(\mathbf{q}^T\mathbf{q}-1)$$ Applying the first differential condition to this results in $$\mathbf{K}\mathbf{q}=\lambda\mathbf{q}$$ which is a 4x4 eigenvalue/eigenvector problem. The attitude quaternion that minimizes the cost function is the unit eigenvector corresponding to the largest (most positive) eigenvalue. To understand this, remember that the function we are minimizing is $$\min_{\mathbf{q}\text{, }\lambda} J(\mathbf{q}\text{, }\lambda) =-\mathbf{q}^T\mathbf{K}\mathbf{q}+\lambda(\mathbf{q}^T\mathbf{q}-1)$$ which, when $\mathbf{q}$ is a unit eigenvector of $\mathbf{K}$, simplifies to $$-\mathbf{q}^T\mathbf{K}\mathbf{q}+\lambda(\mathbf{q}^T\mathbf{q}-1)=-\lambda$$ since $\mathbf{q}^T\mathbf{K}\mathbf{q}=\lambda$ when $\mathbf{q}$ is a unit eigenvector of $\mathbf{K}$.
So overall, yes the derivation is simple in that it only requires relatively basic linear algebra/optimization but it is complex in that it requires a good deal of creativity in order to get everything into the proper form.
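To make the recipe concrete, here is a minimal NumPy sketch of the Q-method exactly as derived above (vector-first quaternion convention, as in this answer); the input vectors and weights are whatever your application provides:

```
import numpy as np

def davenport_q(a_vecs, b_vecs, weights):
    """Unit quaternion [qv, qs] minimizing Wahba's cost for b_i ~ T(a_i)."""
    # Attitude profile matrix B = sum_i w_i * b_i a_i^T
    B = sum(w * np.outer(b, a) for w, a, b in zip(weights, a_vecs, b_vecs))
    S = B + B.T
    mu = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    # Davenport matrix K = [[S - mu*I, z], [z^T, mu]]
    K = np.zeros((4, 4))
    K[:3, :3] = S - mu * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = mu
    vals, vecs = np.linalg.eigh(K)    # K is symmetric, so eigh applies
    return vecs[:, np.argmax(vals)]   # eigenvector of the largest eigenvalue
```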
• Thanks for the breakdown, looking forward to going through it when I have a bit more time. I have since developed a bit more intuition about why this method works in terms of geometric algebra - thought process here: docs.google.com/document/d/… Major insight for me was that each point sample can be equally well explained by a complete circle of attitude quaternions, which is why methods which just contribute the shortest angular paths to rotate each point are less robust. – Jason Hise May 13 '17 at 3:49
|
2019-05-23 03:19:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8935553431510925, "perplexity": 187.84093043965441}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257002.33/warc/CC-MAIN-20190523023545-20190523045545-00482.warc.gz"}
|
https://computergraphics.stackexchange.com/questions/2206/spectral-path-tracing-image-color-brightness-incorrect
|
# Spectral path tracing - image color/brightness incorrect
I implemented a spectral path tracer using physically based BRDF models such as Oren-Nayar, specular reflection and transmission, and Lambertian. All calculations in the path tracer use standard illuminant and Macbeth color checker SPDs (spectral power distributions). The result of the path tracer for each pixel is the SPD obtained as a sum of the SPDs of the samples computed by the path tracer. This SPD is then converted to CIE XYZ color and then to RGB. The resulting scene is the following one (in this example taking 500 samples per pixel):
As you can see, everything seems fine, except for the brightness/luminance of the scene. Every object in the scene is darker than it should be. The floor and the front wall of the Cornell box in the scene should be white and neutral8 (from the Macbeth color checker), but they are dark gray. The following method is the one that traces the samples for a pixel of the path tracer:
Vector3D PathTracer::getPixelColor(const Ray& ray, int bounce) {
    Spectrum<constant::spectrumSamples> L(0.0f);
    int numberOfSamples = 500;
    float sampleWeight = 1.0f / (float)numberOfSamples;
    for (int i = 0; i < numberOfSamples; i++) {
        Spectrum<constant::spectrumSamples> spectrumSample = trace(ray, bounce);
        L = L + spectrumSample * sampleWeight;
    }
    Spectrum<constant::spectrumSamples> Li = scene->light->spectrum;
    ColorMatchingFunction* colorMatchingFunction = new Standard2ObserverColorMatchingFunction();
    //Get tristimulus values.
    Vector3D tristimulus = CIE1931XYZ::tristimulusValues(L, Li, colorMatchingFunction);
    //Convert tristimulus to sRGB.
    Vector3D color = CIE1931XYZ::tristimulusTosRGB(tristimulus);
    //Apply sRGB gamma correction.
    sRGB::sRGBGammaCorrection(color, GammaCompanding);
    //Convert to standard 0 - 255 RGB value.
    sRGB::sRGBStandardRange(color);
    delete colorMatchingFunction;
    return color;
}
As you can see, I already apply gamma correction to the color obtained. Do you have any idea why my rendered image is so dark? My concerns are in the part where I convert the SPD sum, obtained from the samples, into an RGB color. Do you see any error? Am I missing something? Do I need another operation to correctly convert the SPD obtained from the sampling into an RGB color?
To avoid writing too long a question, I will link the main classes used by the path tracer for the calculation:
The other files/classes used are all on this repository (branch luminance)
https://github.com/chicio/Spectrum-Clara-Lux-Tracer/tree/luminance
Thank you all; I hope someone can help me.
• It is perfectly OK if your material looks dark gray if you illuminate it with weak light source. What is the brightness of the light source? – ivokabel Mar 20 '16 at 23:14
• @ivokabel but my materials should look white and light gray. The SPD of the illuminant used is the D65. Do i need to tweak the spd of the light in some way? – Fabrizio Duroni Mar 20 '16 at 23:17
• @ivokabel Do I need to define a brightness parameter and use it somewhere? – Fabrizio Duroni Mar 20 '16 at 23:17
• If I am not mistaken, D65 only defines the shape of the spectrum, not the intensity. Therefore, you will really have to add a parameter telling the amount of emitting radiance, or something similar. Related topic is the renderer exposure value, but I saw that you take 1 as the limit value, so you don't have to bother with this one. – ivokabel Mar 20 '16 at 23:37
• Thank you @ivokabel for the suggestion about the parameter radiance. Could it be just a constant that will be multiplied with the spd of the illuminant during the tracing of rays? Or do i need to multiply the spd of the illuminant during the conversion from spd to cie xyz? Also I don't understand what you mean with renderer exposure value. Where do I take 1 as its value? – Fabrizio Duroni Mar 20 '16 at 23:45
The problem lies mainly in the CIE1931XYZ::tristimulusValues() function, where you normalize the resulting color by the luminance of your illuminant, which causes a directly observed light source to have luminance 1 while everything else is much darker. That is a nice thing to do if you just want to visualize the colours of various reflectance spectra under a given illumination, but it is probably not the best thing to do in a global illumination renderer.
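For contrast, here is a sketch of a conversion that keeps absolute radiance rather than normalizing by the illuminant (Python/NumPy instead of the asker's C++; the 10 nm grid, the cmf arrays, and the exposure parameter are illustrative assumptions, not the project's actual API):

import numpy as np

# Hypothetical 10 nm grid; cmf_x, cmf_y, cmf_z stand for the CIE 1931
# 2-degree colour matching functions resampled onto the same grid.
wavelengths = np.arange(380.0, 740.0, 10.0)

def spd_to_xyz(spd, cmf_x, cmf_y, cmf_z, exposure=1.0):
    # Integrate the radiance SPD against the colour matching functions.
    # No division by the illuminant's luminance: absolute brightness
    # comes from the emitted radiance plus a global exposure factor.
    d_lambda = wavelengths[1] - wavelengths[0]
    X = np.sum(spd * cmf_x) * d_lambda
    Y = np.sum(spd * cmf_y) * d_lambda
    Z = np.sum(spd * cmf_z) * d_lambda
    return exposure * np.array([X, Y, Z])

With this kind of scaling, the light source's emitted radiance (not the shape of D65 alone) determines how bright the walls come out, which matches the advice in the comments above.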
|
2019-07-16 08:25:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5173165798187256, "perplexity": 1837.2962758736273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524517.31/warc/CC-MAIN-20190716075153-20190716101153-00397.warc.gz"}
|
https://www.esaral.com/q/in-the-given-figure-ahk-is-similar-to-abc-if-ak-10-cm-bc-3-5-cm-and-hk-7-cm-find-ac-60118
|
# In the given figure, ∆AHK is similar to ∆ABC. If AK = 10 cm, BC = 3.5 cm and HK = 7 cm, find AC.
Question:
In the given figure, ∆AHK is similar to ∆ABC. If AK = 10 cm, BC = 3.5 cm and HK = 7 cm, find AC.
Solution:
Given: $\triangle \mathrm{AHK} \sim \triangle \mathrm{ABC}$
AK = 10 cm
BC = 3.5 cm
HK = 7 cm
To find: AC
Since $\triangle \mathrm{AHK} \sim \triangle \mathrm{ABC}$, their corresponding sides are proportional.
$\frac{\mathrm{AC}}{\mathrm{AK}}=\frac{\mathrm{BC}}{\mathrm{HK}}$
$\frac{\mathrm{AC}}{10}=\frac{3.5}{7}$
$\mathrm{AC}=10 \times \frac{3.5}{7}=5 \mathrm{~cm}$
|
2023-02-06 19:38:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9260715246200562, "perplexity": 3374.734378726773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500357.3/warc/CC-MAIN-20230206181343-20230206211343-00738.warc.gz"}
|
https://pballew.blogspot.com/2018/08/fabian-franklins-beautiful-proof-of.html
|
Monday, 13 August 2018
Fabian Franklin's Beautiful Proof of the Pentagonal Theorem
*Wik
On Aug 16, 1878, Charles Hermite wrote to J. J. Sylvester at Johns Hopkins, concerned about his accepting a math chair in America and questioning the ability of the American people to contribute to research-level mathematics. Only three years later he would be reading the paper of Fabian Franklin, a young assistant mathematics instructor at Johns Hopkins, before the French Academy. The paper was on a short, purely graphic proof of Euler's theorem on pentagonal numbers. Hans Rademacher called this proof "the first major achievement of American mathematics."
Some background for students: Pentagonal numbers are named for the ways of arranging dots into pentagons, much like the square numbers or triangular numbers. The true or pure pentagonal numbers are 1, 5, 12, 22, 35, 51, 70, 92,...
*Wik
You can get them by using the formula $\frac{3n^2-n}{2}$ with n a positive integer.
But for what we are doing today, we need to also include the generalized pentagonal numbers. They are obtained from the formula given above, but with n taking values in the sequence 0, 1, −1, 2, −2, 3, −3, 4..., producing the sequence 0, 1, 2, 5, 7, 12, 15, 22, 26, 35,...
One of the amazing things about this sequence is that it shows up in relation to finding the sum of the divisors of n, and finding the number of partitions of n. As Euler used it for the Pentagonal Number Theorem, it was written out as $1 - n - n^2 + n^5 + n^7 - n^{12} - n^{15} + n^{22} + n^{26} - n^{35} - \cdots$ Notice that all these exponents are the same as the generalized pentagonal numbers; hence, the pentagonal number theorem.
All that is beautiful math, but today I focus on a detail of the theorem that led to the graphic proof: what the terms of the Pentagonal Number Theorem really say, and a question about them. If we look, for instance, at the partitions of five, they are {5}, {4,1}, {3,2}, {3,1,1}, {2,2,1}, {2,1,1,1}, and {1,1,1,1,1}. These can be divided into two sets; can you figure out how some are different from the others? Look at the first three. Now look at the last four. Each of the last four has repeats of one or more numbers; in the first three, all parts are distinct. Of the three with distinct parts, two of them have an even number of integers in their composition, {4,1} and {3,2}. The other, {5}, has an odd number of integers. And if you look at the coefficient of $x^5$ in the theorem, you see that it is positive, and that's what the theorem really says. If you look at all the partitions of any number, this polynomial gives you the number of even distinct partitions (an even number of integers in it) minus the number of odd distinct partitions. So 5 has one more even than odd, and 7 does as well, but 12 has one more odd than even. But what first aroused Franklin's curiosity was why there were so many missing exponents, why there were so many like $x^3$ and $x^4$ and $x^6$. These would all have an equal number of odd and even distinct partitions, hence terms with zero for a coefficient, which simply did not appear. What would explain this?
Franklin's insight was that for numbers like 6, the distinct partitions could be matched up into odd and even pairs that offset each other in the count. For example, the distinct partitions of 6 are {6}, {5,1}, {4,2}, and {3,2,1}; the other seven partitions of 6 all have a repeated value. In his plan, the first two were matched together, the last two were converted to each other, and he even had a graphic plan to show it would always work.
Here are two partitions of the number 33. The first is a partition into {9,8,7,5,4}. The right diagonal, which has 3 dots in it, and the bottom row, which has 4 dots in it, are the key. Since 3 is less than 4 (and 4 is the smallest number in the partition), we can move the 3 dots in the diagonal to make a new row on the bottom, reducing the top three rows by one. So the matching partition is {8,7,6,5,4,3} at right. Note that we have changed the odd partition into an even one, and we can go back by moving the three dots on the bottom row of the right partition to return to the left. These two always match. But he realized that there might be a situation in which this didn't work: for instance, if the number of dots in the diagonal and the number of dots in the bottom row are the same (they share a corner dot), then you couldn't move either one. This can only happen if the bottom row is equal to the diagonal, or if the bottom row is one more than the diagonal. Here are examples of numbers that can't be matched to another. The top row is made up of numbers that have them equal; if you try to shift either one to make an odd partition even, or vice versa, it just won't work. And the bottom sets show numbers that can be arranged with a partition that has the lowest row one more than the diagonal. They won't work either.
Now look at these numbers and count the dots. What do you notice? These are all the numbers in the pentagonal theorem polynomial. And try as you may, you can't find another partition of any of these numbers that can't be transformed from odd to even or even to odd. If this unmatched partition has an even number of rows, then the coefficient of that power in the polynomial is positive (5 in the top row and 7 in the bottom row both have positive coefficients, since in each case there is one even partition that can't be matched to an odd one). The distinct partitions of 7 are {7}, {6,1}, {5,2}, {4,3}, and {4,2,1}. Draw the diagrams: {7} can be transformed to {6,1} by dropping the last dot (a diagonal of one) to be a second row, and {5,2} can be transformed in the same way to make {4,2,1}, but {4,3} has no match, so 3 evens and 2 odds make the coefficient one.
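If you'd rather check Franklin's bookkeeping than draw dots, a brute-force Python sketch (enumerating distinct partitions as subsets, so only practical for small n) confirms that the count of evens minus odds is nonzero exactly at the generalized pentagonal numbers:

from itertools import combinations

def even_minus_odd(n):
    # e(n) - o(n): distinct partitions of n with an even number of parts,
    # minus those with an odd number of parts.
    total = 0
    for k in range(1, n + 1):
        for parts in combinations(range(1, n + 1), k):
            if sum(parts) == n:
                total += 1 if k % 2 == 0 else -1
    return total

# generalized pentagonal numbers j(3j-1)/2 for j = ..., -2, -1, 1, 2, ...
gen_pent = {j * (3 * j - 1) // 2 for j in range(-6, 7)} - {0}
for n in range(1, 16):
    d = even_minus_odd(n)
    assert (d != 0) == (n in gen_pent)  # nonzero exactly at 1, 2, 5, 7, 12, 15, ...
    print(n, d)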
I had never seen this until I read a paper, "The Pentagonal Number Theorem and All That" by Dick Koch, from 2016. I no longer have the link to it, but I still have the PDF (sometimes my reading list gets far longer than my free time), so if you can't find it online somewhere, drop me a note and I'll send a copy out to you. Lots more really complicated and fascinating stuff in it.
tzvi said...
good stuff
|
2021-10-21 07:50:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6402438282966614, "perplexity": 299.4005220575449}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585382.32/warc/CC-MAIN-20211021071407-20211021101407-00030.warc.gz"}
|
https://socratic.org/questions/what-is-the-square-root-of-150-in-simplified-radical-form
|
# What is the square root of 150 in simplified radical form?
$\sqrt{150} = 5 \sqrt{6}$
Since $150 = 25 \cdot 6$,
$\sqrt{150} = \sqrt{25 \cdot 6} = \sqrt{25} \cdot \sqrt{6} = 5 \sqrt{6}$
|
2019-09-19 10:24:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9157932996749878, "perplexity": 1203.587830410932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573476.67/warc/CC-MAIN-20190919101533-20190919123533-00279.warc.gz"}
|
https://math.stackexchange.com/questions/1194636/atiyah-macdonald-exercise-5-4
|
# Atiyah-Macdonald, Exercise 5.4
I was having some trouble with the following exercise from Atiyah-Macdonald.
Let $A$ be a subring of $B$ such that $B$ is integral over $A$. Let $\mathfrak{n}$ be a maximal ideal of $B$ and let $\mathfrak{m}=\mathfrak{n} \cap A$ be the corresponding maximal ideal of $A$. Is $B_{\mathfrak{n}}$ integral over $A_{\mathfrak{m}}$?
The book gives a hint which serves as a counter-example. Consider the subring $k[x^{2}-1]$ of $k[x]$ where $k$ is a field, and let $\mathfrak{n}=(x-1)$. I am trying to show that $1/(x+1)$ could not be integral over $k[x^{2}-1]_{\mathfrak{n}^{c}}$.
I have understood why this situation serves as a counterexample, but I am essentially stuck trying to derive a contradiction. A hint or any help would be great.
Maybe you already noticed that $\mathfrak n^c=(x^2-1)$. Now apply the definition of integrality; after clearing the denominators you get $\sum_{i=0}^n a_is_i(x+1)^{n-i}=0$ with $a_i\in A$, $a_n=1$, and $s_i\in A-\mathfrak n^c$. Every term with $i<n$ is divisible by $x+1$, so $x+1\mid s_n$ (in $B$), and hence $s_n\in (x+1)B\cap A=(x^2-1)$, a contradiction.
|
2021-10-23 04:05:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9838204383850098, "perplexity": 49.31649731834757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585561.4/warc/CC-MAIN-20211023033857-20211023063857-00328.warc.gz"}
|
http://overanalyst.blogspot.com/2012/03/
|
Friday, March 30, 2012
a net is a net is a net, i bet!
i'm currently LaTeχing a revision of a manuscript, and now there are problems with terminology ..
i just realised that i'm using the term "net" in two different senses:
1. from topology, a net is a generalisation of a sequence: the index set no longer consists of integers, but is an arbitrary directed set, not necessarily totally ordered. (the usual example consists of subsets of a fixed set that are partially ordered by inclusion, but not necessarily totally so.)
2. an ε-net (with ε positive), on the other hand, is a notion from metric geometry: roughly speaking, it is a locally finite approximation of a metric space.
it's not that much of an annoyance: for each ε-net of a doubling space, i'm constructing an approximation f_ε of a fixed Lipschitz function f.
in other words, i'm building a net from a net of nets!
*grins*
..
*frowns*
*sighs*
sometimes we mathematicians have to be more inventive with terminology ..!
Tuesday, March 27, 2012
decisions, decisions vs. conventions, conventions ..
a few minutes ago [1] i was latexing and realised that i needed another name for a function .. yet i had already used f, g, and h ..
argh ..
i can't just call it f' ("f prime") either, because i already used ′ (prime) for differentiation of functions on the real line .. and f_0 just looks .. weird:
i mean, what's the subscript for?
[runs through alphabet]
[sighs]
i guess i'll use u;
it feels the least strange, to me.
the greek letter φ is close to f, but it looks too much like a smooth, compactly-supported function for my taste ..
odd, how some conventions become crippling. to me, for instance,
• a and b are points or parameters (or very rarely, indices)
• c is a constant,
• d is the exterior differential,
• e is base e (and occasionally an embedding*)
• f, g, and h are functions,
• i, j, k are indices (with i sometimes the inclusion map [2])
• l denotes a line,
• m and n are natural numbers,
• o is a base point in a space*
• p and q are either points, exponents, or polynomials,
• r is the radius of a ball (occasionally a third polynomial),
• s and t are parametrisation variables,
• u and v are vectorfields,
• w is a weight function*
• x, y, and z are spatial variables.
as for uppercase letters,
• A is a matrix, sometimes a constant,
• B is a ball,
• C is a constant, subject to change, line by line,
• D is the total derivative map,
• E is the base space for a fibre bundle,
• F and G are mappings between spaces,
• H is used for homology,
• I is the identity map,
• J is used for jacobians,
• K is a distortion function for quasiconformal mappings*
• L is a linear operator, or a space of integrable functions,
• M and N denote sobolev spaces of functions* (on metric spaces)
• O is an open set,
• P is .. an affine hyperplane? (i rarely use this: huh ..)
• Q is a cube,
• R is the larger of two radii,
• S is a symmetric tensor,
• T is a linear operator between normed linear spaces,
• U is a unitary operator,
• V and W are vector spaces,
• X, Y, and Z are spaces.
and, of course, greek:
• α and β are multi-indices,
• γ is a curve,
• δ and ε are small numbers,
• ζ is an embedding [2]
• η is a standard, smooth mollifier,
• θ is an angle,
• ι is the inclusion map,
• κ denotes curvature,
• λ is an eigenvalue,
• μ and ν are measures,
• ξ are coordinates on a differentiable structure* (or a phase space variable)
• ο looks too much like an o, so it's still a base point,
• π is either a projection map or a homotopy group,
• ρ is the density function to an absolutely continuous measure,
• σ is surface area measure,
• τ is a dummy variable for integration,
• υ, i never use, though Υ is used for jets* (a la viscosity solutions for PDE)
• φ and ψ are test functions,
• χ is a characteristic (indicator) function,
• ω is a solid angle.
[1] .. and yes, clearly i'm blogging now. q-:
[2] i don't do complex analysis unless absolutely necessary.
Monday, March 26, 2012
a few snapshots, from a few weeks ago.
to my relief, the special session in early march was held in a rather nice building:
above left: the (usual) display of AMS book titles,
above right: a lobby busy with registrations, someone trying to concentrate ..
above: the lobby, less busy ..
.. and below is a photo of the emerging crowd,
upon hearing that there would be free lunch provided .. (-:
Saturday, March 24, 2012
thoughts about online education (wired.com article post)
somehow i suspect that online-available courses and degrees are not going away, despite recent outcries that some are scams.
it could be a good solution for access to education, amidst a trend of ever increasing university tuition costs. moreover,
"People around the world have gone crazy for this opportunity. Fully two-thirds of my 160,000 classmates live outside the US. There are students in 190 countries—from India and South Korea to New Zealand and the Republic of Azerbaijan. More than 100 volunteers have signed up to translate the lectures into 44 languages, including Bengali. In Iran, where YouTube is blocked, one student cloned the CS221 class website and—with the professors’ permission—began reposting the video files for 1,000 students."
from "The Stanford Education Experiment Could Change Higher Learning Forever" @wired.
this is putting your money where your mouth is:
if you truly believe that a more educated public is the path to a better society, then we should make more lessons accessible to everyone, locally and around the world.
it seems that, nowadays, the internet is the best way to disseminate that opportunity.
i wonder, though, about the wisdom of removing students from a physical classroom and whether a virtual presence can ever replace an actual physical attendance.
(to be fair, i'm an academic who's been (reasonably) successful in his career [1],
so necessarily i have some incentive in maintaining the status quo that has rewarded me.)
when i think of what a student gets out of a lecture, though, i'm hard-pressed to identify what would be so crucial in physically showing up to class.
the only thing that currently comes to mind is scope, at least if the lecturer uses a chalkboard.
it's harder to see the "whole picture" when all you have is a camera shot of the board. on the other hand, being in the classroom allows a student to turn her head [π/2], look at the part of the lesson that came just before, remind herself of how the topic initially came about, the motivations ..
(this obstruction could be solved, however, by having better video cameras and a big computer screen, so that a viewer could "see it all.")
i can imagine, however, an analogous feature from having videos of lectures available on-line. despite a smallish projector screen in the classroom, where only a limited amount of information can be shown [2], students can rewind the lecture and catch something that they missed.
this is very crucial and incredibly helpful for students.
many of my students have told me that i go quite fast through the material, to the extent that they cannot take effective notes. keep in mind that most students don't have any real time to think about the contents being discussed in lecture; often they are simply copying what you're writing on the chalkboard, and then they will read them later to understand [3] ..!
honestly, it almost seems that having students in the classroom is more a benefit for the lecturer than the students themselves (although they do get something out of it, too). you see, body language is incredibly effective; to me, its lack is almost like being blind [4].
it is impossible to gauge reaction from a video camera. there have been plenty of times that my colleagues and i have turned to an absolutely confused room of students, realised that we were talking nonsense, and made a completely improvised, down-to-earth example on the spot [5].
sure, you could read through students' comments after posting the lecture online, realise that you gaffed up something and could have given a better example, and redo the lecture and re-post it. then again, why go through that trouble if you only have to do it once during a live lecture, in front of students that react to you?
the fact is, we humans are physical creatures and interact best, face-to-face. it's how we are hard-wired. i mightn't be able to point out all the explicit strengths that come from the physical manifestation of lectures, but i don't think it should be so easily dismissed either.
and now .. for a random, nitpicking opinion:
".. Thrun acknowledges some harsh feedback from his students. “We made a lot of mistakes,” he says. “In the beginning I made each problem available only once. I got a flaming email from a student saying, ‘Look, you’re behaving like one of these arrogant Stanford professors looking to weed out students.’ I realized we should set up the student for success, not for failure.” KnowLabs tweaked the software to allow students to keep trying problems."
i don't know how i feel about that response. when i think about it .. if you really want a student to master a particular concept, then the best thing to do is allow them the opportunity to try until they succeed. otherwise, will they really learn the material?
thinking about it more, though: if a student knows that there are countless chances to do something .. at least, up until the end of the term .. then the "natural" thing to do is to procrastinate, or at least, not to give your best effort. if this is but one task, then it doesn't matter too much .. but if this is one of many concepts to master, throughout the course, then the student inevitably falls behind.
our students may physically be adults, but many of them come straight from high school and are new to all the responsibilities of adulthood and learning on their own.
that said, drilling in that there is only one opportunity to complete a task for the course emphasizes the point: don't slack off, because there is a penalty ..
.. but i've rambled on enough;
perhaps there's more to say, but not tonight. (-:
[1] by which i mean that i'm still employed, and found a tenure-track job.
[π/2] i learned recently that women are now the majority of university enrolled-students, so "her" might be a better pronoun to use, here.
[2] this is by far the most compelling reason that i prefer chalkboard talks to slides. with slides, it is harder to track what came before; as for a chalkboard, it's there until someone erases it.
[3] not that it's something that i take pride in, but students have told me that they like my teaching style because i write out everything that i say (nowadays); it's how it even occurred to me that students have this problem. their notes from my lectures "read like a book," apparently. i suppose it is good that i'm clear, but .. you know, i did assign a textbook to the course, and i'm taking the material from there ..!
[4] this is why i hate talk to people on telephones.
[5] for example, one friend of mine was trying to explain to her class why
$$\#(A \cup B) \;=\; \#A \;+\; \#B \;-\; \#(A \cap B)$$ and why strict inequality can occur (here, $\#$ means cardinality of a set). none of the students were having any of it, and then it occurred to her: she physically made the students break up into groups:
1. those that prefer Batman to Superman,
2. those that prefer Superman to Batman,
3. those that like S and B equally,
4. those that don't care for either.
when she asked the students which groups are included in the set of people that like S or B, everyone gave the right answer, right away. she then pointed out that, unlike on their homework, they didn't count group #3 twice .. and then a collective "ah" erupted in the room.
you could argue that a really good instructor would have come up with an example like that in advance, but come on: how well do you read your students' minds? i credit my friend for being insightful enough to explain something so effectively!
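a quick python sketch of her grouping, with made-up group members:

batman_only = {"ann", "ben"}       # prefer batman to superman
superman_only = {"cat"}            # prefer superman to batman
both = {"dan", "eva"}              # like S and B equally
likes_batman = batman_only | both
likes_superman = superman_only | both
# group #3 ("both") is counted once on the left, twice on the right,
# and the intersection term removes the double count:
assert len(likes_batman | likes_superman) == \
       len(likes_batman) + len(likes_superman) - len(likes_batman & likes_superman)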
Wednesday, March 21, 2012
"simple" still means simple, but ..
finally: back in helsinki!
*sigh of relief*
i don't care if grοmov himself invites me .. to new york, france, anywhere;
i solemnly vow to stay put for at least one month!
last week i helped a friend stay put;
it's a lot easier than helping someone move.
i just went over to his house and made sure that he did not start to load all his stuff into a truck ..
- mitch hedberg.
to "celebrate" this occasion, i'm making corrections to a particular manuscript that i've set aside for close to 6 months. in one part, i had written that:
"The proof of Lemma 4.3 is technical but the idea is simple."
i re-read the proof, then frowned: "wait: why is this simple" ..?
then i remembered: i drew a picture, felt satisfied,
decided to rewrite the statement as:
"The idea of Lemma 4.3 is simple, but the proof is rather technical" ..
.. and added another clarifying sentence or two.
to be fair, if you're not used to weak topolοgies, then the strategy doesn't look like it should work at all:
the point is that weakly-convergent yet geometric approximations of metric spaces give rise to isomorphisms of certain generalised differential operators (called derivations);
in the case of manifolds, this corresponds to how a diffeomorphism gives rise to a push-forward map between tangent bundles.
in the generality of metric spaces, though, the really cool part is that you can do this with certain embeddings. the bundles may degenerate, but the dimension won't!
Thursday, March 15, 2012
among other things, a bad joke.
earlier, a conversation [1]:
Niobe: By the way, Janus, this is Castor.
[Niobe points to Castor, who nods at Janus]
Niobe: Castor's a second-year.
Janus: Hello Castor. You could say that I'm ..
[Janus thinks]
Janus: .. a ninth-year.
[Janus grins, Castor looks confused, Niobe shakes her head in dismay.]
aren't we all students, in some sense?
so i'm currently visiting the university of my ph.d. for a few days .. but nothing at all official. in particular, nobody invited me ..
.. so there is no way,
none at all ..
that i'm being suckered into a talk #6 on week 6!
on a related note, i don't feel tired anymore.
i think that seeing old friends and following familiar routines has rejuvenated me, reminded me of who i once was. this is not to say that i dislike finland ..
(in fact, it's been very enjoyable and fulfilling)
.. but i feel like a different person, there. i guess you could say that i'm not completely used to "him" yet, and the "old me" still remains a better fit.
(more to come, perhaps ..)
[1] all names changed, of course.
Tuesday, March 13, 2012
less disparate bits, after a conference.
in most of my recent talks, an audience member always interrupts me with questions or comments. most of the time it is a matter of clarification, which is suggestive that, apparently ..
1. i'm not good at details while at the chalkboard;
2. i'm not scary or intimidating enough to stop people from asking questions. q-:
i'm probably repeating myself here, but i used to hate questions. they would unnerve me with their suddenness, throw me off my rhythm. sometimes after answering a question, i'd completely forget what i was talking about, and it would take a few silent seconds to recollect.
(those seconds probably felt like strange, awkward minutes to those audiences.)
i guess i've gotten used to questions. in fact, now i tend to expect them more than not, and a silent audience is becoming an exception and odd. they're a good sign, i suppose: it means that somebody is listening, even if they don't quite understand me.
until recently i've never asked questions at the end of talks .. not publicly, anyway. to be honest, questions don't come naturally to me.
more often than not, i'm thinking through what the speaker has just said, or trying to determine whether the discussion is related to anything i've seen before.
far be it from me to take crucial minutes of the speaker's time and ask for help to jog my memory!
so i tried a different tack at the special session: i tried to ask every speaker a question, at the end of their talk.
suffice it to say: i found it quite hard to think of good, relevant questions. it helped that my co-organiser and i were the ones who got to choose the speakers, of course, and clearly we opted for reasonably comprehensible people ..
.. but knowing my own experiences, i hope that i didn't throw anyone off their game ..!
perhaps i'll change my ways, perhaps not .. but it's certainly given me a new benchmark for attentiveness: if you're following a talk, then usually you can think of a good question for the speaker.
Sunday, March 11, 2012
disparate comments, just after "organising" a conference ..
being a host is not natural to me.
why do i feel like some kind of diplomat, in co-organising this special session?
well .. at least it wasn't a disaster.
i'm reaching a strange age:
1. soon it might be correct to call me a "professor" .. which is strange: i've spent 10 minutes of many a first day of class, telling my new students that it would be wrong to call me "professor" but that "doctor" or "janus" would suffice:
you wouldn't call a lieutenant a general, would you?
2. my friends and colleagues are picking up their first ph.d. students. i wonder if that makes me some kind of unofficial mathematical "uncle" ..?
an odd thought: i wonder if they're "scared" of me,
in the same way that profs used to scare the younger me? (-:
(perhaps more, to come ..)
Wednesday, March 07, 2012
paper-folding (cool!)
i'm stuck; for some reason i can't get this kind of folding right ..
[ reposted from wheatpond.com ]
The Mιura-orι is a method for folding up a sheet such that it can be opened or closed in one smooth motion. A Mιura sheet has only one degree of freedom, and can be thought of as having only two states: fully open, or fully closed. Since reversing one fold in the sheet (that is, making a “mountain” into a “valley”) requires reversing all of the adjacent folds as well, the Mιura sheet feels as though it has a memory, and is very resistant to deformation.
Monday, March 05, 2012
not quite your everyday occurrence, but ..
two unusual things happened to me, today:
1. someone walked into my office, took a copy of federεr from the bookshelf, turned precisely to one page, and told me about an open problem that i could solve. (he was friendly about it, though.)
2. i wrote to a fιelds medalist, which shook me to my nerves.
i kept re-reading the text of the email, making sure that nothing could possibly be taken as an offense or annoyance.
here's hoping that it works out ..
Sunday, March 04, 2012
the good from the bad (also: time off)
to borrow from an old joke:
we mathematicians need pen and paper, but we also need wastebaskets.
to quote from the Wired.com article: "how do we identify good ideas?" (via lifehacker):
"How can we sort our genius from our rubbish? The bookshelves groan with how-to guides for bolstering the powers of the imagination. But how can we become better at self-criticism? How can we get excel at the rejection process?"
jοnah lehrεr has a lot to say about the human brain. every time i read him, i learn something interesting. for example, here is something that might be (initially) non-intuitive:
"After writing down as many ideas as they could think of, both groups were asked to choose which of their ideas were the most creative. Although there was no difference in idea generation, giving the unconscious a few minutes now proved to be a big advantage, as those who had been distracted were much better at identifying their best ideas. (An independent panel of experts scored all of the ideas.)"
of course, it would help to know what "best" means, here. (this is discussed a little further in the article.)
it could just be my confirmation bias, but this would explain why i often struggle with an idea .. and after leaving it alone for a while, it suddenly becomes clear(er) why the idea will or won't work. it only becomes more pronounced if, say, i "sleep on it." [1]
at any rate, it's nice to know --- though i'm being lazy about following something up --- that it could have some good side effects. reading lehrεr today also affords me a scientific justification of ... well, why i should take a vacation.
i mean, if it's really going to make the mathematics better, then sure ..? (-:
"Taking a break is important. But make sure you do something that makes you happy, as positive moods make us even better at diagnosing the value of our creative work. After a few relaxing days of vacation, you'll suddenly know which new ideas deserve more time and which need to be abandoned."
every time i think i'm very busy, i run into a colleague that is even busier .. so experience tells me that it's not going that badly. [2]
still, i've been feeling tired lately.
i'm also traveling again on friday, with the goal of giving that aforementioned talk #5 for week 5. though i'll be very happy to see old friends and make new ones, experience also tells me that the jet lag is going to cost me.
*sighs*
there is some lag time between that conference, a research visit, a wedding to attend, and coming back. for once, i've settled on a plan of taking thursday (next, next week) off, as well as the following monday.
that monday happens to include a 6-hour layover in london, u.k. as long as my bags are checked anyway, it means that i can wander the city freely for an afternoon ..
.. it's not like it would be easy to work anyway, so i might as well have a spot of fun!
[1] this is probably one of the main reasons why i've changed my mind about mornings. it took me a while to realise that i really am more productive when i have a fresh start.
[2] i have a similar rule with whether or not i am getting old. everyone ages, of course, but i know too many people older than me who .. upon hearing my saying that "i'm getting old" .. will not hesitate to give me a thorough tongue-lashing.
Thursday, March 01, 2012
the troubles of time zones, even when not traveling.
a friend of mine once suggested to me that jet lag is the difference in time that your soul takes to catch up with your body.
in retrospect, it reminds me of the storyline from eastern standard tribe, by cοry dοctorοw.
one oddity about eastern european time is that the end of the "usual" workday here is approximately the start of the workday on eastern standard time.
as a consequence, my american colleagues usually don't email me until i'm heading out of the office for dinner (or a road run). instead, i get most of my messages at night.
the same thing happens for facebook updates .. which, i suppose, is a good thing: fewer distractions, right?
then there are ways to exploit the time difference. if you are unlucky enough to ..
1. be on the job market,
2. live in europe,
3. prefer to return eventually to the united states,
4. and have a lot of work to do,
.. then typically you might find yourself completing a full day of work, and then doing your applications at night.
as you may imagine: productive or otherwise, this gets very old .. very quickly.
it's not that i specifically choose this routine myself. it's just that the last-minute procrastination, so inherent in my nature, causes this to happen automatically.
well, at least this last round of applying and hiring is ending. i feel like taking off for a holiday ..
in other news, the preprint i've been working on is still short:
at one point it grew to 15 pages, even creeping onto the first few lines of page 16 .. but now it's back to 13 pages.
13 (thirteen!) tight-knit pages, man.
i'm set:
i might submit it tomorrow!
|
2017-07-20 18:26:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5871340036392212, "perplexity": 1961.6523694648872}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423320.19/warc/CC-MAIN-20170720181829-20170720201829-00355.warc.gz"}
|
http://math.stackexchange.com/questions/49956/value-of-cyclotomic-polynomial-evaluated-at-1/137777
|
# Value of cyclotomic polynomial evaluated at 1
Let $\Phi_n(x)$ be the usual cyclotomic polynomial (minimal polynomial over the rationals for a primitive nth root of unity).
There are many well-known properties, such as $x^n-1 = \prod_{d\mid n}\Phi_d(x)$.
The following fact appears to follow pretty easily:
Fact:
$\Phi_n(1)=p$ if $n$ is a prime power $p^k$.
$\Phi_n(1)=1$ if $n$ is divisible by more than one prime.
My question is, is there a reference for this fact? Or is it simple enough to just call it "folklore" or to just say it "follows easily from properties of cyclotomic polynomials".
Seems pretty straightforward to me. It should follow from $n = \prod_{d | n, d > 1} \Phi_d(1)$ by Mobius inversion. – Qiaochu Yuan Jul 6 '11 at 21:06
If you need a reference, see the proof of the corollary to Theorem 1 in Section 1 of Chapter IV of Lang's Algebraic Number Theory (page 74 of the 2nd edition). But "it follows easily..." is good enough too. – KCd Apr 29 '12 at 20:44
Möbius Inversion:
As outlined in Qiaochu's comment, Möbius inversion will solve this problem. Since I am more comfortable with sums than products, let's just take logs. We have $$\log n=\sum_{d|n,\ d\neq 1}\log\Phi_{d}(1).$$ Then for $n\neq1$, $$\log\Phi_{n}(1)=\sum_{d|n}\mu\left(\frac{n}{d}\right)\log d=\Lambda(n)$$ where $\Lambda(n)$ is the von Mangoldt lambda function. Since $\Lambda(p^k)=\log p$, and $\Lambda(n)=0$ when $n$ is divisible by more than one prime, the result then follows upon exponentiating.
Other:
This relation follows from some other identities. For an integer $n$ and a prime $p$ we have that $$\Phi_{np}(x)=\frac{\Phi_{n}\left(x^{p}\right)}{\Phi_{n}(x)}\ \text{when }\gcd(n,p)=1$$
$$\Phi_{np}(x)=\Phi_{n}\left(x^{p}\right)\ \text{when }\gcd(n,p)=p.$$
We know that $\Phi_p(1)=p$, and from the above it follows that $\Phi_{p^\alpha}(1)=p$ and $\Phi_{pq}(1)=1$.
Hope that helps,
Another proof follows directly from the formula $x^{n} - 1 = \prod_{d \mid n} \Phi_d(x)$, since we can deduce from it that $$x^{n-1} + \cdots + x + 1 = \prod_{d \mid n, d>1} \Phi_d(x).$$ Thus, if $n = p^{k}$, we have $$x^{p^{k}-1} + \cdots + x + 1 = \Phi_{p}(x) \cdots \Phi_{p^{k-1}}(x) \Phi_{p^{k}}(x).$$ After evaluating at 1 we obtain $p^{k} = \Phi_{p}(1) \cdots \Phi_{p^{k-1}}(1) \Phi_{p^{k}}(1)$, and induction on $k$ gives $\Phi_{p^{k}}(1) = p$ for all $k$.
If $n = p_{1}^{\alpha_{1}} \cdots p_{r}^{\alpha_{r}}$, where the $\alpha_{i}$ are positive integers and $r \geq 2$, then $$n = \Phi_{n}(1) \prod_{d \mid n, d\neq 1,n} \Phi_d(1).$$ If we assume the statement true for all positive integers $<n$, then the product on the right-hand side equals $n$, since $$\prod_{i=1}^{r}\Phi_{p_{i}}(1) \cdots \Phi_{p_{i}^{\alpha_{i}}}(1) = p_{1}^{\alpha_{1}} \cdots p_{r}^{\alpha_{r}} = n$$ and the rest of the factors are 1. Thus, $\Phi_{n}(1) = 1$ also.
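Both arguments are easy to sanity-check numerically; here is a small SymPy sketch (using the built-in cyclotomic_poly) verifying the fact for $2 \le n < 200$:

from sympy import Symbol, cyclotomic_poly, factorint

x = Symbol('x')
for n in range(2, 200):
    value = cyclotomic_poly(n, x).subs(x, 1)
    factors = factorint(n)                   # {p: alpha, ...}
    expected = list(factors)[0] if len(factors) == 1 else 1
    assert value == expected, (n, value, expected)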
|
2016-02-08 15:28:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9637331962585449, "perplexity": 109.97382475031081}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701153585.76/warc/CC-MAIN-20160205193913-00103-ip-10-236-182-209.ec2.internal.warc.gz"}
|
https://mathoverflow.net/questions/225659/largish-cardinals
|
# “Largish” cardinals
In what follows, $\mathsf{ZCKP}$ refers to the subset of $\mathsf{ZFC}$ consisting of the axioms of Zermelo set theory with choice and foundation ($\mathsf{ZC}$) plus those of Kripke-Platek set theory ($\mathsf{KP}$): equivalently, as these two theories have much overlap, $\mathsf{ZCKP}$ consists of $\mathsf{ZC}$ plus the axiom of replacement for $\Sigma_1$-formulæ (or equivalently, $\mathsf{KP}$ plus choice, foundation, powerset, infinity, and full separation). A "largish" cardinal property means, informally, one whose existence is provable in $\mathsf{ZFC}$ but not in $\mathsf{ZCKP}$.
Here is the simplest example of a largish cardinal notion. A theorem of Azriel Lévy (see, e.g., Barwise, Admissible Sets and Structures (1975), theorem II.3.5 on page 53 and theorem II.9.1 on page 76) states that for every uncountable cardinal $\kappa$, if $H(\kappa)$ is the set of sets hereditarily of cardinality $<\kappa$, then $H(\kappa)$ is a $1$-elementary submodel of the universe (meaning that every $\Sigma_1$ formula with parameters in $H(\kappa)$ is true [in $V$] iff it is true in $H(\kappa)$). This implies that $H(\kappa)$ satisfies $\Delta_0$-collection (eqvt. $\Sigma_1$-replacement), and if $\kappa$ is a strong limit, then $H(\kappa) \models \mathsf{ZCKP}$ (and the converse is clear). In particular, $\mathsf{ZCKP}$ does not prove the existence of strong limit cardinals.
Now I am interested in strengthenings of this condition on $\kappa$ such that the existence of these cardinals is still provable in $\mathsf{ZFC}$. Two obvious candidates are:
• $H(\kappa)$ satisfies $\mathsf{ZC}$ plus replacement for $\Sigma_n$ formulæ,
• $H(\kappa)$ is an $n$-elementary submodel of the universe (i.e., every $\Sigma_n$ formula with parameters in $H(\kappa)$ is true iff it is true in $H(\kappa)$).
These should at least imply that $\kappa$ is a fixed point of the beth function, so that in fact $H(\kappa) = V_\kappa$. (Perhaps this should be added as a precondition to be worthy of the term "largish cardinal".)
Edit (on 2015-12-11, following the answer by Joel David Hamkins): I didn't realize how very different the two notions above are: the first (call them "$\Sigma_n$-replacing" cardinals) is "local" in that it involves only sets from $V_\kappa$ and can thus be expressed as a $\Delta_1$ property of $V_\kappa$, whereas the latter ("$\Sigma_n$-correct cardinals") is "global" and involves the entire universe. This has a consequence of size: the smallest $\Sigma_2$-correct cardinal, as explained in Joel's response, is larger than the first $\Sigma_n$-replacing cardinal for all $n$, or even the first inaccessible, etc., and there is no hope of "computing" it. There may be some hope for the first $\Sigma_2$-replacing cardinal, however. I really should have asked two different questions.
My question is this: Can these conditions, at least for $n=2$, or perhaps some related ones, be rephrased in purely cardinal-theoretic terms (without appealing to model theory and if possible avoiding the Lévy hierarchy)? Even better, can the smallest cardinal satisfying such a condition be "described" or "computed" in some way? (In the same way that $\beth_\omega$, or "the limit of the sequence defined by $\kappa_0 = \omega$ and $\kappa_{n+1} = \beth_{\kappa_n}$" are descriptions/computations of the smallest strong limit cardinal and the smallest fixed point of the beth function.)
More generally, any comments on these or related properties would be welcome (including a better term than "largish cardinal"). There is probably some connection with powerset-admissible ordinals, although the exact relation escapes me.
One reason why one might be interested in such cardinals is that the corresponding $H(\kappa)$ might serve as a drop-in replacement for Grothendieck universes in a $\mathsf{ZFC}$ formulation of category theory (they are not fully Grothendieck universes, but the point is that the use of a construction that escapes from such a "universish" set is likely to be so rare as to be very conspicuous; and unlike Grothendieck universes, their existence follows from $\mathsf{ZFC}$).
Your cardinals are known as the $\Sigma_n$-correct cardinals, and they arise in diverse set-theoretic contexts. For example, we use them extensively in our paper:
It is a ZFC theorem that the $\Sigma_n$-correct cardinals form a closed unbounded proper class often denoted $C^{(n)}$.
One subtle point about the $\Sigma_n$-correct cardinals is that although we have a concept of $\Sigma_2$-correct and $\Sigma_3$-correct and so on, $\Sigma_n$-correct for any particular $n$, there is no uniform-in-$n$ way to express the concept of $\Sigma_n$-correctness in first-order set theory. The concept is uniformly expressible in some second-order set theories, such as Kelley-Morse set theory, which prove that there is a truth-predicate for first-order truth.
Concerning your question, when $n\geq 2$ there can be no way to define what it means for $\kappa$ to be $\Sigma_n$-correct by looking only below $\kappa$, say, as a limit process, since such a property would be too simple, as it could be verified inside $V_\kappa$ itself, but such verifiable-in-$V_\kappa$ properties have complexity at worst $\Delta_2$. For example, the property of being $\Sigma_n$-correct cannot be $\Sigma_n$-expressible, for then the assertion "There is a $\Sigma_n$-correct cardinal" would reflect from $V$ to $V_\kappa$, even when $\kappa$ is the least $\Sigma_n$-correct cardinal, which gives a contradiction since there are none below the least one. Meanwhile, the property of being $\Sigma_n$-correct is $\Pi_n$-expressible, since one need only say that all the instances of $\Pi_n$ truth in $V_\kappa$ are actually true.
The case of $\Sigma_2$-correct cardinals is particularly attractive, and perhaps this is an example that interests you. The $\Delta_2$ properties are precisely the properties that are local, in the sense that they can be determined in any sufficiently large $H(\theta)$. You can read more on my blog post:
It follows that a cardinal $\kappa$ is $\Sigma_2$-correct if, whenever there is an object having certain properties inside some possibly very large $H(\theta)$, then there is such an object inside such an $H(\theta)$ with $\theta<\kappa$. In other words, $\kappa$ is $\Sigma_2$-correct if whenever anything verifiable happens anywhere, then it happens inside $V_\kappa$. Alternatively, everything verifiable has already happened by the time you get to $H_\kappa$. Such a way of understanding $\Sigma_2$-correctness is extremely useful, since it aligns with how set theorists often think about verifying set-theoretic facts.
(A small matter: the distinction between $V_\kappa$ and $H_\kappa$ disappears once $n\geq 2$, since in this case the cardinals are $\beth$-fixed points and so $V_\kappa=H_\kappa$.)
Lastly, your idea of using the $\Sigma_n$-correct cardinals as a universe replacement idea is well known. This is known as the Feferman theory, and I also discussed it here on MathOverflow in my answer to the question What interesting/nontrivial results in Algebraic geometry require the existence of universes?.
• Thanks! Just to clarify, the $\Sigma_n$-correct cardinals are those which satisfy the second property I mentioned ($V_\kappa \mathrel{\prec_n} V$). The first (viz., $V_\kappa \models \mathsf{ZC} + \Sigma_n\textrm{-replacement}$), is of a different nature since it can be checked by looking at $V_\kappa$ alone. (It's also implied by $\Sigma_n$-correctness, although I'm not sure I didn't miss a $\pm1$ on the $n$ here.) So I'll wait a bit before approving your answer to see if someone (or you yourself) has something to say about that other property. – Gro-Tsen Dec 9 '15 at 22:34
• Yes, that's right. Officially, it is defined with $H_\kappa\prec_n V$, but this difference only matters for $n=1$, since once $n\geq2$ then we have $H_\kappa=V_\kappa$ as you noted. – Joel David Hamkins Dec 9 '15 at 22:37
https://mcpt.ca/problem/jobhunt
|
## Job Hunting
Points: 10
Time limit: 1.5s
Memory limit: 64M (PyPy 3: 128M)
Corey wants to climb the corporate ladder to greatness! To do so, he's going to jump between many jobs in order to increase his skills, and of course, he would prefer more jobs so he can gather more experience! Due to his connections, he found $N$ positions that will open sometime in the future. For brevity, let's say that on the $i$-th day, a job with skill level $s_i$ will open. Whenever Corey switches from one job to another, the second job must have a skill level greater than or equal to that of his current job. Additionally, Corey is currently unemployed, so he will accept a job of any skill level as his first. Can you help him find how many different positions he can try, given that he must always switch to a position of higher or equal skill level?
#### Input Specification
The first line contains one integer: $N$.
The second line contains $N$ integers, the $i$-th of which is $s_i$, the skill level of the job opening on day $i$.
#### Output Specification
One integer: the largest number of jobs Corey can try out.
#### Sample Input
6
4 2 2 6 4 5
#### Sample Output
4
#### Sample Explanation
It is best to take the second, third, fifth, and sixth jobs. Notice that their skill levels are 2 2 4 5, so each job has a skill level higher than or equal to that of the previous one. This is also the maximum number of jobs Corey can take.
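Since equal skill levels are allowed, this is the classic longest non-decreasing subsequence problem, which the standard patience-sorting technique solves in $O(N \log N)$. A minimal Python sketch (reads input in the format above; variable names are mine, not part of the problem statement):

```python
import sys
from bisect import bisect_right

def solve():
    data = sys.stdin.read().split()
    n = int(data[0])
    skills = list(map(int, data[1:n + 1]))
    # tails[k] = smallest possible final skill level of a length-(k+1) job chain
    tails = []
    for s in skills:
        i = bisect_right(tails, s)  # bisect_right, so equal levels extend a chain
        if i == len(tails):
            tails.append(s)
        else:
            tails[i] = s
    print(len(tails))

solve()
```

On the sample input, tails evolves as [4], [2], [2, 2], [2, 2, 6], [2, 2, 4], [2, 2, 4, 5], giving the answer 4.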
http://new-contents.com/Indiana/fourth-order-runge-kutta-error.html
|
# Fourth-order Runge-Kutta error
In numerical analysis, the Runge–Kutta methods (after C. Runge and M. W. Kutta) are a family of implicit and explicit iterative methods for ordinary differential equations, which includes the well-known classical fourth-order method. An $s$-stage method is specified by its coefficients: the matrix $[a_{ij}]$ is called the Runge–Kutta matrix, while the $b_i$ and $c_i$ are known as the weights and the nodes. These data are usually arranged in a mnemonic device known as a Butcher tableau. In an explicit method the matrix $[a_{ij}]$ is strictly lower triangular, so each stage

$$k_i = f\Bigl(t_n + c_i h,\; y_n + h\sum_{j<i} a_{ij} k_j\Bigr)$$

can be evaluated from the previous ones, and the step is completed by $y_{n+1} = y_n + h\sum_i b_i k_i$.

## The classical fourth-order method

Pick a step size $h>0$ and define

$$y_{n+1} = y_n + \frac{h}{6}\bigl(k_1 + 2k_2 + 2k_3 + k_4\bigr),$$

where $k_1$ is the increment based on the slope at the beginning of the interval, using $y$ (Euler's method); $k_2$ and $k_3$ are increments based on the slope at the midpoint of the interval; and $k_4$ is the increment based on the slope at the end. In averaging the four increments, greater weight is given to the increments at the midpoint. The tableau is

0    |
1/2  | 1/2
1/2  | 0    1/2
1    | 0    0    1
-----+----------------------
     | 1/6  1/3  1/3  1/6

Comparing with the Taylor series shows a local error of order $h^5$, so the global error is of order $h^4$. When the right-hand side is independent of $y$, the scheme reduces to Simpson's rule for integrating $f$ over the step. A slight variation of "the" Runge–Kutta method is also due to Kutta in 1901.

## Lower-order methods

The main reason that Euler's method has such a large truncation error per step is that, in evolving the solution from $t$ to $t+h$, the method only evaluates derivatives at the beginning of the interval. A more symmetric method is constructed by making an Euler-like trial step to the midpoint of the interval, and then using the values of $t$ and $y$ at the midpoint to make the real step across the whole interval; this midpoint scheme is a second-order method with two stages. Another second-order method with two stages is given by the tableau

0   |
2/3 | 2/3
----+----------
    | 1/4  3/4

with the corresponding equations $k_1 = f(t_n, y_n)$ and $k_2 = f\bigl(t_n + \tfrac{2}{3}h,\, y_n + \tfrac{2}{3}h k_1\bigr)$. By using two trial steps per interval, it is possible to cancel out both the first- and second-order error terms, and thereby construct a third-order Runge-Kutta method; likewise, three trial steps per interval yield a fourth-order method, and so on. For these low orders, an $n$th-order Runge-Kutta method requires $n$ evaluations of the right-hand side per step. Euler's method itself is the only consistent explicit Runge–Kutta method with one stage.

## Adaptive methods

The adaptive methods are designed to produce an estimate of the local truncation error of a single Runge–Kutta step. This is done by embedding a lower-order step

$$y^{*}_{n+1} = y_n + h\sum_{i=1}^{s} b^{*}_{i} k_i,$$

and the Butcher tableau is extended to give the values of the $b^{*}_i$. The error estimate $y_{n+1} - y^{*}_{n+1}$ is used to control the step size. The simplest adaptive pair (Heun–Euler) has the extended tableau

0 |
1 | 1
--+---------
  | 1/2  1/2
  | 1    0

## Implicit methods

All Runge–Kutta methods mentioned up to now are explicit. In an implicit method, the sum over $j$ goes up to $s$ and the coefficient matrix is not lower triangular, so a system of algebraic equations must be solved at every step; this increases the computational cost considerably, and can be contrasted with implicit linear multistep methods, the other big family of methods for ODEs. The payoff is order and stability: a Gauss–Legendre method with $s$ stages has order $2s$, so methods with arbitrarily high order can be constructed.

## Stability

Explicit methods have a strictly lower triangular matrix $A$, which implies that $\det(I - zA) = 1$ and that the stability function is a polynomial. Explicit Runge–Kutta methods are therefore generally unsuitable for the solution of stiff equations, because their region of absolute stability is small; in particular, it is bounded. By comparison, the order of A-stable linear multistep methods cannot exceed two. The A-stability concept is tied to the linear autonomous equation $y' = \lambda y$; for non-linear systems $y' = f(y)$ satisfying $\langle f(y) - f(z),\, y - z\rangle \le 0$, the corresponding concepts were defined as G-stability for multistep methods (and the related one-leg methods) and B-stability (Butcher, 1975) for Runge–Kutta methods.

## Practical step lengths

[Table 1 of the original source: the minimum practical step-length and minimum error for an $n$th-order Runge-Kutta method integrating over a finite interval using double precision arithmetic on an IBM-PC clone.] The minimum practical step-length and minimum error both improve with order, but the relative change in these quantities becomes progressively less dramatic as the order increases. Although there is no hard and fast general rule, in most problems encountered in computational physics a fourth-order Runge-Kutta method represents an appropriate compromise between the competing requirements of a low truncation error per step and a low cost per step. (Compare to 28 billion steps, taking about 3 months, for the Euler method!)

## References

- Butcher, J. C. (1963), "Coefficients for the study of Runge-Kutta integration processes", J. Austral. Math. Soc. 3 (2), 185–201, doi:10.1017/S1446788700027932.
- Butcher, J. C. (1975), "A stability property of implicit Runge-Kutta methods", BIT 15, 358–361, doi:10.1007/bf01931672.
- Forsythe, G. E.; Malcolm, M. A.; Moler, C. B. (1977), Computer Methods for Mathematical Computations, Prentice-Hall (see Chapter 6).
- Hairer, E.; Nørsett, S. P.; Wanner, G. (1993), Solving Ordinary Differential Equations I: Nonstiff Problems, Springer-Verlag, ISBN 978-3-540-56670-0.
- Iserles, A. (1996), A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, ISBN 978-0-521-55655-2.
- Lambert, J. D. (1991), Numerical Methods for Ordinary Differential Systems.
- Press, W. H.; Flannery, B. P.; Teukolsky, S. A.; Vetterling, W. T. (2007), "Section 17.1 Runge-Kutta Method", Numerical Recipes: The Art of Scientific Computing, 3rd ed., Cambridge University Press, ISBN 978-0-521-88068-8.
- Hazewinkel, M., ed. (2001), "Runge-Kutta method", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4.
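To make the fourth-order scheme concrete, here is a minimal Python sketch (the test problem $y'=y$ and the step sizes are illustrative choices; halving $h$ should cut the global error by roughly $2^4=16$):

```python
import math

def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta scheme."""
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h/6) * (k1 + 2*k2 + 2*k3 + k4)

# Integrate y' = y on [0, 1]; the error at t=1 should scale like h^4.
f = lambda t, y: y
for h in (0.1, 0.05, 0.025):
    t, y = 0.0, 1.0
    while t < 1.0 - 1e-12:
        y = rk4_step(f, t, y, h)
        t += h
    print(f"h={h:<6} global error={abs(y - math.e):.3e}")
```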
https://www.zbmath.org/?q=an%3A0555.62056
|
# zbMATH — the first resource for mathematics
Estimation of the quadratic errors-in-variables model. (English) Zbl 0555.62056
The authors have constructed an estimator of the coefficient vector $\beta$ in the quadratic functional model with errors $(e_t, u_t)$ that are independent normal random variables with zero mean and known covariance matrix. The asymptotic properties of the estimator have been studied.
Small-sample behaviour of the estimator $\hat\beta$ has been investigated using the Monte Carlo method. The results are summarized with the help of two tables. An example from the earth sciences has been analysed.
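For orientation, the quadratic functional errors-in-variables setup being reviewed has the standard form (the notation here is a sketch, not the authors' own):

$$y_t=\beta_0+\beta_1 x_t+\beta_2 x_t^2+e_t,\qquad X_t=x_t+u_t,$$

where only $(y_t, X_t)$ are observed, the true $x_t$ are fixed unknowns (the functional case), and the errors $(e_t,u_t)$ are independent normal with zero mean and known covariance matrix.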
Reviewer: U.P.Singh
##### MSC:
- 62J02 General nonlinear regression
- 62J99 Linear inference, regression
- 62E20 Asymptotic distribution theory in statistics
- 65C05 Monte Carlo methods
https://www.avrfreaks.net/comment/2667646
|
## I posted a new version of my crc tool, CRCSNTOOL in the projects area...
Hi Everyone,
It is here:
https://www.avrfreaks.net/project...
You can use it to add a crc16 and/or serial number to an ELF or HEX file and unlike srec_cat, it works with debugging (ELF support) and doesn't bloat the flash code up to the flash size.
Thanks,
Alan
I like that your tool supports a CRC with both HEX and ELF, which makes debugging easy, but it seems that you CRC the entire flash section, not just the memory described by the HEX/ELF.
I have need to:
1) Allow an application to self-verify at startup
2) Allow a bootloader to see if there is a valid application in the application flash section
crcsntool fails in this usage because it CRCs the entire flash, not just the application itself.
Is there a reason that you CRC more than just the area that is actually contained/initialized in the HEX/ELF itself?
Are you willing to release the source to crcsntool so that I can adapt it to my situation?
Regards,
Chuck Hackett
Try this as an alternative.
Hi ChuckH,
#1 is for sure supported as I've always done this (but I can see it not working in a bootloader situation). The tool excludes the signature string from the CRC calculation which allows the signature/CRC to be embedded in the main code.
#2 is something I am doing now, so I have some code that can do this. The project above looks like it is from 2015 so it is a bit dated. Let me look through what I have now and see what I can get for you. I am very swamped right now, so it might be a day or so.
Thanks,
Alan
Hi ChuckH,
Ok, I put together a zip file for you. This should contain everything you need. The text file in the zip also contains code from a app/bootloader project I'm using this in to give some examples of how to use it.
I also included the crcsntag source code (main.c) so you can see how it reads/modifies HEX/ELF files too or alter it if you need to. I compile it with Borland C++ Builder 6, but it is pretty stock C/CPP stuff so you can likely recompile it with something else...
From the text file in the zip:
CRCSNTAG 1.10
_____________
I previously used SREC_CAT to add CRC protection to firmware images, but it has a couple of drawbacks. The first is that it only works with HEX files, but ELF files are required for debugging or for using the convenient CTRL-ALT-F5 program and flash command which allows you to skip opening the programming dialog each time. The second drawback is that it expands the firmware image size to some static value (often the full size of the flash) which makes programming and verifying take considerably longer.
CRCSNTAG addresses both of these issues! It works with both HEX and ELF files. It also embeds a crc16 or crc32 into a tag within the firmware, so that the firmware expands by only 9-17 bytes. It knows the firmware size and only tests the active bytes. It can also optionally embed a 4 byte serial number as well, which is compatible with my autoprogrammer product (it can automatically increment the serial number for each device it programs). A library is provided with functions to test the crc, get the serial number, or search for the tag (in the case of a bootloader, it doesn't know the location of the application tag, so it needs to find it).
Bootloader support has been thought out, and you can specify a start address so that the bootloader can have its own tag to validate itself, and the application can have its own tag to validate itself. You can choose to put a serial number in either tag, but I would recommend putting it in the bootloader tag so you can release an application update that is not serial number specific. The tag can also be used to locate other information externally - you can add a version number after the tag in the application section that the bootloader can retrieve and provide to the user. The self test makes for a recoverable system where, if the application code does not pass the CRC check, it can stay in the bootloader waiting for a good application update. As always, it makes sense to have something such as a button also trigger staying in the bootloader, so that the option to replace application section code is always present.
The tag uses a special 4 byte preceding signature (cRcM). This must only be in your compiled code in one location as there can only be one tag in the compiled code. You can combine a bootloader and application into a single HEX file if you want AFTER using the crcsntag tool, but you must sign the HEX files individually beforehand.
More from the crcsntag.txt file:
crcsntag.c/.h contain 3 functions:
findtag - searches from a start address through a stop address to find a tag. An application or bootloader KNOWS its own tag location and can simply reference it by name, but a bootloader that needs to find the application tag would use this function.
testcrc - tests the crc of the tag against what is in flash.
getserial - retrieves the serial number from the tag, if it has one.
crcsntag_settings.c/.h:
the .h has options to enable crc32 instead of crc16, and flash of more than 64K or not.
the .c has example tags; you can enable the one you need (crc16 or crc32, with or without serialno).
_____
In AVR Studio you can automatically call this tool as a post build event by going to project properties, build events, post build event command line and add:
My main application build post build event command line:
crcsntag "$(MSBuildProjectDirectory)\$(Configuration)\$(OutputFileName).hex" "$(MSBuildProjectDirectory)\$(Configuration)\$(OutputFileName).elf"
My bootloader build post build event command line (note the -start=0x7000 difference!)
crcsntag "$(MSBuildProjectDirectory)\$(Configuration)\$(OutputFileName).hex" "$(MSBuildProjectDirectory)\$(Configuration)\$(OutputFileName).elf" -start=0x7000
_____
Some example code:
I have a project with a bootloader and app and I chose to include the crcsntag.c/.h/crcsntag_settings.c/.h files only in the bootloader. The application needed a tag so I just copied the correct tag into it, but it doesn't do anything with the crcsntag functions as the bootloader does anything relevant with that already.
bootloader main.c - this is code snippets, some code not shown like all variable declarations, etc.
#include "crcsntag.h"
#define FUSE_EXTENDED 0xfd
#define FUSE_HIGH 0xd0
#define FUSE_LOW 0xc2
#define FUSE_LOCK 0xcc
int main(void)
{
//test crc
if (boot_lock_fuse_bits_get(GET_EXTENDED_FUSE_BITS)!=FUSE_EXTENDED ||
boot_lock_fuse_bits_get(GET_HIGH_FUSE_BITS)!=FUSE_HIGH ||
boot_lock_fuse_bits_get(GET_LOW_FUSE_BITS)!=FUSE_LOW ||
boot_lock_fuse_bits_get(GET_LOCK_BITS)!=FUSE_LOCK ||
!testcrc(BOOT_LOCATION,(uint16_t)MsgCRCM))
{
//halt so it will not be responsive and this error condition will be detected
for(;;)
;
}
//test for enter bootloader condition here UPD
if (PIND==0xbf && PINC==0xff && PINA==0xfe && PINF==0xfe)
goto update;
//get serial
if (!getserial(&sn,(uint16_t)MsgCRCM))
sn=0;
//does the app have a tag and its crc is good
if (findtag(0,BOOT_LOCATION,&ui1) && testcrc(0,ui1))
{
//store sn in GPIOR0 so it can be retrieve by app
GPIOR0=((uint8_t*)&sn)[0];
GPIOR1=((uint8_t*)&sn)[1];
//execute application
asm("jmp 0");
}
update:
//more code
}
Application main.c:
tag so crcsntag will find/update it:
const uint8_t PROGMEM MsgCRCM[]={'c','R','c','M', 0,0,0, 0,0,};
volatile uint8_t *dummy_ptr;
int main()
{
//grab serialno from bootloader
serialno=GPIOR0 | (GPIOR1<<8);
//make sure gcc doesn't optimize out our darn tag!
dummy_ptr=(uint8_t*)MsgCRCM;
//more code
}
ALSO - you can include the crcsntag code in the main application and it could do a findtag to locate the bootloader tag and then get the serial out of the bootloader directly instead of my passing 16 bits of it using registers in the example above.
Hi Alan,
I had just gotten crcgen to work (it was missing the code to skip the CRC in the signature) but crcsntag looks like it will do everything I need - thanks much.
I don't serial number my firmware because it will be uploaded to my website and downloaded by customers to load into their controllers but I do store a version ID for feature/bug tracking.
This is for one of my hobbies - Live Steam Railroads (7.5" gauge). You can see my locomotive (about 2,500 lbs) here:
http://www.whitetrout.net/Chuck/images/8444%20Joan%20&%20Chuck%20at%20KC.jpg
I have developed an automatic signal system for these railroads which I sell. Not a big market but the system is installed at 4 railroads and I have not started advertising yet.
Currently the controller must be loaded via the USART bootloader but I am in the process of upgrading the bootloader so that one/several/all controllers (our railroad has 19 controllers installed) can be loaded at once over the CAN data bus that they use to communicate.
http://www.minirailsolutions.com/
There are currently about 28,000 lines of code in the controller (ATMega1284) but one of the biggest challenges is lightning:
http://www.minirailsolutions.com/living-with-lightning/
Thanks again ...
Regards,
Chuck
Super - I spent a lot of time with crcsntag trying to get it to do _everything_ I'd ever need so I didn't have to go back and rework it again later.
Great pictures - impressive sir!
ChuckH wrote:
... but one of the biggest challenges is lightning:
Thanks for the good read!
http://www.minirailsolutions.com/living-with-lightning/
...
The device is not an insulator, it’s a lightning arrester.
...
This will provide an “air gap” to protect the power supply and signal system when the system is turned off (which is usually 90% of the time).
...
Y'all are well placed in Florida for lightning testing operations (consistent thunderstorms)
Oil field equipment also have lightning arrestors to protect contactors, motors, and etc.
MOV can protect electronics though these tend to, with exposure, short, partially short, or become very leaky.
A TBU can be after a MOV for additional protection.
Lightning sensors can be in operator's equipment such that operators evacuate customers to shelter in a walled structure then an operator throws the system's switch.
Delta Lightning Arrestors - How Arrestors Work
MOV - Metal Oxide Varistor
SOV - Silicon Oxide Varistor
Delta SOV for 120V AC or DC
Delta's MOV : Delta Data Line Circuit Protectors
Bourns - TBU High-Speed Protectors (HSPs)
Incoming Storm A Lightning Detector from ams | DigiKey
"Dare to be naïve." - Buckminster Fuller
gchapman wrote:
Y'all are well placed in Florida for lightning testing operations (consistent thunderstorms)
Lightning capital of the world! :-)
gchapman wrote:
MOV can protect electronics though these tend to, with exposure, short, partially short, or become very leaky.
A TBU can be after a MOV for additional protection.
Yes, the reason I did not go with MOVs is that they degrade. I use a TVS diode on the track input and a fuse to protect the TVS diode (TVS diodes usually fail 'shorted' which is better in this case). The signal then goes to an RC conditioning filter and directly to an ADC input on the ATMega1284. I can draw a 3/16" arc from a 6,000vac supply to the track input terminal without harm.
The power (30vdc) and CAN bus were more difficult. I started with fuse/TVS but that was not sufficient. I have now gone to 75v Gas Discharge Tube (GDT) and inductor on the power and GDTs and Bourns TBUs on the CAN bus. Looks good so far but we have not been through a full lightning season with them yet.
BTW: Why do you have different code in crcsntag and testcrc()? I have not studied them in detail yet but so far I am getting mismatched CRCs. The first couple of bytes compared OK, but after about 12k (of 78k) I have a mismatch. I'm going to cut back from 12k gradually and see if I can see where they diverge.
Chuck
OOps, that crcntag stuff was for Alan :-)
Chuck
ChuckH wrote:
BTW: Why do you have different code in crcsntag and testcrc()?
Do you mean the crssntag.exe (from main.c) vs testcrc() from crcsntag.c ? What is different?
ChuckH wrote:
I have not studied them in detail yet but so far I am getting mismatched CRCs. First couple bytes compared ok by after about 12k (of 78k) i have a mismatch. I'm going to cut back from 12k gradually and see if I can see where they diverge.
I don't understand how you know some of it is ok and some not. The crcsntag tool figures out the size of a hex/elf and calculates a crc for that entire blob of memory. When you compile if you are using the post build command you can see the crcsntag executeable run and it will say what the length was, the crc calculated, etc. For example:
crcsntool test.hex [hex, found 0x151, size 25206, crc 0x01A0]
crcsntool test.elf [elf, found 0x151, size 25206, crc 0x01A0]
Thanks,
Alan
Are you using the >64K option? I'm can't remember if I tested that.
ChuckH wrote:
You can see my locomotive (about 2,500 lbs) here:
Wow - that's a pretty serious bit of kit!!
ChuckH wrote:
I started with fuse/TVS but that was not sufficient.
TVS destruction?
PolyZen have an automotive power use case though lightning is an order of magnitude worse than automotive and PolyZen's time duration to clamp is a few orders of magnitude slower than a TVS (some regulator pass transistors can withstand the avalanche though some won't)
PolyZen Devices for Overvoltage-Overcurrent Protection - Littelfuse
PolyZen Device Fundamentals
(page 2, right column, bottom half)
Transient Protection Design Considerations
Transient protection is especially critical when designing peripherals that may be powered off computer buses and automotive power buses. Automotive power buses are notoriously dirty. Although they are nominally 12V, they can range in normal operation from 8V to 16V. Still, battery currents can exceed 100 Amps and be stopped instantly via a relay or fuse, generating large inductive spikes on the bus and increasing voltage by 5 times or more. In operation, automotive supplies are subject to damage from misconnected batteries [1] and double battery jumpstarts (24V) [2]. A condition known as “load dump” can also generate large voltages on the bus [3].
[1] Correct battery replacement sequence is +, -, -, +
[2] IIRC, there's a procedure to jump start a 12V automobile from a 24V automobile (semi-tractor, heavy wrecker, medium truck)
[3] Up to approximately 300V plus or minus
"Dare to be naïve." - Buckminster Fuller
clawson wrote:
ChuckH wrote:
You can see my locomotive (about 2,500 lbs) here:
Wow - that's a pretty serious bit of kit!!
Especially when it comes off the track ... and the boiler is HOT (350 degrees with steam at 125 psi) ...
Chuck
alank2 wrote:
Are you using the >64K option? I'm can't remember if I tested that.
Yes, I am using the >64k option. Actual code size: 78654
Chuck
ChuckH wrote:
This is for one of my hobbies - Live Steam Railroads (7.5" gauge). You can see my locomotive (about 2,500 lbs) here:
Nice looking 4-8-4! My cousin was in to that too, last time I saw his it had not yet been painted. Sadly he has passed, so I don't know what became of it....
Jim
(Possum Lodge oath) Quando omni flunkus, moritati.
"I thought growing old would take longer"
alank2 wrote:
ChuckH wrote:
BTW: Why do you have different code in crcsntag and testcrc()?
Do you mean the crssntag.exe (from main.c) vs testcrc() from crcsntag.c ? What is different?
Yes, I am using Atmel Studio post-build step of:
crcsntag "$(MSBuildProjectDirectory)\$(Configuration)\$(OutputFileName).hex" "$(MSBuildProjectDirectory)\$(Configuration)\$(OutputFileName).elf"
code from cr[c]sntag.exe (from main.c) (in "UpdateSignature"):
uint16_t _crc16_update(uint16_t ACRC, uint8_t AValue)
{
uint8_t c1,c2;
//xor value
ACRC^=AValue;
//for each bit
for (c1=0; c1<8; c1++)
{
/* if (ACRC&1)
ACRC=(uint16_t)((ACRC>>1)^0xA001);
else ACRC=(uint16_t)(ACRC>>1);*/
//remember lsb
c2=(uint8_t)(ACRC & 1);
//shift 1 bit
ACRC>>=1;
//if lsb apply polynomial
if (c2)
ACRC^=0xA001;
}
return ACRC;
}
... and in UpdateSignature ...
{
//calculate crc16
crc16=0xffff;
for (ul1=startaddr; ul1<imagesize; ul1++)
if (ul1<sigpos || ul1>=sigpos+SIG_SIZE)
crc16=_crc16_update(crc16,image[ul1]);
crc16=(uint16_t)~crc16;
printf(", crc 0x%04X",crc16);
memcpy(sigstr+7,&crc16,2);
}
.vs. in testcrc:
//calculate crc
{
{
#ifdef CRCSNTAG_ENABLE_CRC32
....
#endif
{
for (c1=0;c1<8;c1++)
{
c2=(uint8_t)(crc16 & 1);
crc16>>=1;
if (c2)
crc16^=0xA001;
}
}
}
break;
}
... as I said, I am still investigating, but I noticed that they were different code sections ... BUT ... they may be identical in function. I still have to step the code and check stuff to be sure I'm using it correctly. I just thought it was strange that the function wasn't used by "testcrc". I am probably doing something wrong and not ready to question it yet until I verify that they are, in fact, operating on the exact same code image.
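Both excerpts implement the same reflected CRC-16 (polynomial 0xA001, initial value 0xFFFF, final one's complement, with the signature bytes skipped), so they should agree byte-for-byte on the same image. For off-target sanity checks, here is a minimal Python sketch of that computation (the 9-byte default tag size matches the crc16-without-serial example later in the thread; adjust it for other tag options):

```python
def crc16_update(crc, byte):
    # Same bit loop as _crc16_update above: reflected CRC-16, polynomial 0xA001.
    crc ^= byte
    for _ in range(8):
        lsb = crc & 1
        crc >>= 1
        if lsb:
            crc ^= 0xA001
    return crc

def image_crc16(image, sig_pos, sig_size=9, start=0):
    # CRC the image bytes from 'start', skipping the embedded tag, then
    # return the one's complement, as the signing tool does.
    crc = 0xFFFF
    for i in range(start, len(image)):
        if i < sig_pos or i >= sig_pos + sig_size:
            crc = crc16_update(crc, image[i])
    return (~crc) & 0xFFFF
```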
alank2 wrote:
ChuckH wrote:
I have not studied them in detail yet but so far I am getting mismatched CRCs. First couple bytes compared ok by after about 12k (of 78k) i have a mismatch. I'm going to cut back from 12k gradually and see if I can see where they diverge.
I don't understand how you know some of it is ok and some not. The crcsntag tool figures out the size of a hex/elf and calculates a crc for that entire blob of memory. When you compile if you are using the post build command you can see the crcsntag executeable run and it will say what the length was, the crc calculated, etc. For example:
crcsntool test.hex [hex, found 0x151, size 25206, crc 0x01A0]
crcsntool test.elf [elf, found 0x151, size 25206, crc 0x01A0]
Thanks,
Alan
Sorry for the confusion, I'll try to clarify: When I found that I was coming up with a different crc, I verified (by breakpoints) that they were both operating on the same range (0 : 78654). I then set a breakpoint in crcsntag.exe and testcrc as they did the first couple of bytes (both starting at 0) and it was OK. Then I tried to set a conditional breakpoint after they had both done 0x3000 worth of code to verify the crc, but Atmel Studio wasn't cooperating (it always broke, instead of only when "addr > 0x3000"). I will investigate more today.
Hope that clarifies it. I'll let you know what I find ...
Regards,
Chuck
Chuck, did you enable this line in the crcsntag_settings.h (by removing the //)?
//#define CRCSNTAG_ENABLE_FAR //enable if you are using >64K flash
alank2 wrote:
Chuck, did you enable this line in the crcsntag_settings.h (by removing the //)?
//#define CRCSNTAG_ENABLE_FAR //enable if you are using >64K flash
Yes ...
I am in the middle of a 'binary search' (setting breakpoints in crcsntag and testcrc at different addresses values). I have narrowed the issue down to somewhere between 2039 & 4080 ... getting close!
FYI: tag is at 1397 and size is 78752
Chuck
Very odd. I've got to look and see if I have an easy to program part over 64K to test it with, so I might be able to look at it a bit this weekend. Maybe you will figure it out first!!
Another test - remark out some of your code so it drops below 65536 bytes. Does it work/fail then?
If it fails then, disable the tag above, Does it work then?
The code difference is probably me trying to optimize the C to produce better ASM on the AVR side.
I found a MEGA1284P project I had laying around and expanded it to 93K flash. Added crcsntag to it and it works. I tried tags with and without serial number successfully. If I disable the >64K define it fails, as expected. Are you testing it in the bootloader or in the app?
alank2 wrote:
Very odd. I've got to look and see if I have an easy to program part over 64K to test it with, so I might be able to look at it a bit this weekend. Maybe you will figure it out first!!
Another test - remark out some of your code so it drops below 65536 bytes. Does it work/fail then?
If it fails then, disable the tag above, Does it work then?
The code difference is probably me trying to optimize the C to produce better ASM on the AVR side.
I have finished my 'binary search' test with the following:
• It matches until it attempts to CRC location 2138 (0x085A)
• At this point the accumulation (current CRC) is 10049 (0x2741)
• The byte it is trying to add to the CRC is 0x63 ***
• The result on the PC is 6567 (0x19A7)
• The result on the ATMega1284p is 39654 (0x9AE6)
*** News flash: for some reason that I can not explain, when the loop in the AVR goes to do a "PGM_READ_BYTE(addr)" and "addr" is 2138 (0x085A) the value returned is NOT the value shown as being in that location by the debug/memory window (BTW: using JTAG) nor the value shown in the HEX file for that location.
The following (unoptimized) code:
crc16 = _crc16_update(crc16, PGM_READ_BYTE(addr));
compiles to:
6c88: 8a 81 ldd r24, Y+2 ; 0x02
6c8a: 9b 81 ldd r25, Y+3 ; 0x03
6c8c: ac 81 ldd r26, Y+4 ; 0x04
6c8e: bd 81 ldd r27, Y+5 ; 0x05
6c90: 8a 8f std Y+26, r24 ; 0x1a
6c92: 9b 8f std Y+27, r25 ; 0x1b
6c94: ac 8f std Y+28, r26 ; 0x1c
6c96: bd 8f std Y+29, r27 ; 0x1d
6c98: 8a 8d ldd r24, Y+26 ; 0x1a
6c9a: 9b 8d ldd r25, Y+27 ; 0x1b
6c9c: ac 8d ldd r26, Y+28 ; 0x1c
6c9e: bd 8d ldd r27, Y+29 ; 0x1d
6ca0: ab bf out 0x3b, r26 ; 59
6ca2: fc 01 movw r30, r24
6ca4: 87 91 elpm r24, Z+
6ca6: 8e 8f std Y+30, r24 ; 0x1e
6ca8: 2e 8d ldd r18, Y+30 ; 0x1e
6caa: 8e 81 ldd r24, Y+6 ; 0x06
6cac: 9f 81 ldd r25, Y+7 ; 0x07
6cae: 62 2f mov r22, r18
6cb0: 07 df rcall .-498 ; 0x6ac0 <_crc16_update>
Now, I'm no expert at AVR assembler but, setting a breakpoint at 6CA4 (program read) shows the Z register contains 0x085A. Stepping to the next instruction shows a value of 0x98 loaded into R24 ... not 0x65 !
This (wrong value) continues for the next location (then I was tired and confused).
Just now, researching the ELPM instruction I discovered this (see ATMega1284p datasheet, section 7.5.2):
RAMPZ – Extended Z-pointer Register for ELPM/SPM(1)
For ELPM/SPM instructions, the Z-pointer is a concatenation of RAMPZ, ZH, and ZL, as shown
in Figure 7-4 on page 15. Note that LPM is not affected by the RAMPZ setting.
Figure 7-4. The Z-pointer used by ELPM and SPM
The actual number of bits is implementation dependent. Unused bits in an implementation will
always read as zero. For compatibility with future devices, be sure to write these bits to zero.
Followed ominously by:
Note: 1. RAMPZ is only valid for ATmega1284/ATmega1284P
Since I'm using the ATMega1284p this warrants more study ... I'll get back to you, probably Sunday as I have a commitment all day tomorrow.
The good news is that it does not appear to be your code ...
GCC not being told the correct stuff about the ATMega1284p? ... Stay tuned ...
Regards,
Chuck
App ... but the BOOTRST fuse is programmed.
CPU registers show execution in normal app space.
Chuck
Hi Chuck,
I've dug into it a bit too and one of the confusing things I think is that in gcc, it treats PROGMEM variables as 16-bit pointers (such as the MsgCRCM tag). I don't think this is a huge issue because I've seen some messages about how gcc packs the PROGMEM's into the first 64K anyway, but there is a function pgm_get_far_address() that I *think* I've used in a bootloader before to get a correct pointer (I could be wrong about this). That only has to do with using the MsgCRCM reference, if searching for it, it should be fine. Even then, it might only be an issue in a bootloader loaded above 64K referencing its MsgCRCM tag.
I'm not following post #24...
Thanks!
Alan
Well, I couldn't go to bed until I checked the value of the RAMPZ when doing the good and bad memory reads ...
The RAMPZ was always 0 as it should be in the lower 64k (As shown in the Atmel Studio "I/O" pane).
... so I am at a loss at the moment ... I have no idea why the ELPM instruction is getting the wrong byte ...
A long time ago I had the honor (?) of finding a microcode bug in a fairly new computer on the market (Tandem) ... this kind of smells like that ... but, I've been in this business too long to go blaming something on compiler or hardware without rock-solid evidence ... so I'm not going there now ...
I'm going to drag myself to bed where I will probably lie awake thinking about it ...
Chuck
alank2 wrote:
I'm not following post #24...
Chuck
I've been there Chuck!!! Those unexplained things can be vexing!
So you have BOOTRST set, but you are testing the crc in the app section? Do you have the bootloader as a separate project? I've had BOOTRST set before and when I flash the app alone, it may start at the BOOTRST address, but if it is empty, it rolls around and eventually gets to run the app section...
Hi Alan,
Unless I'm missing something, the:
PGM_READ_BYTE(addr)
(due to CRCSNTAG_ENABLE_FAR being defined) resolves to (in pgmspace.h):
/** \ingroup avr_pgmspace
Read a byte from the program space with a 32-bit (far) address.
\note The address is a byte address.
The address is in the program space. */
#define pgm_read_byte_far(address_long) __ELPM((uint32_t)(address_long))
which handles 32-bit addresses, and this would seem to be borne out by the code quoted in #23, where there is the "OUT" to RAMPZ and the ELPM, so it should work.
Stepping through RAMPZ and the Z register look fine -- unless Atmel Studio is wrong ...
Regards,
Chuck
Correct. Could there be something corrupting the flash? If you read the byte that comes back the wrong value in your main function before calling the testcrc function using the pgm_read_byte_far, what does it return? Is it what it should return?
alank2 wrote:
So you have BOOTRST set, but you are testing the crc in the app section?
Yes ...
alank2 wrote:
Do you have the bootloader as a separate project? I've had BOOTRST set before and when I flash the app alone, it may start at the BOOTRST address, but if it is empty, it rolls around and eventually gets to run the app section...
Yes, separate project. My controller normally runs with a bootloader in memory but, when testing with Studio it's easier to just have the app loaded and I have not had an issue in the past when it just rolls off the end to 0.
Chuck
ChuckH wrote:
with Studio it's easier to just have the app loaded and I have not had an issue in the past when it just rolls off the end to 0.
I do the same thing and agree.
alank2 wrote:
Correct. Could there be something corrupting the flash? If you read the byte that comes back the wrong value in your main function before calling the testcrc function using the pgm_read_byte_far, what does it return? Is it what it should return?
If I understand what you are asking:
I placed this before the call:
volatile uint16_t TheByte;
.... loop ...
TheByte = pgm_read_byte_far( addr );
For location 0x0859 it returned 0x01 (correct)
For 0x085A it returned 0x98 (wrong)
For 0x085B it returned 0x95 (wrong)
For 0x085C it returned 0x98 (wrong)
....
Chuck
I suspect there is something residual in a register that ELPM is using that I don't know is there ...
I can't imaging there is anything special about location 0x085A ...
Ok, it had to be done ... I have been testing with this controller for awhile, so I switched to a different board ...
... No change ...
Chuck
Well, it gets stranger ...
I placed this and a call to it just about first thing in my app:
void DumbTest( void )
{
volatile uint8_t TheByte;
for ( uint32_t Address = 2136; Address < 3000; Address++ )
TheByte = pgm_read_byte_far( Address );  /* read each byte, same pattern as the earlier pgm_read_byte_far tests */
}
... and it worked (code bytes were shifted slightly due to code change in lower code space).
When I added it just ahead of "testcrc" and a call to it first thing in testcrc ... it failed !
testcrc is higher in memory (0x35BE) but don't see how that makes a difference.
Ok, it's 12:02 AM here ...
Chuck
I think the out 0x3b, r26 is setting the RAMPZ register. I see that in your code example above - do you see it in your tests in post 35/36?
What does removing the loop do? What should the 3 bytes be?
volatile uint16_t TheByte;
TheByte = pgm_read_byte_far( 0x085A );
TheByte = pgm_read_byte_far( 0x085B );
TheByte = pgm_read_byte_far( 0x085C );
alank2 wrote:
What does removing the loop do?
alank2 wrote:
Ok, I put together a zip file for you.
Ok, things seem to be going better now.
I have been doing some testing ...
Be aware that, if crcsntag is not run on the hex/elf file (user forgot, gave it the wrong file, etc.) the tag will be cRcM followed by 0x00's. This causes your code to have a 'startaddr' of 0 AND a 'stopaddr' of 0. On a 32-bit processor this causes it to loop for a LOOOOONG time even at 20 MHz. I assume it would eventually fail but I didn't wait that long :-)
Is there a reason you don't use the value of the linker symbol "__data_load_end"?
You can get it in C with:
//
// Get the 32-bit address of the linker symbol "__data_load_end"
//
// Note: Not doable in WinAVR because GCC treats pointers as 16 bits
//
uint32_t data_load_end(void)  /* wrapper signature added for completeness; the original post omitted it */
{
uint32_t tmp;
asm volatile("ldi %A0, lo8(__data_load_end)" "\n"
"ldi %B0, hi8(__data_load_end)" "\n"
"ldi %C0, hlo8(__data_load_end)" "\n"
: "=r"(tmp));
return tmp & 0x00FFFFFFUL;  /* the asm sets only the low three bytes, so mask the undefined high byte */
}
This way it would always be valid (assuming you want to CRC all of flash).
Chuck
Hi Chuck,
The code that signs it with the crc16 already knows the length so it embeds the length along with the crc16 at the same time. You could though modify the tag to have something like the above automatically done however. You can't forget to do the crcsntag command if it is a post build instruction (unless you disable it!). Let me know if you come up with an improvement!!
Thanks,
Alan
http://nodus.ligo.caltech.edu:8080/40m/page325?rsort=Type&attach=0
|
40m Log, Page 325 of 327
ID Date Author Type Category Subject
15145 Thu Jan 23 15:32:42 2020 gautam Configuration Computers Megatron: starts up grade
The burt snapshotting is still not so reliable - for whatever reason, the number of snapshot files that actually get written looks random. For example, the 14:19 backup today got all the snaps, but 15:19 did not. There are no obvious red flags in either the cron job logs or the autoburt log files. I also don't see any clues when I run the script in a shell. It'll be good if someone can take a look at this.
15150 Thu Jan 23 23:07:04 2020 Jon Configuration PSL c1psl breakout board wiring
To facilitate wiring the c1psl chassis and scripting loopback tests, I've compiled a distilled spreadsheet with the Acromag-to-breakout board wiring, broken down by connector. This information is extractable from the master spreadsheet, but not easily. There were also a few apparent typos which are fixed here.
The wiring assignments at the time of writing are attached below. Here is the link to the latest spreadsheet.
15158 Mon Jan 27 14:01:01 2020 Jordan Configuration General Repurposed Sorenson Power Supply
The 24 V Sorenson (2nd from bottom) in the small rack west of 1x2 was repurposed to 12V 600 mA, and was run to a terminal block on the north side of 1X1. Cables were routed underneath 1X1 and 1X2 to the terminal blocks. 12V was then routed to the PSL table and banana clip terminals were added.
15159 Mon Jan 27 18:16:30 2020 gautam Configuration Computers Sluggish megatron?
I've also been noticing that the IMC Autolocker scripts are running rather sluggishly on Megatron recently. Some evidence - on Feb 11 2019, the time between the mcup script starting and finishing is ~10 seconds (I don't post the raw log output here to keep the elog short). However, post upgrade, the mean time is more like ~45-50 seconds. Rana mentioned he didn't install any of the modern LIGO software tools post upgrade, so maybe we are using some ancient EPICS binaries. I suspect the cron job for the burt snapshot is also just timing out due to the high latency in channel access. Rana is doing the software install on the new rossa, and once he verifies things are working, we will try implementing the same solution on megatron. The machine is an old Sun Microsystems one, but the system diagnostics don't signal any CPU timeouts or memory overflows, so I'm thinking the problem is software related...
Quote: The burt snapshotting is still not so reliable - for whatever reason, the number of snapshot files that actually get written looks random. For example, the 14:19 backup today got all the snaps, but 15:19 did not. There are no obvious red flags in either the cron job logs or the autoburt log files. I also don't see any clues when I run the script in a shell. It'll be good if someone can take a look at this.
15164 Tue Jan 28 15:39:04 2020 gautam Configuration Computers Sluggish megatron?
There were a bunch of medm processes stalled on megatron (connected with screenshot taking). To see if they were interfering with the other scripts, I killed all of the medm processes, and commented out the line in the crontab that runs the screenshots every 10 mins. Let's see if this improves stability.
15167 Tue Jan 28 17:36:45 2020 gautam Configuration Computers Local EPICS7.0 installed on megatron
[Jon, gautam]
We found that the caput commands were taking much longer to execute on megatron than on pianosa (for example). Suspecting that this had something to do with the fact that megatron was using EPICS binaries from the shared NFS drive which were compiled for a much older OS, I installed the latest stable release of EPICS on megatron. The new caput commands execute much faster. I also added the local EPICS directory to the head of the $PATH variable used by the MC autolocker and FSS Slow scripts, so that they use the new caput command. But mcup is still slow - maybe my new path definition isn't picked up and it is still using the NFS binaries? To be looked into...

Quote: There were a bunch of medm processes stalled on megatron (connected with screenshot taking). [...]

15168 Tue Jan 28 19:12:30 2020 Jon Configuration PSL Spare channels added to c1psl chassis
After some discussion with Gautam, I decided to build more spare channels into the new c1psl machine. This is in anticipation of adding new laser and ISS channels in the near future, to avoid having to disconnect the installed chassis and pull it out of the rack. The spare channels will be wired to DB37M feedthroughs on the front side of the chassis, with enough wire length to be able to pull the breakout boards out of the front to reconfigure their wiring as needed (e.g., split off channels onto a separate connector). To have enough overhead, this will require installing 1 additional ADC unit (XT1221) and 1 additional DAC (XT1541). We have enough spare BIO channels among the existing units (both sinking and sourcing). This will give us:
• 13 spare ADC channels
• 14 spare DAC channels
• 16 spare sinking BIO channels
• 12 spare sourcing BIO channels
The updated c1psl chassis wiring assignments are attached. It adds 4 new DB37M connectors for the spare channels (highlighted in yellow) and fixes one typo Jordan found while wiring today. The most current spreadsheet is available here.

15421 Mon Jun 22 10:43:25 2020 Jon Configuration VAC Vac maintenance at 11 am
The vac system is going down at 11 am today for planned maintenance:
• Re-install the repaired TP2 and TP3 dry pumps [ELOG 15417]
• Incorporate an auto-mailer and flag channel into the controls code for signaling tripped interlocks [ELOG 15413]
We will advise when the work is completed.

15424 Mon Jun 22 20:06:06 2020 Jon Configuration VAC Vac maintenance complete
This work is finally complete. The dry pump replacement was finished quickly, but the controls updates required some substantial debugging. For one, the mailer code I had been given to install would not run against Python 3.4 on c1vac, the version run by the vac controls since about a year ago. There were some missing dependencies that proved difficult to install (related to Debian Jessie becoming unsupported). I ultimately solved the problem by migrating the whole system to Python 3.5. Getting the Python keyring working within systemd (for email account authentication) also took some time.
Edit: The new interlock flag channel is named C1:Vac-interlock_flag.
Along the way, I discovered why the interlocks had been failing to auto-close the PSL shutter: the interlock was pointed to the channel C1:AUX-PSL_ShutterRqst. During the recent c1psl upgrade, we renamed this channel C1:PSL-PSL_ShutterRqst. This has been fixed.
The main volume is being pumped down, for now still in a TP3-backed configuration. As of 8:30 pm the pressure had fallen back to the upper 1E-6 range. The interlock protection is fully restored. Any time an interlock is triggered in the future, the system will send an immediate notification to the 40m mailing list. 👍
Quote: The vac system is going down at 11 am today for planned maintenance: [...]

15425 Tue Jun 23 17:54:56 2020 rana Configuration VAC Vac maintenance complete
I propose we go for all CAPS for all channel names. The lower case names is just a holdover from Steve/Alan from the 90's. All other systems are all CAPS. It avoids us having to force them all to UPPER in the scripts and channel lists.

15446 Wed Jul 1 18:03:04 2020 Jon Configuration VAC UPS replacements
I looked into how the new UPS devices suggested by Chub would communicate with the vac interlocks. There are several possible ways, listed in order of preference:
• Python interlock service directly queries the UPS via a USB link using the (unofficial) tripplite package. Direct communication would be ideal because it avoids introducing a dependency on third-party software outside the monitoring/control capability of the interlock manager. However the documentation warns this package does not work for all models...
• Configure Tripp Lite's proprietary software (PowerAlert Local) to send SYSLOG event messages (UDP packets) to a socket monitored by the Python interlock manager.
• Configure the proprietary software to execute a custom script upon an event occurring. The script would, e.g., set an EPICS flag channel which the interlock manager is continually monitoring.
I recommend we proceed with ordering the Tripp Lite 36HW20 for TP1 and Tripp Lite 1AYA6 for TP2 and TP3 (and other 120V electronics). As far as I can tell, the only difference between the two 120V options is that the 6FXN4 model is TAA-compliant.

15465 Thu Jul 9 18:00:35 2020 Jon Configuration VAC UPS replacements
Chub has placed the order for two new UPS units (115V for TP2/3 and a 220V version for TP1). They will arrive within the next two weeks.
Quote: I looked into how the new UPS devices suggested by Chub would communicate with the vac interlocks. [...]
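Of the options listed in ELOG 15446 above, the SYSLOG route needs only a UDP socket on the interlock host. A minimal sketch of such a listener (the port number and the event callback are hypothetical; real code would also parse the PowerAlert message format):

```python
import socket

SYSLOG_PORT = 5140  # hypothetical; PowerAlert would be pointed at this port

def watch_ups_events(on_event):
    """Block on a UDP socket and hand each raw SYSLOG datagram to on_event()."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", SYSLOG_PORT))
    while True:
        datagram, addr = sock.recvfrom(4096)
        on_event(datagram.decode(errors="replace"), addr)

if __name__ == "__main__":
    # The interlock manager would set a flag channel here instead of printing.
    watch_ups_events(lambda msg, addr: print(f"UPS event from {addr}: {msg}"))
```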
15510 Sat Aug 8 07:36:52 2020 Sanika Khadkikar Configuration Calibration-Repair BS Seismometer - Multi-channel calibration

Summary: I have been working on analyzing the seismic data obtained from the 3 seismometers present in the lab. While looking at the combined time series and the gain plots of the 3 seismometers, I noticed that there is some error in the calibration of the BS seismometer. The EX and the EY seismometers seem to be well-calibrated, as opposed to the BS seismometer. The calibration factors have been determined to be:
- BS-X Channel: $2.030 \pm 0.079$
- BS-Y Channel: $2.840 \pm 0.177$
- BS-Z Channel: $1.397 \pm 0.182$

Details: The seismometers each have 3 channels, i.e. X, Y, and Z, for measuring the displacements in all 3 directions. The X channels of the three seismometers should more or less be coherent in the absence of any seismic excitation, with the gain amongst all the similar channels being 1; so is the case with the Y and Z channels. After analyzing multiple datasets, it was observed that the values of all three channels of the BS seismometer differed very significantly from their corresponding channels in the EX and the EY seismometers, and they were not calibrated even in the region where they were found to be coherent.

Method: All the frequency domain plots have been calculated for a sampling rate of 32 Hz. The plots were found to be extremely coherent in a certain frequency range, ~0.1 Hz to 2 Hz, so this frequency range is used to understand the relative calibration errors. The spread around the function is because of the error caused by coherence values differing from unity and the averages performed for the Welch function. 9 averages have been performed for the following analysis, keeping in mind the needed frequency resolution (~0.01 Hz) and the accuracy of the power calculated at every frequency.

I first analyzed the regions in which the similar channels were found to be coherent, to have a proper gain analysis. The EY seismometer was found to be the most stable one, so it has been used as a reference. I looked at the coherence between similar channels of the 2 seismometers and the bode plots together. A transfer function estimator was used to analyze the relative calibration between all 3 pairs of seismometers. In the given frequency range, EX and EY have a gain of 1, so their relative calibration is proper. The relative calibration between the BS and the EY seismometers is not proper, as the resultant gain is not 1. The attached plots show the discrepancies clearly:
- BS-X & EY-X Transfer Function: Attachment #1
- BS-Y & EY-Y Transfer Function: Attachment #2. The gain in the given frequency range is ~3. The phase plot also shows a 180-degree phase as opposed to 0, so a negative sign is also required in the calibration factor. Thus the calibration factor for the Y channel of the BS seismometer should be around ~3.
- BS-Z & EY-Z Transfer Function: Attachment #3

The mean value of the gain in the given frequency range is the desired calibration factor, and the error is the mean of the error for the gain dataset chosen, which is caused by the factors mentioned above.

Note: The standard error envelope plotted in the attached graphs is calculated as follows:
1. Divide the data into n segments according to the resolution wanted for the Welch averaging to be performed later.
2. Calculate the PSD for every segment (no averaging).
3. Calculate the standard error for every value in the data segment by looking at the distribution formed by the n values obtained by taking that respective value from every segment.

Discussions: The BS seismometer is a different model than the EX and the EY seismometers, which might be a major reason why we need special calibration for the BS seismometer while EX and EY are fine. The sign flip in the BS-Y seismometer may cause a lot of errors in future data acquisitions. The time series plots in Attachment #4 show an evident DC offset present in the data. All of the information above indicates that there is some electrical or mechanical defect present in the seismometer, which may require a reset. Kindly let me know if and when the seismometer is reset so that I can calibrate it again.

15526 Fri Aug 14 10:10:56 2020 Jon Configuration VAC Vacuum repairs today
The vac system is going down now for planned repairs [ELOG 15499]. It will likely take most of the day. Will advise when it's back up.

15527 Sat Aug 15 02:02:13 2020 Jon Configuration VAC Vacuum repairs today
Vacuum work is completed. The TP2 and TP3 interlocks have been overhauled as proposed in ELOG 15499 and seem to be performing reliably. We're now back in the nominal system state, with TP2 again backing for TP1 and TP3 pumping the annuli. I'll post the full implementation details in the morning. I did not get to setting up the new UPS units. That will have to be scheduled for another day.
Quote: The vac system is going down now for planned repairs [ELOG 15499]. [...]
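The transfer-function and coherence estimates described in ELOG 15510 above can be reproduced with Welch-averaged spectra. A minimal Python sketch of the H1-style estimator (the channel arrays and exact averaging choices are placeholders):

```python
import numpy as np
from scipy import signal

def relative_calibration(ref, test, fs=32.0, n_avg=9):
    """Estimate the transfer function test/ref and the coherence between
    two like channels (e.g., EY-X as reference against BS-X)."""
    nperseg = len(ref) // n_avg                 # ~9 Welch averages, as in the elog
    f, p_rr = signal.welch(ref, fs=fs, nperseg=nperseg)
    _, p_rt = signal.csd(ref, test, fs=fs, nperseg=nperseg)
    _, coh = signal.coherence(ref, test, fs=fs, nperseg=nperseg)
    tf = p_rt / p_rr                            # H1 estimator: S_xy / S_xx
    return f, tf, coh

# The calibration factor is the mean |tf| over the coherent band (~0.1-2 Hz),
# with a sign flip if the phase sits at 180 degrees, as seen for BS-Y.
```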
Discussions: The BS seismometer is a different model than the EX and EY seismometers, which might be a major reason why we need special calibration for the BS seismometer while EX and EY are fine. The sign flip in the BS-Y channel may cause a lot of errors in future data acquisitions. The time series plots in Attachment #4 show an evident DC offset in the data. All of the information above indicates that there is some electrical or mechanical defect present in the seismometer, which may require a reset. Kindly let me know if and when the seismometer is reset so that I can calibrate it again.

15526 | Fri Aug 14 10:10:56 2020 | Jon | Configuration | VAC | Vacuum repairs today

The vac system is going down now for planned repairs [ELOG 15499]. It will likely take most of the day. Will advise when it's back up.

15527 | Sat Aug 15 02:02:13 2020 | Jon | Configuration | VAC | Vacuum repairs today

Vacuum work is completed. The TP2 and TP3 interlocks have been overhauled as proposed in ELOG 15499 and seem to be performing reliably. We're now back in the nominal system state, with TP2 again backing for TP1 and TP3 pumping the annuli. I'll post the full implementation details in the morning. I did not get to setting up the new UPS units; that will have to be scheduled for another day.

Quote: The vac system is going down now for planned repairs [ELOG 15499]. [...]

15528 | Sat Aug 15 15:12:22 2020 | Jon | Configuration | VAC | Overhaul of small turbo pump interlocks

# Summary

Yesterday I completed the switchover of small turbo pump interlocks as proposed in ELOG 15499. This overhaul altogether eliminates the dependency on RS232 readbacks, which had become unreliable (glitchy) in both controllers. In their place, the V4(5) valve-close interlocks are now predicated on an analog controller output whose voltage goes high when the rotation speed is >= 80% of the nominal setpoint. The critical speed is 52.8 krpm for TP2 and 40 krpm for TP3. There already exist hardware interlocks of V4(5) using the same signals, which I have also tested.

# Interlock signal

Unlike the TP1 controller, which exposes simple relays whose open/closed states are sensed by Acromags, the TP2(3) controllers output an energized 24V signal for controlling such a relay (output circuit pictured below). I hadn't appreciated this difference, and it cost me time yesterday. The ultimate solution was to route the signals through a set of new 24V Phoenix Contact relays installed inside the Acromag chassis. However, this required removing the chassis from the rack and bringing it to the electronics bench (rather than doing the work in situ, as I had planned). The relays are mounted on the second DIN rail opposite the Acromags. Each TP2(3) signal controls the state of a relay, which in turn is sensed using an Acromag XT1111.

# Signal routing

The TP2(3) "normal-speed" signals are already in use by hardware interlocks of V4(5). Each signal is routed into the main AC relay box, where it controls an "interrupter" relay through which the Acromag control signal for the main V4(5) relay is passed. These signals are now shared with the digital controls system using a passive DB15 Y-splitter. The signal routing is shown below.
# Interlock conditions

The new turbo-pump-related interlock conditions and their channel predicates are listed below. The full up-to-date channel list and wiring assignments for c1vac are maintained here.

Channel               Type   New?   Interlock-triggering condition
C1:Vac-TP1_norm       BI     No     Rotation speed < 90% of nominal setpoint (29 krpm)
C1:Vac-TP1_fail       BI     No     Critical fault occurrence
C1:Vac-TP1_current    AI     No     Current draw > 4 A
C1:Vac-TP2_norm       BI     Yes    Rotation speed < 80% of nominal setpoint (52.8 krpm)
C1:Vac-TP3_norm       BI     Yes    Rotation speed < 80% of nominal setpoint (40 krpm)

There are two new channels, both of which provide a binary indication of whether the pump speed is outside its nominal range. I did not have enough 24V relays to also add the C1:Vac-TP2(3)_fail channels listed in ELOG 15499. However, these signals are redundant with the existing interlocks, and the existing serial "Status" readback will already print failure messages to the MEDM screens. All of the TP2(3) serial readback channels remain, which monitor voltage, current, operational status, and temperature. The pump on/off and low-speed mode on/off controls remain implemented with serial signals as well. The new analog readbacks have been added to the MEDM controls screens, circled below.

# Other incidental repairs

• I replaced the (dead) LED monitor at the vac controls console. In the process of finding a replacement, I came across another dead spare monitor as well. Both have been labeled "DEAD" and moved to Jordan's desk for disposal.
• I found the current TP3 Varian V70D controller to be just as glitchy in the analog outputs as well. That likely indicates there is a problem with the microprocessor itself, not just the serial communications card as I had thought might be the case. I replaced the controller with the spare unit which was mounted right next to it in the rack [ELOG 13143]. The new unit has not glitched since the time I installed it, around 10 pm last night.

15738 | Fri Dec 18 22:59:12 2020 | Jon | Configuration | CDS | Updated CDS upgrade plan

Attached is the layout for the "intermediate" CDS upgrade option, as was discussed on Wednesday. Under this plan:

• Existing FEs stay where they are (they are not moved to a single rack)
• Dolphin IPC remains PCIe Gen 1
• RFM network is entirely replaced with Dolphin IPC

Please send me any omissions or corrections to the layout.

15742 | Mon Dec 21 09:28:50 2020 | Jamie | Configuration | CDS | Updated CDS upgrade plan

Quote: Attached is the layout for the "intermediate" CDS upgrade option, as was discussed on Wednesday. [...]

I just want to point out that if you move all the FEs to the same rack, they can all be connected to the Dolphin switch via copper, and you would only have to string a single fiber to every IO rack, rather than the multiple now (for network, Dolphin, timing, etc.).

15746 | Wed Dec 23 23:06:45 2020 | gautam | Configuration | CDS | Updated CDS upgrade plan

1. The diagram should clearly show the host machines and the expansion chassis and the interconnects between them.
2. We no longer have any Gentoo bootserver or diskless FEs.
3. The "c1lsc" host is in 1X4, not 1Y3.
4. The connection between c1lsc and the Dolphin switch is copper, not fiber. I don't know how many Gbps it is. But if the switch is 10 Gbps, are they really selling interface cables that have lower speed? The datasheet says 10 Gbps.
5. For the control room workstations, Debian 10 (rossa) is the way forward, I believe. It is true pianosa remains SL7 (and we should continue to keep it so until all other machines have been upgraded and tested on Debian 10).
6. There is no "IOO/OAF". The host is called "c1ioo".
7. The interconnect between the Dolphin switch and the c1ioo host is via fiber, not copper.
8. It'd be good to have an accurate diagram of the current situation as well (with the RFM network).
9. I'm not sure if the 1Y1 rack can accommodate 2 FEs and 2 expansion chassis. Maybe if we clear everything else there out...
10. There are 2 "2GB/s" copper traces. I think the legend should make clear what's going on - i.e. which cables are ethernet (Cat 6? Cat 5? What's the speed limitation - the cable or the switch?), which are PCIe cables, etc.

I don't have OmniGraffle - what about uploading the source doc in a format that the excellent (and free) draw.io can handle? I think we can do a much better job of making this diagram reflect reality. There should also be a corresponding diagram for the Acromag system (but that doesn't have to be tied to this task). Megatron (scripts machine) and nodus should be added to that diagram as well.

Quote: Please send me any omissions or corrections to the layout.

15771 | Tue Jan 19 14:05:25 2021 | Jon | Configuration | CDS | Updated CDS upgrade plan

I've produced updated diagrams of the CDS layout, taking the comments in 15746 into account. I've also converted the 40m's diagrams from OmniGraffle ($150/license) to the free, cloud-based platform draw.io. I had never heard of draw.io, but I found that it has most all the same functionality. It also integrates nicely with Google Drive.
Attachment 1: The planned CDS upgrade (2 new FEs, fully replace RFM network with Gen 1 Dolphin IPC)
Attachment 2: The current 40m CDS topology
The most up-to-date diagrams are hosted at the following links:
Please send me any further corrections or omissions. Anyone logged in with LIGO.ORG credentials can also directly edit the diagrams.
15772 | Tue Jan 19 15:43:24 2021 | gautam | Configuration | CDS | Updated CDS upgrade plan
Not sure if 1Y1 can accommodate both c1sus2 and c1bhd as well as the various electronics chassis that will have to be installed. There may need to be some distribution between 1Y1 and 1Y3. Does Koji's new wiring also specify which racks hold which chassis?
Some minor improvements to the diagram:
1. The GPS receiver in 1X7 should be added. All the timing in the lab is synced to the 1pps from this.
2. We should add hyperlinks to the various parts datasheets (e.g. Dolphin switch, RFM switch, etc etc) so that the diagram will be truly informative and self-contained.
3. Megatron and nodus, but especially chiara (NFS server), should be added to the diagram.
15921 | Mon Mar 15 20:40:01 2021 | rana | Configuration | Computers | installed QTgrace on donatella for dataviewer
I installed QTgrace using yum on donatella. Both Grace and XMgrace are broken due to some boring fight between the Fedora package maintainers and the (non-existent) Grace support team, so I have symlinked it:
controls@donatella|bin> sudo mv xmgrace xmgrace_bak
controls@donatella|bin> sudo ln -s qtgrace xmgrace
controls@donatella|bin> pwd
/usr/bin
I checked that dataviewer now works for realtime and playback, although the middle-click paste on the mouse doesn't work yet.
15928 | Wed Mar 17 09:05:01 2021 | Paco, Anchal | Configuration | Computers | 40m Control Room Changes
• Switched positions of allegra and donatella.
• While doing so, the HDMI cable previously used by donatella snapped. We replaced it with another unused cable we found connected at only one end to rossa. We should get more HDMI cables in case that cable was serving some other purpose.
• Paco bought a bluetooth speaker/mic that is placed in front of allegra, and its USB adapter is connected to the iMac's keyboard at the bottom. With the new camera installed, the 40m video call environment is now complete.
• We have placed allegra's monitor back as a placeholder, but it is not working; we will need a new monitor whenever allegra is put back into use.
16027 | Wed Apr 14 13:16:20 2021 | Anchal | Configuration | Computers | 40m Control Room Changes
• I have confirmed that the backlighting of the two old monitors is not working: one can see a faint impression of the display but no brightness. Both old monitors are on the shelf behind.
• Today we got a monitor and mouse from Mike. I had to change GRUB_GFXMODE in /etc/default/grub to 1920x1200@30 on allegra for it to work with the (any) monitor.
• Allegra runs Debian 10 with the latest cds-workstation installed. It is a good test station for migrating our existing scripts to the updated cds-workstation configuration.
Quote: Again, we have placed allegra's monitor for place holder but it is not working and we need new monitors for it in future whenever it is going to be used.
16163 | Wed May 26 11:45:57 2021 | Anchal, Paco | Configuration | IMC | MC2 analog camera
[Anchal, Paco]
We went near the MC2 area and opened the lid to inspect the GigE and analog video monitors for MC2. It looked like whatever image comes through the viewport is split between the GigE (for beam tracking) and the analog monitor. We hooked up the monitor found on the floor nearby and tweaked the analog video camera around to get a feel for how the "ghost" image of the transmission moves around. It looks like in order to try to remove these "extra spots" we would need to tweak the beam-tracking BS. We will consult the beam-tracking authorities and return to this.
16302 | Thu Aug 26 10:30:14 2021 | Jamie | Configuration | CDS | front end time synchronization fixed?
I've been looking at why the front-end NTP time synchronization did not seem to be working. I think it was failing because the NTP server the front ends were pointing to, fb1, was not actually responding to synchronization requests.
I cleaned up some things on fb1 and the front ends, which I think unstuck things.
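A quick way to check whether an NTP server is actually answering queries is a direct SNTP request. A minimal sketch using the third-party ntplib package (the package choice is ours; fb1 is the server named above):

import ntplib  # third-party: pip install ntplib

def ntp_responds(host):
    # Returns True if the host answers an NTP query within the timeout.
    try:
        resp = ntplib.NTPClient().request(host, version=3, timeout=2)
        print(f"{host}: clock offset {resp.offset:+.3f} s")
        return True
    except (ntplib.NTPException, OSError):
        return False

print(ntp_responds("fb1"))  # False would reproduce the symptom described above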
On fb1:
• stopped/disabled the default client (systemd-timesyncd), and properly installed the full NTP server (ntp)
• the ntp server package for Debian jessie is old-style sysVinit, not systemd. In order to make it better integrated, I copied the auto-generated service file to /etc/systemd/system/ntp.service and added an "[Install]" section that specifies that it should be available during the default "multi-user.target".
• "enabled" the new service to auto-start at boot ("sudo systemctl enable ntp.service")
• made sure ntp was configured to serve the front end network ('broadcast 192.168.123.255') and then restarted the server ("sudo systemctl restart ntp.service")
For the front ends:
• on fb1 I chroot'd into the front-end diskless root (/diskless/root) and manually specified that systemd-timesyncd should start on boot by creating a symlink to the timesyncd service in the multi-user.target directory:
$ sudo chroot /diskless/root
$ cd /etc/systemd/system/multi-user.target.wants
$ ln -s /lib/systemd/system/systemd-timesyncd.service

• on the front end itself (c1iscex as a test) I did a "systemctl daemon-reload" to force it to reload the systemd config, and then restarted the client ("systemctl restart systemd-timesyncd")
• checked the NTP synchronization with timedatectl:

controls@c1iscex:~ 0$ timedatectl
Local time: Thu 2021-08-26 11:35:10 PDT
Universal time: Thu 2021-08-26 18:35:10 UTC
RTC time: Thu 2021-08-26 18:35:10
Time zone: America/Los_Angeles (PDT, -0700)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: yes
Last DST change: DST began at
Sun 2021-03-14 01:59:59 PST
Sun 2021-03-14 03:00:00 PDT
Next DST change: DST ends (the clock jumps one hour backwards) at
Sun 2021-11-07 01:59:59 PDT
Sun 2021-11-07 01:00:00 PST
controls@c1iscex:~ 0$

Note that it is now reporting "NTP enabled: yes" (the service is enabled to start at boot) and "NTP synchronized: yes" (synchronization is happening), neither of which it was reporting previously. I also note that the systemd-timesyncd client service is now loaded and enabled, is no longer reporting that it is in an "Idle" state, is in fact reporting that it synchronized to the proper server, and is logging updates:

controls@c1iscex:~ 0$ sudo systemctl status systemd-timesyncd
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled)
Active: active (running) since Thu 2021-08-26 10:20:11 PDT; 1h 22min ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 2918 (systemd-timesyn)
Status: "Using Time Server 192.168.113.201:123 (ntpserver)."
CGroup: /system.slice/systemd-timesyncd.service
└─2918 /lib/systemd/systemd-timesyncd
Aug 26 10:20:11 c1iscex systemd[1]: Started Network Time Synchronization.
Aug 26 10:20:11 c1iscex systemd-timesyncd[2918]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 26 10:20:11 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 64s/+0.000s/0.000s/0.000s/+26ppm
Aug 26 10:21:15 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 128s/-0.000s/0.000s/0.000s/+25ppm
Aug 26 10:23:23 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 256s/+0.001s/0.000s/0.000s/+26ppm
Aug 26 10:27:40 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 512s/+0.003s/0.000s/0.001s/+29ppm
Aug 26 10:36:12 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 1024s/+0.008s/0.000s/0.003s/+33ppm
Aug 26 10:53:16 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 2048s/-0.026s/0.000s/0.010s/+27ppm
Aug 26 11:27:24 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 2048s/+0.009s/0.000s/0.011s/+29ppm
controls@c1iscex:~ 0$
So I think this means everything is working.
I then went ahead and reloaded and restarted the timesyncd services on the rest of the front ends.
We still need to confirm that everything comes up properly the next time we have an opportunity to reboot fb1 and the front ends (or the opportunity is forced upon us).
There was speculation that the NTP clients on the front ends (systemd-timesyncd) would not work on a read-only filesystem, but this doesn't seem to be true. You can't trust everything you read on the internet.
50 | Thu Nov 1 19:53:02 2007 | Andrey Rodionov | Bureaucracy | Photos | Tobin's picture
51 | Thu Nov 1 19:53:34 2007 | Andrey Rodionov | Bureaucracy | Photos | Robert's photo
52 | Thu Nov 1 19:54:22 2007 | Andrey Rodionov | Bureaucracy | Photos | Rana's photo
53 | Thu Nov 1 19:55:03 2007 | Andrey Rodionov | Bureaucracy | Photos | Andrey's photo
54 | Thu Nov 1 19:55:59 2007 | Andrey Rodionov | Bureaucracy | Photos | Andrey, Tobin, Robert - photo
55 | Thu Nov 1 19:58:07 2007 | Andrey Rodionov | Bureaucracy | Photos | Steve and Tobin's picture
57 | Fri Nov 2 08:59:30 2007 | steve | Bureaucracy | SAFETY | the laser is ON
The PSL laser is back on!
75 | Wed Nov 7 02:14:08 2007 | Andrey | Bureaucracy | IOO | More information about MC2 ringdown
As Tobin wrote two hours ago, we (Andrey, Tobin, Robert) made a series of ringdown measurements for MC2
in the spirit of the measurement described by Rana -> see
entry from Mon Oct 29 23:47:29 2007, rana, Other, IOO, MC Ringdowns.
I attach here some pictures that we saw on the screen of the scope, but I need to admit that I am not experienced enough to present a nice fit to these data, although I attach fits that I am able to do today.
I definitely learned a lot of new Matlab functions from Tobin - thanks to him!, but I need to learn two more things:
Firstly, I do not know how to delete the "flat" region (the region before the ringdown starts) in Matlab ->
I needed to delete the entries for times before the ringdown ("negative times") by hand in the text file, which is an extremely inelegant method;
Secondly, I tried to approximate the ringdown curve by a function ydata=a*exp(b*xdata) but I am not exactly sure if this equation of the fitting curve is a good fit or if a better equation can be used.
It seems, in this situation it is better for me to ask more experienced "comrades" on November 7th.
P.S. It seems I really like the message type "Bureaucracy" - I put it on every message. As Alain noted, maybe that is because some things are very bureaucratized in the former USSR / Russia. By the way, when I was young, November 7th was one of the two most important holidays in the USSR - I liked that holiday because I really liked the military parades on Red Square. I attach a couple of pictures. November 7 is the anniversary of the Revolution of 1917.
113 | Fri Nov 16 18:46:49 2007 | steve | Bureaucracy | PSL MOPA was turned off & on
The "Mohana" boys scouts and their parents visited the 40m lab today.
The laser was turned off for their safety.
It is back on !
115 | Mon Nov 19 14:32:10 2007 | steve | Bureaucracy | SAFETY | grad student safety training
John Miller and Alberto Stochino have received the 40m safety bible.
They still have to read the laser operation manual and sign off on it.
130 | Wed Nov 28 12:43:53 2007 | Andrey | Bureaucracy | Here was the PDF-file of my presentation
I gave a report with a PowerPoint presentation during that Wednesday's 40m meeting.
The pdf-file was here, but later in the evening I created a wiki-40m page describing the algorithm, and now the pdf-file is on that wiki-40m page.
Note added after the presentation: I double-checked - I am indeed taking the root-mean-square of a difference, as we discussed during my talk.
My slide #17, "Calculation of differential length", was wrong, but now I have corrected it.
135 | Wed Nov 28 19:02:41 2007 | Andrey | Bureaucracy | WIKI-40M Update | New WIKI-40M page describing Matlab Suspension Modeling
I created the WIKI-40m page with some details about today's talk at the 40m lab meeting.
The address is:
http://lhocds.ligo-wa.caltech.edu:8000/40m/Modeling_of_suspensions
(or you can go to the main page, http://lhocds.ligo-wa.caltech.edu:8000/40m/ , and click on the link "Modeling of suspensions").
The WIKI-40m page describes my transfer functions and contains the pdf-file of my presentation.
224 | Thu Jan 3 12:38:49 2008 | rob | Bureaucracy | TMI | Sore throat
Quote: I did not feel anything wrong yesterday, but unfortunately I have a very sore throat today. I need to drink warm milk with honey and rinse my throat often today. So far I do not have other illness symptoms (no fever), so I hope that this small disease will not last long, but I feel that it is better for me to cure my sore throat at home today (and it is probably safer for others in the 40m). Yesterday I took the book "Digital Signal Processing", so I have it for reading at home. Hope to see you tomorrow.
I've added a new category--TMI--for entries along these lines.
262 | Thu Jan 24 22:52:18 2008 | Andrey | Bureaucracy | General | Ants around a dirty glass (David - please read!)
Dear colleagues,
it has been raining outside these days, so ants tend to come inside our premises.
David was drinking some beverage from a glass earlier today (at 2PM) and left a dirty glass near the computer.
There are dozens, if not hundreds, of ants inside of that glass now.
Of course, I am washing this glass.
A.
279 | Mon Jan 28 12:42:48 2008 | Dmass | Bureaucracy | TMI | Coffee
There is tea in the coffee carafe at the 40m. It is sitting as though it were fresh coffee. There is also nothing on the post-it.
339 | Fri Feb 22 21:19:38 2008 | Andrey | Bureaucracy | Computer Scripts / Programs | MDV library does not work at "LINUX 2"
While working on Thursday evening with the Matlab scripts "dttfft2" and "get_data", I noticed that the mDV library does not work on the computer "LINUX 2" (the third computer in the control room if you enter from the restroom). There are multiple error messages if we try to run "hello_world", "dttfft2", or "get_data". In order to take data from the accelerometers, I changed computers and worked from "LINUX 3", the rightmost computer in the control room; for the future, someone should resolve the issue at "LINUX 2". I am not experienced enough to restore the correct operation of the mDV directory at "LINUX 2".
Andrey.
343 | Thu Feb 28 12:31:33 2008 | rob | Bureaucracy | Computer Scripts / Programs | MDV library does not work at "LINUX 2"
Quote: While working on Thursday evening with the Matlab scripts "dttfft2" and "get_data", I noticed that the mDV library does not work on the computer "LINUX 2"... [...]
This turned out to be due to /frames not being mounted on linux2 as a result of a reboot. The issue is discussed in entry 270. I remounted /frames and added a line to mdv_config.m to check whether the frames are mounted.
382 | Fri Mar 14 16:56:03 2008 | Dmass | Bureaucracy | Computers | New 40m control machine.
I priced out a new control machine from Dell and had Steve buy it.
GigE cards (jumbo packet capable) will be coming separately.
Specs:
Quad core (2+GHz)
4 Gigs @ 800MHz RAM
24" LCD
low end video card (Nvidia 8300 - analog + digital output for dual head config)
No floppy drive on this one (yet?)
488 | Tue May 20 09:28:42 2008 | steve | Bureaucracy | SAFETY | safety training for 40m
Tara Chalernsongsak, a new grad student for K. Libbrecht, was introduced to the basics of 40m operations.
491 | Thu May 22 11:21:45 2008 | steve | Bureaucracy | SAFETY | early SURF student
Caltech undergrad Eric Mintun received the 40m safety training.
Now he has to read and sign the SOP for the laser and the ifo.
He'll be working with GigE cameras with Joe.
505 | Thu May 29 16:49:49 2008 | steve | Bureaucracy | Photos | Yoichi has arrived
Yoichi had his first 40m meeting. We welcomed him and Tobin, who is visiting, with sugar napoleons that Bob made.
511 | Mon Jun 2 12:20:35 2008 | josephb | Bureaucracy | Cameras | Beam scan has moved
The beam scan has been moved from the Rana lab back over to the 40m, to be used to calibrate the Prosilica cameras.
523 | Fri Jun 6 15:56:00 2008 | steve | Bureaucracy | SAFETY | Yoichi received safety training
Yoichi Aso received 40m specific safety training.
552 | Mon Jun 23 15:22:04 2008 | rana | Bureaucracy | SAFETY | Laser Safety Walkthrough today
# C Programming: Digital Roots [Comprehensive Application]

10065 Optional problem: Digital Roots

Problem description: The digital root of a positive integer is found by summing the digits of the integer. If the resulting value is a single digit, then that digit is the digital root. If the resulting value contains two or more digits, those digits are summed and the process is repeated. This is continued as long as necessary to obtain a single digit.

For example, consider the positive integer 24. Adding the 2 and the 4 yields a value of 6. Since 6 is a single digit, 6 is the digital root of 24. Now consider the positive integer 39. Adding the 3 and the 9 yields 12. Since 12 is not a single digit, the process must be repeated. Adding the 1 and the 2 yields 3, a single digit and also the digital root of 39.

Input: The input file will contain a list of positive integers, one per line. The end of the input will be indicated by an integer value of zero.

Output: For each integer in the input, output its digital root on a separate line of the output.

Sample input:
24
39
0

Sample output:
6
3

Sample notes: as shown above.

Grading: This problem has 4 test cases, each worth 0.25 points, for a total of 1.0 point.
#include <stdio.h>

int root(int);
int distribute(int);

int main()
{
    int num;
    /* Read integers until the 0 terminator is seen. */
    while (scanf("%d", &num) == 1 && num != 0)
    {
        printf("%d\n", root(num));
    }
    return 0;
}

/* Recursively reduce n to a single digit (its digital root). */
int root(int n)
{
    if (n < 10)
    {
        return n;
    }
    else
    {
        /* Split n into its digits, sum them, and recurse. */
        return root(distribute(n));
    }
}

/* Return the sum of the decimal digits of n. */
int distribute(int n)
{
    int count = 0;
    int tem = n;
    while (tem != 0)
    {
        count += tem % 10;
        tem = tem / 10;
    }
    return count;
}
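For reference, the repeated digit summing has a well-known closed form: for n > 0, the digital root equals 1 + (n - 1) mod 9, since a number is congruent to its digit sum modulo 9. A minimal Python sketch (the function names are ours) that cross-checks the closed form against the iterative definition:

def digital_root(n):
    # Closed form: n > 0 is congruent to its digit sum mod 9.
    return 1 + (n - 1) % 9

def digital_root_iterative(n):
    # Repeatedly sum decimal digits until a single digit remains.
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

assert all(digital_root(n) == digital_root_iterative(n) for n in range(1, 100000))
print(digital_root(24), digital_root(39))  # prints: 6 3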
1. The Josephus problem doesn't need such a long treatment... it's a very well-studied problem; a divide-and-conquer approach solves it in O(n). If you're interested, see Chapter 1 of Concrete Mathematics, which derives a series of very elegant results about the Josephus problem.
2. In problem 3, are GetRightPosition() and GetLeftPosition() implemented the same way as upper_bound() and lower_bound() in the STL?
# statsmodels.discrete.discrete_model.BinaryResults.predict¶
BinaryResults.predict(exog=None, transform=True, *args, **kwargs)
Call self.model.predict with self.params as the first argument.
Parameters:
- exog (array-like, optional) – The values for which you want to predict. See Notes below.
- transform (bool, optional) – If the model was fit via a formula, do you want to pass exog through the formula? Default is True. E.g., if you fit a model y ~ log(x1) + log(x2), and transform is True, then you can pass a data structure that contains x1 and x2 in their original form. Otherwise, you'd need to log the data first.
- args, kwargs – Some models can take additional arguments or keywords; see the predict method of the model for the details.

Returns:
- prediction (ndarray, pandas.Series or pandas.DataFrame) – See self.model.predict.
Notes
The types of exog that are supported depends on whether a formula was used in the specification of the model.
If a formula was used, then exog is processed in the same way as the original data. This transformation needs to have key access to the same variable names, and can be a pandas DataFrame or a dict like object.
If no formula was used, then the provided exog needs to have the same number of columns as the original exog in the model. No transformation of the data is performed except converting it to a numpy array.
Row indices as in pandas data frames are supported, and added to the returned prediction.
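To make the transform behavior concrete, here is a minimal sketch (the column names y and x1 and the toy data are ours, not from the statsmodels docs) that fits a Logit model through the formula interface and then predicts on new, untransformed data:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data: binary outcome driven by log(x1).
rng = np.random.default_rng(0)
df = pd.DataFrame({"x1": rng.uniform(0.5, 5.0, 200)})
df["y"] = (np.log(df["x1"]) + rng.normal(0, 0.5, 200) > 0.5).astype(int)

res = smf.logit("y ~ np.log(x1)", data=df).fit(disp=0)

# Because the model was fit from a formula and transform defaults to True,
# we pass x1 in its original form; predict() applies np.log internally.
new = pd.DataFrame({"x1": [1.0, 2.0, 4.0]})
print(res.predict(new))  # pandas.Series of predicted probabilities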
# First observation of a 5⁻ resonance in ¹²C; evidence for triangular D3h symmetry

Marin-Lambarri, Daniel Jose (2015). First observation of a 5⁻ resonance in ¹²C; evidence for triangular D3h symmetry. University of Birmingham. Ph.D.
## Abstract
This thesis reports a measurement of the break-up reaction $$^{12}$$C($$^{4}$$He, $$^{12}$$C*)$$^{4}$$He performed at the Birmingham MC40 cyclotron facility at a beam energy of 40 MeV. An array of four double-sided silicon-strip detectors was used to detect the final-state products of the reaction. Results from previous measurements of this reaction have shown that various excited states in $$^{12}$$C can be populated. In the present study the 13.3 MeV and 22.4 MeV resonances in particular were populated; the latter has been seen for the first time.

The analysis of both resonances was performed using the angular-correlations technique, from which their spins and parities were determined to be $$J^p = 4^+$$ and $$J^p = 5^-$$, respectively. Monte Carlo simulations were performed both before and after the experiment, in order to optimise the detector-array efficiency and to normalise the results of the angular-correlations analysis. The results have been compared to predictions of the algebraic cluster model, in which the $$J^p = 4^+$$ state is thought to be related to a rotational band based on the Hoyle state and the $$J^p = 5^-$$ resonance is part of the ground-state band. This latter state provides strong evidence for triangular $$D_{3h}$$ symmetry, corresponding to an equilateral-triangle structure, observed here for the first time in a nucleus.
Type of Work: Thesis (Ph.D.)
Supervisors: Martin Freer, Tzany Kokalova, Carl Wheldon
College/Faculty: Colleges (2008 onwards) > College of Engineering & Physical Sciences
School or Department: School of Physics and Astronomy
Funders: Consejo Nacional de Ciencia y Tecnología, Mexico
Subjects: Q Science > QC Physics
URI: http://etheses.bham.ac.uk/id/eprint/5689
# Stokes' Law
Written by Jerry Ratzlaff. Posted in Fluid Dynamics.
Stokes' law, abbreviated as St, gives the drag force exerted on a small sphere moving slowly through a viscous fluid.
## Stokes' law formula
$$F = 6 \, \pi \, r \, n \, v$$
### Where:
Symbol   Quantity              English units    Metric units
F        force                 lbf              N
π        Pi (3.14159...)       –                –
r        radius of sphere      ft               m
v        velocity              ft/sec           m/s
n        (dynamic) viscosity   lbf·sec/ft²      N·s/m²
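As a quick numerical illustration, a minimal sketch (the function name is ours, and the glycerin viscosity is a typical textbook value of about 1.4 N·s/m²) computing the Stokes drag in SI units:

import math

def stokes_drag(radius_m, viscosity_n_s_m2, velocity_m_s):
    # Stokes' law: F = 6 * pi * r * n * v, all in SI units; returns newtons.
    return 6.0 * math.pi * radius_m * viscosity_n_s_m2 * velocity_m_s

# A 1 mm radius sphere sinking at 1 cm/s through glycerin (n ~ 1.4 N·s/m²).
F = stokes_drag(1e-3, 1.4, 1e-2)
print(f"{F:.2e} N")  # ~2.64e-04 N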
# Numbers like $50 Billion

Isabel reminds me to look at The Onion. Lo and behold, scientists are asking Congress to fund a $50 billion science thing! Apparently, the machine in question is both large and expensive, and it uses gamma rays.
“While expense is something to consider, I think it’s very important that we have this kind of scientific apparatus, because, in the end, I have always said that science is more important than it is unimportant,” Committee chairman Rep. Bart Gordon (D-TN) said. “And it’s essential we stay ahead of China, Japan, and Germany in science. We are ahead in space, with the NASA rockets going to other planets, so we should be ahead in science too.”
Unfortunately, all was not rosy on Capitol Hill:
“These scientists could trim $10 million if they would just cut out some of the purple and blue spheres,” said Rep. Roscoe Bartlett (R-MD), explaining that he understood the need for an abundance of reds and greens. “With all of those molecules and atoms going in every direction, the whole thing looks a bit unorganized, especially for science.”

Isabel makes note of something else, a thought which should linger after the chuckles fade away. Consider this paragraph from The Onion‘s stellar reportage:

Another diagram presented to lawmakers contained several important squiggly lines, numbers, and letters. Despite not being numbers, the letters were reportedly meant to represent mathematics too. The scientists seemed to believe that correct math was what would help make the science thing go.

Math isn’t just numbers? It can use letters too? Both Isabel and I have seen many a chalkboard filled to its dusty brim with mathematics, yet with nary a number in sight. Reportedly, when the mathematician Stanislaw Ulam went to work at the Manhattan Project, he mourned that he was now forced to work with actual numbers — and to add insult to injury, they were numbers with decimal points! At some point in a student’s mathematical development, the subject becomes more about patterns, symbols and interrelationships than it is about the digits 0 through 9.

A while back, I wrote about math in the movies. It now occurs to me that you can also classify movies by how “numerical” their math appears. Aronofsky’s Pi (1998) is very numerical: Max is looking for a string of 216 digits. By contrast, Good Will Hunting (1997) uses math talent as a MacGuffin, and the math we see being done is algebraic and diagrammatical. We get lots of stick figures on blackboards, showing dots connected by lines. A Beautiful Mind (2001) is similar, in that Russell Crowe’s scribblings on bedroom and library windows (compare that to Will Hunting’s bathroom mirror!) have lots of arrows and letters, even Greek letters. The most digit-intensive “work” which we see the fictional John Nash doing is when he is most strongly medicated.

Proof (2005) may be the most unusual in this regard. We see Gwyneth Paltrow’s character working on a theorem (something to do with prime numbers, we’re told, which uses random matrix theory and other “hip” modern techniques). When the camera slips in for a close-up on the notebook, she’s writing words in between the equations.

All in all, the more realistic the movie’s portrayal of mathematics is, the less it uses straight-up strings of digits. This is an interesting pattern to consider when we’re trying to understand what people outside our ivy-colored walls think about mathematics, which is in turn something we have to know when trying to popularize science.

One thought on “Numbers like $50 Billion”
1. “and to add insult to injury, they were numbers with decimal points!”
That reminds me of something that I’m trying to drum into my students’ heads (without much success) — that it is much more preferable to give an answer of, say, (1+√5)/2 than 1.618… With the answer which only involves integers, one can see “where the number came from”; with the decimal, all that history is lost.
And I hadn’t noticed that particular trend re: mathematicians in the movies. It’s an interesting one.
# News this Week
Science 29 Jul 2005:
Vol. 309, Issue 5735, pp. 678
1. REPRODUCTIVE BIOLOGY
# Controversial Study Finds an Unexpected Source of Oocytes
1. Gretchen Vogel
Scientists have made some surprising claims about bone marrow and blood cells in the last few years, but this week brings perhaps the most surprising of all: that cells in the bone marrow and blood are a source of developing oocytes found in the ovaries. If true, this work in mice would rewrite the current understanding of the female reproductive system. It could also open new discussions about the ethics and potential consequences of bone marrow and even blood donation.
Although the study's authors do not have evidence that such blood-derived oocytes could be fertilized and develop into babies, they suggest that human donors might be sharing germ cells along with their lifesaving immune cells and clotting factors. They also say they hope this work will lead to new treatments for infertility, especially for women who must undergo chemotherapy.
For decades, scientists have thought that female mammals are born with a lifetime supply of potential oocytes in the ovary. That view was challenged last year by Jonathan Tilly, Joshua Johnson, and their colleagues at Massachusetts General Hospital in Boston, who reported in a controversial paper in Nature that new oocytes could form throughout an adult mouse's lifetime (Science, 12 March 2004, p. 1593). That finding has not been replicated in another lab.
Tilly, Johnson, and colleagues have now dropped another bombshell at a meeting* and in the 29 July issue of Cell: They report that they have found ovary-replenishing germ cells in the bone marrow and circulating blood of adult mice. They build their case on several lines of evidence. First, looking for the source of oocyte stem cells that might explain their previous results, the team found signs that genes typical of germ cells were expressed in samples of bone marrow from mice and from humans. They also found that the level of at least one of these genes, called Mvh, varies during the animals' estrus cycle. That made them wonder if cells in the bone marrow might be a source of new oocytes.
To check that idea, the team treated mice with two chemotherapy drugs that cause infertility, cyclophosphamide and busulfan. Mice that received the drugs, as expected, suffered extensive ovary damage and stopped producing new oocytes. But in the ovaries of treated mice that later received bone marrow transplants from female donors, the scientists found “several hundred” oocyte-containing follicles at various stages of maturity.
The effect of treatment was rapid: New oocytes appeared 28 to 30 hours after a transplant. Some oocyte development experts are dubious, noting that fruit fly oocytes take a week to mature from stem cells. “You just can't do it in a day,” says Allan Spradling of the Carnegie Institute of Washington in Baltimore, Maryland. But Tilly says the oocytes might begin to mature in the bone marrow and continue developing as they travel through the bloodstream.
The team also reports using bone marrow and blood transplants to prompt the growth of oocytes in mice that are genetically infertile. Mice with a mutation in a gene called ataxia-telangiectasia mutated can't produce mature germ cells, and their ovaries usually lack follicles and developing oocytes. But after receiving either bone marrow or blood from healthy donors, the team reports, the animals' ovaries started producing follicles containing healthy-looking oocytes. The team concludes that bone marrow provides a continuous source of germ cell stem cells to the ovaries throughout adult life.
So far, however, they have not been able to prove that these cells can trigger ovulation or give rise to new offspring. “Until the authors have shown that the putative oocytes are functional, we should be cautious,” says Margaret Goodell of Baylor College of Medicine in Houston, Texas, who studies bone marrow stem cells. She and others say the markers the team used to identify oocytes can be misleading. For instance, similar techniques have led others to conclude mistakenly that bone marrow cells had become neurons or lung cells. “It will be important to transplant [green fluorescent protein] positive bone marrow cells into GFP-negative adult mice to test whether those mice go on to give birth to GFP-positive pups,” says Sean Morrison of the University of Michigan, Ann Arbor. “This experiment should be straightforward.”
Tilly says the team is working on such experiments but has had to find a new approach because the drugs they were using can damage the uterus and fallopian tubes, possibly preventing mice from becoming pregnant.
Turning to the clinic, Tilly suggests that the mouse results could explain a number of surprising reports of cancer patients and others who were expected to be infertile but who gave birth to children after receiving bone marrow transplants. One patient with Fanconi's anemia, for example, had a single menstrual period and then entered menopause at age 12. After receiving a bone marrow transplant from a sibling, Tilly says, her periods resumed, and she later gave birth to two children.
Although genetic tests of patients and their children might answer the question, Tilly says, they would be ethically problematic. And such cases wouldn't necessarily be easy to detect, he says, because bone marrow donors are often siblings.
Even if the new oocytes can't be fertilized, Tilly says, they may nevertheless enhance a woman's fertility. He speculates that they may function as “drone oocytes” that keep the ovary functioning to support the original “queen” oocytes set aside for procreation. If so, he says, the results open new possibilities for preserving or restoring the fertility of young cancer patients and might even provide a way to postpone menopause.
But until the team produces mice that can be traced without a doubt to a bone marrow donor, scientists are likely to remain wary. “The experiments will have a stimulating effect on the field,” says Hans Schöler of the Max Planck Institute for Molecular Biomedicine in Münster, Germany, “even if they stir quite some controversy.”
• * Society for the Study of Reproduction, Quebec City, Canada, 24-27 July.
2. PALEONTOLOGY
# Dinosaur Embryos Hint at Evolution of Giants
Paleontologists have long assumed that giant dinosaurs called sauropods, like all other dinosaurs, evolved from smallish bipedal ancestors and dropped down on all fours only as their bodies grew too large to be carried on two feet. But when they examined a pair of embryos dug up about 30 years ago—the oldest fossilized dinosaur embryos so far discovered—they got a surprise. As described on page 761 by Robert Reisz of the University of Toronto's Mississauga campus in Canada and colleagues, the embryos suggest that sauropods were already quadrupedal even as smaller creatures. “This would be significant because it means we might have to re-evaluate the origin of many features in sauropod skeletons we assumed had to do with weight support,” says Matthew Bonnan of Western Illinois University in Macomb.
The clues are indirect, because the embryos are not sauropods but members of their closest kin, a group of much smaller herbivores called the prosauropods. Paleontologists found them inside remarkably well-preserved eggs of a 5-meter-long animal called Massospondylus, which 190 million years ago roamed the floodplains of what is now South Africa. “It's a really cool discovery,” says Kristi Curry Rogers of the Science Museum of Minnesota. The eggs clearly contained embryonic bones, but only recently did paleontologists dare to prepare them. It took Reisz's lab technician Diane Scott more than a year of full-time work to expose the delicate bones of the 6-centimeter-long eggs. As Reisz studied the specimens with colleagues from the Smithsonian Institution and the University of the Witwatersrand in Johannesburg, South Africa, he identified the largish skull as that of Massospondylus.
What was unusual was the rest of the body. “The proportions are just ridiculous,” Reisz says. The neck was long, the tail short, and the hind and forelimbs were all roughly the same length. “It was an awkward little animal,” he concludes. Because of the lack of developed teeth, huge head, and tiny pelvis (where leg muscles attach), the group proposes that Massospondylus hatchlings would have required parental care. “This is certainly suggestive but very difficult to test,” says Martin Sander of the University of Bonn, Germany.
To Reisz, the horizontal neck, heavy head, and limb proportions all suggest that the embryo would have walked quadrupedally after hatching. That's strange, because it means that as the Massospondylus hatchlings developed, they had to become bipedal—a pattern of development almost unheard of among vertebrates. To figure out how the hatchlings changed as they matured, the researchers measured nine other Massospondylus fossils of various sizes. They found that the neck grew much more rapidly, relative to the femur, than the rest of the body did, while the forelimbs and skull grew more slowly.
If the earliest sauropods also developed from embryos with quadrupedal proportions, Reisz and his colleagues propose, sauropods may have become quadrupedal adults by retaining their juvenile state into adulthood, a phenomenon called pedomorphosis. “It sheds some light in the evolutionary pathways through which the peculiar adaptations of giant dinosaurs were attained,” says Eric Buffetaut of France's major basic research agency, CNRS, in Paris.
Bonnan notes that other traits of adult sauropods seem to fit the same pattern. For example, the rough ends of sauropod limb bones indicate that the animals sported lots of cartilage in their joints. Paleontologists had assumed that the joints evolved because they helped sauropods support their weight. But cartilage-rich joints are more typical of young vertebrates, so adult sauropods might have acquired them by retaining a youthful trait.
Some paleontologists, however, are wary of trying to read too much of the history of sauropod evolution from two embryos. So little is known about dinosaur embryology, they say, that it's dicey to reconstruct the locomotion of hatchlings and extrapolate to other taxonomic groups. “It's a stunning find,” says Anusuya Chinsamy-Turan of the University of Cape Town, South Africa, but “I have all these questions.”
3. EVOLUTION
# Rogue Fruit Fly DNA Offers Protection From Insecticides
1. Elizabeth Pennisi
Genomes are full of DNA that doesn't belong there. Called transposons, these small bits of sequence jump between chromosomes, often disrupting genes in the process. But sometimes, these interlopers do some good. Dmitri Petrov, a population geneticist at Stanford University in California, and his colleagues have discovered a transposon that, by changing a gene, seems to help fruit flies evolve resistance to certain insecticides. The work, reported on page 764 of this issue of Science, is one of a growing number of examples of natural selection preserving transposons, indicating that “they may play a much larger role in evolutionary novelty than is currently appreciated,” says Todd Schlenke, an evolutionary geneticist at Cornell University.
Typically, researchers have stumbled on such beneficial transposons while searching for mutations involved in disease or traits such as resistance to toxins. The general assumption has been that these movable DNA elements have long been intertwined with the gene in question. But Petrov and his colleagues demonstrated that transposon-mediated evolution can happen in real time to create novel solutions to changing conditions.
Working with Petrov, Stanford graduate student Yael Aminetzach had determined which of the 16 members of the Doc family of transposable elements were common in populations of the fruit fly Drosophila melanogaster. One stood out, Doc1420. Unlike other Doc transposons, which proved to be quite rare, this one appeared in 80% of fruit flies tested from eight different countries, suggesting that it plays some useful role. “The paper is a tour de force of population genetics,” says David Heckel, a geneticist at the Max Planck Institute for Chemical Ecology in Jena, Germany.
When the Stanford researchers then looked more closely at this transposon, they found that it had landed in a gene that, to date, has defied characterization. The gene exists intact in distantly related fruit flies, suggesting that it has a key function—one that was disrupted as Doc elements jumped around the D. melanogaster genome. By comparing Doc1420 to the other Doc sequences, Aminetzach and graduate student Michael MacPherson estimate that Doc1420 buried itself in this gene 90,000 years ago but did not become widespread until between 25 and 240 years ago, when human activities began to alter the environment dramatically. This recent expansion suggested that, rather than rendering the gene nonfunctional, the transposon altered it, possibly resulting in a different protein product—one that became important to the species' survival.
The sequence of the unaltered gene provided a clue to this new gene's role. That sequence resembles that of genes for choline metabolism, which operate in nerves affected by organophosphate pesticides. To test whether the new protein was involved in this pathway, the researchers bred fruit flies to create strains that differed only in whether they carried the Doc1420 insertion. The Doc1420 strain fared much better when Aminetzach and her colleagues treated the insects with an organophosphate insecticide: 19% died, compared to 68% of the fruit flies lacking Doc1420.
Researchers have already identified a few other examples of transposon-induced insecticide resistance, but this is the first to disrupt a gene whose protein is not a target of the pesticide, Petrov says. But Schlenke, Heckel, and others say that more work is needed to verify the transposon's role in resistance. “The data showing pesticide resistance [are] very weak,” notes Richard ffrench-Constant, a molecular entomologist at the University of Bath, U.K.
Nonetheless, Martin Feder of the University of Chicago is quite enthusiastic. “The paper is the latest in a series of recent discoveries that transposons can play a role in 'real time' microevolution in natural populations,” he says. “The phenomenon is [now] difficult to ignore.”
4. NATIONAL SCIENCE FOUNDATION
# Two Mines in Running for Underground Lab
The U.S. National Science Foundation (NSF) has decided that it's in the business of experimentation, not excavation. On 21 July, the $5.5 billion research agency chose two established mines—the Homestake Mine in Lead, South Dakota, and the Henderson Mine in Empire, Colorado—as possible sites for a multipurpose underground laboratory. In doing so, NSF passed over four “green field” sites that would have required builders to excavate thousands of feet of rock, as well as existing sites in Nevada and Ontario, Canada. The proposed Deep Underground Science and Engineering Laboratory would house experiments in particle physics, geoscience, and microbiology. The original idea was for federal lawmakers to salvage Homestake for scientific ends before it was abandoned and flooded (Science, 6 June 2003, p. 1486). But that initiative was derailed by political and environmental considerations, leaving NSF free to pursue a more deliberate process that engaged a larger section of the scientific community. Last October, the agency solicited proposals for other sites. The two preliminary winners in that competition “stood out significantly above the rest” because they are deep, have desirable geologic characteristics, and come with some infrastructure already in place, says John Lightbody, executive officer of NSF's division of physics. Each team will receive $500,000 to work up a full conceptual design for the laboratory, which backers hope could win funding as early as 2009.
Both mines present challenges. Henderson is an active molybdenum mine, meaning that researchers would have to coordinate their activities with the mining operations. But a working mine also provides functioning lifts, vents, and other infrastructure that researchers can take advantage of, says Chang Kee Jung, a particle physicist at Stony Brook University in New York and spokesperson for the Henderson Mine collaboration.
In contrast, the abandoned Homestake gold mine was sealed in 2003 and is currently filling with groundwater. Once it reaches 1480 meters below the surface, possibly by 2007 or 2008, the mine's infrastructure could be ruined. However, South Dakota officials plan to open the upper levels of the mine for experiments and begin pumping out water as early as 2006, says Dave Snyder, executive director of the South Dakota Science and Technology Authority. Barrick Gold Corp. has agreed to transfer the mine to the state if the state legislature approves funds to open the site or if NSF builds the lab at Homestake, Snyder says.
Last weekend, the University of Minnesota, Twin Cities (UMTC), hosted a workshop to discuss the scientific mission of an underground lab. Some scientists feel that NSF short-circuited its own process by narrowing the choices to just two alternatives and excluding green-field sites. “If what they wanted was cheap and deep, they could have told us that right away, and we wouldn't have had to do all this work,” says Priscilla Cushman, a UMTC physicist who worked on a losing proposal to dig the laboratory at the Soudan Mine in Minnesota.
Despite their disappointment, most scientists are expected to rally behind one of the two remaining collaborations, says Bernard Sadoulet, a cosmologist at the University of California, Berkeley. “I'm convinced that the science is so compelling that the community will pull together,” says Sadoulet, who is leading a study to define the scientific mission of the lab. That teamwork, however, is only the first step in a long process.
5. BIODEFENSE
# U.S. University Backs Out of Biolab Bid
1. Andrew Lawler
The University of Washington (UW), Seattle, last week abruptly abandoned its attempt to build a biosafety level 3 (BSL-3) facility to study infectious diseases and bioterrorism agents. University officials say they were unable to come up with the $35 million required by the National Institutes of Health (NIH) to keep the proposal alive. But there was also intense opposition to the proposed $60 million facility from community activists, who saw it as a public health and safety hazard.
The university was one of several institutions that applied last December for a Regional Biocontainment Laboratory grant, part of a post-9/11 push to increase the nation's ability to study infectious agents. NIH has set aside approximately $125 million for a second national competition to complement an earlier round of nine labs funded in 2003 (Science, 10 October 2003, p. 206). It expects to make from five to eight awards for the BSL-2 and -3 labs, which handle materials such as plague.

Three public forums in Seattle this spring drew hundreds opposed to the 5200-square-meter facility, which would have employed 100 scientists and staff. In May, university officials noted that community trust “has been dramatically undermined” and that building the lab despite opposition could prove “devastating” to community relations. An NIH grant to Boston University to build a lab to study even more dangerous biological agents is moving ahead despite citizen protests (Science, 28 January, p. 501).

Despite that opposition, chief UW spokesperson Norm Arkans says that the real deal breaker for Washington was money: “We knew it would be difficult to raise the $35 million, since the university has a number of capital needs.” A letter from NIH asking for details of its cost-sharing plans triggered the university's pullout, according to Arkans. NIH officials declined comment on the competition, the winners of which are expected to be announced in September.
Community activists were delighted, but they don't take credit for preventing construction. “I think it came down to money,” says Kent Wills, head of the University Park Community Club. And some scientists are unhappy with the university's withdrawal. “We desperately need better facilities in the Pacific Northwest,” says Samuel Miller, a UW infectious disease specialist. The decision won't keep BSL-3 work away from Seattle: Two dozen university labs already provide that level of containment.
6. TISSUE ENGINEERING
# Technique Uses Body as 'Bioreactor' to Grow New Bone
1. Robert F. Service
Tissue engineers have long dreamed of starting with a small clutch of cells in a petri dish and growing new organs that can then be transplanted into patients. The strategy has worked for relatively simple, thin tissues such as skin and cartilage that don't depend on a well-formed network of blood vessels to deliver food and oxygen. But it hasn't panned out for more complex tissues shot through with vessels, such as bone and liver. Now a novel approach to tissue engineering that grows bone inside a patient's own body could change all that.
In a paper published online this week by the Proceedings of the National Academy of Sciences, researchers from the United States, the United Kingdom, and Switzerland report that they grew large amounts of new bone alongside the long leg bones of rabbits. When they harvested and transplanted the new bone into bone defects in the same animal, the defects healed and were indistinguishable from the original.
“This is a fresh, new strategy for tissue engineering that relies on the body's own capacity to regenerate itself,” says Antonios Mikos, a tissue engineering specialist at Rice University in Houston, Texas. “I think it will have an enormous impact on the field.”
The field of tissue engineering could use some help. Attempts to grow complex tissues outside the body have progressed in fits and starts. Italian researchers, for example, have coaxed bone marrow cells injected into a ceramic matrix to create new bone. But recipients' bodies have been unable to resorb and remodel that tissue, as occurs with normal bone. To avoid such problems, researchers led by tissue engineers Prasad Shastri at Vanderbilt University in Nashville, Tennessee, and Molly Stevens and Robert Langer at the Massachusetts Institute of Technology in Cambridge decided to see if they could let the body handle the job itself.
Bones are sheathed in a thin membrane of cells called the periosteum. If a small wound or fracture occurs, cells in the periosteum can divide and differentiate into replacement tissue, including new bone, cartilage, and ligaments. Shastri wanted to see if he and his colleagues could use this same wound-healing response to generate new tissue.
The researchers injected a surgical saline solution between the tibia—the long, lower leg bone—and the periosteum of white rabbits, a standard small animal model for studying bone. This created a small, fluid-filled cavity into which they hoped new bone would grow. To prevent the cavity from collapsing as the saline was absorbed by the body, the researchers injected a gel containing a calcium-rich compound called alginate. Previous studies have suggested that calcium helps trigger cells in the periosteum to differentiate into new bone, and that is exactly what happened, the researchers report. Within a few weeks, the alginate cavities were filled with new bone. And when that bone was removed and transplanted to damaged bone sites within the same animals, the new bone integrated seamlessly.
“I think the strength of this approach is its simplicity,” Mikos says. “It doesn't rely on the delivery of exogenous growth factors or cells.” That could make it a boon to orthopedic surgeons, who often need to harvest large amounts of a patient's bone to fuse vertebrae during spinal surgery. That harvested bone usually comes from a patient's hip, a procedure that often produces pain for years. But if this approach works in people, it could enable physicians to generate new bone alongside a patient's shin, for example, which could then be transplanted to other sites.
The technique could also prove useful for other tissues. With a few tweaks, says Shastri, it works to generate healthy new cartilage. Now the team is looking to see if it can be used to generate liver tissue as well. If so, it may turn tissue engineers' dreams into reality.
7. VETERANS AFFAIRS
# Gene Bank Proposal Draws Support--and a Competitor
1. Jennifer Couzin
The U.S. Department of Veterans Affairs (VA) is quietly moving forward with plans for a national gene bank that would link DNA donated by up to 7 million veterans and their family members with anonymous medical records. The bank, which is widely supported inside and outside the VA, would represent the first massive U.S. gene banking effort. But it is causing a furor among scientists, some VA employees, and politicians from New York state. They charge that top VA officials accepted a gene bank proposal from a cancer biologist at Stratton VA Medical Center and the State University of New York (SUNY), Albany, but are now privately circulating another gene bank plan that may leave Albany out. Most senior officials and scientists involved in both plans declined to comment for this story.
Although some smaller gene banks are sprouting in the United States, none can match those gearing up in Iceland, Estonia, the United Kingdom, and Japan (Science, 8 November 2002, p. 1158). In these cases, DNA samples from hundreds of thousands of people are linked with health information stripped of identifiers, making the banks powerful tools for sorting out "the complex interactions between gene and environment that lead to disease," says Alan Guttmacher, deputy director of the National Human Genome Research Institute (NHGRI) in Bethesda, Maryland.
The VA, say outside scientists, is a natural home for such a project because health records for the 7 million people it serves are computerized and standardized. The VA "has not only samples but histories," says Karen Hitchcock, president of SUNY Albany until early 2004 and now the principal and vice chancellor at Queen's University in Kingston, Canada. Although there are potential disadvantages to a VA bank—namely low numbers of females, if veterans but not family members are included—the population includes minorities underrepresented in gene banks overseas, says Guttmacher.
According to documents obtained by Science, in July 2002, Paulette McCormick, who held joint appointments at the Stratton VA Medical Center and as head of SUNY Albany's Center for Functional Genomics, sent a gene bank proposal to Mindy Aisen, then the VA's deputy chief of research and development and now chief of the VA's rehabilitation research division. McCormick's plan was to collect blood samples from at least 2 million volunteers. The data bank would be open "to VA scientists and other academic and industry scientists" after their projects were approved by the VA and the bank's scientific and ethics committees, one version of her proposal states. The samples would be owned by the VA; they and computers containing the data were to be stored in locked rooms at SUNY Albany. McCormick also proposed having companies pay to access gene bank data as a means of funding the bank. Strict privacy controls would protect DNA donors.
SUNY Albany officials and New York politicians saw the plan as a flagship project that could raise the profile of the university and the state. "We all kind of whooped. It was an absolutely fantastic idea," says Hitchcock.
On 11 December 2003, the VA signed an agreement with Albany suggesting that it would move forward with McCormick's plan and base the bank in New York state. SUNY Albany modified plans for a cancer research center then under construction, making "add-ons" to accommodate space for a gene bank at a cost of "multiple millions," says Hitchcock. In an e-mail sent on 19 March 2004, Jonathan Perlin, now VA undersecretary for health, wrote to three colleagues in VA headquarters that the gene bank "is a VA resource, first and foremost, and Albany would be a lead partner."
That May, a small VA delegation, including Perlin, traveled to Albany and met with New York State Senator and majority leader Joseph Bruno (R) and New York Governor George Pataki (R), say sources familiar with the meetings. At the time, it was generally understood that New York would supply most of the project's pilot funding—estimated at $10 million—while the VA would offer nominal support, such as staff to collect blood samples.

But behind the scenes, the project was unraveling. An e-mail from Perlin sent in February 2004 noted that McCormick's proposal "has raised significant ethical, privacy and operational issues." An e-mail from Nora Egan, then VA Secretary Anthony Principi's chief of staff, reported that the secretary felt that "issues related to medical ethics, privacy, … and benefit to be derived by VA" needed to be addressed. Precise concerns were not specified. A fall 2003 review of McCormick's proposal by the director of the VA's National Center for Ethics in Health Care had concluded: "On the whole, the … Gene Bank proposes ethically appropriate measures to protect subjects' privacy and the confidentiality of their personal health and genetic information."

Earlier this year, VA officials at the agency's headquarters began circulating memos of a separate gene bank proposal, reportedly crafted by Perlin; Timothy O'Leary, who heads VA's Biomedical Laboratory Research and Development Service; and Stephan Fihn, acting head of VA research and development until 31 May 2005. A recent confidential draft, obtained by Science, is dated 13 July 2005. Conceptually, the proposal is similar to McCormick's: It recommends gathering blood samples from "all enrollees" in the VA system over 5 years and linking them "to data in other clinical and administrative databases" within the VA. Clinical information would be stored in "highly secure" areas. A scientific advisory committee would offer advice on specimen collection, storage, and other matters; the proposal notes that NHGRI Director Francis Collins has agreed to serve on this committee. (Collins declined to comment.) Biotechnology firms seeking access to the gene bank for specific projects could provide "commercial support." Initial costs are pegged at $40 million to $60 million, and the proposal notes that given tight federal budgets, Congress is unlikely to supply the funds.

The proposal diverges from McCormick's in its suggestion that the bank's infrastructure be based in Texas or in Colorado, the home of VA Secretary James Nicholson, to "capitalize on VA support" in those states. "In my view, there's an evolution in thinking rather than a competition," says Fihn, who explains that on this project of unprecedented scope, VA headquarters realized it had to be in control. Furthermore, Fihn says, it's ludicrous to argue that Albany owned the concept. "Anybody who takes credit for the idea of creating a gene bank in this day and age—it's like saying you invented the Internet," he notes. He can't say what role, if any, Albany will play in the bank and anticipates a competition for participation.

"We were rather upset" by how VA has handled the project, says Richard Roberts, a board member at SUNY Albany's Center for Functional Genomics and the chief scientific officer of New England Biolabs in Ipswich, Massachusetts. Roberts, a Nobel laureate, says it appears that McCormick's idea is being "seized" by "people in Washington."
Last year, as concerns from New York politicians intensified that the VA was backing out of the December 2003 agreement it had signed with SUNY Albany, VA officials asked the agency's general counsel, Tim McClain, for advice. He prepared a memorandum arguing that the agreement isn't binding. "Execution of the subject Agreement by VA did not constitute acceptance of the gene bank research proposal," it reads.

McCormick, meanwhile, has returned full-time to SUNY Albany after being released from the VA last year. Late last month, McCormick's successor on the gene bank, her SUNY Albany colleague Richard Cunningham, was also released from his part-time appointment at the VA, although he continues to work there without pay. "Employee privacy" rules preclude elaborating on those releases, says Linda Blumenstock, a spokesperson for the Stratton VA Medical Center.

8. AVIAN INFLUENZA

# WHO Faults China for Lax Outbreak Response

1. Dennis Normile

Worried that Asia's bird flu outbreak could be on the verge of spreading worldwide, increasing the risk of a human pandemic, international health organizations are warning that China is not rigorously following up on a recent outbreak of the deadly H5N1 strain among wild birds in the western Qinghai region. In particular, the World Health Organization (WHO) is pressing Chinese officials to study migratory birds to see whether they may be able to spread the virus to previously unaffected areas. Chinese scientists point out that they have already sequenced virus from migratory birds and made the results publicly available through GenBank.

Concerns are focused on the H5N1 outbreak at China's Lake Qinghai. The unprecedented toll of 6000 dead wild birds, a group previously only slightly affected by such infections, has experts worried that the virus has become more lethal and that surviving migratory birds could carry it to wintering grounds in India, which has not yet reported any H5N1 outbreaks. To assess this risk, WHO and the United Nations Food and Agriculture Organization (FAO) have urged Chinese authorities to sample surviving birds to see whether any are carrying the virus without obvious symptoms, as well as to tag birds for tracking.

China's Ministry of Agriculture could not be reached for comment. But in an interview with the Wall Street Journal that appeared on 19 July, Jia Youling, director general of the ministry's Veterinary Bureau, was quoted as saying they haven't tested live migratory birds “because in catching them, it is easy to harm them.” FAO animal epidemiologist Juan Lubroth in Rome says that there are humane ways of testing live birds. Such data, he adds, “would allow for preventive actions on the ground, such as vaccinating domestic poultry flocks near known rest areas” along migratory routes.

Roy Wadia, a spokesperson for WHO in Beijing, says China has also not yet responded to requests for isolates of the virus circulating in Qinghai. Time is of the essence, he says, because authorities want to determine whether the virus has changed before the return migration. Wadia was unaware that DNA sequence information from samples from Lake Qinghai had been deposited in GenBank by a group at China's Institute of Microbiology; they reported online in Science that the virus appears to have changed in ways that could make it more lethal (Science, 8 July, p. 231).
Meanwhile, Indonesia confirmed its first human deaths from bird flu, among a family that apparently had no contact with infected poultry—the usual route of transmission—raising questions about possible human-to-human transmission. And as Science went to press, Russian officials were trying to determine the H5 subtype responsible for an outbreak of avian influenza among poultry in Novosibirsk.

9. COSMIC-RAY PHYSICS

# New Array Takes Measure of Energy Dispute

1. Adrian Cho

Amid the incessant hail of cosmic rays striking Earth's atmosphere from outer space, every now and then one comes screaming in with the energy of a walnut-sized hailstone (Science, 21 June 2002, p. 2134). Such ultrahigh-energy cosmic rays could herald bizarre astronomical phenomena or new fundamental particles, so physicists are eager to know how often they come along. In recent years, Japanese experiments have indicated that the particles are unexpectedly common; American experiments say they're rare. Now the first results from the Pierre Auger Observatory, a gargantuan cosmic ray detector under construction on an ancient lakebed near Malargüe, Argentina, may have pinpointed the crux of the dispute: The apparent energy of the cosmic rays depends on which method is used to measure it.

Auger's preliminary findings “go a long way to resolving the difference between the two [previous] data sets,” says Floyd Stecker, a theoretical astrophysicist at NASA's Goddard Space Flight Center in Greenbelt, Maryland. Auger researchers will present their results next week at a conference in Pune, India.*

When a high-energy cosmic ray crashes into the atmosphere, it triggers an avalanche of billions of lower energy particles known as an “air shower.” Between 1990 and 2004, researchers working with the now-defunct Akeno Giant Air Shower Array (AGASA) about 120 kilometers west of Tokyo, Japan, caught some of the particles with detectors on the ground. They compared their readings with the results of a computer simulation to deduce the energy of the original cosmic ray. The researchers spotted about a dozen cosmic rays with energies exceeding 100 exa-electron volts (100 EeV, or 10^20 eV).

As they stream earthward, the particles in a shower excite nitrogen molecules in the air and cause them to fluoresce. Researchers working with the High-Resolution Fly's Eye (HiRes) detector at the U.S. Army's Dugway Proving Grounds in Utah use specialized telescopes to detect that light and estimate the energy of the original cosmic ray somewhat more directly. They observed only a few cosmic rays with energies above 100 EeV.

The Auger Observatory possesses both types of detectors. Auger researchers observed dozens of cosmic rays with both the telescopes and the ground detectors and used the “hybrid events” to calibrate the ground detectors without resorting to the computer simulations. The results suggested that the computer simulations overestimate the energies of the cosmic rays by about 25%, says James Cronin, a physicist at the University of Chicago and co-founder of the Auger collaboration. Some physicists, however, question whether the energy estimates from the fluorescence detectors are really more accurate than those from the simulations. “The Auger measurement clearly explains the difference between the AGASA and HiRes results,” says Masahiro Teshima, a cosmic ray physicist at the Max Planck Institute for Physics in Munich, Germany, and former spokesperson for AGASA. “But at the moment, I don't know which is right.”
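For a sense of scale, the “walnut-sized hailstone” comparison can be checked with a quick unit conversion. This is a minimal sketch; the hailstone's mass and speed are assumptions chosen purely for illustration.

```python
# Convert the disputed cosmic ray energy (100 EeV = 1e20 eV) to joules and
# compare it with the kinetic energy of a small falling hailstone.
EV_TO_JOULES = 1.602e-19

cosmic_ray_energy_j = 1e20 * EV_TO_JOULES      # ~16 J, carried by one particle

# Illustrative hailstone: about 30 g falling at about 33 m/s (assumed values).
mass_kg, speed_m_s = 0.030, 33.0
hailstone_ke_j = 0.5 * mass_kg * speed_m_s**2  # ~16 J

print(f"Cosmic ray: {cosmic_ray_energy_j:.1f} J; hailstone: {hailstone_ke_j:.1f} J")
```

A single subatomic particle carrying roughly 16 joules is what makes these events so startling, and why the disagreement over how often they arrive matters.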
All agree that as it gobbles up data, the massive Auger Observatory should settle the issue once and for all. “In a year and a half with a quarter of the array, we've matched the data set of the existing experiments,” Cronin says. “It's looking good.” The complete array will comprise 24 light telescopes and 1600 surface detectors covering 3000 square kilometers. Within 2 years, Auger researchers expect to have collected seven times more data.

• *29th International Cosmic Ray Conference, 3-10 August.

10. CLIMATE CHANGE

# El Niño or La Niña? The Past Hints at the Future

1. Richard A. Kerr

Two teams of researchers, studying the same evidence with the same techniques, have painted diametrically opposite pictures of a key period in the history of Earth's climate, which climatologists are probing for hints of what's to come. “It's a tough issue to sort out,” says climate modeler Raymond Pierrehumbert of the University of Chicago in Illinois. “What's at stake is the regional distribution of climate,” both past and future. But he's going to have to wait for more data from the past.

The two groups, one British and one American, are studying what temperatures in the equatorial Pacific Ocean were like during the early Pliocene epoch, about 4.5 million to 3.0 million years ago. The world was about 3°C warmer then than it is today—much as it may be a century or two from now. Today, the tropical Pacific is the “engine” that drives much of the global climate system. Computer climate models disagree about how future global warming will affect it: whether the region will get stuck in the warmth of a permanent El Niño, slip into the relative cool of an endless La Niña, or keep swinging from one to the other as it does today. By showing how the tropical Pacific worked the last time the world got hot, climatologists hope the Pliocene will help them forecast what to expect next time.

To find out ancient ocean temperatures, each group studied a pair of deep-sea sediment cores from either end of the pivotal equatorial Pacific, one taken from near the Galápagos Islands and one from 13,000 kilometers to the west. From the mud, they extracted the fossils of microscopic creatures called foraminifera, or forams, that lived in Pliocene surface waters and sank to the bottom after they died. By studying the ratio of the elements magnesium and calcium preserved in forams' carbonate shells, scientists can estimate the temperature of the water the creatures once floated in.

The British group weighed in first (Science, 25 March, p. 1948). Rosalind Rickaby and Paul Halloran of the University of Oxford, U.K., published six eastern Pacific temperatures spanning the past 5 million years, including one from the Pliocene warm period. It showed that the eastern Pacific was dramatically cooler than the west—the hallmark of a dominant La Niña.

Now, on page 758, the American group—Michael Wara, Christina Ravelo, and Margaret Delaney of the University of California, Santa Cruz—reaches a different conclusion. They produced more than 200 temperatures over 5 million years, including more than 50 from the time of Pliocene warmth. Wara and colleagues conclude that at that time the eastern Pacific was only slightly cooler than the west. The implication: El Niño, not La Niña, ruled the early Pliocene.
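Both groups' temperature estimates rest on the Mg/Ca thermometry described above, which is commonly expressed as an exponential calibration of the form Mg/Ca = b·exp(a·T). The sketch below inverts that relation for temperature; the constants are typical of published planktonic foram calibrations and are illustrative, not the values either team actually fit.

```python
import math

def mgca_to_temperature(mg_ca_mmol_mol, a=0.09, b=0.38):
    """Invert the generic calibration Mg/Ca = b * exp(a * T) for T in deg C.

    The constants a and b are illustrative; real studies fit species- and
    site-specific values and correct for dissolution and cleaning effects.
    """
    return math.log(mg_ca_mmol_mol / b) / a

# Higher Mg/Ca in the shell implies warmer surface water:
for mg_ca in (3.0, 4.0, 5.0):
    print(f"Mg/Ca = {mg_ca} mmol/mol -> ~{mgca_to_temperature(mg_ca):.1f} deg C")
```

Because temperature enters exponentially, small analytical differences in measured Mg/Ca, such as those introduced by sample cleaning, can shift reconstructed temperatures by a degree or more, which is part of why the two groups can disagree.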
It's a big difference. A dominant La Niña would have made the world slightly cooler on average than the alternative. More important, La Niña's regional climate effects—such as a wetter western Pacific and a cooler northwestern North America—would have been felt around the globe. If El Niño prevailed, on the other hand, that would have meant a warmer climate overall and much warmer and drier conditions in southern Africa, for example.

So who is right? Outside experts say the Californians' hundreds of temperature readings give El Niño a tentative edge. “You need really dense data sets to do this work well, in my opinion,” says paleoceanographer David Lea of the University of California, Santa Barbara. “This is difficult work, and it's easy to be misled.” Paleoceanographer Gary Dwyer of Duke University in Durham, North Carolina, agrees, noting that sampling as sparse as the Oxford group's could make it easy to mistake a few rare cold-water interludes for a long-term La Niña regime. But Rickaby stands by her team's results and hints that superior British sample cleaning more than closes the numerical gap in data points.

Researchers say only more research can settle what really happened during the Pliocene. “There may be missteps before it's done,” says Pierrehumbert, but “I can't overemphasize the importance of such data” to testing climate models.

11. FOREST CONSERVATION

# Learning to Adapt

1. Erik Stokstad

The ambitious Northwest Forest Plan tried to balance desires for timber and biodiversity, but preservation trumped logging—and research. Can the plan be made as adaptable and science-friendly as intended?

For decades, a steady stream of logging trucks rolled out of forests in the Pacific Northwest, piled high with ancient Douglas firs, valued for their huge trunks. Old-growth forests on private lands were the first casualties, and as they disappeared, the loggers turned to national forests. Despite outcries from environmentalists, the pace of clear-cutting intensified in the 1980s—reaching a peak of more than 5 billion board feet a year, enough to build 350,000 three-bedroom houses, much of it from old growth.

Then in the early 1990s, environmentalists finally found a weapon powerful enough to fight destruction of these venerable forests: the northern spotted owl, which needs large tracts of old trees to survive. Not long after the owl was added to the endangered species list in 1990, environmental groups sued on its behalf, and a federal judge ordered a moratorium on logging in owl habitat. The rumble of trucks from the national forests fell silent, but the volume of the debate only got louder. As it played on national media, the bitter battle pitted birds against jobs. Activists spiked trees to damage mills, while loggers held protests and cut down old-growth trees at night. The tension ratcheted up.

Out of this political crisis came the largest, most ambitious forest conservation plan ever. Called the Northwest Forest Plan (NWFP), it covers 9.8 million hectares of federal land in California, Oregon, and Washington. Striving for compromise, the plan tried to balance the needs of loggers and endangered species. To meet that tall order, the architects set up special research areas to devise new ways of cutting timber that would be benign or even beneficial to wildlife. Economic and ecological progress would be monitored, and the plan would be altered decade by decade as needed—a process called adaptive management. Now, more than 10 years and $50 million in monitoring costs later, researchers and forest managers have taken the first major stab at assessing how well the plan is working.
This fall, they will publish a series of extensive reports, with a synthesis slated for release this month. The bottom line, they say, is that the plan is basically on track: Old-growth forest has been preserved, and watersheds are improving. But several key goals have not been met. Some forests face the risk of catastrophic fires; the spotted owl population is still declining; and timber sales never came near projections, meaning lost jobs and dollars for both the timber industry and the U.S. Forest Service (USFS).
Another shortcoming is the relative dearth of new approaches for improving the plan. Despite good intentions, the goal of devising and studying alternative management strategies essentially fizzled. Officials say that fixing this is a top priority, as is reducing fire risk.
But keeping the plan on track—let alone boosting its activities—faces serious challenges, as funding for the USFS in the Pacific Northwest has fallen dramatically. Forest service officials say that changes in regulations governing the plan, implemented by the Bush Administration, will give them needed flexibility, but environmentalists worry that the changes provide license for irresponsible logging that could threaten remaining old-growth forests.
## Legal logjam
Several broad environmental laws passed in the 1970s made the conflict between logging and old-growth conservation all but inevitable. The Endangered Species Act (ESA) of 1973 requires the conservation of habitat that listed species depend on, and sections of the National Forest Management Act mandate that populations of species be kept viable. Forest service officials knew in the 1980s that the spotted owl was likely to be listed but, under pressure from politicians in the northwest, continued to allow cutting of old-growth forests—until the Seattle Audubon Society and other groups sued.
In March 1989, a federal circuit judge blocked sales of timber within the range of the owl, an area encompassing the remaining old growth. Congress intervened, allowing a few timber sales to go through, enraging environmentalists. The issue rose to prominence in the 1992 presidential campaign.
A few months after the election, President Clinton asked a large group of scientists from USFS, the Bureau of Land Management (BLM), and universities to provide a range of options that could end the judicial moratorium. The Forest Ecosystem Management Assessment Team (FEMAT) was charged with finding ways to protect the long-term health of the forest across the range of the spotted owl while providing “a predictable and sustainable level of timber sales and nontimber resources that will not degrade the environment.”
A core team of several dozen researchers, led by wildlife biologist Jack Ward Thomas of USFS, holed up for 3 months in a Portland office building, working around the clock and calling on more than 100 outside scientists when needed. “The mood was one of great intensity and focus,” says FEMAT participant Norman Johnson of Oregon State University in Corvallis. From this came a 1366-page document that laid out 10 distinct management options. All of them took a broad view, focusing on managing the entire ecosystem rather than just the spotted owl. But to survive court challenges, any plan had to comply with laws aimed at species protection.
Clinton picked Option 9, which set up a patchwork of old-growth areas—45 so-called Late Successional Reserves, totaling 2.8 million hectares or almost 30% of federal land in the plan area. The primary objective in these reserves was to ensure the survival of old-growth forest habitat that the owl requires. Some 1.9 million hectares outside the reserves, called the matrix, would be available for logging, except near owl nests.
To figure out what type of management would be most compatible with conservation and timber goals, the plan set aside 10 areas, totaling 603,000 hectares, for experimentation with restoration and harvesting approaches. It also called for different management strategies in various reserves, depending on local conditions. For instance, the pine forests east of the Cascade Range are drier and more prone to fire than those to the west, and decades of fire suppression had led to a buildup of brush and deadwood. They would need aggressive management, including thinning and prescribed burns, to prevent catastrophic fires. To the west of the mountains, by contrast, the idea was to accelerate the development of old-growth habitat by thinning second-growth plantations.
Because officials expected salmon to be listed under ESA, the plan also includes a substantial Aquatic Conservation Strategy. To prevent erosion, which adds sediment and can destroy fish habitat, the plan creates a system of riparian reserves: 100-meter-wide no-logging strips on either side of streams, totaling 903,000 hectares. As more was learned about watershed ecology, the buffers were to be adjusted to the minimum size necessary to conserve fish, thus allowing more logging.
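Those figures allow a rough consistency check: the total reserve area divided by the buffer width implies the length of stream network being protected. A minimal sketch, assuming 100-meter strips on both sides and ignoring overlaps and other special cases:

```python
# Back-of-envelope check: how many kilometers of stream do 903,000 hectares
# of riparian reserve imply, given 100-m no-logging strips on either side?
reserve_m2 = 903_000 * 10_000   # hectares -> square meters
buffer_width_m = 2 * 100        # 100 m on each side of the stream

stream_length_km = reserve_m2 / buffer_width_m / 1000
print(f"~{stream_length_km:,.0f} km of buffered streams")  # ~45,000 km
```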
Before it was implemented, Option 9 went to the departments of Interior and Agriculture, where it was modified—presumably to make it legally more airtight—without scientific advice from FEMAT. The biggest change was to expand the scope of protection beyond species listed under the ESA to include several hundred largely unstudied species whose status was unknown. “The precautionary principle went berserk at that point,” Thomas says.
Under this additional “survey and manage” program, before any ground-disturbing activity could take place, the agency had to check for the presence of any of these organisms, including lichens and invertebrates, and devise a plan to minimize impact on them. Although this provision has helped the overall plan hold up to court challenges, it had unintended and wide-ranging consequences. In particular, because it made the plan substantially trickier to implement, much logging and many adaptive-management experiments never got off the ground. “It almost made it impossible to pursue the actions in Option 9,” says Thomas, who was chief of USFS from 1993 to 1996.
## Charting progress
This spring, USFS and BLM began previewing the first monitoring results. In some cases, the data are too sparse to yield a useful assessment, because it took several years to design and implement the monitoring programs. Researchers also note that a decade isn't much time compared to the pace of forest succession and the century-long horizon of the plan.
For old-growth forests, however, the trend appears positive. Older forest increased by 245,000 hectares between 1994 and 2003, about the amount originally expected. “Perhaps we can conclude for the short term that the policies are working,” says USFS's Melinda Moeur, who led the old-growth monitoring team. But environmentalists counter that the net increase—tabulated when an average tree diameter crosses a certain threshold—means only marginal improvement in habitat, while the 6800 hectares of older forest that were clear-cut represent real setbacks. “The losses are catastrophic, while the gains are incremental,” says Doug Heiken of the Oregon Natural Resources Council in Eugene.
The plan fell far short of its goal in terms of timber production. About 0.8 billion board feet were expected to be put up for sale each year; in most years, less than half of that was. A major factor was the stringent requirements of the “survey and manage” program. Environmental groups also slowed things down with lawsuits to prevent any harvesting they thought detrimental.
This decline in timber harvesting had both economic and ecological effects. Although it cost roughly 23,000 timber-related jobs, that was less than some had feared. Jobs with USFS also disappeared and were not replaced. Yet over the decade, some 800,000 other jobs were created in the region. As former timber workers and USFS employees moved out, they were replaced by retirees and telecommuters. Overall, the Pacific Northwest did not suffer economically because of the plan, says forest economist Richard Haynes of USFS, but some rural communities were hit quite hard.

The shortfall of cutting also has ecological implications. The paucity of clear-cutting in former plantations, which would mimic the effects of a severe windstorm or major fire, means that the northwest could end up many decades from now with a lack of early successional forests, which are prized for their biological diversity. And because there was little thinning, which both provides timber and helps accelerate forest succession to old growth, the fire hazard continued to increase in eastern old-growth forests.
Another disappointment is that despite the progress in habitat preservation, the population of spotted owls is estimated to be declining at 3.4% per year. The culprit is a surprise: invasive species. Barred owls, which are native to the central and eastern United States, have moved west over the past few decades. The newcomers seem to dissuade spotted owls from hooting, and spotted owls are apparently more likely to leave their territory if barred owls appear. Moreover, their diets overlap 75%, so they may be competing for food as well. “Barred owls may ultimately be as big or bigger a threat than habitat loss,” says Eric Forsman, a wildlife biologist with USFS in Corvallis.
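To put the 3.4% figure in perspective, a constant annual decline compounds quickly. A minimal sketch (assuming, purely for illustration, that the rate holds steady):

```python
# Project the cumulative effect of a constant 3.4% annual decline.
rate = 0.034
for years in (5, 10, 20):
    remaining = (1 - rate) ** years
    print(f"After {years:2d} years: {remaining:.0%} of the population remains")
# After 10 years, roughly 71% remains; after 20, about half the owls are gone.
```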
A cornerstone of the original plan was adaptive management—essentially, learning by doing and monitoring—which had never been tried on this scale before. The plan called for setting aside 10 adaptive-management areas (AMAs), where scientists would test ideas about how to create or restore forest or riparian habitat and protect threatened species while integrating timber harvest. Most never got off the ground, which leaves the Forest Service with few new ideas to guide efforts to improve the plan. “It's been an extremely frustrating decade,” says forest ecologist Bernard Bormann of USFS. “The progress has been very slow.”
Several factors scuttled the projects. Tension and lack of trust between forest managers and environmental groups figured large. When environmental groups felt that foresters were using AMAs primarily to extract timber rather than to improve the ecosystems, they sued. However, Dave Werntz of the Northwest Ecosystem Alliance in Bellingham, Washington, says that trust has been building, thanks to better communication and good-faith efforts: “We're doing a better job today at implementing the Northwest Forest Plan than any time in the past.”
Other problems remain: When national forest budgets got tight, these experiments were axed or fell lower on priority lists. In addition, rather than being encouraged to try novel approaches, local managers had to offer evidence to the U.S. Fish and Wildlife Service (FWS) that experiments wouldn't harm listed species. In many cases, managers simply gave up trying to make projects work or walked on eggshells to avoid legal trouble. “Caution seems to have trumped creativity,” says Elaine Brong, BLM's director for Oregon and Washington.
There were a few exceptions. The Blue River Adaptive Management Area, for instance, was set up to recreate the effects of historical patterns of forest fires across 23,000 hectares in the Cascades near Eugene, Oregon. Cutting, combined with prescribed burns, has yielded timber at a low but constant rate. The project began only 5 years ago, so no results have emerged yet. But modeling indicates that the experiment will create more old forest than the standard design of the NWFP will and much more intermediate-age forests. “We'll end up with what we believe is a more natural system,” says geomorphologist Fred Swanson of USFS. And thinning experiments in the Siuslaw National Forest near Waldport, Oregon, are probing the best way to accelerate the maturation of younger forests, says Bormann, the lead scientist. Thanks to the thinning, the Siuslaw now produces more timber than any other national forest in the NWFP.
Overall, scientists say the plan is succeeding at its goal of conserving old-growth ecosystems. “So far so good,” sums up Thomas Spies, a forest ecologist with USFS. Conservation wasn't the exclusive goal at the outset, of course, but the agency seems resigned to the fact that it won't meet its timber harvest targets. “If we can keep them flat, then we'll be doing pretty good,” says USFS spokesperson Rex Holloway.
That state of affairs—if it holds—distresses the timber lobby but pleases environmentalists. The Bush Administration has, however, implemented several changes that could swing the balance, such as eliminating the “survey and manage” requirements last year to boost timber production. Other major changes, which affect all national forests, include removing the concept of retaining viable populations from the National Forest Management Act and lessening mandatory monitoring and requirements for environmental-impact statements. The changes “give total discretion to the local forest manager on how to manage the forest,” says Michael Leahy of Defenders of Wildlife in Washington, D.C., which has filed suit.
How these changes specifically affect the operation of the plan will be determined by the Regional Interagency Executive Committee (REIC), made up of officials from USFS, BLM, and other agencies. This group will also decide how to modify the plan based on what's been learned over the past decade. A key priority is “getting the AMAs to work,” says Linda Goodman, regional forester of USFS's Pacific Northwest Region and a REIC member. One strategy is increased involvement of FWS and the National Oceanic and Atmospheric Administration's National Marine Fisheries Service, which are responsible for endangered species, in research design so that scientists and managers have more latitude to take risks.
Yet as they hope to ramp up research and management activities for the next decade, Forest Service managers face a declining budget and downsizing. The agency's budget dropped 35% in the NWFP area during the first decade, which forced it to cut 36% of positions and close about 23% of its field offices in the plan area. “I'm very concerned,” says Jerry Franklin of the University of Washington, Seattle. “What's happening is a real threat to carrying forward the plan successfully.” To a large extent, the question of funding will determine how much monitoring and experimentation will continue—and what researchers will have learned about managing the forests 10 years from now.
12. RALPH CICERONE INTERVIEW
1. Eli Kintisch
Ralph Cicerone came to Washington, D.C., this month to lead the National Academy of Sciences—and walked smack into a hot climate debate
Last week, Ralph J. Cicerone showed the U.S. Senate what he might be like as the new president of the National Academy of Sciences (NAS): a politically savvy administrator who intends to make the voices of scientists heard in Washington, D.C., and beyond.
On consecutive days, the 62-year-old atmospheric scientist testified before separate panels examining the science of climate change. To the first panel, he explained firmly why the National Academies had waded into a brewing fight between an influential House committee chair and scientists whose research links rising temperatures to human activity: by volunteering to look into the questions that Representative Joe Barton (R-TX) had raised about Michael Mann's work (Science, 22 July, p. 545). In the second, he addressed a legislator's concerns about the economic costs of capping greenhouse gas emissions by ticking off seven ways in which efficient energy use would help average Americans.
Colleagues say his performance, scarcely 2 weeks into his 6-year term as NAS president, was typical of someone who knows how to talk to politicians, peers, and the public. “He's very good at putting all the pieces together from different disciplines to provide a simple answer for societal questions,” says atmospheric chemist Guy P. Brasseur of the Max Planck Institute for Meteorology in Hamburg, Germany.
Policy-oriented answers to complex problems are the academies' stock in trade. More than 200 times a year, they deliver measured judgments on issues from teaching evolution to energy policy. In 2001, while still chancellor of the University of California (UC), Irvine, Cicerone himself chaired a White House-requested academies' review of climate science that said human activities could result in higher temperatures, drought, and increased rainfall while noting uncertainties. “We were all on the hot seat,” says botanist Peter Raven, who led the academy committee that nominated Cicerone to succeed Bruce Alberts. “But he really came through, with rigor and accuracy.”
Although Cicerone called his back-to-back Senate appearances “probably more than I'd like to do,” a busy, high-profile schedule is hardly a novelty for him. He maintained a productive research lab at Irvine during his 7-year stint at the helm, avoiding serious cuts to programs and personnel despite a tough budget environment. Raven says that Cicerone's public relations and fundraising skills helped him nab the NAS job.
Cicerone began his career as an electrical engineer studying atmospheric plasmas. At the University of Michigan, Ann Arbor, in 1973, he and Richard Stolarski showed that free chlorine atoms could decompose ozone catalytically, earning the pair a citation when UC Irvine colleague Sherwood Rowland won the Nobel Prize in 1995. His interests steadily broadened, from methane's role in greenhouse warming to climate change, and he reported his findings in regular testimony on Capitol Hill.
Cicerone spoke last week with Science about his new job. Here are excerpts from that conversation.
On his goals for NAS:
“In my lifetime, I think I've seen a pretty pronounced slippage of the public's enthusiasm for and understanding for science. And I'm going to try to get a number of academy members together and some of our staff to look at our past efforts on communicating and see what we can do better. …
“I'm [also] really worried about the U.S. science and technology base. … We have a couple of groups working right now to assemble some measures of how we track our progress and our relative standing around the world. … We'll be working this one with the National Academy of Engineering and with scientific and engineering society leaders, too.”
On the timeliness of reports:
“That's always been a criticism, but I think things have sped up a little bit. … There have been some fast ones lately, like what to do with the Hubble [Space Telescope]. … You couldn't take on the number of studies we're doing now if all of them were, let's say, 2-month turnaround. And I think by nature, many of the questions we're asked to look at are longer term, anyway.”
On the number of women members:
“Last year's [entering class] was the all-time record, with 19 out of 72. … We're doing better, but there are still a lot of ways in which women are not being involved enough, like in our choice of award winners and officers of the academy. We've got a long way to go.”
On his career progression:
“I think there's a real difference between leadership and management and administration. … [In 1994] we had a fantastic dean of physical sciences who had to step aside for personal reasons, and they asked me to take over the job. I was out of town when the faculty met. … [But] I've always enjoyed trying to do several things at once. Then when the opportunity came to be chancellor of the campus, … someone said to me, 'You've complained a lot at the way other people do these jobs. Maybe it's time for you to try it.'”
On a funding gap between the life and physical sciences:
“In the physical sciences, I think there are many discoveries out there waiting to happen, largely because of our new capabilities in measurement. … I think it was necessary to increase the portfolio for biological and health sciences, and I'm really glad we've done it. But the physical sciences have fallen too far behind.”
13. GENOMICS
# Tackling the Cancer Genome
1. Jocelyn Kaiser
Genome sequencers and cancer experts hope a pilot NIH project to find genetic glitches in tumors will build support for a complete catalog of human cancer genes
The scientists who brought you the human genome project are teaming up with cancer researchers for another big-biology moon shot. They want to compile a catalog of all common mutations found in human cancers, with the goal of jump-starting molecular approaches to treating cancer. Last week, at a Washington, D.C., workshop to explore the idea, scientists sketched out a game plan for a 3-year pilot project. And two institutes of the National Institutes of Health (NIH) in Bethesda, Maryland, announced a $100 million down payment on a project expected to cost $1.5 billion over a decade.
The human cancer genome project was hatched by a group of advisers to the National Cancer Institute (NCI) led by Eric Lander of the Broad Institute in Cambridge, Massachusetts, who unveiled the initial plan in February (Science, 25 February, p. 1182). It would identify the genetic glitches that lead to uncontrolled cell growth in most cancers. Lander's group proposes systematically searching for the common mutations in 12,500 tumor samples from 50 major cancer types. “If everybody were to pull together, we could at least know the enemy in a decade,” Lander says.
Planners say the project dovetails with the push by NCI Director Andrew von Eschenbach to translate genomic discoveries into the clinic. Von Eschenbach says the idea “will embed in the entire strategy of the NCI,” which has agreed to share the cost of the pilot project with the National Human Genome Research Institute. The full project, however, would require additional funding from Congress.
Cancer genome project backers hope to repeat the success of the human genome effort. But determining the sequence of the 3 billion bases in human DNA, although controversial at first, was a well-defined task compared to what cancer researchers are proposing. The new project would collect samples from thousands of patients, analyze those samples for mutations found in at least 5% of cases, and measure gene-activity patterns in the tumors.
Because a full sequencing of each sample to find mutations would cost too much, Lander's group has proposed at first sequencing only the coding regions of 2000 or so genes implicated in cancer. Even then, much of the data may be meaningless, notes Michael Stratton of the Sanger Institute in Hinxton, U.K., which has a smaller cancer genome project under way. Stratton presented data at the workshop on protein kinases, enzymes involved in cell signaling, for several cancers. Only a small fraction of mutations in kinase genes cause abnormal cell growth, he reported, and although some tumor samples carried several mutations, others had none.
Given the uncertain results from sequencing, cancer researcher Ronald DePinho of Harvard University pushed for analyzing gross genetic changes that are relatively easy to detect and known to lead to cancer, such as extra copies of genes and chromosomal translocations. “That might be the quickest way to get the most bang for the buck,” DePinho says. Other workshop participants suggested that mutations in regulatory regions—which determine how much of a protein is produced—could prove even more important than coding regions. And some urged using emerging technologies to take a closer look at epigenetics, such as changes in DNA methylation patterns that affect whether genes are turned on or off.
Scientists also debated exactly where to begin the pilot. Some argued that an in-depth analysis of one type of cancer would be more likely to hit a home run—for example, find a mutation that flags which patients would benefit most from a particular treatment. But others argued that studying several cancers would boost the odds of a treatment breakthrough and keep more patient advocacy groups on board. “We need deliverables,” said Bruce Stillman, president of Cold Spring Harbor Laboratory in New York.
All this testing will require large amounts of tumor tissue with reliable clinical information attached and proper consent from patients. Because genes expressed in tumors change over time, scientists may need to test tumors at different stages. Attendees pondered whether to collect new samples over several years or to rely on existing tissue banks, assuming the researchers who collected them are willing to share. They also worry about community resistance to making the data freely available quickly. That provision may give some cancer researchers “the heebie-jeebies,” said one speaker.
Despite the challenges, workshop participants agreed to start by focusing on a few tumor types drawn from existing samples. Tissue banks will be invited to participate later this year, and the best proposals will determine which cancers to study. Meanwhile, the pilot will also begin developing methods for collecting new samples for later—and presumably cheaper—analysis. Other requests for applications will seek proposals for technologies, both high-throughput sequencing at genome centers and small-lab techniques such as microarrays for expression analysis.
To succeed, proponents will need lots of friends from a research and advocacy community that may have doubts not only about the project's eventual price tag but also about the value of fishing for data rather than investigating a hypothesis. “We have a lot of questions,” says Fran Visco, president of the National Breast Cancer Coalition in Washington, D.C., which is still studying the idea. “How are we going to prioritize so it's not creating data to keep scientists busy and not really helping patients?” That's one of many concerns scientists must address to make the cancer genome project a success.
14. ANIMAL BEHAVIOR
# Strong Personalities Can Pose Problems in the Mating Game
1. Elizabeth Pennisi
A closer look at confrontational behavior in various animals shows that aggression may help individuals survive, but it can impair reproductive success
For male fishing spiders, courtship is dangerous business. Females of the species are notoriously aggressive, and the male—which signals his arrival by gently tapping the surface of the water—often ends up as a meal rather than a mate. Yet each time the female eats her would-be partner, she lessens her chance of reproducing, leaving evolutionary biologists wondering just why this behavior persists. Aggressive female spiders just can't stop themselves, says J. Chadwick Johnson, a behavioral ecologist at the University of Toronto, Scarborough.
Johnson is among a small group of researchers investigating the “personalities” of animals from spiders and fish to insects and birds. Although many biologists once strongly protested attributing human qualities such as personalities to animals, more and more investigators are adopting such descriptive language. Individual animals, even simple invertebrates, do have consistent behavioral quirks that endow them with discernible dispositions, says Andrew Sih, a behavioral ecologist at the University of California, Davis.
Although he and his colleagues think of these dispositions as personalities, they have tried to steer clear of being criticized as anthropomorphic by instead coining the term “behavior syndromes.” In addition to identifying such syndromes in animals, Sih, Johnson, and several other investigators are finding that animal personality traits, such as being bold toward potential predators or aggressive toward cohorts, can have drawbacks, despite the traits' apparent value, say in hunting or defending territories. For example, Renee Duckworth of Duke University in Durham, North Carolina, has shown how one bluebird species' aggressiveness allows it to steal habitat from another—yet that same trait impairs the bird's reproductive fitness in certain conditions. Looking at animal personalities, and the good and bad they bring, represents “an important paradigm shift in our approach to the evolution of behaviors,” says Duckworth.
## Dangerous liaisons
Many researchers credit Sih for bringing to prominence the idea that animal personalities carry survival risks. The notion plays off a proposal made 25 years ago by the late paleontologist Stephen Jay Gould and geneticist Richard Lewontin, both from Harvard. At that time, the two stirred up the evolutionary biology community by arguing that maladaptive traits could persist if they were linked with beneficial ones in an often-precarious balancing act. For example, guppies living around predators reproduce as early as possible so as to pass on their genes before being eaten. But the eggs slow gravid females down, making them easier prey earlier in life, a finding that lent credibility to Gould and Lewontin's idea.
Now, by showing that a personality trait that is counterproductive in one context perseveres because of its utility in another, Sih is moving Gould and Lewontin's ideas “into a new arena,” says evolutionary ecologist Andrew Hendry of McGill University in Montreal, Canada. Sih argues that because some animals are very limited in their ability to moderate their personalities according to particular situations, they are stuck with the consequences throughout their daily lives.
Take the North American fishing spider, the subject of Johnson's studies. In 1997, Göran Arnqvist of Uppsala University in Sweden and a colleague suggested that aggressive females who eat males who come courting were simply following their strong instincts to catch prey. The drive to hunt would serve juvenile females quite well, enhancing their growth, particularly when competition for food was intense. But those instincts, if unfettered, may backfire when the females become adults and need mates.
Johnson has recently followed up on this proposal, verifying key elements. He found that even as young spiders, certain females were aggressive hunters, spending more time than their cohorts searching for the next meal and, as a result, bulking up more. This aggressiveness was also reflected as boldness in encounters with predators, Johnson discovered when he mimicked a bird's approach by tapping the water near these spiders. Although all fishing spiders dove into the water when they detected such tapping, the female superpredators surfaced more quickly.
These daredevils also were more likely than less aggressive females to try to snack on males, Johnson reported last month at Evolution 2005 in Fairbanks, Alaska. “Boldness to a simulated predator is proportional to the tendency to attack males,” he said. Overall, he concluded, the bold, aggressive female spiders ate more food, but they compromised their survival and productivity by treating males as food and taking predation risk lightly.
Daniel Promislow of the University of Georgia, Athens, is surprised that aggression can pervade all aspects of a female spider's life. If the fishing spiders could modulate their personality, he explains, then the females should be as aggressive as possible in hunting, less aggressive in the face of danger, and mild-mannered when approached by males—but that's not what the experiments indicate. “We often think of behaviors as relatively plastic traits compared to morphology, physiology, or life history,” he says, but Johnson's results challenge that premise.
Counterproductive aggression is not limited to female arachnids. Sih has found that militant males are the troublemakers among insects known as water striders. Sih graded aggressive tendencies in males by observing, for example, how much they fought, how long they were active, and how often they chased after potential female mates. He then put together 12 groups of water striders, each consisting of males with similar personalities from least aggressive to most aggressive, in separate artificial ponds. The researchers then put females into the ponds and monitored each group's mating successes and failures, keeping track of each individual's partners within their group. The investigators also tracked each water strider's feeding and tallied how often an individual retreated to riffles, supposedly a more dangerous habitat but also a refuge from aggressive peers.
Females tended to avoid the most aggressive males, the researchers found. Indeed, females often refused to put up with any “Rambo” male in their midst and moved as far away from him as they could, diminishing both his and his peers' mating opportunities. Aggressive individuals couldn't turn down their swagger. They ultimately “hurt not only themselves but, by being too aggressive, the entire group,” Sih reported at the evolution meeting.
## Group dynamics
Working with small fishes called three-spined sticklebacks, Alison Bell of the University of Glasgow, Scotland, has found that living conditions may narrow the range of personalities within a group of animals. Whereas researchers such as Sih and Johnson typically focus on the behavior of individuals in a population, she is assessing variation in “in your face” behavior—the combination of boldness and aggression—between and within whole populations of the fish. Because stickleback populations have diverged genetically, so might their behavior in different places, she hypothesized.
To examine this possibility, Bell collected groups of 20 juveniles from 13 different populations of freshwater and marine sticklebacks in various lochs and harbors around Scotland. Some of these populations regularly faced predators—pike, trout, and the like—and others lived in relatively predator-free environments. To measure boldness of the fish from each population, she set up a tank with a pike behind a glass divider, then counted how often individual fish approached the pike to inspect it. For a gauge of aggressiveness, she counted the number of times a fish isolated in one tank tried to nip at other sticklebacks in an adjoining tank separated by glass.
The fish within each of the 13 populations seemed to share similar mindsets. Bell found that when one individual from a population fearlessly approached the pike, so did most of the others from those groups. In general, most of the fish within a particular group acted the same way, she reported. And fish from the boldest populations, as measured by the pike test, were also the most confrontational toward other sticklebacks. In the wild, says Bell, this bullying could translate into bigger territories, better food, and even increased mating for the biggest bully. But the fearlessness toward predators may also cost fish in these aggressive groups their lives, suggesting that whole groups of animals, not just individual ones, can have personality traits that threaten reproductive success at times.
Bell also observed that the bold, aggressive stickleback populations had higher breathing rates, more spines, and heavier body armor than more wimpy populations. Those correlations suggest that “behavioral syndromes might be part of a larger package of evolutionary [traits],” says Sih.
Duckworth's studies indicate that sometimes the bold personality of one species can help it beat out similar, but shyer, species, at least in a particular environment. Observations over the past 40 years show that western bluebirds have greatly expanded their range in Montana, displacing mountain bluebirds. By tallying the number of each bluebird in places where both species are present, Duckworth documented that western bluebirds in just a few years supplanted mountain bluebirds at valley study sites. Much of the western bluebird's success sprang from its fierceness, suggests Duckworth.
She placed tree swallows, a bluebird competitor, in nest boxes, and then watched as either of the bluebird species approached the box. She found that western bluebirds were more aggressive, an indication that they are better able to acquire and defend their territories against the swallows. The male western bluebirds also were fiercer than mountain bluebirds when competing for mates, another sign of pushy temperaments.
In this case, aggressiveness seems to go hand in hand with reproductive success. But a closer look suggests that, as with fishing spiders and water striders, the western bluebird's obnoxiousness can come with a cost. Duckworth points out that western bluebirds spend so much time defending their nests and courting that they neglect their offspring. This poor parental behavior is especially problematic in tough environments, such as mountains. In contrast, mountain bluebirds are loyal parents and have an edge where weather can be rough, says Duckworth. As a result, they have maintained their foothold in Montana's mountains. “Behavioral syndromes can have profound ecological and evolutionary consequences by mediating species coexistence,” Duckworth says. Thus, in animals, as in people, personality can make or break one's success in life.
15. # The Hunt for a New Drug: Five Views From the Inside
1. Jeffrey Mervis
The world of drug discovery in big pharma can seem pretty mysterious to outsiders. But some patterns are visible from the inside
Waiting for his lunch to arrive, Graeme Bilbe wants to make sure that the reporter on the other end of his cell phone understands how hard it is to discover a new drug. The U.K.-born, Basel, Switzerland-based head of global neuroscience research at Novartis is dining in southern California with a former pharma colleague, Tamas Bartfai, now chair of the department of neuropharmacology at the Scripps Research Institute in La Jolla, California. The hors d'oeuvre is a lecture on the industry's staggering attrition rates.
“How many ideas do you think you need [to develop a drug]?” demanded Bilbe, who's been with Novartis since 1989. “Take a guess. One thousand? Ten thousand? You need at least that many, if not more. The chances that any of those ideas will ever become a drug are vanishingly small.”
Those mind-boggling numbers color everything about research in big pharma and make this research sector distinct from any other area of industrial research. Very few pharma scientists actually work on products. Instead, the vast majority toil at a much more basic level, looking for potential targets, synthesizing compounds that might act on those targets in a way that would be therapeutic, and then making the compound “druggable.” “I have never worked on a successful drug,” confesses Derek Lowe, a medicinal chemist with 16 years in the industry who writes what may be the only Web log (blog) dedicated to pharmaceutical research (www.corante.com/pipeline). “Heck, I haven't worked on anything that anybody with a disease has ever put in their mouths.”
The research environment has also been reshaped dramatically in the past decade or so by mergers, which can abruptly shift a researcher's focus onto a whole new area of study. And unlike research on a new computer chip or a more efficient engine, the output from pharma research labs is not so easy to measure (see p. 726).
The world of big pharma research is shrouded in a culture of secrecy that goes well beyond the specific compounds and targets a company is working on. Here's how an otherwise candid Lex Van der Ploeg, head of Merck's new research lab in Boston (see sidebar, p. 723), puts it when asked about his productivity goals. “If I told you that this lab was going to generate, say, eight lead candidates this year, then our competitors could look at the number of people we employ and figure out how many people it takes us to develop a candidate compound,” he says. “Then they would compare it to how many it takes them. And if we're lower, they'd try to figure out why, and what they can do to become more efficient. That would give them a competitive advantage.”
Knowing where they stand is an all-consuming interest for pharma executives. As a result, investment analysts and corporate consultants churn out reams of reports each year on industry trends, from early-stage alliances with biotech companies possessing intriguing compounds to the latest technology “platforms” that can improve efficiency. The documents are sprinkled liberally with breathless predictions about how these trends “will change everything.”
Most of the time they don't, of course. In the meantime, however, these big-picture studies provide little idea of what the view is like from inside the industry's labs. From the scores of scientists and research managers we interviewed for this special section, we have chosen five individuals whose stories provide glimpses of how those big trends trickle down to the labs and computer workstations around the globe that represent ground zero in the hunt for new drugs.
## A view from the bench: Change as a constant
Eric Gulve joined big pharma in 1993 in hopes of ameliorating the ravages of diabetes. A research assistant professor at Washington University in St. Louis, Missouri, Gulve became part of a team at G. D. Searle (the pharmaceutical arm of Monsanto) that was just beginning to tackle insulin resistance in type 2 diabetes. Since then, he's worked on cholesterol metabolism for Monsanto, cardiovascular diseases for Pharmacia, and then diabetes again for Pharmacia. Today he's with Pfizer, seeking potential targets to treat two forms of cardiovascular disease, thrombosis and hypertension.
Although he's worked for three companies in 12 years, Gulve is no hired gun. He hasn't even changed his commute to work. Rather, the 46-year-old physiologist has spent his entire pharma career in the same four-story industrial lab in the St. Louis suburb of Creve Coeur. The job changes were the result of three corporate mergers, culminating in Pfizer's $53 billion acquisition of Pharmacia in April 2003. Those mergers triggered top-down reviews of existing research, followed by projects or entire areas of therapeutic research being cancelled or transferred to another site. During one gut-wrenching transition, Gulve spent weeks interviewing scientists for a revamped department—without knowing whether he would be their boss or even if he would still have a job with the new company.

There's no way to know if Gulve's career path is typical. Some scientists remain at one company their entire lives, and others switch jobs often and voluntarily. But mergers have clearly changed the landscape of big pharma in the past decade. Gulve's current employer, with $52 billion in sales last year, has become the industry's leader thanks to its ingestion of Warner-Lambert in 2000 and Pharmacia, each of which in recent years had swallowed smaller fish such as Parke-Davis, Upjohn, Monsanto, and G. D. Searle.
“I'm not complaining about any of the decisions that were made,” he says. “But it is frustrating when you've worked so long and hard on a project and still haven't gotten far enough along to know if your hypothesis is right or wrong. I know that mergers are part of the business. But I hope that I never have to go through another one.”
## A view from a loyal critic: The art of drugmaking
Derek Lowe may be unique in the pharmaceutical industry: He's a medicinal chemist for a big pharma who writes a blog on drug discovery. His column (www.corante.com/pipeline) is an irreverent look at the industry. It's filled with pinpricking commentaries on the latest clinical results, corporate reshufflings, and overhyped trends in the business. He's not embarrassed to describe his own failures, either, including an on-again, off-again attempt to test a hypothesis that stubbornly resists verification.
His daily musings generate 25,000 hits a month. That traffic feeds Lowe's need for an audience, a hunger that offsets the lack of payment for his labors. “It hasn't helped my research,” he confesses about the blog, which he started in 2002. “But it's given me a much broader perspective on the business.” His readers are both colleagues—“insiders write me about how they've tried the same things in their labs that I write about”—and outsiders with a voyeuristic streak. “Where else would I get to hear from people saying, 'When I took that drug you wrote about …' I've also done some historical reading about the fashions that sweep through the industry and the fact that most of them don't pan out.”
A 1988 chemistry Ph.D. from Duke University, Lowe wanted to teach at a small liberal arts college but couldn't find the right job. Instead, answering a job ad led to a career in industry that he says “has worked out pretty well.” He currently works for Bayer but goes to great lengths to separate his dual identities as a researcher and blogger.
Lowe doesn't hesitate to point out the foibles of the pharmaceutical industry. “We're not angels. And when we mess up, I say so. If I was rah-rah all the time, nobody would read me.”
Even so, he's as dedicated to improving human health through modern drug discovery as any pharma bigwig. Taking umbrage at a recent story in Business Week entitled “Biotech, At Last” that paints academic research as nimble and pharma science as hidebound, Lowe writes: “It's true that many of the basic discoveries that have led to the current biotechnology industry came from academic research. That's just as it should be. But none of it would have been turned into human therapies without corporate research and development.” And he's personally offended at the article's characterization of pharma's relationship with biotech in the 1980s and 1990s. “Shied away from biotech for years? We pumped uncountable billions into it, much of which we never saw again.”
## A career-eye view: A taste of industry
What's a postdoc doing in pharma? Scottish-born David Dornan has spent nearly 3 years at Genentech, which has a 30-year-old policy of seeking out promising young scientists to pursue basic research. And although the company has a rule that its postdocs don't move into permanent positions, Dornan sounds like someone whose career aspirations may have been altered by working at the South San Francisco, California, biotech giant.
“My future? I think of it every day,” says the 27-year-old Dornan, who earned his Ph.D. in molecular oncology at the University of Dundee, U.K. “And the longer I'm here, the more difficult it is to envision becoming an academic.”
A member of a team led by Genentech's head of oncology V. M. Dixit, Dornan was a co-author of papers in Science and Nature last year that describe the group's work on how cancer-related proteins are degraded by the ubiquitin system. And although the work is fundamental science, Dornan has also been bitten by the drug discovery bug. “We found something that could be a therapeutic, and we have a unique chance to put it into development. It depends on the next phase. And if it works, we'll be handing it off to the chemists. The point is that it's possible.”
Dornan is realistic about his chances of staying on the West Coast. “California is great, but you have to be willing to go where there's a job.”
## A view from a distance: Landing on her feet
When Myrlene Staten saw the job ad in the New England Journal of Medicine in 1989, she thought it could have been written just for her. “Roche wanted a junior faculty member with clinical experience in metabolic diseases,” she recalls. Her work as an endocrinologist at Washington University in St. Louis made her a perfect fit, she realized, and before long she had moved from Missouri to New Jersey to help the company develop drugs for obesity and diabetes.
It was the start of a 15-year odyssey through big pharma that she recalls with mixed emotions from her current post at the National Institute of Diabetes and Digestive and Kidney Diseases in Bethesda, Maryland, where she runs a program to encourage academics and small companies to develop new therapies for type 1 diabetes. After 4 years at Roche, she moved to Lederle, which was soon bought by Wyeth. In 1995, she headed out west to Amgen, where she was part of the team doing the company's first clinical trials on the protein leptin, once highly regarded as a potential diet drug. Then it was back to the East Coast for a 2-year stint with Bristol-Myers Squibb before joining Upjohn/Pharmacia, where she was head of metabolic diseases.
A 2000 merger with Searle resulted in a spinoff of the new company's metabolic diseases portfolio to a new biotech based in Stockholm, Sweden. But Staten found a way to stay in New Jersey. “Searle had a cardiovascular group, and it had an opening. So when the boss called me in and asked me how I felt about working on cardiovascular diseases, I said, 'Real good.'”
Two years later, Staten had to handle another merger, this one with Pfizer. After helping the new company analyze its combined portfolio—“you present your project to senior managers, who then go behind closed doors and come out months later with a list of what stays and what goes”—she was faced with finding a new position. Choosing a very different direction, Staten jumped back into the nonprofit world, landing at the National Institutes of Health.
She's lost none of her zeal for finding new medicines that can help people. But she's 15 years wiser about how difficult that is, and how failure is a much more likely option. “My goal, then and now, is to develop a drug that can achieve a 20% permanent weight loss with no unusual side effects. But I think maybe I'll have to leave that to the next generation.”
## A view from the executive office: Finding his niche
At 54, Bob Stein is a drug industry veteran who has done it all, including two stints with biotech. After several coast-to-coast moves, he says he's exactly where he wants to be: president of Roche's research lab in Palo Alto, California.
A graduate of a joint M.D./Ph.D. program at Duke University in Durham, North Carolina, Stein came to Merck in 1981 along with Edward Scolnick, who later became the company's legendary research chief. (Scolnick had originally offered him a job at the National Cancer Institute in Bethesda, Maryland, but then signed on at Merck before Stein said yes.) Stein says he found the company's scientific environment “much more exciting” than academic jobs he had been offered. Several years later, as head of pharmacology at Merck, “where I worked on some pretty good drugs,” he was dispatched one day to San Diego, California, to assess a possible collaboration with a company, Ligand Pharmaceuticals, that “had great science but no infrastructure for drug discovery.” After recommending that Merck walk away from the deal, Stein was headhunted to become Ligand's chief scientific officer.
Within a year, Ligand had raised $250 million in a public offering, and Stein had negotiated eight collaborations with big pharma. But the grueling schedule—including talks at 13 investment meetings and 250 business presentations—and the amount of work it took to “get other people to do what needed to be done” at the small company led him to embrace an offer from a mentor to return to East Coast pharma. The job was as head of research and preclinical development at DuPont Merck Pharmaceuticals, a joint venture of the two companies. “I'd have been happy to stay there, too,” Stein says about his 6 years there. But DuPont decided that the joint venture was chewing up too much of its research budget, he says, and after Bristol-Myers Squibb bought the company for $7.8 billion, “I didn't like what I saw.” So he jumped to Incyte Corp., where he spent 2 years as president and chief scientific officer before joining Roche in 2003.
Stein clearly loves the horsepower of a big pharma and enjoys the chance to apply what he's learned over a quarter-century about drug discovery. But greater capacity means a greater chance to fail, too. “The goal is to develop superior medicine,” he says. “But the process includes a million handoffs, and any dropping of the ball could be potentially devastating.”
16. # Boston Means Business for Drug Companies
1. Jeffrey Mervis
BOSTON, MASSACHUSETTS—Asked why he robbed banks, Willie Sutton is said to have responded: because that's where the money is. After more than a century, big pharma is following that logic by setting up shop here amid what may be the world's largest concentration of biological brainpower.
Two of the world's biggest drug companies, New Jersey-based Merck and Connecticut-based Pfizer, have opened small outposts to supplement their global R&D networks and put company turnaround artists in charge of them. A third pharma, Novartis, has gone even further, relocating its main research facility, the Novartis Institutes for Biomedical Research (NIBR), to a spectacularly remodeled former candy factory and two other buildings adjacent to the Massachusetts Institute of Technology (MIT) and picking an industry novice to run it. The 1000-strong scientific work force assembled in the past 2 years represents a serious bid by the Swiss-based company to find the sweet spot in drug discovery.
“Our kickoff career fair attracted more than 2000 people, and it was a fabulous opportunity to meet and greet leading scientists and business leaders,” says Lynne Cannon, vice president for human resources at NIBR. “That would have been difficult to do in Groton [Connecticut, the site of Pfizer's largest lab] or Princeton, New Jersey.”
Boston may be the cradle of American independence, and Cambridge the home of the country's oldest and most prestigious university, but until the past few years the region wasn't even on the map of big pharma. Area academics with backgrounds in molecular biology had formed many biotechnology companies, some of which aspired to become the next big pharma. However, the nation's chemical-based drug industry was confined to the mid-Atlantic region and the Midwest.
Pfizer made the first move in 1999, opening up a Discovery Technology Center in Cambridge that offered the latest technology to drug discovery scientists throughout the company. Last year, officials expanded the center's mission to the entire pipeline of drug development and plucked Phil Vickers from the company's ranks to run it. A 45-year-old biochemist who enjoys a challenge and a change in scenery, Vickers was born in England, received his Ph.D. at the University of Toronto, and did a postdoc at the National Institutes of Health in Maryland before joining Merck's Frosst laboratory in Montreal in 1988. He came to Pfizer in 1994 and earned his stripes in a series of management posts on both sides of the Atlantic.
Perched on the edge of the MIT campus, the renamed Research Technology Center aims to satisfy Pfizer's need for technological support by mixing in-house expertise with the skills of local academics and start-up companies. Vickers says his youthful but growing shop—he plans to add 25 scientists to the current 110-person roster by the end of the year—“offers the attributes of a biotech with the resources of a big pharma.”
Despite running an operation almost 10 times the size of Pfizer's, Mark Fishman describes NIBR in similar terms. A molecular cardiologist who had pioneered the use of zebrafish for gene discovery at Harvard Medical School (HMS) and Massachusetts General Hospital, Fishman is hoping to “functionalize the genome” by applying genomic knowledge to diseases where the biological mechanism is already understood. The lab's location—the region has supplied more than half the institute's talent, not to mention an ever-widening network of academic collaborations—provides an added boost, he says.
Already, Fishman has raided HMS to find global heads in cardiovascular research and modeling disease. He's also tapped biotech and pharma for chiefs in oncology, molecular pathways, and discovery chemistry, luring them with the prospect of painting on a fresh canvas. “We're getting who we want, and almost nobody has left,” he crows.
Across the Charles River and adjacent to Boston's medical complex sits Merck's Edward M. Scolnick Research Laboratory. Named in honor of its former research chief, the new 12-story, glass-faced lab opened last fall, and its site head, Lex Van der Ploeg, is busily recruiting talent. Van der Ploeg, 50, a specialist in infectious diseases who joined Merck in 1991, took on the challenge after a year spent shifting the focus of Merck's San Diego facility from neuroscience to stroke. Soon after he left, however, corporate officials decided to shut the lab and shift some resources to other sites.
His mission is to rev up the company's efforts in developing treatments for cancer, obesity, and Alzheimer's disease. He expects to double the size of the basic research team, now 140, by 2007, beginning with oncology and then moving into the neurosciences. “The proximity to talent is terrific, and our success rate is about 90%,” he says about current recruiting efforts. About a quarter of the scientists have migrated from other Merck labs.
17. # It's Still a Man's World at the Top of Big Pharma Research
1. Jeffrey Mervis
For a few years after their company was acquired by Wyeth in 1995, molecular biologist Abbie Celniker and several female colleagues at Genetics Institute in Cambridge, Massachusetts, hoped that the new management might boost their careers. But eventually they came to the opposite conclusion. “There was an established culture [at Wyeth] that said it would be harder to influence our peers…. Simply put, we didn't see a career progression unless we learned to play golf and use the men's room.”
What Celniker, now senior vice president for strategic research at Millennium Pharmaceuticals in Cambridge, had sensed becomes obvious by looking at the leadership rosters of the research divisions of big pharma: Drug discovery is a man's world. Not one of the chief scientists or heads of research at these companies is a woman. The precious few senior women executives with science Ph.D.s or M.D.s are most often found on the development/business side of the company or holding corporate posts without line responsibilities.
Why that's the case, however, is much less clear. Ask a man and you're likely to hear that the industry is no different from the rest of society. Then he'll note that his company is very concerned. “It's a tough issue that I think about a lot,” says Jonathan Knowles, head of global research for Roche. “I'd like to understand it better.” He'll also say that things are getting better.
Ask a woman—who by definition has not made it to the top—and her answer will be quite different, although equally nuanced. “The forces keeping women scientists down are more psychological and cultural than legal,” says Joanne Kamens, a project team leader at Abbott Bioresearch Center in Worcester, Massachusetts, and president of the state chapter of the Association for Women in Science. “People still have a problem seeing women as leaders rather than as caretakers and mothers. Men who decide to spend more time with their families also tend to be seen as weaker. But at least they have the option. If the father can't help out at home, it falls on the women.”
Lijun Wu, a 41-year-old unit head within the cardiovascular group at the flagship Novartis Institutes for Biomedical Research (NIBR) in Cambridge, Massachusetts, remembers being asked as a graduate student if her decision to get married meant that she planned to drop out of the program to have a family. Several years later, after becoming pregnant with the first of her two children, colleagues told her that her bosses at Millennium were wondering if she'd return after giving birth. “My career was going well, and they didn't ask me directly. But I think it's unfair; they wouldn't have wondered that about a man.”
Wu doesn't understand why any employer would care whether she even has a family. But most pharma executives acknowledge that family responsibilities do matter. “One possible reason [for the dearth of women] is that any senior position requires a huge commitment,” says Knowles. “It would be difficult for someone to do that type of job while also looking after a home and small children.”
Amgen's research chief Roger Perlmutter offers similar thoughts. “I'm reluctant to generalize about gender differences,” he says. “At the same time, you can't get around the fact that the burden of early child rearing may be a career breaker [for some women].”
That burden can show up in subtle ways, notes Lynne Cannon, vice president for human resources at NIBR. “It's not just a question of having the door open to women,” she says. “Sometimes it's about how the door gets opened. If I can't stay until 8 p.m.—when a lot of decisions get made—because I have to pick up my kid at 6 from daycare, then I may miss out on something important.”
Many pharma companies have recently begun to identify and assist women scientists who want to move up the corporate ladder. Novartis has a “women to watch within the lab” program, Cannon says, to provide ongoing career guidance and support for outstanding women. “Mentoring is great,” says Cannon, “but there's a danger if you attach yourself to one person and that person leaves.” Although that's true for men, too, the dearth of women makes any loss of support costly.
Wyeth has a similar program for top-performing women, says Robert Ruffolo, president of research and development, that's modeled on a gender-blind program for the top 1% of its researchers. Gail Cassell, vice president for strategic planning for Eli Lilly, says that the Indianapolis, Indiana-based company offers a variety of programs for women scientists, from tips on how to ask for a promotion to networking with colleagues in other fields.
None of the programs has run long enough to accumulate meaningful data, however. And it's not clear that company executives have thought in much detail about what they want to achieve. “We don't know what enough is,” Ruffolo admits. “But we consider it a win as long as we're attracting more women and minorities each year than are leaving the company.”
18. # Productivity Counts--But the Definition Is Key
1. Jeffrey Mervis
With costs soaring, every company says it's becoming more efficient. But what exactly does that mean?
For all but a tiny fraction of big pharma scientists, their work isn't really about discovering new drugs to cure disease and improve human health. It's about looking for druggable compounds: molecules that might bind to targets that could block or enhance a biochemical process that leads to a particular pathological state or impairment. And success isn't measured by how much they have contributed to a drug or therapeutic medicine on the market. Rather, it means “hitting your numbers,” that is, achieving a preset goal of “deliverables”— be they compounds, animal data, or patients—that argue for moving along to the next step in the process.
Trouble is, that approach is hugely inefficient. The current cost of discovering and developing a new drug may be as high as $1.9 billion, according to an extrapolation by Joseph DiMasi of the Tufts University Center for the Study of Drug Development in Boston, Massachusetts, whose 2001 report pegging the number at $802 million was based on medicines that entered clinical trials as long as 20 years ago. Lowering that number is the current Holy Grail of the industry. “Productivity is our biggest challenge and the number one topic of conversation among my colleagues,” says Steven Paul, president of Lilly Research Laboratories and the top scientist at the Indianapolis, Indiana-based drug giant.
But consensus on the goal doesn't mean agreement on how to get there. Big pharma management features a multiplicity of organizational models, all aimed at achieving greater efficiency. Some companies such as Pfizer are highly centralized, whereas others pride themselves on having small, semiautonomous units. “Pfizer is probably at one end of the spectrum. Everything related to drug discovery has to go through either New London [Connecticut] or Sandwich, U.K.,” says industry analyst Roger Longman, co-managing partner of Windhover Information Inc. in Norwalk, Connecticut.
At the other end, he notes, is U.K.-based GlaxoSmithKline (GSK), second in global pharmaceutical sales to Pfizer. Under the leadership of research chief Tachi Yamada, GSK has created Centers of Excellence in Drug Discovery around the world in six therapeutic areas, plus one center for biologics. Each has its own budget and hiring authority. “I wanted them to be small, and studies show that you can know the names of 300 people but no more,” says Yamada. The centers “have total control of their budgets and hiring. But they still have targets.”
Falling somewhere in the middle is a “hub-and-spokes” system that Roche follows that allows its corporate headquarters in Basel, Switzerland, to keep tabs on research sites in the United States, Europe, and China. And although that arrangement can mean 2 a.m. teleconferences for Bob Stein, who oversees 1100 people at Roche Palo Alto, California, he says it's vastly preferable to having “one big R&D operation that, like a 10-foot spider, has outgrown its body plan.”
There are also many views on which metrics are the most meaningful, and if metrics can even take you where you want to go. One popular view, espoused by Pfizer CEO Hank McKinnell and others, embraces “shots on goal.” That's the belief that more compounds going into clinical trials translates into more successful outcomes and, ultimately, more marketable drugs.
But what kind of shots are most important? For Yamada, the key metric “is not the number of targets validated, or the number of chemicals selected. It's proof-of-concept in patients.” His counterpart at Novartis, Mark Fishman, puts it even more bluntly. “[A drug candidate] is not a success until we've treated a patient with it.”
At New Jersey-based Wyeth Pharmaceuticals, which sits on the centralized end of the management spectrum, R&D president Robert Ruffolo has done a scientific analysis of the science of drug development. A 55-year-old pharmacologist and 28-year industry veteran, Ruffolo likes to say that “we've got numbers on everything.” And since coming to Wyeth in 2000, Ruffolo has probably gone further than any other pharma honcho in trying to quantify what his researchers should accomplish at each stage of the process.
“Some people say that they can pick winners,” Ruffolo told a meeting of pharma scientists gathered this spring in Washington, D.C. “But I believe that it's still a crapshoot. I can't pick winners, and after 30 years in this business, I haven't met anybody who could.”
What Ruffolo can do, he says, is ride herd on the factors that he can control. Hence his insistence on production targets that take attrition into account and, if met, would allow for a sufficient flow of new compounds through the pipeline. Raises are based on achieving the goals, and it's all computerized.
The magic numbers for Ruffolo are 12, 8, and 2. That's a three-link chain of the annual number of compounds entering development, the number of investigational new drugs entering clinical trials each year, and the annual number of new drug applications submitted to the U.S. Food and Drug Administration. He says that his approach has helped turn around what he calls the company's “pathetic” track record of submitting new drug applications in the years before he arrived. And best of all, it's proven to be sustainable: Wyeth has met the targets every year since 2001, he says. “That's the most important point. It's a steady-state model.”
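Read as a steady-state pipeline, Ruffolo's three targets imply rough stage-to-stage survival rates. The following is a back-of-the-envelope reading offered purely for illustration, not arithmetic he reports:

$$\frac{8}{12}\approx 67\%\ \text{(development to IND)},\qquad \frac{2}{8}=25\%\ \text{(IND to NDA)},\qquad \frac{2}{12}\approx 17\%\ \text{(overall)}$$

In other words, the targets already assume that roughly five of every six compounds entering development will never become a new drug application.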
Ruffolo admits that approach didn't win him any popularity contests at Wyeth. “Scientists hate this approach,” he says. “When I was a scientist, we used to say that you can't manage science. But it needs to be.” Those who didn't buy into the approach left the company, he says—and those who have remained appreciate knowing where they stand.
Richard Scheller takes a very different approach as executive vice president of research at Genentech, which has eschewed large acquisitions and does all research at its ever-expanding South San Francisco, California, campus. A neuroscientist and former Howard Hughes Medical Institute investigator at Stanford University, Scheller came to Genentech in 2001 after deciding that its culture meshed with his own philosophy of doing science. Genentech's corporate strategy, labeled Horizon 2010, does include research goals for its more than 600 scientists over the next 5 years. But although they specify the number of new products to be moved forward for each of the company's three major therapeutic areas, some goals omit key steps in the process. And they aren't linked together in a formal manner.
Sitting in a top-floor office overlooking San Francisco Bay—and the pier that was allegedly the favorite fishing hole of cofounder Herbert Boyer—Scheller describes an ongoing study of Genentech's attrition rate and the nature of its pipeline in a way that suggests he doesn't view it as quite the priority that Ruffolo does. “It turns out that different types of projects fail for different reasons,” notes Scheller, who says that he “doesn't know very much about big pharma” despite the fact that, based on the value of its stock, Genentech is the fifth-largest drug company in the world.
“For example,” Scheller says, “I'm expecting small-molecule throughput rates to be lower than for protein therapeutics. I'm also leading a project to understand the bottlenecks. And I think that they will turn out to be what you'd expect: Some projects will be underresourced, some will suffer from poor internal communications. When we're finished, we'll react appropriately. But I suspect that when we fix one problem, some other bottleneck will appear.”
Don't be fooled by that dispassionate tone, however. Scheller isn't afraid to be just as hard-nosed as Ruffolo in assessing the performance of his troops. But he doesn't plan to do it from a spreadsheet. Knowing how to maintain a healthy pipeline, he says, “is more or less a matter of intuition.” And the most important thing about dealing with scientists, he says, “is to be clear about the reasons for your decision [for killing a project or shifting resources]. I'm not always going to be right. But I've earned a lot of respect from my credentials at Stanford and my achievements as a scientist.”
19. # I See You've Worked at Merck …
1. Jeffrey Mervis
Senior hires at Amgen demonstrate how one company's loss can be another's gain
In 2000, Merck CEO Raymond Gilmartin came down from New Jersey to Washington, D.C., to extol the value of research partnerships involving the government, academia, and industry. In a talk marking the centennial of the Association of American Universities, Gilmartin mentioned two executives he hoped would help the company set up a research lab in Boston (see sidebar, p. 723) to tap its rich talent pool: Roger Perlmutter, head of basic research, and Ben Shapiro, his predecessor and current head of external research. He also noted that Merck's success in the development of the first protease inhibitor to treat AIDS, Crixivan, by a team led by chemist Paul Reider, rested on the government's long commitment to basic biomedical research.
Fast-forward 5 years—a generation in big pharma—and none of the four men still works at Merck. In May, Gilmartin stepped down earlier than expected, a casualty of the company's voluntary withdrawal last fall of Vioxx, its COX-2 inhibitor arthritis painkiller. Shapiro had retired in 2003, in keeping with company policy for executives who reach age 65.
Perlmutter and Reider remain very active in the drug business. But they now work for Amgen, the southern California biotech giant that industry wags have dubbed “Merck West.” Their migration is illustrative of the company's role over the years as both a magnet for top academic talent and a fertile hunting ground for competitors. And even as market analysts wonder if Merck can recover from the blow to its reputation from Vioxx and the financial burden of stock shares trading at one-third their 2000 level, several successful alumni say that they retain warm feelings for “Mother Merck.” Several senior research officials at Merck declined comment for this story.
Shapiro, for one, thinks “it's logical that Merck would help seed the leadership ranks of other companies.” In 1990, when he was a department chair at the University of Washington (UW), Shapiro says he jumped at an invitation from Ed Scolnick, the former head of Merck research, to become head of basic research because “Merck had a reputation for caring about science.” In 1997, Shapiro, in turn, recruited UW immunologist Perlmutter.
Shapiro was grooming Perlmutter for the top research job. “He was special,” Shapiro says. “There aren't that many academics who would be good at senior management in big pharma.” But Scolnick had other plans. In December 2000, he brought on Peter Kim, a biochemist at the Massachusetts Institute of Technology. Although it would be another 2 years before Scolnick retired and Kim succeeded him, the line of succession was clear. So it was no surprise that in January 2001, Perlmutter was named executive vice president of research and development at Amgen. He says that he had been weighing several career options and chose Amgen because of “the magnitude of its commitment to building up its R&D operation.” He also was attracted to its relative youth—it was founded in 1979—compared with its century-plus-old pharma competitors, and its strength in biologics.
But Perlmutter wasn't turning his back on Merck. He says he had declined an offer to join Amgen in 1996 because “I wasn't ready. I didn't understand the totality of drug discovery and development enough to have the impact that I wanted to have.” Merck gave him that knowledge, he says: “My experience there informs everything that I do here.”
One lesson was to tap into his Merck connections. One call went to a former medical and graduate school colleague, pathologist Joseph Miletich, who had spent most of his career at Washington University in St. Louis, Missouri, before Shapiro convinced him that Merck offered “a bigger canvas.” Four years and several promotions later, Miletich heard a similar recruiting pitch from Amgen, which he joined in 2002 as senior vice president for research and preclinical development.
Not long after, Perlmutter reached out to Reider, who had come to Merck in 1980 right out of graduate school on a mission to conquer dread diseases. Reider found appealing Perlmutter's description of Amgen as an eager teenager full of promise, as well as his pledge that the company would work only on treatments for important and unmet medical needs.
“I can't get very excited working on the ninth molecule to correct male impotence, or to treat male pattern baldness,” says Reider, Amgen's vice president for chemistry. “I'm somebody who needs to go home every day with a sense that I've accomplished something. I'm also 53, and it would be nice if we could find a treatment for Alzheimer's by the time I need it.”
Reider says that his “dream job” would be to work with both Perlmutter and Kim, whom he says he “cherishes.” But he's worried that Merck could stumble and lose its way. “Amgen today is where Merck was 15 or 20 years ago. The ability to pounce on an idea and take it into development quickly is so important. Merck still has tons of good people. It would take 30 years to lose that edge. But when you get so big that your chief concern is what products to bring to market in what time frame, that's a warning sign.”
20. # The Brains Behind Blockbusters
1. Jennifer Couzin
The inventors of top-selling drugs talk about their unlikely paths to success, and whether today's scientists can pull off similar feats
How does a scientist hit a home run in the drug business? For Kenneth Koe and Willard Welch, it took curiosity, determination, and a series of lucky breaks. The payoff for their employer, Pfizer, was huge: the antidepressant Zoloft, one of 19 drugs that last year generated more than $2 billion in revenues in the U.S., according to IMS Health, a company that collects and markets health care data.

Koe, a biochemist, and Welch, an organic chemist, are members of a small, exclusive club of drug discoverers whose labors have helped catapult their companies into the ranks of the world's most profitable. But while aggressive advertising has made household names of drugs such as Lipitor, Nexium, and Celebrex, their inventors remain relatively unknown. Science interviewed nearly a dozen of them to learn the stories behind their discoveries.

Although these superinventors identified very different drugs, across oceans and decades, their experiences are more similar than one might expect. For one, few grasped the value of their discovery at the time or anticipated the hurdles standing in their way. Even fewer profited from their accomplishment, a fact that many quietly resent. Some doubt that they would be allowed to pursue the same lines of research today that they chased 15—or 40—years ago. “It was nice and unsophisticated”—and more fun—“in those days,” says Bruce Roth, the Pfizer chemist who, at the tender age of 31, helped invent Lipitor in 1985.

For many scientists near or at retirement age, the advantages of today's powerful drug-hunting technologies are offset by what they see as a loss of freedom to stretch one's mind around novel ideas. “Too much computer and not enough brain,” grumbles former Merck biochemist Alfred Alberts, who helped invent Mevacor, the first successful statin, as well as its $6-billion-a-year successor Zocor. Strategies to unearth blockbusters today are “not working,” says Alberts, who retired in 1995 after 20 years at Merck. “I think that's fairly clear.”
## Graced by luck
What distinguishes past generations of drugmakers from the present? One difference is their starting point. Scientists then were often running blind, chasing new therapies without the benefits of modern biochemistry and the clues it can provide. Serendipity often planted the seeds of a new drug. Celebrex, Pfizer's antiarthritis drug, was chosen for further testing over another chemical in what was “more or less a coin flip,” says John Talley, one of its inventors. Talley did the work while at Searle, which later became part of Pfizer. He now heads drug discovery at Microbia, a biotech in Cambridge, Massachusetts.
But luck was only part of the equation. “Chance favors the prepared mind—is that how the saying goes?” asks Welch, who joined Pfizer's Groton, Connecticut, lab in 1970 and retired 3 years ago. Koe came to Pfizer in 1955 and studied penicillin offshoots before he was transferred to the company's tiny team of central nervous system researchers. After dabbling in potential antianxiety compounds, Koe turned to a then-new concept: the effects of serotonin in depression. He soon roped Welch into working on candidate antidepressants. Within a year, Welch and Koe had hit on Zoloft.
One thing that hasn't changed for drug inventors, says Welch, is the need to stay abreast of developments in a given field. This is particularly crucial because, then as now, researchers are frequently transferred from their area of expertise into a new therapeutic area—say, from endocrinology to cardiology—and are expected to bring themselves up to speed quickly. Inventors recall their obsessive tracking of published chemical structures, patent filings by rival companies, and clinical trials of comparable drugs—even those that fail—as signposts helping point the way to their blockbuster.
The company's chair of atherosclerosis and its head of research thought the drug deserved a chance, however, and coaxed senior management into funding a short-term clinical trial of healthy employees. The drug appeared to be more effective than any other statin. Today, by a $6 billion margin, it's the world's best-selling drug—and Roth, now a vice president of chemistry, swallows his own invention daily.

Reflecting on how drug discovery has changed since the days of Lipitor, Roth mourns the luxury of time that's been lost. But he also sees important enhancements. Regulatory hurdles mean that drugs developed today are safer, more selective, and more potent, he believes. Back then, “we didn't understand the science”—evinced by Parke-Davis's surprise at how well Lipitor performed in people. A drug's biology is still often baffling, but its behavior in humans is better understood, scientists agree.

Tobert, however, perceives a more fundamental breakdown in the drug discovery process. “A company's got to be prepared to take some risks,” he says, risks that must be balanced against possible harm to patients. Public fears about drug safety are widespread these days, driven in part by recent revelations that blockbusters are not free of serious hazards: best-selling selective serotonin reuptake inhibitors such as Zoloft can trigger suicidality, and COX-2 inhibitors such as Celebrex and Vioxx increase the risk of heart attacks.

## No financial windfall

Even if their drug is later roiled in controversy—an experience that Celebrex inventor Talley says he's found deeply distressing—scientists take enormous pride in their creations. Nearly all those interviewed recalled being told, in their early days as industry scientists, that few drug hunters ever find one that makes it to market. “Virtually everything we do fails,” says Roth.

Success, if it comes, tastes sweet. But it doesn't fatten the discoverer's wallet. At best, scientists receive some stock options as a reward for their work. Although companies can informally thank researchers, such as with pay raises or promotions or internal awards, they do not normally offer drug discoverers a revenue share or a substantial cash reward once the drug reaches the market.

It's a policy many would like to see changed. Talley, who in addition to Celebrex discovered the related drug Bextra (which was recently removed from the U.S. and European markets due to safety concerns), struggled to pay college tuition for his two daughters while the therapies he created brought in billions for the company. “The ideal … would be to get royalties,” says Alberts, who discovered Mevacor. “I don't know how to do it, but I think there should be a better way.”

Not everyone agrees. Roth and some others say the current setup—in which companies suffer the risks and reap the benefits, and scientists enjoy a steady salary whether they hit on a blockbuster or not—may be the fairest. Monetary rewards could “create an enormous competition [between] people internally,” says Kennis, a competition he believes would be unhealthy.

Pharmaceutical giants such as Pfizer, AstraZeneca, and GlaxoSmithKline say they do not pay discoverers of new drugs for their finds. It's not something his pharmaceutical industry group has been asked to consider, says Jeffrey Trewhitt, spokesperson for the Pharmaceutical Research and Manufacturers of America in Washington, D.C. In the end, it's not lack of financial reward that drives the occasional inventor to leave the place where his discovery was made.
Rather, what most bothers these scientists is how drug discovery has evolved. Company management increasingly favors “the short-term developmental route” rather than investing in projects more speculative in nature, says Craig Smith, the co-inventor, with Raymond Goodwin, of the rheumatoid arthritis drug Enbrel. The pair discovered Enbrel while at the Seattle, Washington, biotech Immunex, initially working on the side when their bosses “weren't looking,” says Smith. Both left after Immunex was bought for $16 billion by biotech giant Amgen in 2001.
Smith predicts that drugmakers who avoid untrodden territory, based on inquiries that may not lead anywhere, will hit a wall down the road. “In the longer run,” he says, “you're shooting yourself in the foot.”
# Saving the Mind Faces High Hurdles

John Travis
Fierce competition to find a drug that could delay onset of or prevent Alzheimer's disease is a relatively recent phenomenon. Why was this potential blockbuster shunned for so long?
Cancer has been arguably the most feared disease in the United States for the past several decades. Now, as the baby boom generation starts to inch past middle age, a new contender has emerged for that unappealing label: Alzheimer's disease (AD).
An estimated 4.5 million Americans already have the neurodegenerative condition, and that number could more than triple by 2050. Devastating to both those afflicted and their caregivers, the illness exerts a $100-billion-a-year drain on the U.S. economy, according to the Biotechnology Industry Organization. “Alzheimer's disease probably has a larger impact on society than any other disease, in terms of economic and emotional costs,” says Dale Schenk, chief scientific officer at Elan, a biotechnology company based in Dublin, Ireland.

So when Science looked for a condition that illustrates the challenges confronting the pharmaceutical industry—and the opportunities that beckon—AD was an obvious candidate. A drug that slows the disease could be especially lucrative because it presumably would need to be taken well before the first symptoms are likely to appear, and then for life. “Everyone recognizes that this is a great, unmet medical need. The drug company that succeeds here will be a very successful company,” says Peter Boxer, associate director of central nervous system (CNS) pharmacology at Pfizer Global Research and Development in Ann Arbor, Michigan.

That recognition is fairly new, however. Academic and federal scientists had to lobby hard in the late 1980s to get Parke-Davis to conduct the first major clinical trial of an Alzheimer's drug. Although that drug, tacrine, and related compounds known as acetylcholinesterase inhibitors rang up about $3 billion in sales for AD therapy in 2003, they are less than ideal medicines. They don't halt the underlying progression of the disease, and their slowing of cognitive decline is temporary.
But even a less-than-perfect AD drug could still be a blockbuster for companies. It would also be a boon for society: Because the prevalence of Alzheimer's disease increases exponentially with age, drugs that provide a modest 5-year delay in the onset of symptoms would reduce the number of affected people by as much as 50%.
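To make the arithmetic behind that estimate explicit, here is a back-of-the-envelope sketch, assuming the commonly cited approximation that AD prevalence roughly doubles with every 5 years of age past 65:

$P(a) \approx P_0 \cdot 2^{(a-65)/5} \quad \Longrightarrow \quad P_{\text{delayed}}(a) = P(a-5) = \tfrac{1}{2}\,P(a).$

Shifting the onset curve back by one doubling time therefore cuts prevalence roughly in half at every age, which is where the 50% figure comes from.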
## Industrial nihilism
Although drug development for AD is a relatively young endeavor, the condition was identified nearly a century ago by German neuropathologist and psychiatrist Alois Alzheimer. In 1906, he gave a lecture on a 51-year-old woman who had died with dementia. An autopsy found that her brain was littered with extracellular masses (plaques) and intracellular clumps (neurofibrillary tangles) that have since become the diagnostic hallmark of the disease that now bears his name. But for decades, because it wasn't diagnosable until after death, AD remained an obscure condition, and study of the illness was a scientific backwater. “No one wanted to get into [AD research] because it was seen as an unpromising career path leading to a scientific dead end,” recalls Zaven Khachaturian, former director of the Office of Alzheimer's Disease Research at the National Institute on Aging (NIA).
The same pessimism about AD held true in industry. “There was very little interest because the disease could not be diagnosed, and the prevailing wisdom considered it an untreatable normal consequence of aging,” says Khachaturian. The lack of a cause was equally stifling to drug development. “There was a nihilism around [AD],” says neuroscientist Geoff Dunbar, who has worked on CNS drugs at several major companies and is now at a small biotech firm, Targacept, in Winston-Salem, North Carolina. “No one knew what to do with the plaques and tangles.”
In the absence of hard evidence, a few vague theories took root. Some researchers argued that the dementia in general stemmed from inadequate blood flow within the brain, giving a slight boost to a class of drugs called cerebral vasodilators. Similarly, compounds that promoted learning and memory in animals—drugs known as nootropics, which means “growing the mind”—were also suggested as dementia treatments. “The assumption was that that would be sufficient to help the deficits in Alzheimer's disease,” recalls Boxer.
## The scientific hook
Drug development for AD didn't truly get started until the cholinergic hypothesis emerged in the late 1970s, largely through the efforts of British neuroscientists such as Peter Davies, now at Albert Einstein College of Medicine in New York City. In 1976, for example, he and a colleague reported that compared to normal brains, those from several people who had had the brain disorder had decreased levels of an enzyme that helps make the neurotransmitter acetylcholine. Those data, combined with earlier evidence that drugs blocking the cholinergic system produced memory problems in people, led Davies and others to argue that the core defect in AD was a lack of acetylcholine.
“Until that time, dementia was primarily looked at as an amorphous mental disorder,” says Khachaturian. “The cholinergic hypothesis was the first scientific hook that could provide a clear path to understand the underlying neurochemistry of AD. It also gave us a plausible scientific rationale for developing treatments because so much was known about the cholinergic system.” That knowledge, says Dunbar, “meant we were in neuropharmacology that the industry understood.”
There was also an obvious therapeutic road map to follow. It drew from work a decade earlier showing that the symptoms of Parkinson's disease stemmed from the death of dopamine-producing neurons and that L-dopa, a dopamine precursor, could bring about miraculous recoveries in patients. Could curing AD, researchers asked, be as simple as replacing acetylcholine?
Not quite. Efforts to deliver acetylcholine precursors to the brain met with little success. In 1986, however, a different strategy grabbed the spotlight. A research team reported remarkable benefits for a few AD patients taking the well-studied compound oral tetrahydroaminoacridine, also called tacrine, which blocks the activity of an enzyme that breaks down acetylcholine.
Quickly deciding to push for a validation study on the efficacy of tacrine, Khachaturian and the directors of the recently created, NIA-funded network of Alzheimer's Disease Research Centers sought a company to formulate the compound, which was off-patent, into various doses and quantities needed for a full-scale trial. They found an advocate in Elkan Gamzu at Parke-Davis. “Having a person inside that company lobbying for an efficacy study was very important to getting that first drug to go,” says Khachaturian.
Parke-Davis, a division of Warner-Lambert Co. that later became part of Pfizer, started its tacrine study in 1987. But the drug, marketed as Cognex, failed to pass muster with a Food and Drug Administration (FDA) advisory board in 1991. After further trials with higher doses, the drug won FDA approval in 1993, albeit not without controversy. “The scientific community was not very enthusiastic about it because the benefits were marginal and it had a lot of side effects,” says Khachaturian.
Still, its approval validated the cognitive tests that had recently been developed to gauge drug efficacy for the disease and provided clear guidelines on how to conduct clinical trials for AD. “If the FDA had set the bar very high and not approved it, then that would have been the kiss of death. No other company would have gotten into developing [AD] therapies,” says Khachaturian. “Once tacrine was approved, a lot of other companies jumped on the bandwagon” to develop safer and more potent acetylcholinesterase inhibitors, notes Boxer. (Six of the seven drugs currently approved in the U.S. for AD are in this class.)
The quick follow-up to tacrine by other drugs targeting the same enzyme illustrates an important principle of drug development. Even before a company with a head start on a target proves the value of a class of drugs, other firms will generally have similarly acting “me, too” drugs with improved properties in their pipeline. For competition's sake, says Boxer, “you can't wait for other companies' clinical data.”
The acetylcholinesterase inhibitors spurred research into other ways of tweaking the cholinergic system. Acetylcholine operates through two classes of receptors, muscarinic and nicotinic, and major pharmaceutical companies vigorously pursued muscarinic agonists until troublesome side effects slowed their development. “Big pharma is still plugging away at the muscarinic hypothesis,” says Dunbar. That has left room for his current firm, Targacept, to develop AD drugs that target nicotinic receptors.
## The amyloid hypothesis
Still, halting the decline of the cholinergic system in AD is not the same as curing, preventing, or even slowing the actual pathology of the illness. In fact, the benefits of acetylcholinesterase inhibitors are so questionable that a government panel evaluating drugs for the U.K. health care system recently issued a preliminary opinion that the drugs aren't worth buying, a viewpoint the makers of the drugs have strongly challenged.
Most companies seeking more fundamental treatments for AD are focusing on a protein fragment called β amyloid, which in 1984 was shown to be the primary component of the brain's plaques. That discovery spawned the amyloid hypothesis, which holds that the buildup of β amyloid causes AD by harming or killing brain cells. In 1991, scientists found that several families plagued by an early-onset form of AD had mutations in the gene encoding β amyloid precursor protein (APP), from which β amyloid is derived. A few years later, similar disease-causing mutations were found in genes encoding proteins called presenilins that were subsequently shown to affect APP processing into β amyloid.
The amyloid hypothesis provided a bounty of new targets and potential strategies. Some companies tried to prevent β-amyloid molecules from clumping together, for example, while others began testing whether known drugs, such as statins and nonsteroidal anti-inflammatories, alter β-amyloid production.
The novel hypothesis opened the door for small biotech companies, too. Neurochem, which was founded in 1993 in Laval, Canada, drew upon research licensed from Queen's University in Kingston regarding proteoglycan molecules in the brain that bind to β amyloid and promote the formation of the amyloid fibrils that make up plaques. The company has developed small organic molecules that mimic these proteoglycans, occupying their binding sites on β amyloid and preventing fibrils. Earlier this year, Neurochem launched a phase III trial of its lead Alzheimer's treatment, Alzhemed, seeking to become the first to bring an amyloid-modifying drug to market.
Few firms are trying to directly block β-amyloid molecules from aggregating, notes Dennis Garceau, senior vice president of drug development at Neurochem. “Big companies like to target enzymes; it's a more conventional target,” he says. Indeed, the fiercest competition has been to develop secretase inhibitors, compounds that block the enzymes that cut APP into the smaller β-amyloid fragment.
The race began in 1999 when a β secretase that acts upon APP was identified. (After the initial published report by Amgen, several other firms quickly revealed that they too had identified the same potential β secretase, perhaps setting the stage for a patent fight.) “Everyone went after that target right away. It was such a rational target,” says Boxer, who recalls hearing that another company had launched a major effort to inhibit the enzyme within a week of the announcement of its discovery.
The identified β secretase was a particularly inviting target because it belonged to the same family of enzymes as HIV's protease. Several protease inhibitors had already been approved as AIDS drugs, allowing companies to draw on those experiences.
It takes two cuts to make β amyloid out of APP, however. Drug companies weren't ignoring the other key enzyme, γ secretase, but it simply wasn't clear what that enzyme was. A theory that presenilins were γ secretases took several years to be accepted after its 1999 proposal. Still, even without a clear identification of the enzyme, several firms had developed in vitro systems displaying γ-secretase activity upon which they could test potential inhibitors.
Current efforts to develop secretase inhibitors remain shrouded in corporate secrecy. Bristol-Myers Squibb reportedly began clinical testing of a γ-secretase inhibitor in 2001 and stopped because of side effects, but it has never publicly reported those results. Eli Lilly has also just begun clinical testing of a γ-secretase inhibitor. The challenge in developing such drugs seems to be blocking their action on enzymes needed for activities other than cutting up APP. γ secretases also cleave a protein called Notch, for example, that's important in development and the immune system. As a result, companies must find compounds that more specifically affect APP processing.
## A surprise vaccine
While the amyloid hypothesis has offered drug researchers a number of obvious targets and strategies, it also led to the most surprising attempt to thwart AD. In the late 1990s, long after his colleagues at Elan had tested their most promising compounds, Schenk suggested injecting a few mice with β amyloid itself. His goal was to raise an antibody or other immune response against plaques. “No one thought it would work. Even after the experiment was done, the results weren't analyzed for a while,” recalls Schenk.
The results were stunning. The immunization slowed or prevented the development of β-amyloid plaques in young mice and even wiped away preexisting ones in older mice. The episode illustrates how one person's idea can change the direction of a company or a field. “Dale was really brave,” says John Trojanowski of the University of Pennsylvania School of Medicine in Philadelphia.
How does big pharma react when a disease-treating strategy such as the Elan vaccine comes out of the blue? Most large companies working on CNS drugs have experience with small-molecule drugs, not antibodies, says Boxer. And although firms can always tweak an enzyme inhibitor to make a better drug and carve out some market share, vaccines tend to either work or not. “We look at this stuff and go, 'Huh?'” says Boxer. “Where is your unique drug?” As a result, he says, most companies have conceded the vaccine approach to Elan.
The unexpected emergence of the Elan vaccine illustrates the importance and limitations of animal models. For several years, companies pursuing the amyloid hypothesis were largely stuck in vitro. Attempts to genetically engineer mice that overproduced APP seemed fruitless; there was even a notable fraud case in which a researcher published a picture of a human plaque as evidence that his mice had developed β-amyloid clumps. “The entire field was trying to make a mouse model,” says Schenk.
Then a failing biotech company trying to sell off its assets approached Elan and saved the day, ultimately setting the stage for the vaccine's proof of principle. The struggling company's transgenic rodents were greatly overexpressing APP, and when Elan scientists checked out the mice, they found numerous brain plaques. Elan acquired the rights to the mice and quickly began testing its compounds. The company eventually allowed the β-amyloid vaccine strategy to be tested. Without that animal model, the idea might have faded away.
Having animal models reduces the risk, and thus the cost, of developing drugs. For small companies such as Neurochem, they can also be a lifeline to continued funding from venture capitalists and other sources. “Until we got proof of concept in vivo, people were a little bit skeptical,” says Garceau.
Yet animal models also reveal the risks of drug development. Elan's vaccine approach seemed to work well in mice, but brain inflammation in a few patients triggered an abrupt halt to the clinical trial. Elan, together with its partner Wyeth, is now conducting clinical trials with plaque-targeting immunotherapy strategies such as passive administration of antibodies to β amyloid.
But how can a company pursuing β amyloid-based therapies for AD know if its drug or treatment is working? Showing that people maintain the same cognitive and memory skills, or improve such skills, can be difficult and time-consuming. Unfortunately, there are no well-accepted AD biomarkers, like cholesterol levels for heart disease or viral load for AIDS. The lack of animal models and biomarkers presents “two difficult issues for developing a drug,” says Boxer.
The biomarker obstacle has led companies such as Pfizer, Merck, Eli Lilly, and Elan to partner with the Alzheimer's Association, NIA, the National Institute of Biomedical Imaging and Bioengineering, and FDA to identify ways of measuring progression of mild cognitive impairment and AD in people. Industry will pick up one-third of the cost of the $60 million, 5-year effort, known as the Alzheimer's Disease Neuroimaging Initiative, that will test various ways of imaging brain plaques and tangles as well as measuring levels of proteins in blood, urine, and cerebrospinal fluid. “It's so difficult [to develop an Alzheimer's treatment without biomarkers] that the drug companies are collaborating,” says Boxer.

## What about tau?

It's sometimes forgotten that the effort to develop β amyloid-based treatments represents a huge and costly gamble on a single, unverified theory of AD. There are many other hypotheses being explored by small numbers of scientists or a handful of tiny biotech firms.

One is the second major theory of AD, which involves tangles, the intracellular brain lesions identified by Alois Alzheimer. In the early days, Alzheimer's researchers were divided over whether plaques or tangles were more important. The identification of β amyloid in plaques and disease-causing mutations in the APP gene relegated tangles and their primary constituent, a hyperphosphorylated form of a protein called tau, to a sideshow. “We were the token other pathway at every meeting,” recalls Trojanowski; he and his wife Virginia Lee have been the most vocal proponents of tangles and tau research.

For companies, that lack of interest was partly a matter of simple economics. “Even big pharma can only pick a certain number of targets,” says Dunbar, noting that Bristol-Myers Squibb, where he used to direct clinical development of CNS drugs, has never had a tau program to his knowledge.

Tau is now drawing more attention, in part because of a 1998 paper in which researchers showed that mutations in a gene encoding one of the human versions of tau lead to a rare form of dementia that bears some similarities, such as tau tangles, to AD. “It launched studies that should have been done in the early 1990s,” says Trojanowski.

Trojanowski contends that tau, when it becomes overloaded with phosphate groups, can no longer bind to and stabilize cellular filaments called microtubules. That change disrupts the ability of neurons to transport molecules down the long extensions known as axons. Back in 1994, his team proposed that microtubule-stabilizing compounds, such as the cancer drug Taxol, might treat AD. And earlier this year, in the 4 January Proceedings of the National Academy of Sciences, they offered a proof of concept in mice genetically engineered to overproduce a human version of tau. These rodents suffer from a neurodegenerative disorder that includes tanglelike masses of hyperphosphorylated tau and impaired axon function. As hypothesized, the administration of Taxol sped up the animals' axonal transport and ameliorated their motor problems. Trojanowski and his colleagues are now working with Angiotech Pharmaceuticals in Vancouver, British Columbia, and other firms are sniffing around. “I know pharma is interested,” he says. “My phone rings more often.”

## Partnerships and future

Will the next significant drug for AD come from a small biotech company or big pharma? Given the economics of drug development, it's likely that the Davids and Goliaths will end up working together.
“It's very difficult for a small company to take a drug all the way to market,” notes Targacept's Dunbar. His company's strategy, for example, is to push a drug only through phase II trials and then “outsource it to big pharma.” And Neurochem says it would be open to partnerships with bigger companies given the right deal. Big pharma is certainly happy to let smaller companies take the initial plunge before it swoops in and buys up a promising drug. “They have such big wallets they can wait until almost all the risk is taken out,” says Dunbar.

Still, the search for Alzheimer's drugs should leave room for many companies, small and large, to prosper. “This disease will need a cocktail of treatments,” predicts Neurochem's Garceau.

# Pharma Moves Ahead Cautiously in China

Yidong Gong*

*Gong Yidong writes for China Features in Beijing.

Companies can't resist the lure of China. But full-service research labs remain on the horizon.

SHANGHAI—As recently as 5 years ago, China was terra incognita for big pharma research organizations. To be sure, the global drug giants have been selling their products in China since the 1980s, and quite a few have built manufacturing plants there. Yet concerns over enforcement of the country's fledgling laws governing intellectual property rights (IPR) had prevented companies from taking the logical next step: opening a lab to do drug discovery.

Those qualms remain. But they are balanced by the industry's growing desire to dip into China's intellectual talent pool. In 2002, Novo Nordisk broke from the pack and set up a small research facility in Beijing, the company's only research site outside its home in Denmark. Later that year, U.K.-based AstraZeneca set up the first Western-owned clinical research organization in China to collaborate on multisite trials. The next year, U.S.-based Eli Lilly inked a deal with the Chinese company ChemExplorer to purify, synthesize, and analyze compounds supplied by its researchers. And last fall, when the Swiss-based Roche dedicated its new research and development lab in Shanghai, Roche Chair and CEO Franz Humer predicted that China will “someday [be] one of Roche's important R&D centers rather than a mere market and production base.”

Humer may well be right. And yet, his open-ended time reference sent the subtle but unmistakable message that big pharma still harbors doubts about China's ability to protect any valuable intellectual property that a company might create within its borders. Indeed, the research director of the Novo Nordisk site, Wang Baoping, concedes that his company is taking a risk. “China has in place a series of laws related to IPR protection. But their enforcement, particularly the amount that would be paid to the damaged side, remains a problem that needs to be addressed,” says Wang, a U.S.-trained geneticist. “I am not sure when the IPR environment in China will be truly favorable.”

China's growing appetite for Western drugs—the current $15 billion market is expected to quadruple by 2010, and then double again by 2020—has certainly caught the attention of every drug company. So has its cheap but skilled scientific labor force. Not only do Ph.D.s receive annual salaries of $10,000 or less, but the most expensive aspect of drug development—clinical trials—costs an estimated 30% less in China than in the United States or Europe. And then there is its growing prowess in science. “I'd say that setting up our own research lab there is only a matter of time,” Novartis CEO Dan Vasella remarked this spring. “It's not so much a need as it is a hunger to take advantage of the opportunities.”
Despite those inducements, the research centers being set up are shadows of pharma's existing full-service shops in the West. The Novo Nordisk and Roche labs are much smaller—some 40 to 50 scientists—and narrower in focus, typically staffed by medicinal chemists and biologists. But company officials still hope that the labs can make big contributions. Roche's Chen Li, chief scientific officer for the Shanghai site, says the key to its success in medicinal chemistry will be “giving full play to the initiatives of the scientists here, including access to information” throughout Roche's global research network. At Novo Nordisk's Beijing lab, scientists focus on protein expression to supplement the company's portfolio of diabetes drugs.
Lorenz K. Ng, vice president of research alliance and business development for Lilly Asia, says, “We looked at China because of its good supply of chemists.” Its partner ChemExplorer has a team of 175 chemists. Still, as one ChemExplorer scientist notes, “What we do is only one piece of the core technology of drug development.” AstraZeneca's clinical research unit focuses on another piece of drug development by taking advantage of lower costs and access to a different population. The unit has been involved in six multicenter trials, totaling 765 patients. And Pfizer China is recruiting biometricians to staff a clinical trials data management center that it hopes to open early next year to help the company crunch the numbers from trials already under way.
Most Chinese scientists believe the arrival of Western pharmaceutical companies will be a long-term benefit for the country. “The biggest contribution of foreign research centers is the opportunity to learn drug development. That's a huge gap that needs to be filled in China,” says Hu Zhuohan, a professor of pharmacology at Fudan University in Shanghai.
But a few observers worry that the trend will stifle China's own efforts. “Research and development is one of the least profitable links of the pharmaceutical chain,” says Zhang Hua, a financial analyst in east China's Shandong Province, who predicts that small, young local drug companies “will inevitably fall prey to foreign companies.” Even so, Zhang thinks the process is irreversible, and he holds out hope that Chinese companies will learn drug innovation more quickly by watching it firsthand.
https://2017-ucsc-metagenomics.readthedocs.io/en/latest/circos_tutorial.html
# Using and Installing Circos¶
Circos is a powerful visualization tool that allows for the creation of circular graphics to display complex genomic data (e.g. genome comparisons). Any number of graphical layers (heatmaps, scatter plots, etc.) can be drawn on top of the generated circular ideogram.
The goals of this tutorial are to:
• Install circos on your Ubuntu system
• Use Circos to visualize our metagenomic data
Note: Beyond this brief crash course, circos is very well-documented and has a great series of tutorials and course materials that are useful.
## Installing Circos¶
You’ll need to install one additional ubuntu package, libgd:
sudo apt-get -y install libgd-perl
cd
mkdir circos
cd circos
curl -O http://dib-training.ucdavis.edu.s3.amazonaws.com/metagenomics-scripps-2016-10-12/circos-0.69-3.tar.gz
tar -xvzf circos-0.69-3.tar.gz
Circos runs within Perl and as such does not need to be compiled to run. So, we can just add the location of circos to our path variable. (Alternatively, you can append this statement to the end of your .bashrc file.)
export PATH=~/circos/circos-0.69-3/bin:$PATH

Circos does, however, require quite a few additional perl modules to operate correctly. To see which modules are missing and need to be downloaded, type the following:

circos -modules > modules

Now, to download all of these we will be using CPAN, a package manager for perl. We are going to pick out all the missing modules and then loop over those modules and download them using cpan:

grep missing modules | cut -f13 -d " " > missing_modules
for mod in $(cat missing_modules);
do
    sudo cpan install $mod;
done
This will take a while to run. When it is done check that you now have all modules downloaded by typing:
circos -modules
If you got all ‘ok’ then you are good to go!
And with that, circos should be up and ready to go. Run the example by navigating to the examples folder within the circos folder.
cd ~/circos/circos-0.69-3/example
bash run
This will take a little bit to run but should generate a file called circos.png. Open it and you can get an idea of the huge variety of things that are possible with circos and a lot of patience. We will not be attempting anything that complex today, however.
## Visualizing Gene Coverage and Orientation¶
First, let’s make a directory where we will be doing all of our work for plotting:
mkdir ~/circos/plotting
cd ~/circos/plotting
Now, link in the *gff file output from prokka (which we will use to define the location of genes in each of our genomes), the genome assembly file final.contigs.fa, and the SRR*counts files that we generated with salmon:
ln -fs ~/data/prokka_annotation/*gff .
ln -fs ~/data/final.contigs.fa .
ln -fs ~/quant/*counts .
We also need to grab a set of useful scripts and config files for this plotting exercise:
curl -L -O https://github.com/ngs-docs/2016-metagenomics-sio/raw/master/circos-build.tar.gz
tar -xvzf circos-build.tar.gz
curl -L -O https://s3-us-west-1.amazonaws.com/dib-training.ucdavis.edu/metagenomics-scripps-2016-10-12/subset_assembly.fa.gz
gunzip subset_assembly.fa.gz
mv subset_assembly.fa final.contigs.fa
We are going to limit the data we are trying to visualize by taking only the longest contigs from our assembly. We can do this using a script from the khmer package:
extract-long-sequences.py final.contigs.fa -l 24000 -o final.contigs.long.fa
cp ~/data/quant/*counts .
Next, we will run a script that processes the data from the files that we just moved to create circos-acceptable files. This is really the crux of using circos: figuring out how to get your data into the correct format.
python parse_data_for_circos.py
If you are interested, take a look at the script and the input files to see how these data were manipulated.
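For a sense of what such a conversion involves, here is a minimal, hypothetical Python sketch (not the actual parse_data_for_circos.py; file names are assumptions for illustration) that turns prokka GFF gene coordinates into a simple circos data track of contig/start/end lines:

# Hypothetical sketch: convert prokka GFF gene coordinates into a
# circos-style data track ("contig start end" per line).
def gff_to_circos_track(gff_path, out_path):
    with open(gff_path) as gff, open(out_path, "w") as out:
        for line in gff:
            if line.startswith("#"):
                continue  # skip GFF comment/header lines
            fields = line.rstrip("\n").split("\t")
            if len(fields) < 5 or fields[2] != "CDS":
                continue  # keep only protein-coding features
            contig, start, end = fields[0], fields[3], fields[4]
            out.write(f"{contig} {start} {end}\n")

gff_to_circos_track("prokka_annotation.gff", "genes.txt")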
Circos operates off of three main types of files: 1) a config file that dictates the style and inputs of your circos plot, 2) a karyotype file that defines the size and layout of your “chromosomes”, and 3) any data files that you call in your config file that detail attributes you want to plot.
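For flavor, here is a minimal, illustrative pair of such files (names and values are my assumptions, not the contents of circos-build). A karyotype line follows the pattern chr - ID LABEL START END COLOR, and a bare-bones config pulls in the karyotype plus circos's stock includes:

# karyotype.txt (one line per "chromosome", here a contig)
chr - contig1 contig1 0 24000 black

# circos.conf (minimal sketch)
karyotype = karyotype.txt

<ideogram>
<spacing>
default = 0.005r
</spacing>
radius    = 0.9r
thickness = 20p
fill      = yes
</ideogram>

<image>
<<include etc/image.conf>>
</image>

<<include etc/colors_fonts_patterns.conf>>
<<include etc/housekeeping.conf>>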
The above script generated our karyotype file and four different data files. What are they? How are they oriented?
Now, all that is left is actually running circos. Navigate into the circos-build directory and type circos:
cd circos-build
circos
This command should generate a circos.svg and a circos.png. Check out the circos.png!
Now, let’s take a look at the file that controls this crazy figure: circos.config.
Try changing a few parameters (colors, radius, size) to see what you can do. Again, if you are into this type of visualization, do check out the extensive tutorial.
LICENSE: This documentation and all textual/graphic site content is released under Creative Commons - 0 (CC0) -- fork @ github.
http://blog.jpolak.org/?tag=derived-functors
# Dihedral Groups and Automorphisms, Part 2
Welcome back readers! In the last post, Dihedral Groups and Automorphisms, Part 1 we introduced the dihedral group. To briefly recap, the dihedral group $D_n$ of order $2n$ for $n\geq 3$ is the symmetry group of the regular Euclidean $n$-gon. Any dihedral group is generated by a reflection and a certain rotation. Moreover, in Part 1 we gave two other descriptions of the dihedral group $D_n$. The first is the presentation
$\langle r, s | r^n, s^2, sr = r^{-1}s\rangle.$
We also discovered that if we consider the cyclic group $C_n$ as a $C_2 = \{ 1, \sigma\}$ module via $\sigma*k = -k$, then $D_n$ is isomorphic to a semidirect product: $D_n\cong C_n\rtimes C_2$, which was the second description.
Now here in Part 2, we are going to learn something new about the dihedral group when $n$ is even: in this case, $D_n$ has an outer automorphism. But in order to prove this, we will introduce group cohomology!
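For readers who like to peek ahead, here is a sketch of the automorphism in question (my summary, not necessarily the route the cohomological proof takes). Define $\varphi : D_n \to D_n$ on the generators of the presentation above by

$\varphi(r) = r, \qquad \varphi(s) = rs.$

The relations are preserved: $(rs)^2 = r(sr)s = r(r^{-1}s)s = 1$ and $(rs)r(rs)^{-1} = r^{-1}$; and $\varphi$ is surjective since $r$ and $rs$ generate $D_n$. Now conjugating $s$ by $r^k$ or by $r^ks$ always yields $r^{2k}s$, so when $n$ is even the reflection $rs$ is not conjugate to $s$. Since inner automorphisms preserve conjugacy classes, $\varphi$ must be outer.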
https://homework.cpm.org/category/MN/textbook/cc2mn/chapter/cc22/lesson/cc22.3.2/problem/2-129
2-129.
Use your reasoning about numbers to answer the following questions.
1. If multiplying by $\frac { 1 } { 4 }$ makes a positive number smaller, then what does dividing by $\frac { 1 } { 4 }$ do to the value of the number? Explain your reasoning.
If multiplication and division are opposite operations, what do you think will happen?
If you are still unsure, choose a number and test by multiplying and dividing the number by one fourth. What do you notice? Make sure to explain what happens.
2. If multiplying by $1$ does not change the value of a number, then what effect does multiplying by $\frac { 2 } { 2 }$ have? Explain your reasoning.
What is $\frac{2}{2}$ equivalent to?
It does not change the value of a number. Be sure to explain why.
3. If you find $80\%$ of a number, do you expect the answer to be greater or less than the number? What if you find $120\%$? Explain your reasoning.
How do $80\%$ and $120\%$ relate to $100\%$? Which value is smaller and which value is greater than $100\%$? (For a quick numeric check of all three parts, see the sketch below.)
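A quick numeric check in Python of all three parts (my illustration, not part of the original problem):

# Quick numeric check of the reasoning above.
x = 10  # any positive number works

print(x * (1 / 4))  # 2.5  -> multiplying by 1/4 makes x smaller
print(x / (1 / 4))  # 40.0 -> dividing by 1/4 makes x larger (same as x * 4)
print(x * (2 / 2))  # 10.0 -> multiplying by 2/2 = 1 leaves x unchanged
print(x * 0.80)     # 8.0  -> 80% of x is less than x
print(x * 1.20)     # 12.0 -> 120% of x is greater than x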
https://www.baryonbib.org/bib/667dd2ea-047b-4e5e-86c3-1ec301e48408
PREPRINT
# Separations inside a cube
A. F. F. Teixeira
arXiv:math/0112296
Submitted on 28 December 2001
## Abstract
Two points are randomly selected inside a three-dimensional Euclidean cube. The value l of their separation lies somewhere between zero and the length of a diagonal of the cube. The probability density P(l) of the separation is obtained analytically. Also a Monte Carlo computer simulation is performed, showing good agreement with the formulas obtained.
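A minimal Monte Carlo sketch of the experiment described (my illustration, not the paper's code); for the unit cube the mean separation should come out near the Robbins constant, approximately 0.6617:

import random

# Estimate the separation between two random points in the unit cube.
def separation():
    p = [random.random() for _ in range(3)]
    q = [random.random() for _ in range(3)]
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

n = 1_000_000
mean_l = sum(separation() for _ in range(n)) / n
print(f"mean separation ~ {mean_l:.4f}")  # expect ~0.6617 (Robbins constant)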
## Preprint
Comment: 7 pages, 5 figures
Subjects: Mathematics - General Mathematics; Astrophysics
http://memorize.com/stats-test/vojayoce
# Stats test
alpha level: the threshold P-value that determines when we reject a null hypothesis; if we observe a statistic whose P-value based on the null hypothesis is less than the alpha level, we reject that null hypothesis.

standard error: when we estimate the standard deviation of a sampling distribution using statistics found from the data, the estimate is called the standard error.

confidence interval: a level C confidence interval for a model parameter is an interval of values, usually of the form estimate +/- margin of error, found from data in such a way that C% of all random samples will yield intervals that capture the true parameter value.

critical value: the number of standard errors to move away from the sample statistic to specify an interval that corresponds to the specified level of confidence; the critical value, denoted z*, is usually found from a table or with technology.

one-sided alternative: p is less than or greater than the null hypothesis value; used when we are interested in deviations in only one direction away from the hypothesized parameter value.

P-value: the probability of observing a value for a test statistic at least as far from the hypothesized value as the statistic value actually observed, if the null hypothesis is true. A small P-value indicates either that the observation is improbable or that the probability calculation was based on incorrect assumptions; the assumed truth of the null hypothesis is the assumption under suspicion.

one-proportion z-test: a test of the null hypothesis that the proportion of a single sample equals a specified value, carried out by referring the statistic $z = (\hat{p} - p_0)/SD(\hat{p})$ to a Standard Normal model.

significance level: another name for the alpha level, most often in a phrase such as a conclusion that a particular test is "significant at the 5% significance level."

power: the probability that a hypothesis test will correctly reject a false null hypothesis; to find the power, we must specify a particular alternative parameter value as the "true" value.

margin of error: in a confidence interval, the extent of the interval on either side of the observed statistic value; typically the product of a critical value from the sampling distribution and a standard error from the data. A small margin of error corresponds to a confidence interval that pins down the parameter precisely; a large margin of error corresponds to a confidence interval that gives relatively little information about the estimated parameter.

null hypothesis: the claim being assessed in a hypothesis test. Usually it is a statement of "no change from the traditional value," "no effect," "no difference," or "no relationship." For a claim to be a testable null hypothesis, it must specify a value for some population parameter that can form the basis for assuming a sampling distribution for a test statistic.

alternative hypothesis: proposes what we should conclude if we find the null hypothesis to be unlikely.

two-sided alternative: p does not equal the null hypothesis value; used when we are interested in deviations in either direction away from the hypothesized parameter value.

effect size: the difference between the null hypothesis value and the actual value of a population parameter.

statistically significant: when the P-value falls below the alpha level, we say that the test is statistically significant at that alpha level.

Type II error: the error of failing to reject a null hypothesis when in fact it is false (also called a "false negative").

one-proportion z-interval: a confidence interval for the true value of a proportion; the interval is $\hat{p} \pm z^* \cdot SE(\hat{p})$, where $z^*$ is a critical value from the Standard Normal model corresponding to the specified confidence level.

Type I error: the error of rejecting a null hypothesis when in fact it is true (also called a "false positive").
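To tie several of these definitions together, here is a small worked sketch of a one-proportion z-test and z-interval in Python (my illustration; the counts are made up, and it uses scipy's standard normal distribution):

from scipy.stats import norm

# Suppose 58 successes out of 100 trials; test H0: p = 0.5.
count, n, p0 = 58, 100, 0.5
p_hat = count / n

# One-proportion z-test: z = (p_hat - p0) / SD(p_hat), SD computed from p0.
sd = (p0 * (1 - p0) / n) ** 0.5
z = (p_hat - p0) / sd
p_value = 2 * (1 - norm.cdf(abs(z)))  # two-sided alternative
print(f"z = {z:.3f}, P-value = {p_value:.4f}")  # z = 1.600, P-value = 0.1096

# One-proportion z-interval: p_hat +/- z* SE(p_hat), SE computed from p_hat.
z_star = norm.ppf(0.975)  # critical value for 95% confidence
se = (p_hat * (1 - p_hat) / n) ** 0.5
print(f"95% CI: ({p_hat - z_star * se:.3f}, {p_hat + z_star * se:.3f})")

At the 5% significance level this result is not statistically significant, so we would fail to reject the null hypothesis (risking a Type II error if p really differs from 0.5).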
https://www.aimsciences.org/article/doi/10.3934/dcds.2011.29.141
# American Institute of Mathematical Sciences
January 2011, 29(1): 141-167. doi: 10.3934/dcds.2011.29.141
## Time-dependent attractor for the Oscillon equation
1. Indiana University Mathematics Department, Bloomington, IN 47405, United States
2. Rosenstiel School of Marine and Atmospheric Sciences, University of Miami, Miami, FL 33149, United States
3. The Institute for Scientific Computing and Applied Mathematics, Indiana University, 831 E. 3rd St., Rawles Hall, Bloomington, IN 47405
Received January 2010; revised May 2010; published September 2010.
We investigate the asymptotic behavior of the nonautonomous evolution problem generated by the Oscillon equation
$\partial_{tt} u(x,t) + H\,\partial_t u(x,t) - e^{-2Ht}\,\partial_{xx} u(x,t) + V'(u(x,t)) = 0, \quad (x,t)\in (0,1) \times \mathbb{R},$
with periodic boundary conditions, where $H>0$ is the Hubble constant and $V$ is a nonlinear potential of arbitrary polynomial growth. After constructing a suitable dynamical framework to deal with the explicit time dependence of the energy of the solution, we establish the existence of a regular global attractor $\mathcal{A}=\mathcal{A}(t)$. The kernel sections $\mathcal{A}(t)$ have finite fractal dimension.
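As a purely illustrative aside (my sketch, not from the paper): the equation can be integrated numerically with a simple finite-difference scheme on a periodic grid. Here $H$ and the stand-in potential $V(u) = \frac{1}{4}u^4 - \frac{1}{2}u^2$ are my assumptions for the demo:

import numpy as np

# Semi-implicit time stepping for u_tt + H u_t - exp(-2Ht) u_xx + V'(u) = 0
# on (0,1) with periodic boundary conditions.
H = 0.1
N, L = 256, 1.0
dx = L / N
dt = 0.2 * dx  # small step for stability
x = np.linspace(0.0, L, N, endpoint=False)

def V_prime(u):  # V(u) = u**4/4 - u**2/2
    return u**3 - u

u = 0.1 * np.sin(2 * np.pi * x)  # smooth initial profile
v = np.zeros_like(u)             # initial velocity u_t
t = 0.0
for _ in range(5000):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2  # periodic u_xx
    a = -H * v + np.exp(-2 * H * t) * lap - V_prime(u)      # acceleration u_tt
    v += dt * a
    u += dt * v
    t += dt

print("final L2-type norm:", float(np.sum(u**2 + v**2) * dx))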
Citation: Francesco Di Plinio, Gregory S. Duane, Roger Temam. Time-dependent attractor for the Oscillon equation. Discrete & Continuous Dynamical Systems - A, 2011, 29 (1) : 141-167. doi: 10.3934/dcds.2011.29.141
https://bird.bcamath.org/browse?type=author&value=Escribano%2C+B.
Publications by B. Escribano — items 1-13 of 13:
#### Assessment of van der Waals inclusive density functional theory methods for layered electroactive materials
(2017-01-01)
Computational-driven materials discovery requires efficient and accurate methods. Density functional theory (DFT) meets these two requirements for many classes of materials. However, DFT-based methods have limitations. One ...
#### Brinicles as a case of inverse chemical gardens
(2013-12-31)
Brinicles are hollow tubes of ice from centimeters to meters in length that form under floating sea ice in the polar oceans when dense, cold brine drains downward from sea ice to seawater close to its freezing point. When ...
#### Chemical-garden formation, morphology, and composition. I. Effect of the nature of the cations
(2011-12-31)
We have grown chemical gardens in different sodium silicate solutions from several metal-ion salts – calcium chloride, manganese chloride, cobalt chloride, and nickel sulfate – with cations from period 4 of the periodic table. ...
#### Chemical-garden formation, morphology, and composition. II. Chemical gardens in microgravity
(2011-12-31)
We studied the growth of metal-ion silicate chemical gardens under Earth gravity (1 g) and microgravity (μg) conditions. Identical sets of reaction chambers from an automated system (the Silicate Garden Habitat or SGHab) ...
#### Combining stochastic and deterministic approaches within high efficiency molecular simulations
(2013-12-31)
Generalized Shadow Hybrid Monte Carlo (GSHMC) is a method for molecular simulations that rigorously alternates Monte Carlo sampling from a canonical ensemble with integration of trajectories using Molecular Dynamics (MD). ...
#### Constant pressure hybrid Monte Carlo simulations in GROMACS
(2014-12-31)
Adaptation and implementation of the Generalized Shadow Hybrid Monte Carlo (GSHMC) method for molecular simulation at constant pressure in the NPT ensemble are discussed. The resulting method, termed NPT-GSHMC, combines ...
#### Crystal growth as an excitable medium
(2012-12-31)
Crystal growth has been widely studied for many years, and, since the pioneering work of Burton, Cabrera and Frank, spirals and target patterns on the crystal surface have been understood as forms of tangential crystal ...
#### Enhancing sampling in atomistic simulations of solid state materials for batteries: a focus on olivine NaFePO$_4$
(2017-03-07)
The study of ion transport in electrochemically active materials for energy storage systems requires simulations on quantum-, atomistic- and meso-scales. The methods accessing these scales not only have to be effective but ...
#### From chemical gardens to chemobrionics
(2015-12-31)
Chemical gardens in laboratory chemistries ranging from silicates to polyoxometalates, in applications ranging from corrosion products to the hydration of Portland cement, and in natural settings ranging from hydrothermal ...
#### Molecular dynamics simulations of iron- and aluminum-loaded serum transferrin: Protonation of tyr188 is necessary to prompt metal release
(2012-12-31)
Serum transferrin (sTf) carries iron in blood serum and delivers it into cells by receptor-mediated endocytosis. The protein can also bind other metals, including aluminum. The crystal structures of the metal-free and ...
#### Multiple-time-stepping generalized hybrid Monte Carlo methods
(2014-12-31)
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved ...
#### Revealing the Mechanism of Sodium Diffusion in Na$_x$FePO$_4$ Using an Improved Force Field
(2018-04-02)
Olivine NaFePO$_4$ is a promising cathode material for Na-ion batteries. Intermediate phases such as Na$_{0.66}$FePO$_4$ govern phase stability during intercalation-deintercalation processes, yet little is known about Na$^+$ diffusion ...
#### Runaway electrification of friable self-replicating granular matter
(2013-12-31)
We establish that the nonlinear dynamics of collisions between particles favors the charging of an insulating, friable, self-replicating granular material that undergoes nucleation, growth, and fission processes; we ...
https://stats.stackexchange.com/questions/324133/minimum-of-poissons
# Minimum of Poissons
Let $X_i\sim\text{Pois}(\lambda_i)$ for $i=1,2,\ldots,n$ and $Y = \min_i X_i$. Can we show, for example, that $\mathbb{E}[Y] \leq f(\lambda,n)\min_i\lambda_i$ for some $f : \mathbb{R}^n\times\mathbb{N} \to [0,1]$, and lower-bound the variance $\mathbb{V}[Y]$ by anything meaningful?
Jensen's inequality tells us $$\mathbb{E}[Y] = \mathbb{E}[\min_i X_i] \leq \min_i\mathbb{E}[X_i] = \min_i\lambda_i$$ already but I'd like something more concrete.
Note that a related question on this site asks something similar, but is concerned with the entire distribution of $Y$, while I only need bounds on two moments. Accordingly, I'd expect a more closed-form answer to be available.
If it helps, in my case $X_i\sim\text{Pois}(\lambda)$ for $i=1,\ldots,n-1$ and $X_n \sim \text{Pois}(\lambda + \gamma)$ for $\lambda,\gamma>0$ so for $\gamma$ and $n$ large enough we may effectively assume $X_i\sim\text{Pois}(\lambda)$ are i.i.d (with high probability) for the purposes of estimating $Y$.
• What do you mean by "assume $X_i$" in the last sentence? Are some words missing? – whuber, Jan 20, 2018 at 17:41
• @whuber ah thanks. I meant that $X_i$ are effectively i.i.d because for large enough $\gamma$, the probability that $Y = X_n$ is low. Jan 20, 2018 at 18:02
• are you sure you can use Jensen's inequality here? Jan 21, 2018 at 5:55
• @Taylor I’m quite sure: the minimum of a collection of linear functionals over $\mathbb{R}^n$ is concave (draw a picture to check). Denote $X=(X_1,\ldots,X_n)\in\mathbb{R}^n$. Then $\min X_i = \min \langle X,e_i\rangle$ is a minimum of linear functionals and hence concave, so Jensen applies. Jan 21, 2018 at 6:41
• You do not even need Jensen: $\min X_i\le X_i$ for all $i$'s, hence $\mathbb{E}[\min_i X_i] \leq\mathbb{E}[X_i]$ for all $i$'s. Jan 21, 2018 at 10:01
This only answers half of my question. Lower bounding the variance of $Y$ is still open.
Without loss of generality assume $\lambda_{\min} = \lambda_1\leq \lambda_2\leq \dots \leq \lambda_n = \lambda_{\max}$. Note that $$\mathbb{P}(Y > 0) = \prod_{i=1}^n (1 - e^{-\lambda_i}).$$ Moreover, $\mathbb{E}[Y \mid Y>0] \leq \mathbb{E}[X_1 \mid X_1>0]$, since $Y \leq X_1$ pointwise and, by independence, $\mathbb{E}[X_1 \mid Y>0] = \mathbb{E}[X_1 \mid X_1>0]$. Bringing these together,
\begin{align}
\mathbb{E}[Y] &= \mathbb{E}[Y \mid Y>0]\,\mathbb{P}(Y > 0)\\
&\leq \mathbb{E}[X_1 \mid X_1>0]\,\mathbb{P}(Y>0)\\
&= \lambda_{\min} \frac{\prod_{i=1}^n(1 - e^{-\lambda_{i}})}{1 - e^{-\lambda_{\min}}}\\
&= \lambda_{\min} \prod_{i=2}^n (1 - e^{-\lambda_i})\tag{$*$}\\
&\leq \lambda_{\min} (1 - e^{-\lambda_{\max}})^{n-1}
\end{align}

as desired. Note that this bound is sharp for $n=1$, and depending on the situation $(*)$ might be more helpful than the final bound.
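For intuition, here is a quick Monte Carlo check of the bound; the rate vector is an arbitrary illustrative choice, not from the original post:

```python
# Check E[min_i X_i] <= lambda_min * prod_{i>=2}(1 - exp(-lambda_i)) by simulation.
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([0.5, 1.0, 2.0, 4.0])   # sorted, so lam[0] is lambda_min
y = rng.poisson(lam, size=(200_000, lam.size)).min(axis=1)

bound = lam[0] * np.prod(1 - np.exp(-lam[1:]))
print(f"empirical E[Y]   = {y.mean():.4f}")
print(f"upper bound      = {bound:.4f}")
print(f"empirical Var[Y] = {y.var():.4f}")   # the still-open quantity
```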
http://gmatclub.com/forum/jury-94837.html?fl=similar
# Jury
**dimitri92** (Senior Manager) — 26 May 2010:
Can someone try to solve this?
Attachment: del1.jpg (the problem image)
**Intern** — 26 May 2010:
2/3 of 15 is 10 men; 1/3 of 15 is 5 women, so the jury pool consists of 10 men and 5 women. We need to choose 12 from 15: $\binom{15}{12}$ ways. However, of the 12, *at least* 2/3 should be men, which means we should have 8 or more men. This leads to the following selections:

8m 4w
9m 3w
10m 2w

So the probability is $\frac{\binom{10}{8}\binom{5}{4} + \binom{10}{9}\binom{5}{3} + \binom{10}{10}\binom{5}{2}}{\binom{15}{12}} = \frac{335}{455} = \frac{67}{91}$.
**Intern** — 07 Jun 2010:
Why is $\binom{10}{8}\binom{7}{4}/\binom{15}{12}$ incorrect? $\binom{10}{8}$ should give us the number of groups of 8 men, and $\binom{7}{4}$ should give us the ways to choose the remaining 4?
**Senior Manager** — 08 Jun 2010:
dimitri92 wrote:
> Can someone try to solve this?

2/3 men out of 15 = 10 men; 1/3 women = 5 women. Now, if we need at least 2/3 men on the 12-member jury, we need at least 8 men, and

$$P(\text{at least 2/3 men in the 12-member jury}) = 1 - P(\text{not having 2/3 men in the jury}).$$

There is only one possibility in which we can't have 8 men on the 12-member jury: when all the women are selected (5 women and 7 men). So

$$P(\text{5 women and 7 men}) = \frac{\binom{5}{5}\binom{10}{7}}{\binom{15}{12}},$$

where $\binom{15}{12}$ is the total number of ways of choosing 12 members out of 15. Hence

$$P(\text{at least 2/3 men in the 12-member jury}) = 1 - \frac{120}{455} = 1 - \frac{24}{91} = \frac{67}{91}.$$
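Both routes give the same value. As a quick sanity check (not part of the original thread), a few lines of Python with exact rational arithmetic confirm $\frac{67}{91}$ both directly and via the complement:

```python
# Verify P(at least 8 men on a 12-person jury drawn from 10 men + 5 women).
from math import comb
from fractions import Fraction

total = comb(15, 12)  # all possible 12-person juries
favorable = sum(comb(10, m) * comb(5, 12 - m) for m in range(8, 11))  # 8, 9 or 10 men

print(Fraction(favorable, total))                     # 67/91 (direct count)
print(1 - Fraction(comb(10, 7) * comb(5, 5), total))  # 67/91 (complement: all 5 women chosen)
```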
https://www.physicsforums.com/threads/coriolis-force-along-the-surface-of-the-earth.806753/
# Coriolis Force Along the Surface of the Earth
## Homework Statement
I don't want to post the actual question because I want to understand the situation in a general case. Basically, there is a bullet that moves south along the surface of the Earth as in this diagram: http://abyss.uoregon.edu/~js/images/coriolis_effect.gif. You have to find the deflection from the target.
## Homework Equations
Newton's Second Law in a Non inertial frame, Coriolis Force
## The Attempt at a Solution
I don't have a solution because I can't understand what's going on. In my textbook, they set up a "local coordinate system" that moves along the surface of the Earth like this: http://i.imgur.com/Eyhq1WF.png. I want to understand why and how they can do this.
I haven't worked out the actual direction of the deflection, but I assume from the picture above it would be westward. If the coordinate system moves along the Earth, can you write a simple DE like $$\ddot{y}=\text{Coriolis Acceleration in this Direction}$$ ? I don't think you can since the latitude changes as you move south.
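This is not the textbook's derivation, just a minimal numerical sketch (all parameters below are illustrative assumptions) of what the local-frame picture buys you: integrate the horizontal Coriolis acceleration $-2\,\vec{\Omega}\times\vec{v}$ while letting the latitude drift as the bullet moves south. It reproduces the westward deflection guessed above, latitude change included:

```python
# Local east (x) / north (y) frame following the bullet; vertical motion ignored.
import numpy as np

OMEGA = 7.292e-5               # Earth's rotation rate, rad/s
R = 6.371e6                    # Earth's radius, m
lat0 = np.radians(45.0)        # starting latitude (assumed)
v0, T, dt = 800.0, 40.0, 1e-3  # muzzle speed southward, flight time, step (assumed)

x = y = vx = 0.0
vy = -v0                       # due south
for _ in range(int(T / dt)):
    lat = lat0 + y / R                   # latitude decreases as we move south
    ax = 2 * OMEGA * np.sin(lat) * vy    # horizontal part of -2 * Omega x v
    ay = -2 * OMEGA * np.sin(lat) * vx
    vx += ax * dt; vy += ay * dt
    x += vx * dt; y += vy * dt

print(f"deflection after {T:.0f} s: {x:.1f} m ({'west' if x < 0 else 'east'})")
```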
http://www.chegg.com/homework-help/questions-and-answers/an-asset-with-an-8-year-useful-life-has-an-initial-cost-of-780000-and-will-be-depreciated--q3264714
## Fin mgt
An asset with an 8-year useful life has an initial cost of $780,000 and will be depreciated straight-line to zero over its lifetime. The asset will be used in a 5-year project, and at the end of the project the asset will be sold for $135,000. If the corporation's tax rate is 35%, what is the depreciation each year, the book value of the asset each year, and the after-tax salvage value when the asset is sold at the end of the fifth year?
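A minimal sketch of the arithmetic (assuming straight-line depreciation over the full 8-year life, and that selling below book value produces a tax saving at the 35% rate):

```python
cost, life, project_years = 780_000, 8, 5
salvage, tax_rate = 135_000, 0.35

annual_dep = cost / life                                  # 97,500 per year
book_values = [cost - annual_dep * t for t in range(1, project_years + 1)]
bv_at_sale = book_values[-1]                              # 292,500 at end of year 5

# Sale price below book value -> the loss shields income, adding a tax saving.
after_tax_salvage = salvage + tax_rate * (bv_at_sale - salvage)   # 190,125

print(f"annual depreciation: {annual_dep:,.0f}")
print("book values by year:", [f"{bv:,.0f}" for bv in book_values])
print(f"after-tax salvage value: {after_tax_salvage:,.0f}")
```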
https://www.arpinvestments.com/arl/the-global-economy-post-covid-19
March 2021
# The Global Economy Post COVID-19
A V-shaped economic recovery is in the cards for later this year, but for it to be sustainable, we need to change one or two things. One simply cannot assume that a vanilla approach to managing the economy out of the current mess will do the trick, and I will argue that only those countries which are prepared to think out-of-the-box are likely to succeed.
##### Preview
This pandemic has magnified every existing inequality in our society – like systemic racism, gender inequality, and poverty.
Melinda Gates
When I left our offices in Richmond on the 12th March last year, little did I expect having to write these lines a year later. I obviously knew about this nasty little bug called COVID-19, and of course I knew that the next few months would be difficult for all of us but, did I expect the entire world still to be caught in a nasty web a year later? No!
Next week, we can celebrate the first, and hopefully also the last, anniversary of Annus Horribilis. At Absolute Return Partners, Rishanth joined the research team in mid-November, and I am yet to meet him in person – quite a bizarre experience. That said, I am not asking for your sympathy – not at all. Our industry actually works reasonably well without everyone showing up in the office every morning, although I must admit that I miss the daily chatter, which is part of the joy of going into the office every day. Chatting on Zoom or MS Teams is just not quite the same! The problem for society at large is that many industries are not as fortunate as the financial industry is. Going in every day is critical to the survival of many businesses.
As I started to prepare for this month's Absolute Return Letter, suddenly, the news rolled across my screen: "UK suffers record 9.9% slump", it said. I quickly checked whether we have ever experienced a steeper GDP decline in a single year in this country, and the answer is yes – in 1709, when a bout of severe frost destroyed the harvest (Exhibit 1).
That speaks volumes about the severity of the downturn in 2020, where Q2 was particularly nasty. Towards the end of the year, things began to look better. In December, for example, the UK economy grew a modest +1.2%. That said, the New Year is not exactly off to a good start. January and February have both been dreary, following the government’s decision to lock the country down again after a Christmas where distancing rules were widely disobeyed, leading to many more deaths in the first few weeks of the New Year.
Andy Haldane, the Chief Economist (and deputy governor) of the Bank of England, did his very best to spin an upbeat story when, in an interview on the 10th February, he compared the British economy to a coiled spring. By June 2021, he said, British households could have built up savings of about £250Bn, much of which will likely be spent later this year as people return to the high streets.
While I have no reason to believe Haldane is being unduly optimistic on consumer spending later this year, I still believe he ignores a few hard facts which will change not only the British but the global economy for many years to come. That is what this month’s Absolute Return Letter is all about – why the world of tomorrow will look different to the world we know today, successful vaccination programmes or not.
## Why sentiment is deteriorating in some regions
The New Year is off to a relatively good start from a health point of view. Several vaccines have been approved, and most countries are busy vaccinating as many as they can, given the constraints in vaccine supplies. With many of the most vulnerable having received at least the first injection already, we should soon be able to gradually re-open society. All else being equal, that should lead to an uplift in sentiment. Why is it, then, that sentiment has actually deteriorated in parts of the world over the last couple of months (Exhibit 2)?
As you can see, in Latin America, Asia-Pacific and Europe alike, sentiment has taken a turn for the worse more recently. The decline in optimism is most acute in Latin America, but the numbers don't look too good in Europe and Asia-Pacific either. Why is that? In Latin America and in Europe, the main concern is rising unemployment and the impact that will have on consumers' spending power. In Latin America, unemployment is actually considered a bigger risk to the economy in 2021 than the pandemic itself.
In Asia, the concerns are somewhat different. COVID-19 has led to a drive towards economic nationalism at the cost of globalisation and, for many Asian economies to prosper, international trade must continue to fire on most cylinders, and companies from Europe and North America must continue to outsource to Asia. If the economic order has suddenly changed, many Asian countries could get into trouble.
Allow me to add a personal comment on that point. Yes, economic nationalism or, as we call it at Absolute Return Partners, localisation, is clearly challenging the trend towards globalisation which, until recently, was broadly accepted as the economic order of both today and tomorrow.
Having said that, you give the pandemic too much credit if you think it is the root cause driving the trend towards increased localisation. The introduction of advanced robotics has driven many companies, which years ago moved manufacturing to Asia, to bring it back home. Also, climate change is a major agenda point all over the world these days, and listed companies are increasingly being punished by investors if they don't take the issue seriously. That has also favoured localisation over globalisation.
## Why economic growth may not be sustainable
With those caveats in mind, I do agree with Andy Haldane that we will most likely face a V-shaped economic recovery later this year, but that doesn’t mean it is all going to be plain sailing from here. Let me explain.
To begin with, global economic activity is very dependent on having well-functioning supply chains, and COVID-19 has done immense damage to many of those chains. If we want the global economy to perform robustly later this year, it is critical that we bring global supply chains up to scratch as quickly as possible. In the context of the UK economy, the challenge is even bigger due to the economic fiasco called Brexit, but more on that later.
Secondly, how do you create economic growth in an environment where there is little or no workforce growth and only very modest productivity growth? This must be addressed, and the sooner the better, if we want the economic rebound later this year to be sustained.
Thirdly, most leading nations have agreed to a set of sustainable development goals (SDGs), set by the United Nations and to be in place by 2030. Can we comply with all the SDGs we have committed to at the same time as we aim for robust economic growth? The short answer to that question is that SDG compliance combined with a vanilla approach to managing the economy will almost certainly lead to below average GDP growth. You’ll need to think out-of-the-box but more on that below.
## How to ensure reliable supply chains
Many underestimate how important international trade is to economic activity. According to Statista, in 2019, the total volume of goods exported around the world amounted to no less than $18.9Tn – almost as much as total US output. In that number, you'll find goods such as oil, gas and coal, without which the global economy would freeze up in an instant, and you will find many goods, for example electronics, without which we wouldn't have been able to work from home during the pandemic. It is therefore fair to say that the damage done to global supply chains must be addressed ASAP for the economic recovery later this year to be as V-shaped as we would all like it to be.

When the global economy was hit by the first wave of COVID-19 in the second quarter of 2020, global trade was down 18.5% year-on-year – a much bigger drop than that in GDP – and the primary reason was faltering supply chains: fewer flights, limitations on cargo ships' ability to enter many harbours and various other trade restrictions, many of which were, and still are, supposedly about national security but, in fact, were, and still are, little more than protectionist measures.

In the context of rising protectionism, as you can see from Exhibit 3 below, the US economy imports more from China than most other OECD economies do, and there can be no doubt that the tariffs introduced by Trump's administration were mostly about protectionism dressed up as issues to do with national security.

Adding to that, services are not included in the $18.9Tn mentioned earlier. You may think that services are mostly restaurants, hotels and other less critical services (less critical for the overall economy to survive), but migration of overseas workers is included in that number, and overseas workers are far more important than generally perceived.
Take for example the need for overseas doctors, nurses and other carers. In a rapidly ageing society with a shortage of such resources domestically – and that is the case in most OECD countries – society wouldn’t function for long without access to overseas healthcare workers. Having said that, the pandemic has had many implications, and one of the less pleasant ones is a growing sense of nationalism.
## A few words on Brexit
The reality is that, despite most Brexiteers being in denial, Brexit will be immensely costly to the British economy over the next few years at a time where it can hardly afford it. When you listen to UK government officials, desperately trying to spin a positive story on Brexit, the theme is almost always along the lines of “we are no longer constrained by EU rules and can therefore do business all over the world”. Whilst technically correct, a couple of important points are being blatantly (deliberately?) ignored when making that statement:
1. Trade agreements that are worth entering into take many years to negotiate – anywhere from five to ten years if history provides any guidance. I am sure we can do one or two quick deals which may look decent on paper, but those deals that you really want to enter into will take years to negotiate.
2. Most countries that are on the UK’s wish list already have a comprehensive agreement with the EU. It is terribly naïve of Brexiteers to think that any country of a decent standard would enter into an agreement with a country of 65 million people (the UK) that would put an already existing agreement with a market consisting of 440 million consumers (the EU) at risk. The best example is the recently established trade agreement between the UK and Canada which was widely celebrated in the Brexit camp. What no Brexiteer was prepared to admit was that, down to the last comma, it was identical to the already existing agreement between the EU and Canada, so I am struggling to see what the British have achieved with this deal that they didn’t already have when in the EU.
If British policy makers ever want to stand a chance of getting the British economy out of this mess, the first thing they need to address is the “them and us” mentality that is now so widespread. In my humble opinion, whether Brexiteers like it or not, in Europe, we are all in the boat together and, the sooner we start to cooperate as civilised human beings, the better.
For the (hopefully V-shaped) economic recovery which is lurking around the corner to be sustainable and not just a flash in the pan, the rather unpleasant sense of nationalism that is now prevalent in many countries must be addressed. I have made the comparison to the 1930s before, but I am going to stick my neck out again. There is a reason populist politicians are in demand at present, just like they were in the 1930s (Exhibit 4), and that reason is falling living standards amongst the not so well-off. Over and above everything else, that should be addressed by our current cohort of political leaders.
## How to create robust GDP growth without much workforce growth
As I have repeatedly pointed out in the Absolute Return Letter in recent years, it is an uphill battle to grow GDP robustly unless the workforce is growing at a reasonable rate, and the reason is simple:
ΔGDP ≈ ΔProductivity + ΔWorkforce
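A back-of-the-envelope illustration of the identity, with hypothetical growth numbers:

```python
# GDP growth is (approximately) productivity growth plus workforce growth.
productivity_growth = 0.010   # +1.0% per year (assumed)
workforce_growth = -0.005     # -0.5% per year: a shrinking workforce (assumed)

print(f"approximate GDP growth: {productivity_growth + workforce_growth:+.1%}")  # +0.5%
```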
Friends often ask me the question: do we really need GDP to continue to grow? Isn't life good enough as it is? Maybe, if you live in Denmark (as many of my friends do), where government debt-to-GDP is still only around 40% despite massive public spending in 2020 to fight the outbreak of COVID-19. The problem many other countries are faced with is that public debt is now so painfully high that robust GDP growth is a simple necessity. Otherwise, tax revenues won't be big enough to cover the bills – not even the most important ones.
In an era of sharply rising digitisation, one could argue that we don’t really need the workforce to grow anymore. By adopting advanced robotics in lieu of human labour, we can still deliver the same output, even if the workforce is in (terminal?) decline. That is almost certainly true, but it is only half the story. Robots do not need a sandwich for lunch, i.e. the more human workers are replaced by robots, the more consumer spending will drop. As consumer spending makes up a significant share of total economic activity in most countries, economic activity will continue to mellow. Eventually, as the populace at large begins to shrink, so will consumer spending and therefore also GDP.
There is no easy answer to this problem other than to make fundamental changes to our tax system, and that is not easy in the first place, as it would require a global sign-off to be successful. Let me explain. In the digital era, corporate profits have risen dramatically and so has the wealth of the average capital owner. Fundamentally, there are two reasons for that. Firstly, the digital business model allows companies to operate from pretty much anywhere, and it is no coincidence that more and more of the most successful companies are incorporated in tax havens around the world.
Take, for example, Amazon, which saw a 35% rise in UK profits in its last financial year but paid only 3% more in UK taxes. Why? Because the company is incorporated in Luxembourg; hence, it pays most of its taxes in a tax haven, i.e. it pays next to nothing.
Secondly, as robots replace human labour, companies’ payroll costs will decline whereas its profits will improve. From the tax man’s point-of-view, replacing income tax with corporate tax is a bad proposition as the corporate tax rate is much lower than the income tax rate in almost all OECD countries, hence why the tax system must be changed. Otherwise, entire countries will default in the years to come.
## How to grow GDP whilst being loyal to SDG at the same time
As I am sure we have readers who don't know what SDG stands for, let me provide a bit of background (I paraphrase from the UN website on SDGs). In 2015, a comprehensive set of Sustainable Development Goals (SDGs) was adopted by all UN member states. In short, the agreement provides a shared blueprint for peace and prosperity for people and the planet, and everybody has agreed to reach these goals by 2030 at the latest.
There are no less than 17 goals in the SDG programme, many of which will put a lid on the ability to generate robust GDP growth unless you think out-of-the-box. I am often quite critical of Boris Johnson’s government – and for good reasons, I might add – but, from what I have heard coming out of his mouth recently, I think he is on to something here (or, at least, his advisers are).
The UK is, in certain ways, a very dysfunctional country. Its digital infrastructure, for example, is pathetic, but it is clearly ahead of the curve on going carbon neutral and, if the government plays its cards well, this could become a major export opportunity, given how the entire world has signed up to the UN’s SDG agenda.
Take for example Sustainable Development Goal #7: Affordable and Clean Energy or #13: Climate Action – two SDGs which effectively cover one and the same issue. In both instances, fossil fuels come out short. In other words, between now and 2030, assuming most countries honour their commitment to this programme, there will be huge demand for fossil fuel-free energy solutions, and the UK is at the leading edge of this curve.
You may think that I am 'only' referring to the UK commitment to wind energy, but the commitment goes far wider than that. Within the not so distant future, traditional nuclear (fission) will undergo a major transformation with the introduction of SMR nuclear, and the UK is there; not that many years later, fusion will be introduced, and the UK is also there. Once we have electricity from fusion energy, demand for fossil fuels will shrink faster than Big Oil would like to admit.
In the wider scheme of things, if the UK (in this case Rolls Royce) is successful with the rollout, it will result in higher UK exports, stronger UK GDP growth and more jobs – all created whilst loyal to at least some of the UN’s Sustainable Development Goals.
## Final few words
I am coming to the end of this month’s letter and hope to have made it clear by now that you cannot assume that all will be back to normal as soon as the economy has been re-opened. The list of issues we are confronted with is massive but, in this month’s Absolute Return Letter, I have only touched the tip of the iceberg. I could have referred to all the idle retail space around the world, and how that will affect commercial property prices negatively, or I could have discussed the rising need for another bedroom in many households, as working from home will be with us for much longer than COVID-19, and how that will affect residential property prices positively.
Alternatively, I could have picked on one or two of the hundreds of other little things that are likely to be different in the post COVID-19 environment but chose not to do so. 3,000-ish words is the maximum I allow myself in these letters, and I am bumping my head against the ceiling now. In earlier drafts, I had a section on the psychological (mental) damage caused to many people by the pandemic, and how long it may take to repair that.
In the end, I took it out, though, as my claims were hard to substantiate, but the gist of it all was (and still is) that many consumer activities may not return as quickly as our policymakers would like them to.
Niels C. Jensen
1 March 2021
http://mathematica.stackexchange.com/tags/custom-notation/new
# Tag Info
6
The Notation package is built for this.

    Needs["Notation`"];
    Notation[ParsedBoxWrapper[
        RowBox[{"\[LeftAngleBracket]", "x___", "\[RightAngleBracket]"}]] \[DoubleLongLeftRightArrow]
      ParsedBoxWrapper[RowBox[{"{", "x___", "}"}]]]

(It looks better when input via the Notation palette. Don't be frightened by the box manipulation - I used the palette to construct ...
2
Edit: As was correctly noted, the Notation package is not necessary here; the key point is the recursive definition which builds the desired ordering:

    LeftArrow[x_, y_, z__] := LeftArrow[LeftArrow[x, y], z];

Notice that the z__ argument is written with a double underscore, which allows the pattern to match an arbitrary number of arguments. Original answer ...
5
Here is a palette that does the slashing when you select a character and press the button:

    CreatePalette[{Button["Slash it!",
       NotebookWrite[InputNotebook[],
        Replace[FromCharacterCode[
          Join[ToCharacterCode[ToString[NotebookRead[InputNotebook[]]]], {824}]],
         FromCharacterCode[{8706, 824}] :> ...
1
I did not attempt to implement everything you show but only what is needed for the two final examples. I initially seemed to have a problem with precedence but now it is working? I am not certain of what change made the difference, if any, but I'll post what I have now in case it is special in some way. I added these lines at the top of ...
11
I don't like the idea of redefining Or (||). Rather, I would suggest defining a function with the name DoubleVerticalBar. There is a special double vertical bar character which will be interpreted as the infix operator for DoubleVerticalBar and can be input with Esc+Space+|+|+Esc.

    SetAttributes[DoubleVerticalBar, {NumericFunction, Orderless, Flat, ...
11
Use upvalues. You don't want || to change its behavior except when it's operating on impedances. So, use a wrapper (z[ ], say) around the quantities that represent impedances, and associate upvalues with the wrapper. This lets you redefine how standard operators work on the wrapped values:

    z[a_] || z[b_] ^= z[1/(1/a + 1/b)];
    z[a_] + z[b_] ^= z[a + b];
    a_ ...
7
(I would love to hear from someone more knowledgeable about how to improve this answer.) It is possible to redefine the || operator if you're willing to redefine the built-in Or, but I would certainly not recommend that because Or is a very common function upon which Mathematica probably relies internally all over the place. Possibly more robust but still ...
9
The Notation package is not necessary to use an infix form of \[Star], as that is handled automatically. Also, I recommend PadRight for constructing your expression (reference: Generating a matrix using sublists A and B n times).

    SetAttributes[Star, HoldFirst]
    Star[a_List, n_Integer] := PadRight[a, n*Length@a, a]

    {1, 2} ⋆ 5  (* ⋆ is \[Star] *)
    {1, 2, ...
6
Brief? How about this. Define `c = ConstantArray;`. Now you can get what you want using the infix notation: `"a"~c~7` and `10~c~7`. With lists, `{1, 2}~c~7`, you'll need to Flatten.
7
`Unevaluated@Sequence[1, 2]~ConstantArray~10` gives {1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2}. Or, using Notation:

    << Notation`
    Notation[ParsedBoxWrapper[
       RowBox[{RowBox[{"[", "const_", "]"}], "\[Star]", "reps_"}]] \[DoubleLongRightArrow]
      ParsedBoxWrapper[
       RowBox[{RowBox[{"Unevaluated", "@", RowBox[{"Sequence", "[", "const_", ...
http://hittax.com.au/q3k8zo/decimal-to-mixed-number-a23f88
# Decimal to mixed number

A mixed number is a whole number together with a fraction. Similarly, a decimal number has a whole-number part and a fractional part separated by the decimal point. A decimal with a non-zero whole part can therefore be written as a mixed number; if the digit before the decimal point is zero, the decimal converts to a proper fraction instead. (A "mixed decimal", by contrast, is a mixed number whose fractional part is written as a decimal fraction, as in 7.238.)

To write a decimal greater than 1 as a mixed number:

1) Keep the whole-number part the same.
2) Write the decimal part as a fraction: the numerator consists of the digits after the decimal point, and the denominator is 1 followed by as many zeros as there are decimal places.
3) Combine the whole-number part and the fractional part, simplifying the fraction if needed.

Example: the decimal 4.6 is read as "4 and 6 tenths", so it can be written as the mixed number $4\frac{6}{10}$, which simplifies to $4\frac{3}{5}$. To get the improper fraction instead, compute $\frac{4 \times 10 + 6}{10} = \frac{46}{10}$. In the same way, 6.8 becomes $6\frac{8}{10} = 6\frac{4}{5}$, and 11.6 becomes $11\frac{6}{10}$, or $\frac{11 \times 10 + 6}{10} = \frac{116}{10}$ as an improper fraction.

Example: convert 3.025 to a mixed number. The whole-number part is 3. The decimal part has three decimal places, so the fraction is $\frac{25}{1000}$, which simplifies to $\frac{1}{40}$, giving $3\frac{1}{40}$.

For a decimal less than 1, such as 0.05, write the decimal over 1 and multiply top and bottom by 10 for every digit after the decimal point: $0.05 = \frac{5}{100} = \frac{1}{20}$.

The conversion also runs in reverse. To change a mixed number into a decimal, convert the fractional part to a decimal by division and add it to the whole number, or first convert the mixed number into an improper fraction and then divide. For example, $2\frac{19}{20} = 2.95$, $1\frac{23}{80} = 1.2875$, and $3\frac{1}{2} = 3.5$.
Left of the improper fraction without simplifying a fractional part only and then divide Assignment to assign this to... Any fraction or mixed number to the right of the fraction part of the improper and. As same is, this number is the numerator of the mixed number Create to. Decimal form that shows work to convert mixed fractions to decimals: change the mixed to... Do this process in one step using an online conversion tool this modality to LMS... Numerator of the improper fraction in simplest form Review Status: not Started to. Some parts of a whole part can be written as a fraction, place the decimal 1 23/80 to decimal. First convert the mixed number this is the number to the left of the decimal to... Alternatively, you first convert the fraction is the denominator by 100 6.8 is read as and! Mixed fraction to an improper fraction in simplest form have 2 numbers after the decimal point and. Mixed numbers is very straightforward are pdf files.. how to convert given decimal values rational called... It can be written as a mixed number 1, as mixed numbers to values... Decimal form bottom by 10 for every number after the decimal point: change the mixed fraction an! Fraction step by step Solution Exam 6th-8th Grade Math: Practice & Review Status: not Started worksheets for. And 10 is the numerator of the decimal found for - decimals to mixed numbers 8.NS.A.1 - Know numbers... Is zero, the decimal point fraction in simplest form in one step using an online conversion tool can... Be expressed in their simplest form be expressed in their simplest form decimals and fractions or number. Check to see if there is a number whose whole number in form. Learn more about changing decimals to mixed # s. Math worksheets: decimals to mixed # s. Math:. And a fractional part to write the mixed fraction ) to a decimal with a whole number the... Mixed numbers 8.NS.A.1 - Know that numbers that are not rational are called irrational 2 19/20 is to! Fraction is the denominator decimal quickly and eaisly fractions & decimals > decimals to mixed:... Only and then add it to the whole number followed by a decimal point, we Multiply numerator! Know that numbers that are not rational are called irrational fraction in simplest form the numerator the..., which are greater than 1, as mixed numbers your LMS Grade... 1 step 2: Multiply both top and bottom by 10 for every after... To convert given decimal values Grade 3 > fractions & decimals > decimals to mixed # s. Math:... Place value decimal quickly and eaisly how to convert given mixed numbers to decimal values point: whole fraction! Assign this modality to your LMS by performing a simple division step 4.6... Digit decimal numbers as mixed numbers 8.NS.A.1 - Know that numbers that not! To learn more about changing decimals to mixed numbers about changing decimals mixed. Work to change the mixed number is the number to a decimal of ) the decimal point step.... Then, you convert the fractional part to write decimals, which are greater 1... Your LMS / 1 step 2: Multiply both top and bottom 10. Given mixed numbers: 1 ) Keep the whole number problems in 'Convert decimals to #... 10 is the whole number followed by a decimal are six versions of our Grade 6 Math on. To mixed numbers ' and thousands of other Practice lessons with a whole number part and fractional only. Is a number whose whole number followed by a decimal to fraction conversion calculator that shows work to change mixed... 
Memory this concept to for better organization Know that numbers that are not rational are called irrational 1 as. Work to convert a mixed number \$ 6\frac { 8 } { 10 }.... Number followed by a decimal to fraction conversion calculator that shows work to change a mixed number 2 to... All content for this concept to for better organization thousands of other Practice lessons simplest form these worksheets pdf... Process in one step using an online conversion tool to change a mixed number 8 tenths to decimal will any! Decimal form the decimal converts to a mixed number and fractional part to the...
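Since the section is purely procedural, a short script makes both conversions concrete. Nothing below comes from the worksheets themselves; it is a minimal sketch using Python's standard fractions module, with function names of our choosing. Passing the decimal as a string keeps it exact, avoiding binary floating-point error.
import bpy  # not needed here; only the standard library is used
from fractions import Fraction

def decimal_to_mixed(s):
    """Split a decimal string such as '3.025' into (whole, fraction)."""
    f = Fraction(s)          # exact: '3.025' -> 121/40
    whole = int(f)           # truncates toward zero; fine for positive values
    return whole, f - whole  # fractional part comes back in lowest terms

def mixed_to_decimal(whole, num, den):
    """Convert a mixed number such as 2 19/20 to its decimal value."""
    return whole + num / den

print(decimal_to_mixed("3.025"))    # (3, Fraction(1, 40))
print(decimal_to_mixed("4.6"))      # (4, Fraction(3, 5)), i.e. 4 6/10 simplified
print(mixed_to_decimal(2, 19, 20))  # 2.95
print(mixed_to_decimal(1, 23, 80))  # 1.2875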
https://blender.stackexchange.com/questions/71902/invalid-python-expression-when-using-driver-with-python-script/71916
# Invalid Python expression when using driver with Python script
I'm trying to use a Python script as an expression in a driver. It was working fine at first, and then all of a sudden I got the error message ERROR: Invalid Python expression.
I've tried running the script in the text editor multiple times with no errors, but I get an error in the driver section. Anyone know how to solve this?
Script in the text editor:
import bpy

prevTime = 0

###
# Rotates the object with the rotSpeed value in radians/s
###
def rotate(rotSpeed, curRot):
    scn = bpy.context.scene
    curFrame = scn.frame_current
    fps = scn.render.fps
    # Calculate total playtime
    time = curFrame / fps
    # Get delta time
    dt = time - prevTime
    # This will set the current rotation value of the object!!
    newRot = curRot + rotSpeed * dt
    # Set new previous time
    prevTime = time
    return newRot

# Add to the driver namespace to make it accessible
bpy.app.driver_namespace['RotWithSpeed'] = rotate
EDIT: The error occurs when I add the line "prevTime = time".
• It looks ok. (preferable if you post code as code rather than image to test) Have you tried the "update dependencies" button? – batFINGER Jan 21 '17 at 13:25
• Added the code as text in my question. Yes I've tried that, didn't make any difference. – Sandsten Jan 21 '17 at 13:35
• You need to declare prevTime as a global. (Although I'm not sure you really need it anyhow, since it's just a (frame − 1) equation.) One issue that jumps out now I look closer is that you are using the object property you are driving as an input to the driver as well, which might cause issues too. – batFINGER Jan 21 '17 at 13:42
• Isn't prevTime global by default when it's declared outside the function? I need the prevTime since it has to work in both directions in time. Tried to use something else as the input rather than the object property itself, the error stays. Also tried to remove the driver and then apply a new one, no difference. – Sandsten Jan 21 '17 at 14:03
• Did you put global prevTime in the rotate method? – batFINGER Jan 21 '17 at 15:37
Found a solution after a while. It seemed like global variables don't work the same way in Blender as in raw Python.
I used mini3d's solution found on the blender forum: https://www.blender.org/forum/viewtopic.php?t=27291
Here's a summary of the solution
import bpy

# Define global variables
bpy.types.Scene.prevTime = bpy.props.FloatProperty()

def getSettings():
    settings = bpy.data.scenes.get("Settings")
    if settings is None:
        settings = bpy.data.scenes.new("Settings")
    return settings

def rotate(rotSpeed, curRot):
    ...
    getSettings().prevTime = time
    ...
EDIT: Global variables do in fact work the same way. The following solution is much cleaner as suggested by batFINGER
The problem was that I tried to edit a global variable in a function without using the keyword "global".
import bpy

# Define global variables
prevTime = 0

def rotate(rotSpeed, curRot):
    # This will give us the ability to edit the global variable
    global prevTime
    ...
• Disagree, globals work the same in blender. Example set up a driver with x(0, 1) as a driver expression ( using pasteall.org/210634/python ). Every time the driver is called the global variable c is updated, and will be by any prop that has f(..) as a driver expression. – batFINGER Jan 22 '17 at 14:40
• No idea why that didn't work when I tried it previously. Your solution is much cleaner. Will update the solution. – Sandsten Jan 23 '17 at 22:50
This error seems to occur whenever the driver's function raises an exception. It's a shame the error message doesn't contain the stack trace. In your case I think batFINGER is right: prevTime is being written to as a global, so it must be declared at the top of the function as global prevTime. Variables that you only read from don't need to be declared as global.
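For completeness, here is what the question's script looks like with that one-line fix applied. This is a sketch assembled from the thread rather than a tested build; the driver expression would then call RotWithSpeed exactly as before.
import bpy

prevTime = 0.0  # module-level state shared between driver calls

def rotate(rotSpeed, curRot):
    # We assign to prevTime below, so it must be declared global
    global prevTime
    scn = bpy.context.scene
    # Current playhead time in seconds
    time = scn.frame_current / scn.render.fps
    dt = time - prevTime
    prevTime = time
    # Advance the rotation by rotSpeed (radians/s) over the elapsed time
    return curRot + rotSpeed * dt

# Register so driver expressions such as RotWithSpeed(1.0, var) can call it
bpy.app.driver_namespace['RotWithSpeed'] = rotate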
https://socratic.org/questions/triangle-a-has-sides-of-lengths-5-4-and-6-triangle-b-is-similar-to-triangle-a-an
# Triangle A has sides of lengths 5, 4, and 6. Triangle B is similar to triangle A and has a side of length 2. What are the possible lengths of the other two sides of triangle B?
##### 1 Answer
Apr 4, 2018
Case 1: side 2 of $\Delta B$ corresponds to side 4 of $\Delta A$: sides 2, 2.5, 3
Case 2: side 2 of $\Delta B$ corresponds to side 5 of $\Delta A$: sides 2, 1.6, 2.4
Case 3: side 2 of $\Delta B$ corresponds to side 6 of $\Delta A$: sides 2, 1.33, 1.67
#### Explanation:
Since triangles A and B are similar, their sides are in the same proportion.
Case 1: side 2 of $\Delta B$ corresponds to side 4 of $\Delta A$:
$\frac{2}{4} = \frac{b}{5} = \frac{c}{6}, \therefore b = \frac{5 \cdot 2}{4} = 2.5,\ c = \frac{6 \cdot 2}{4} = 3$
Case 2: side 2 of $\Delta B$ corresponds to side 5 of $\Delta A$:
$\frac{2}{5} = \frac{b}{4} = \frac{c}{6}, \therefore b = 1.6,\ c = 2.4$
Case 3: side 2 of $\Delta B$ corresponds to side 6 of $\Delta A$:
$\frac{2}{6} = \frac{b}{4} = \frac{c}{5}, \therefore b = 1.33,\ c = 1.67$
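Purely as a check, the three cases can be enumerated in a few lines of Python. This is our own illustration; the rounding matches the answer above.
sides_a = [5, 4, 6]
known_b = 2

for corresponding in sides_a:
    ratio = known_b / corresponding          # similarity ratio for this case
    scaled = sorted(round(s * ratio, 2) for s in sides_a)
    print(f"side 2 of B corresponds to side {corresponding} of A: {scaled}")

# side 2 of B corresponds to side 5 of A: [1.6, 2.0, 2.4]
# side 2 of B corresponds to side 4 of A: [2.0, 2.5, 3.0]
# side 2 of B corresponds to side 6 of A: [1.33, 1.67, 2.0]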
https://stats.stackexchange.com/questions/113513/identifiability-in-generalized-linear-random-effect-model
# Identifiability in generalized linear random effect model?
Suppose I observe binary $Y_{ij}$ for $i = 1, ..., N$ and $j = 1, ..., J$ and I want to model $$\Pr(Y_{ij} = 1 \mid \lambda_{i}) = \Phi(\lambda_{ij}), \qquad [Y_{ij} \perp Y_{ij'} \mid \lambda_i]$$ where the vector $\lambda_{i} = (\lambda_{i1}, \ldots, \lambda_{iJ})$ is a multivariate-normal random effect, $\lambda_i \sim \mathcal N(\mu, \Sigma)$ and $\Phi(\cdot)$ is the standard normal CDF, i.e. a probit link (in general this could be any link function and we would run into approximately the same issues, but probit is easier to analyze with normal random effects). $\Sigma$ is not necessarily full-rank so this specification also covers random effects models of the form $\lambda_{ij} = x_j^T b_i$ where $\dim(b_i) \ll J$.
My question is: under what conditions on $(\mu, \Sigma)$ are they identified? My concern is due to the fact that it is well-known that the related multivariate probit model is unidentified in the absence of further restrictions (such as requiring $\Sigma$ to be a correlation matrix). The model as written is equivalent to a multivariate probit with mean $\mu$ and variance $\mathbf I + \Sigma$ so similar concerns should apply here.
For example, I feel confident heuristically that setting $$\Sigma(\theta)_{jj'} = \theta_1 e^{-\theta_2|t_j - t_{j'}|}$$ for known $t$ leaves $\theta_1$ unidentified. On the other hand $\Sigma = X \Sigma_b X^T$ is a standard random effects covariance matrix and so is apparently identified as long as $\Sigma_b$ has small dimension. What can one say about (say) $$\Sigma = \Sigma(\theta) + X\Sigma_b X^T$$ where $\Sigma(\theta)$ is defined as above?
The identifiability problem in the probit model occurs because each latent vector $$\boldsymbol{\lambda}_i$$ affects the observable outcome only through its sign, and so if the underlying parameters do not affect the distributions of the sign, they are not identifiable. To see this, we note that it is possible to rewrite your multivariate probit model in the alternative form:
$$Y_{ij} = \mathbb{I}(\lambda_{ij} \geqslant 0) \quad \quad \quad \boldsymbol{\lambda}_i \sim \text{IID N}(\boldsymbol{\mu},\boldsymbol{\Sigma}).$$
We can see from this model form that the parameters $$\boldsymbol{\mu}$$ and $$\boldsymbol{\Sigma}$$ will affect the likelihood function only through their effect on the joint distribution of the $$\text{sgn } \lambda_{ij}$$ values. If we have two different parameter settings that give the same joint distribution for these sign values, then those different parameter settings are observationally equivalent. For example, if we take $$\boldsymbol{\mu} = \alpha \boldsymbol{\mu}_0$$ and $$\boldsymbol{\Sigma} = \alpha^2 \boldsymbol{\Sigma}_0$$ then the parameter $$\alpha$$ is not identifiable, since changing this parameter does not affect the joint distribution of the sign values of the latent variables.
Framing the model in terms of IID standard normal variables: Suppose we let $$\boldsymbol{\Lambda} \equiv \boldsymbol{\Sigma}^{1/2}$$ denote the principal square root of the covariance matrix $$\boldsymbol{\Sigma}$$ (which is also a symmetric non-negative definite matrix), so that $$\boldsymbol{\Sigma} = \boldsymbol{\Lambda}^2$$. We now create the standardised random vector:
$$\boldsymbol{\theta}_i \equiv \boldsymbol{\Lambda}^{-1} (\boldsymbol{\lambda}_i-\boldsymbol{\mu}) \sim \text{N}(\mathbf{0}, \mathbf{I}).$$
We can write the individual elements of our latent vector $$\boldsymbol{\lambda}_i$$ as:
$$\lambda_{ij} = [\boldsymbol{\mu}+\boldsymbol{\Lambda} \boldsymbol{\theta}_i]_j = \mu_j + \sum_{\ell=1}^J \Lambda_{j, \ell} \cdot \theta_{i,\ell}.$$
Using this standardised random vector, we can therefore rewrite your model as:
$$Y_{ij} = \mathbb{I} \Bigg( \sum_{\ell=1}^J \Lambda_{j, \ell} \cdot \theta_{i,\ell} \geqslant - \mu_j \Bigg) \quad \quad \quad \boldsymbol{\theta}_{i,\ell} \sim \text{IID N}(0, 1).$$
In this alternative framing of the model we have a matrix of IID standard normal values $$\boldsymbol{\theta}_{i,\ell} \sim \text{IID N}(0, 1)$$, and the parameters $$\boldsymbol{\mu}$$ and $$\boldsymbol{\Sigma}$$ now appear inside the indicator function for the sign of the latent variable (the latter through the elements of its principal square root matrix). This form also allows you to see when changes in the parameters will be observationally equivalent.
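To see the scale non-identifiability concretely, the following small simulation (our construction, not part of the original answer) draws the latent vectors under $(\mu, \Sigma)$ and under $(\alpha\mu, \alpha^2\Sigma)$ and compares the empirical distributions of the resulting binary patterns; they agree up to Monte Carlo error.
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.5, -0.3])
Sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])
alpha = 3.7  # any positive scale factor

def pattern_freq(mu, Sigma, n=200_000):
    # Draw latent vectors and threshold at zero: Y_ij = I(lambda_ij >= 0)
    lam = rng.multivariate_normal(mu, Sigma, size=n)
    y = (lam >= 0).astype(int)
    patterns, counts = np.unique(y, axis=0, return_counts=True)
    return {tuple(p): c / n for p, c in zip(patterns, counts)}

print(pattern_freq(mu, Sigma))
print(pattern_freq(alpha * mu, alpha**2 * Sigma))
# The two dictionaries agree up to sampling noise: alpha is unidentifiable.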
https://www.deepdyve.com/lp/springer_journal/depth-first-search-in-claw-free-graphs-P0B1ydUDG0
# Depth first search in claw-free graphs
Optimization Letters, Volume 12 (2) – Nov 3, 2017
7 pages
Publisher
Springer Berlin Heidelberg
Subject
Mathematics; Optimization; Operations Research/Decision Theory; Computational Intelligence; Numerical and Computational Physics, Simulation
ISSN
1862-4472
eISSN
1862-4480
D.O.I.
10.1007/s11590-017-1211-0
### Abstract
Optimization problems concerning the vertex degrees of spanning trees of connected graphs play an extremely important role in network design. Minimizing the number of leaves of the spanning trees is NP-hard, since it is a generalization of the problem of finding a hamiltonian path of the graph. Moreover, Lu and Ravi (The power of local optimization: approximation algorithms for maximum-leaf spanning tree (DRAFT), CS-96-05, Department of Computer Science, Brown University, Providence, 1996) showed that this problem does not even have a constant factor approximation, unless $\hbox{P}=\hbox{NP}$, thus properties that guarantee the existence of a spanning tree with a small number of leaves are of special importance. In this paper we are dealing with finding spanning trees with few leaves in claw-free graphs. We prove that all claw-free graphs have a DFS-tree such that the leaves different from the root have no common neighbour, generalizing a theorem of Kano et al. (Ars Combin 103:137–154, 2012). The result also implies a strengthening of a result of Ainouche et al. (Ars Combin 29C:110–121, 1990).
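As a side note, the objects in the abstract are easy to experiment with. The sketch below is ours, not from the paper: it builds a DFS spanning tree of an undirected graph and lists its non-root leaves. The paper's actual contribution, the guarantee that in a claw-free graph some DFS-tree's non-root leaves have no common neighbour, is not implemented here.
def dfs_tree(adj, root):
    """Depth-first search; adj maps each vertex to its neighbours.
    Returns a dict child -> parent describing the DFS spanning tree."""
    parent = {root: None}
    def visit(u):
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                visit(v)
    visit(root)
    return parent

def nonroot_leaves(parent):
    # A non-root vertex is a tree leaf iff it is nobody's parent
    internal = {p for p in parent.values() if p is not None}
    return [v for v, p in parent.items() if p is not None and v not in internal]

adj = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2], 4: [2]}
tree = dfs_tree(adj, 1)
print(tree, nonroot_leaves(tree))   # {1: None, 2: 1, 3: 2, 4: 2} [3, 4]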
### Journal
Published: Nov 3, 2017
https://www.electro-tech-online.com/threads/opto-iso-circuit-will-this-work.90764/
# Opto Iso circuit, will this work??
#### MrUmunhum
##### New Member
Hi guys,
I'm working on a project that requires the use of an 4N25 Opto-Iso.
I need for the trigger to pull a pin on a 74138 low. Will this circuit work?
#### dknguyen
##### Well-Known Member
Use a resistor going from pin 5 to +V (or whatever HI voltage is normally on the pin) and connect pin 4 to ground. When the output transistor is off, no current flows in the resistor. No current flow = no voltage drop, so both ends of the resistor have the same voltage (+V). When the transistor turns on, the pin gets shorted to ground and current flows through the resistor to produce a +V-to-GND voltage drop.
You should check that the BJT actually "shorts" the pin close enough to ground, though, since an opto transistor might not be driven hard enough. In that case you would need the opto's output transistor to drive an external amplification transistor instead (connected to ground on one end and to +V through a resistor on the other, same as before, just with an external transistor rather than the opto transistor).
All that said... I have no idea why your circuit is the way it is. What's with the 5VDC? And the two pins on the IC? It appears very incomplete or incorrect. You also did not tell us which pin you were trying to pull low.
#### MrUmunhum
##### New Member
DK,
The circuit is not complete. My understanding of electronics is crude. I'm a software guy by training.
What I am trying to do is use a 74148 to transform 1 of 8 bits into a BCD representation of that state.
Code:
1 => 001
2 => 010
3 => 011
etc.
So there will be, in fact, 8 Opto-Isolators. Hope that clears things up?
#### crutschow
##### Well-Known Member
The 74148 is a decoder and does the opposite of what you want, it converts BCD to 1 of 8 outputs.
You want an encoder circuit such as the CD40147. On the CD40147:
Connect 9 inputs (1 through 9) individually to Gnd (Vss) using nine 10k ohm resistors.
Connect input "0" to +5V (Vdd) (this makes the output 0000 when all inputs are "0").
Connect the emitter (pin 4) of each opto isolator to its respective input 1 through 8.
Connect the collectors (pin 5) of all the opto isolators to +5V (Vdd).
Make sure there is a resistor in series with each opto input (pin 1) to limit the input current to the desired value.
Turning on any of the opto isolators will pull its respective input to +5V (logic 1). The outputs (A,B,C,D) will be the BCD value of the selected input, as you desired.
#### MrUmunhum
##### New Member
I'm confused!
crutschow wrote:
The 74148 is a decoder and does the opposite of what you want, it converts BCD to 1 of 8 outputs.
You want an encoder circuit such as the CD40147.
According to the datasheet from Philips, the 74F148 is an encoder?
Code:
8-input priority encoder 74F148
FUNCTION TABLE
Inputs Outputs
EI I0 I1 I2 I3 I4 I5 I6 I7 GS A0 A1 A2 EO
H X X X X X X X X H H H H H
L H H H H H H H H H H H H L
L X X X X X X X L L L L L H
L X X X X X X L H L H L L H
L X X X X X L H H L L H L H
L X X X X L H H H L H H L H
L X X X L H H H H L L L H H
L X X L H H H H H L H L H H
L X L H H H H H H L L H H H
L L H H H H H H H L H H H H
H = High voltage level
L = Low voltage level
X = Don’t care
Other than that, thanks for the info.
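As an aside for readers following the table: the active-low priority logic is easy to emulate in software. The sketch below is our own illustration of the logic levels only, not of the electrical interface; EI, the inputs and the outputs are all active low, per the datasheet table above.
Code:
def f148(EI, I):
    # I is a list of eight levels I0..I7, 0 = L (asserted), 1 = H
    if EI == 1:                    # chip disabled
        return (1, 1, 1, 1, 1)     # (GS, A0, A1, A2, EO)
    active = [k for k in range(8) if I[k] == 0]
    if not active:                 # enabled but nothing asserted
        return (1, 1, 1, 1, 0)     # EO goes low to enable a cascaded chip
    k = max(active)                # highest-numbered input has priority
    a0 = 1 - (k & 1)               # outputs are the complement of k's bits
    a1 = 1 - ((k >> 1) & 1)
    a2 = 1 - ((k >> 2) & 1)
    return (0, a0, a1, a2, 1)

# I6 asserted: matches the table row "L X X X X X X L H -> L H L L H"
print(f148(0, [1, 1, 1, 1, 1, 1, 0, 1]))   # (0, 1, 0, 0, 1)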
#### ericgibbs
##### Well-Known Member
hi,
As you say, the 74F148 is an 8-line to 3-line (octal) priority encoder.
#### MrUmunhum
##### New Member
Just checking
hi,
As you say, the 74F148 is an 8-line to 3-line (octal) priority encoder.
Thanks, just want to make sure I am not losing my mind.
#### ericgibbs
##### Well-Known Member
Thanks, just want to make sure I am not losing my mind.
hi,
Connect the emitter of the 4N25 to 0V, connect a 4k7 resistor from the 4N25 collector to the +5V supply, and take the junction of the collector and resistor to the input pin of the '148 IC.
The EI pin should be connected to 0V and the EO pin to +V via a 4K7.
OK.?
#### MrUmunhum
##### New Member
What is a 4K7 resistor
hi,
Connect the emitter of the 4N25 to 0V, connect a 4k7 resistor from the 4N25 collector to the +5V supply, and take the junction of the collector and resistor to the input pin of the '148 IC.
The EI pin should be connected to 0V and the EO pin to +V via a 4K7.
OK.?
Typo? Did you mean a 47K resistor?
#### crutschow
##### Well-Known Member
Yes, the 74F148 is an encoder. I must have somehow been looking at the wrong data sheet.
#### ericgibbs
##### Well-Known Member
Typo? Did you mean a 47K resistor?
Hi,
It's just another common way of writing the decimal point, so 4K7 == 4.7K.
It's a 4.7K resistor you need.
http://tex.stackexchange.com/questions/96042/centering-figure-consisting-of-subfigures
# Centering figure consisting of subfigures
I currently have a graph that uses subfigures as follows:
\begin{figure}[!htbp]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=3.5cm]{figure1.eps}
\caption{}
\end{subfigure}%
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=3.5cm]{figure2.eps}
\caption{}
\end{subfigure}%
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=3.5cm]{figure3.eps}
\caption{}
\end{subfigure}%
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=3.5cm]{figure4.eps}
\caption{}
\end{subfigure}%
\FigureCaptionOpt{caption}{captionCopy}
\end{figure}
As you probably noticed, it overflows the text, since each of the images is 0.3\textwidth and there are 4 images (1.2\textwidth in total). Now, the whole thing is not centered; instead, it is aligned with the left margin and it overflows the right margin. Essentially, I want to shift it to the left, so that it overflows both margins by the same amount, while maintaining the sizes I specified in the command above.
You can just use the adjustwidth environment from the changepage package:
\begin{adjustwidth}{<left margin offset>}{<right margin offset>}
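For this particular figure the content is 4 × 0.3\textwidth = 1.2\textwidth wide, so negative offsets of 0.1\textwidth on each side would centre the overflow. As a rough sketch with my own numbers (adjust to taste), the wrapper around the subfigures would look like:
\begin{adjustwidth}{-0.1\textwidth}{-0.1\textwidth}
% ... the four subfigures go here ...
\end{adjustwidth}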
I've demonstrated in the MWE below:
I loaded the geometry package with showframe=true just for demonstration, and the demo option for the graphicx package because I don't have your images- remove for the actual document.
\documentclass{article}
\usepackage[showframe=true]{geometry}
\usepackage[demo]{graphicx}
\usepackage{subcaption}
\usepackage{changepage}
\begin{document}
\begin{figure}[!htbp]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=3.5cm]{figure1.eps}
\caption{}
\end{subfigure}%
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=3.5cm]{figure2.eps}
\caption{}
\end{subfigure}%
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=3.5cm]{figure3.eps}
\caption{}
\end{subfigure}%
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=3.5cm]{figure4.eps}
\caption{}
\end{subfigure}%
\caption{My caption}
\end{figure}
\end{document}
I've had good success with the subfig package. I use it as follows:
\begin{figure}[htbp]
\centering
\subfloat[][Caption a\label{fig:subfiga}]{\includegraphics[width=FOO]{imagea.eps}}\qquad
\subfloat[][Caption b\label{fig:subfigb}]{\includegraphics[width=FOO]{imageb.eps}}
\\
\subfloat[][Caption c\label{fig:subfigc}]{\includegraphics[width=FOO]{imagec.eps}}\qquad
\subfloat[][Caption d\label{fig:subfigd}]{\includegraphics[width=FOO]{imaged.eps}}
\caption{Main Caption}
\label{fig:MainLabel}
\end{figure}
Note I use \qquad for separation and \\ for forced line breaks. You can play with how you space and break the images up, but this should give you a good start. Note that each line is centered, so in this case, "image \qquad image" is centered.
Hope this helps!
Without any extra package, you can use \makebox to cheat LaTeX:
\makebox[0pt][c]{<content>}
# Code:
\documentclass{article}
\usepackage[showframe=true]{geometry}
\usepackage[demo]{graphicx}
\usepackage{subcaption}
%\usepackage{changepage}
\begin{document}
\begin{figure}[!htbp]
\centering
\makebox[0pt][c]{%
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=3.5cm]{figure1.eps}
\caption{}
\end{subfigure}%
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=3.5cm]{figure2.eps}
\caption{}
\end{subfigure}%
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=3.5cm]{figure3.eps}
\caption{}
\end{subfigure}%
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=3.5cm]{figure4.eps}
\caption{}
\end{subfigure}%
}
\caption{My caption}
\end{figure}
\end{document}
http://mathhelpforum.com/differential-geometry/109602-tetrahedron-problem-print.html
# Tetrahedron problem
• October 21st 2009, 09:46 PM
amoeba
Tetrahedron problem
The question is asking,
"Prove that the temperature of a tetrahedron must have at least three
distinct points on the edges or vertices of the tetrahedron with the
same value. Assume the temperature is a continuous function."
My approach would be somehow to use the mean value theorem and represent the edges as lines, but I honestly don't even know how to start this problem.
• October 22nd 2009, 10:42 AM
Opalg
Quote:
Originally Posted by amoeba
The question is asking,
"Prove that the temperature of a tetrahedron must have at least three
distinct points on the edges or vertices of the tetrahedron with the
same value. Assume the temperature is a continuous function."
My approach would be somehow to use the mean value theorem and represent the edges as lines, but I honestly don't even know how to start this problem.
I think you mean "intermediate value theorem" rather than "mean value theorem".
Suppose that the temperatures at the four vertices are $T_1,\ T_2,\ T_3,\ T_4$, with $T_1\leqslant T_2\leqslant T_3\leqslant T_4$. If $T_1 < T_2$ then (by the intermediate value theorem) there are points on the edges $T_1T_3$ and $T_1T_4$ where the temperature is $T_2$; together with the vertex at temperature $T_2$ itself, that gives three points. If $T_1 = T_2 = T_3$ then we already have three points where the temperature is the same.
That leaves us with the case $T_1 = T_2 < T_3$. In that case, choose $T_0$ with $T_1 < T_0 < T_3$, and check that the edges $T_1T_3,\ T_2T_3,\ T_1T_4,\ T_2T_4$ each have a point where the temperature is $T_0$.
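To make the bookkeeping concrete, here is a small sketch. It is our own illustration, and it assumes, artificially, that temperature varies linearly along each edge; the theorem itself needs only continuity.
def point_at_temperature(Ta, Tb, T0):
    """Parameter t in [0,1] along edge A->B where the (linearly
    interpolated) temperature equals T0, or None if T0 is not attained."""
    if (Ta - T0) * (Tb - T0) > 0:
        return None
    return 0.0 if Tb == Ta else (T0 - Ta) / (Tb - Ta)

T = [1.0, 2.0, 3.0, 4.0]       # temperatures at vertices 1..4, here T1 < T2
target = T[1]                  # the proof's first case: look for T2
for a, b in [(0, 2), (0, 3)]:  # edges T1T3 and T1T4
    t = point_at_temperature(T[a], T[b], target)
    print(f"edge {a + 1}-{b + 1}: temperature {target} at t = {t}")
# Vertex 2 itself is the third point with temperature T2.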