url stringlengths 14 2.42k | text stringlengths 100 1.02M | date stringlengths 19 19 | metadata stringlengths 1.06k 1.1k |
|---|---|---|---|
http://nrich.maths.org/public/leg.php?code=-99&cl=2&cldcmpid=2158 | Search by Topic
Resources tagged with Working systematically similar to A Square in a Circle:
Filter by: Content type:
Stage:
Challenge level:
There are 332 results
Broad Topics > Using, Applying and Reasoning about Mathematics > Working systematically
Triangles to Tetrahedra
Stage: 3 Challenge Level:
Starting with four different triangles, imagine you have an unlimited number of each type. How many different tetrahedra can you make? Convince us you have found them all.
Waiting for Blast Off
Stage: 2 Challenge Level:
10 space travellers are waiting to board their spaceships. There are two rows of seats in the waiting room. Using the rules, where are they all sitting? Can you find all the possible ways?
When Will You Pay Me? Say the Bells of Old Bailey
Stage: 3 Challenge Level:
Use the interactivity to play two of the bells in a pattern. How do you know when it is your turn to ring, and how do you know which bell to ring?
Single Track
Stage: 2 Challenge Level:
What is the best way to shunt these carriages so that each train can continue its journey?
Shunting Puzzle
Stage: 2 Challenge Level:
Can you shunt the trucks so that the Cattle truck and the Sheep truck change places and the Engine is back on the main line?
Map Folding
Stage: 2 Challenge Level:
Take a rectangle of paper and fold it in half, and half again, to make four smaller rectangles. How many different ways can you fold it up?
Square Corners
Stage: 2 Challenge Level:
What is the greatest number of counters you can place on the grid below without four of them lying at the corners of a square?
You Owe Me Five Farthings, Say the Bells of St Martin's
Stage: 3 Challenge Level:
Use the interactivity to listen to the bells ringing a pattern. Now it's your turn! Play one of the bells yourself. How do you know when it is your turn to ring?
Cover the Tray
Stage: 2 Challenge Level:
These practical challenges are all about making a 'tray' and covering it with paper.
Display Boards
Stage: 2 Challenge Level:
Design an arrangement of display boards in the school hall which fits the requirements of different people.
Street Party
Stage: 2 Challenge Level:
The challenge here is to find as many routes as you can for a fence to go so that this town is divided up into two halves, each with 8 blocks.
Ice Cream
Stage: 2 Challenge Level:
You cannot choose a selection of ice cream flavours that includes totally what someone has already chosen. Have a go and find all the different ways in which seven children can have ice cream.
Counters
Stage: 2 Challenge Level:
Hover your mouse over the counters to see which ones will be removed. Click to remover them. The winner is the last one to remove a counter. How you can make sure you win?
Tetrafit
Stage: 2 Challenge Level:
A tetromino is made up of four squares joined edge to edge. Can this tetromino, together with 15 copies of itself, be used to cover an eight by eight chessboard?
Stage: 2 Challenge Level:
How can you arrange the 5 cubes so that you need the smallest number of Brush Loads of paint to cover them? Try with other numbers of cubes as well.
Paw Prints
Stage: 2 Challenge Level:
A dog is looking for a good place to bury his bone. Can you work out where he started and ended in each case? What possible routes could he have taken?
Two on Five
Stage: 1 and 2 Challenge Level:
Take 5 cubes of one colour and 2 of another colour. How many different ways can you join them if the 5 must touch the table and the 2 must not touch the table?
Sticks and Triangles
Stage: 2 Challenge Level:
Using different numbers of sticks, how many different triangles are you able to make? Can you make any rules about the numbers of sticks that make the most triangles?
Newspapers
Stage: 2 Challenge Level:
When newspaper pages get separated at home we have to try to sort them out and get things in the correct order. How many ways can we arrange these pages so that the numbering may be different?
Isosceles Triangles
Stage: 3 Challenge Level:
Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw?
Squares in Rectangles
Stage: 3 Challenge Level:
A 2 by 3 rectangle contains 8 squares and a 3 by 4 rectangle contains 20 squares. What size rectangle(s) contain(s) exactly 100 squares? Can you find them all?
Making Squares
Stage: 2 Challenge Level:
Investigate all the different squares you can make on this 5 by 5 grid by making your starting side go from the bottom left hand point. Can you find out the areas of all these squares?
Geoboards
Stage: 2 Challenge Level:
This practical challenge invites you to investigate the different squares you can make on a square geoboard or pegboard.
Red Even
Stage: 2 Challenge Level:
You have 4 red and 5 blue counters. How many ways can they be placed on a 3 by 3 grid so that all the rows columns and diagonals have an even number of red counters?
Putting Two and Two Together
Stage: 2 Challenge Level:
In how many ways can you fit two of these yellow triangles together? Can you predict the number of ways two blue triangles can be fitted together?
Fence It
Stage: 3 Challenge Level:
If you have only 40 metres of fencing available, what is the maximum area of land you can fence off?
Celtic Knot
Stage: 2 Challenge Level:
Building up a simple Celtic knot. Try the interactivity or download the cards or have a go on squared paper.
Knight's Swap
Stage: 2 Challenge Level:
Swap the stars with the moons, using only knights' moves (as on a chess board). What is the smallest number of moves possible?
Three Sets of Cubes, Two Surfaces
Stage: 2 Challenge Level:
How many models can you find which obey these rules?
My New Patio
Stage: 2 Challenge Level:
What is the smallest number of tiles needed to tile this patio? Can you investigate patios of different sizes?
3 Rings
Stage: 2 Challenge Level:
If you have three circular objects, you could arrange them so that they are separate, touching, overlapping or inside each other. Can you investigate all the different possibilities?
Cuboid-in-a-box
Stage: 2 Challenge Level:
What is the smallest cuboid that you can put in this box so that you cannot fit another that's the same into it?
Tiles on a Patio
Stage: 2 Challenge Level:
How many ways can you find of tiling the square patio, using square tiles of different sizes?
Halloween Investigation
Stage: 2 Challenge Level:
Ana and Ross looked in a trunk in the attic. They found old cloaks and gowns, hats and masks. How many possible costumes could they make?
Counting Cards
Stage: 2 Challenge Level:
A magician took a suit of thirteen cards and held them in his hand face down. Every card he revealed had the same value as the one he had just finished spelling. How did this work?
Calcunos
Stage: 2 Challenge Level:
If we had 16 light bars which digital numbers could we make? How will you know you've found them all?
Hexpentas
Stage: 1 and 2 Challenge Level:
How many different ways can you find of fitting five hexagons together? How will you know you have found all the ways?
Tetrahedra Tester
Stage: 3 Challenge Level:
An irregular tetrahedron is composed of four different triangles. Can such a tetrahedron be constructed where the side lengths are 4, 5, 6, 7, 8 and 9 units of length?
Stage: 2 Challenge Level:
How many DIFFERENT quadrilaterals can be made by joining the dots on the 8-point circle?
Two by One
Stage: 2 Challenge Level:
An activity making various patterns with 2 x 1 rectangular tiles.
Four Triangles Puzzle
Stage: 1 and 2 Challenge Level:
Cut four triangles from a square as shown in the picture. How many different shapes can you make by fitting the four triangles back together?
Teddy Town
Stage: 1, 2 and 3 Challenge Level:
There are nine teddies in Teddy Town - three red, three blue and three yellow. There are also nine houses, three of each colour. Can you put them on the map of Teddy Town according to the rules?
Egyptian Rope
Stage: 2 Challenge Level:
The ancient Egyptians were said to make right-angled triangles using a rope with twelve equal sections divided by knots. What other triangles could you make if you had a rope like this?
Arranging the Tables
Stage: 2 Challenge Level:
There are 44 people coming to a dinner party. There are 15 square tables that seat 4 people. Find a way to seat the 44 people using all 15 tables, with no empty places.
Polydron
Stage: 2 Challenge Level:
This activity investigates how you might make squares and pentominoes from Polydron.
Eight Queens
Stage: 2 Challenge Level:
Place eight queens on an chessboard (an 8 by 8 grid) so that none can capture any of the others.
One to Fifteen
Stage: 2 Challenge Level:
Can you put the numbers from 1 to 15 on the circles so that no consecutive numbers lie anywhere along a continuous straight line?
The Pied Piper of Hamelin
Stage: 2 Challenge Level:
This problem is based on the story of the Pied Piper of Hamelin. Investigate the different numbers of people and rats there could have been if you know how many legs there are altogether!
A Square of Numbers
Stage: 2 Challenge Level:
Can you put the numbers 1 to 8 into the circles so that the four calculations are correct?
More Magic Potting Sheds
Stage: 3 Challenge Level:
The number of plants in Mr McGregor's magic potting shed increases overnight. He'd like to put the same number of plants in each of his gardens, planting one garden each day. How can he do it? | 2015-10-04 12:54:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24716651439666748, "perplexity": 1855.0473259229836}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736673632.3/warc/CC-MAIN-20151001215753-00214-ip-10-137-6-227.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/491279/mandelbrot-boundary | # Mandelbrot boundary
Is there a sequence of parameterized expressions for the border of all the major bulbs of the mandelbrot set? By major meaning all bulbs with diameter greater than 0.01 for example. I am interested in generating the major features of the set to apply transformations to it. I want to create a vector of points of the border (for major features only) for a Matlab type language. I know the parameterized expression for the main cardiod and the period 2 bulb centered at -1.
In the discussion, the op asked wheather the bulbs are exactly circular, and if I could post the code to generate the Taylor series for the bulbs. The quick answer is that only the period2 bulb centered at -1 is exactly circular. My first answer gave the center and size of some of the bulbs and cardioids. The size is a rather crude approximation of a bulb or cardiod, since all but one of the bulbs isn't exactly circular, and since the cardioids also need a direction. I enhanced the original pari-gp code to also calculate the mapping of the bulbs/cardioid to the unit circle. The pari-gp code is below. islopetaylor(c,n) returns the taylor series for the period n cardiod or bulb near c.
All of these hyperbolic centers can have their bulbs and/or cardioids mapped to the unit circle, where the boundary of the unit circle corresponds to where the $|\text{slope}|=1$. The inside of the circle, is mapped to points inside the hyperbolic region, where the region has an attracting fixed point of period n.
Finding such a region involves getting an approximation for the nearest fixed point, in the neighborhood of the hyperbolic center. The fixed point is attracting, so we can iterate, starting with z=0 with period n, $z \mapsto f^{n}(z)$, where $f(z)=z^2+c$, and c is the point whose fixed point is desired, so that in the limit, the fixed point is $l=\lim_{k \to \infty} f^{kn}(0)$, $f^{n}(l)=l$.
At the fixed point l, calculate the derivitive. $\frac{dy}{dx}f^{n}(l+x)$, where l is the fixed point, described above. If we call this derivitive fixed point function g(c), then the desired function mapping the bulb/cariod to the unit circle is $g^{-1}(z)$. For a given bulb, $g^{-1}(0)$ is hyperbolic center of that bulb or cardiod. I generated the Taylor series by using a circular Cauchy integral numerical approximation at a radius of 0.5, using 50 sample points; which gives results accurate to approximately 20 decimal digits or so, with a 25 term Taylor series.
Here is an example of how to use the pari-gp code, below, to print out the taylor series for the period4 bulbs and cardioids. EDIT: I also included the results for all of the period3 and period5 bulbs and cardioids. Similar code would work for other periods, and sizes. The prtpoly routine is just for pretty printing, of the first few terms; The polynomial output of of islopetaylor can be used directly. For bulbs, the diameter is approximately twice the absolute value of the x coefficient; the Taylor series is obviously much more exact. My first answer estimate of 1.6x the distance to the nearest 2n bulb is also a good approximation. p=periodnzero(n,l); also works for n=6..10. There are a total of sixteen period6 bulbs and cardioids, most of which are pretty small. p=randomperiodn(n,l) works for the larger bulbs, for p>=5.
p=periodnzero(3,0.001); /* rtn period3 bulbs size>0.001 */
/* taylor series for p3 bulbs, first 7 terms of gt poly */
for (n=1,length(p),gt=islopetaylor(p[n]); prtpoly(gt,7));
p=periodnzero(4,0.001); /* rtn period4 bulbs size>0.001 */
for (n=1,length(p),gt=islopetaylor(p[n]); prtpoly(gt,7));
p=periodnzero(5,0.001); /* rtn period5 bulbs size>0.001 */
---- below is the output ----
(08:49) gp > p=periodnzero(3,0.001); /* rtn period3 bulbs size>0.001 */
periodnzero(3);
0.0288243794158896630 -1.75487766624669276
0.185709338183664371 -0.122561166876653620 + 0.744861766619744237*I
(09:06) gp > /* taylor series for p3 bulbs, first 7 terms of gt poly */
(09:06) gp > for (n=1,length(p),gt=islopetaylor(p[n]); prtpoly(gt,7));
{z= -1.75487766624669276
+x^ 1* 0.00951775795656622345
+x^ 2*-0.00439215019285891483
+x^ 3*-0.000255593673326239667
+x^ 4* 0.00000507529911656475223
+x^ 5* 0.00000242235932732248524
+x^ 6* 0.000000167831763783128979
}
{z= (-0.122561166876653620 + 0.744861766619744237*I)
+x^ 1*(-0.00475887897828311173 - 0.0943369626738123763*I)
+x^ 2*( 0.00219607509642945741 - 0.00111573891029371424*I)
+x^ 3*( 0.000127796836663119834 + 0.0000966580666527584112*I)
+x^ 4*(-0.00000253764955828237611 + 0.0000131076055547205131*I)
+x^ 5*(-0.00000121117966366124262 + 0.000000329634003020475535*I)
+x^ 6*(-0.0000000839158818915644933 - 0.0000000951268060412508001*I)
}
(09:06) gp > p=periodnzero(4,0.001); /* rtn period4 bulbs size>0.001 */
periodnzero(4);
0.00157165406194145265 -1.94079980652948475
0.113351748952365738 -1.31070264133683288
0.0129942224028632946 -0.156520166833755062 + 1.03224710892283180*I
0.0868986952447806464 0.282271390766913880 + 0.530060617578525299*I
(09:06) gp > for (n=1,length(p),gt=islopetaylor(p[n]); prtpoly(gt,7));
{z= -1.94079980652948475
+x^ 1* 0.000495813881701186015
+x^ 2*-0.000244518035475789364
+x^ 3*-0.00000233355263604428247
+x^ 4* 0.0000000534459067501123460
+x^ 5* 0.00000000181828170546195257
+x^ 6*-3.23710018614877151 E-12
}
{z= -1.31070264133683288
+x^ 1* 0.0589801138156112997
+x^ 2* 0.00165047843656841052
+x^ 3* 0.0000693045614883186867
+x^ 4* 0.00000269896930057060465
+x^ 5* 0.0000000523314280419260434
+x^ 6*-0.00000000565936605551606439
}
{z= (-0.156520166833755062 + 1.03224710892283180*I)
+x^ 1*( 0.00345889762902884997 - 0.00245474033761904195*I)
+x^ 2*(-0.00152469430548367362 + 0.00130179544716022133*I)
+x^ 3*(-0.000143828841409685183 - 0.0000378568944213399944*I)
+x^ 4*( 0.00000385828834515733905 - 0.00000994102399364532939*I)
+x^ 5*( 0.00000140472771445005478 + 0.000000673772055503541835*I)
+x^ 6*(-0.0000000523614237035883056 + 0.000000196431825642151357*I)
}
{z= ( 0.282271390766913880 + 0.530060617578525299*I)
+x^ 1*(-0.0331968614776850928 - 0.0287926429442936276*I)
+x^ 2*( 0.000821714104937363043 - 0.00133634537333004157*I)
+x^ 3*( 0.000110343336983547981 + 0.0000572780596940444853*I)
+x^ 4*(-0.00000523449594881769755 + 0.0000117988699179895988*I)
+x^ 5*(-0.00000143180256932374878 - 0.000000528764025646146377*I)
+x^ 6*( 0.0000000551927252814394115 - 0.000000186571532024463594*I)
}
(09:06) gp > p=periodnzero(5,0.001); /* rtn period5 bulbs size>0.001 */
periodnzero(5);
0.00124142647819312318 -1.86078252220485487
0.00643004051683055908 -1.62541372512330374
0.0782244856281390343 -0.504340175446244000 + 0.562765761452981964*I
0.00750234541167943880 0.359259224758007439 + 0.642513737138542349*I
0.0471208943401874649 0.379513588015923745 + 0.334932305597497587*I
0.00540957192966561334 -0.0442123577040706231 + 0.986580976280892768*I
0.00109467916151353002 -0.198042099364253840 + 1.10026953729269853*I
0.00454737524002394192 -1.25636793006818076 + 0.380320963472722507*I
(09:06) gp > for (n=1,length(p),gt=islopetaylor(p[n]); prtpoly(gt,7));
{z= -1.86078252220485487
+x^ 1* 0.000390175993253921407
+x^ 2*-0.000193606895721835075
+x^ 3*-0.00000111641882446931317
+x^ 4* 0.0000000945861637002427640
+x^ 5* 0.00000000182079818330302844
+x^ 6*-6.26858107084452022 E-11
}
{z= -1.62541372512330374
+x^ 1* 0.00202943297897757605
+x^ 2*-0.00100562087600704118
+x^ 3*-0.0000100969021567375794
+x^ 4* 0.00000299007153447035234
+x^ 5* 0.0000000448549883194611938
+x^ 6*-0.0000000140393194397544057
}
{z= (-0.504340175446244000 + 0.562765761452981964*I)
+x^ 1*( 0.0227804277538978753 - 0.0313962947316761485*I)
+x^ 2*(-0.000187493753264388258 + 0.000240573004638874982*I)
+x^ 3*(-0.0000173170154681859400 + 0.0000483439272385087499*I)
+x^ 4*( 0.00000186140591725642208 - 0.00000140612260499651276*I)
+x^ 5*(-0.0000000359687742865028775 - 0.000000238921821822236246*I)
+x^ 6*(-0.0000000140479390745627075 + 0.0000000154228924213440731*I)
}
{z= ( 0.359259224758007439 + 0.642513737138542349*I)
+x^ 1*( 0.00125062477418773157 - 0.00205512908763713047*I)
+x^ 2*(-0.000473500958540455897 + 0.00108015520081310997*I)
+x^ 3*(-0.000108144537904337611 - 0.0000221136751964296838*I)
+x^ 4*( 0.00000365352385186322974 - 0.0000109205152171821385*I)
+x^ 5*( 0.00000143644262907468698 + 0.000000753636636313957017*I)
+x^ 6*(-0.000000133837667274211728 + 0.000000211504575842695306*I)
}
{z= ( 0.379513588015923745 + 0.334932305597497587*I)
+x^ 1*(-0.0230612323455722479 - 0.00538785818574664379*I)
+x^ 2*( 0.000223715667692254576 - 0.00100096226615781152*I)
+x^ 3*( 0.0000924390752585046801 + 0.0000283948327212593745*I)
+x^ 4*(-0.00000444777220263711731 + 0.0000109991366640167731*I)
+x^ 5*(-0.00000147488363872001793 - 0.000000750149091456313290*I)
+x^ 6*( 0.000000131362683170454543 - 0.000000211363105809565828*I)
}
{z= (-0.0442123577040706231 + 0.986580976280892768*I)
+x^ 1*(-0.00168631410697906826 + 0.000419905880430706866*I)
+x^ 2*( 0.000824122737737012684 - 0.000147452933665481287*I)
+x^ 3*( 0.0000148294436540590271 - 0.0000423012304388835166*I)
+x^ 4*(-0.00000179611219530982926 + 0.000000252886482712731910*I)
+x^ 5*( 0.000000132428561354388846 + 0.000000194421657850058569*I)
+x^ 6*( 0.0000000186904817675855266 - 0.0000000121128281556966309*I)
}
{z= (-0.198042099364253840 + 1.10026953729269853*I)
+x^ 1*( 0.000338411530274354574 + 0.0000551258574846720436*I)
+x^ 2*(-0.000169328304150344563 - 0.0000234867261464734632*I)
+x^ 3*( 0.0000000298451123227361170 - 0.00000280280755567260578*I)
+x^ 4*( 0.0000000425917226359921085 + 0.0000000627926935964822686*I)
+x^ 5*(-0.00000000297733729350064710 + 0.00000000108485294983731985*I)
+x^ 6*( 6.65684728185643910 E-12 - 9.88920256175429103 E-11*I)
}
{z= (-1.25636793006818076 + 0.380320963472722507*I)
+x^ 1*(-0.000846269215461327949 - 0.00116600657512957892*I)
+x^ 2*( 0.000389349729034723316 + 0.000591786929496725821*I)
+x^ 3*( 0.0000237849154393727229 - 0.00000452582161657443604*I)
+x^ 4*(-0.000000856099479577055100 - 0.00000108478239958016467*I)
+x^ 5*(-0.0000000783802180386627187 + 0.0000000644312584363213421*I)
+x^ 6*( 0.00000000487678960666910970 + 0.00000000500884893452725251*I)
}
(09:06) gp > p=periodnzero(5,0.001); /* rtn period3 bulbs size>0.001 */
periodnzero(5);
0.00124142647819312318 -1.86078252220485487
0.00643004051683055908 -1.62541372512330374
0.0782244856281390343 -0.504340175446244000 + 0.562765761452981964*I
0.00750234541167943880 0.359259224758007439 + 0.642513737138542349*I
0.0471208943401874649 0.379513588015923745 + 0.334932305597497587*I
0.00540957192966561334 -0.0442123577040706231 + 0.986580976280892768*I
0.00109467916151353002 -0.198042099364253840 + 1.10026953729269853*I
0.00454737524002394192 -1.25636793006818076 + 0.380320963472722507*I
Here is the pari-gp program to generate this. This code also includes updates for the code in the original answer well. More updates, including the plotsetup routine, and maketheplot routine; see the picture below.
print ("periodnzero(n,l); /* calculate all periodn bulbs size>l, returns vector */");
print ("randomperiodn(n,l); /* n>4 approximate all periodn bulbs size>l rtn vect */");
print ("invzero(c,n); /* use Newton's method */");
print ("estim2nzero(c,n); /* find nearest 2n zero */");
print ("gt=islopetaylor(c,n); /* taylor series for c, n iterations */ ");
print ("prtpoly(gt,10); /* print taylor series for islopetaylor */ ");
default(format,"g0.18");
z=1.0;
precis=precision(z);
plim=10^(-0.5*precis); /* precision limit */
/* plim=10^(-0.65*precis); */ /* precision limit */
periodnzero(n,l) = {
local(z,i,pcur,otemp,Cp,oout);
otemp=vector(2^(n-1));
pcur=0;
zn=0;
print("periodnzero("n");");
Cp=croot(n);
for (i=1,length(Cp),
z=estimz2(Cp[i],n,l);
if (z<>0,pcur++;otemp[pcur]=Cp[i]);
);
oout=vector(pcur);
i=1;while (otemp[i]<>0,oout[i]=otemp[i];i++);
return(oout);
}
/* return all the imag>0 roots of zn, iterating zn<=zn^2+x */
croot(m) = {
local(zn,Cr,n,i,v);
zn=x;
for (n=1,m-1, zn=zn^2+x; );
Cr=polroots(zn);
i=0;
v=vector(length(Cr));
for (n=1,length(Cr),
z=Cr[n];
if (((real(z)<>0) || (imag(z)<>0)) && (imag(z)>=0),
if (imag(z)==0, z=real(z));
i++;
v[i]=z;
);
);
return(v);
}
estimz2(C0,n,l) = {
local(z,y,i);
z=0;
y=0;
if (l==0,l=0.01);
/* check to make sure that the z_n root is not repeating by a factor of n */
for (i=1,n-1,z=z^2+C0;if (abs(z)<plim,y=i));
if (y==0,
y=estim2nzero(C0,n); /* estimate for 2n zero */
y=invzero(y,n*2); /* refinement for estimate */
/* 1.647(zn-z2n) is approx size of the bug or bulb, based on Feignenbaum limit */
if (1.6*abs(C0-y)>l,
print(1.6*abs(C0-y)" "C0 );
return(y);
);
return(0);
,
return(0);
);
}
sizez2(C0,n) = {
1.6*abs(C0-invzero(estim2nzero(C0,n),n*2));
}
/* use newton's method, centered at C0 */
/* iterating z=z^2+x+C0 with z only needing a0 and a1 terms */
/* s<>0 is used to calculate inv(s) instead of inv(0) */
/* m<>0 is used to calculate Misiurewicz point near C0,n,m */
invzero(C0,n,s,m)={
local(z,i,j,a1,sz,sa1);
z=1;
j=0;
while ( ((abs(z)>plim)||(j<1)) && (abs(z)<2) && (j<50),
j++;
z=0; a1=0;
sz=0; sa1=0; /* Misiu */
i=0;
while ((i<n) && (abs(z)<2),
a1=2*a1*z+1;
z=z^2+C0;
i++;
if (i==m, sz=z; sa1=a1; ); /* Misiu */
);
z=z+sz-s;
a1=a1+sa1;
C0=C0-z/a1;
/* if (j<3, C0=C0-z/(2*a1), C0=C0-z/a1); */
/* C0=C0-(z-s)/a1; */
/* if ((abs(z-s)>plim),print(z-s)); */
);
jm=0.001*a1^-2;
if (abs(z)>plim, return(0));
return(C0);
}
/* find nearest 2n zero, assuming that n is a zero */
estim2nzero(C0,n) = {
local(z1,z2,y,i,a0,a1,a2,a3);
if (C0==0,return(-1));
a0=0;a1=0;a2=0;a3=0;
/* iterating 2n times, centered at C0, */
/* starting with z=0; z<=z^2+x+C0, truncating to x^3 terms */
for (i=1,2*n,
a3=2*a3*a0+2*a1*a2;
a2=2*a2*a0+a1^2;
a1=2*a1*a0+1;
a0=a0^2+C0;
);
/* assume zn=0, so z2n=0, so a0=0, */
/* divide by x, solve quadratic root closest to zero */
a0=a1;
a1=a2;
a2=a3;
y=a1^2-4*a0*a2;
z1=(-a1+sqrt(y))/(2*a2);
z2=(-a1-sqrt(y))/(2*a2);
if (abs(z1)>abs(z2),z1=z2);
return(C0+z1);
}
shortperiodn(n,nc,cnt,l) = {
local(z,y,i,j);
z=0;for (i=1,n,z=truncs(z^2-0+x+nc,cnt+1));
v=polroots(z);
for (i=1,cnt,
z=v[i]+nc;
y=0;
for (j=1,n,y=y^2+z);
y=abs(y);
if (y<0.04,
z=invzero(z,n);
if (abs(imag(z))<1E-21, z=real(z));
if (imag(z)<0,z=conj(z));
if (((real(z)<>0) || (imag(z)<>0)) && (imag(z)>=0),
j=1;
while (abs(oj[j]-z)>1E-21 && (oj[j]<>0), j++);
if (oj[j]==0,
if (estimz2(z,n,l)<>0, oj[j]=z; if (j==15, oj[j]=v[i]+nc));
);
);
);
)
}
randomperiodn(n,l,j)= {
local(i,y,oout);
if (l==0,l=0.01);
if (j==0, j=50);
oj=vector(128);
print("randomperiodn("n");");
for (i=1,j,
shortperiodn(n,cfromz(goldr^i),16,l); /* main cardiod */
shortperiodn(n,-1+0.25*(goldr^i),16,l); /* 2n bulb */
shortperiodn(n,-2+fwidth*((gold*i)%1),16,l); /* tip */
);
i=1;while (oj[i]<>0,i++);
oout=vector(i-1);
i=1;while (oj[i]<>0,oout[i]=oj[i];i++);
return(oout);
}
/* truncate series */
truncs(z,n)={
local(y,i);
y=0;
if (n==0,n=8);
for (i=0,n-1,y=y+polcoeff(z,i)*x^i);
return(y);
}
gfunc(C0,n)={local(i,z);z=0;for (i=1,n,z=z^2+C0);z}
cfromz(z) = subst(x/2-x^2/4,x,z);
lfromc(C0) = {1/2-sqrt(1/4-C0);}
gold=2/(sqrt(5)+1);
goldr=conj(exp(2*Pi*I*gold));
/* feigenbaum point+2, and feigenbaum ratio */
fwidth= 0.598844810907949399476173212106138707773702;
feign = 4.669201609102990671853203820466201617258186;
/* slope at the fixed point of C0, using n iterations */
slopeh(C0,n)={
local(l,l0,l1,i,j,co,s);
s=1;
co=0;
while (abs(s)>plim, /* one past plim */
l0=co;l1=1; /* l1=slope */
for (i=1,n,l0=l0^2+C0;l1=l1*2*l0);
s=l0-co; /* as co approaches fixed point, (l0-co) approaches zero */
co=co-(l0-co)/(l1-1); /* co=updated approximation for fixed point */
);
l0=co;l1=1; /* one more iteration to improve l1 slope approximation */
for (i=1,n,l0=l0^2+C0;l1=l1*2*l0);
return(l1);
}
invslopeh(z,C0,n,est) = {
local (y,s,slop,lastyz,curyz,lest,ly,pgoal);
lastyz=100;
y=slopeh(C0+est,n);
curyz=abs(y-z);
lest=est+curyz*jm;
ly=slopeh(C0+lest,n);
pgoal=10^(-precis/1.3);
/* generate the fixed point for pentation by iteration slog */
s=1;
while ((curyz>pgoal) && ((curyz<lastyz) || (s<3)),
est=precision(est,precis);
y=precision(y,precis);
slop=(y-ly)/(est-lest);
lest=est;
ly=y;
est=est+(z-y)/slop;
lastyz=curyz;
y=slopeh(C0+est,n);
curyz=abs(y-z);
s++;
);
if (curyz>0.1, print (curyz " bad result, need better initial est"));
return(est);
}
/* default use r=0.5, samples=50 */
islopetaylor(C0,n,r,samples) = {
local(rinv,s,t,x1,y,z,tot,t_est,tcrc,halfsamples,wtaylor,terms);
if (n==0,
n=1;
z=C0;
while (abs(z)>plim,z=z^2+C0;n++);
);
if (r==0,r=0.5);
C0=invzero(C0,n);
if (samples==0, samples=50);
halfsamples=samples/2;
terms = floor(samples*0.51);
t_est = vector (samples,i,0);
tcrc = vector (samples,i,0);
if (r==0,r=1);
rinv = 1/r;
wtaylor=C0;
for(s=1, samples, x1=-1/(samples)+(s/halfsamples); tcrc[s]=exp(Pi*I*x1); );
for (t=1,samples, t_est[t] = invslopeh(r*tcrc[t],C0,n); );
for (s=0,terms-1,
tot=0;
for (t=1,samples,
tot=tot+t_est[t];
t_est[t]=t_est[t]*conj(tcrc[t]);
);
tot=tot/samples;
if (s>=1, tot=tot*(rinv)^s);
wtaylor=wtaylor+tot*x^s;
);
wtaylor=precision(wtaylor,precis);
if (imag(C0)==0, wtaylor=real(wtaylor));
return(wtaylor);
}
prtpoly(wtaylor,t) = {
local(s,z,iprt);
if (t==0,t=7);
z=polcoeff(wtaylor,0);
if (imag(z)<>0,iprt=1,iprt=0);
print1 ("{z= ");
if (iprt,print1("("));
if (real(z)<0, print1(z), print1(" " z));
if (iprt,print(")"),print() );
for (s=1,t-1,
z=polcoeff(wtaylor,s);
if (s>9, print1("+x^" s), print1("+x^ " s));
if (iprt,print1("*("),print1("*") );
if (real(z)<0, print1(z), print1(" " z));
if (iprt,print(")"),print() );
);
print("}");
}
local(n,i,zeros,zeron);
zp=0;
if (numb==0,numb=85);
zeros=vector(numb);zeron=vector(numb);zerov=vector(numb);
if (low==0,low=1);
if (high==0, high=14);
for (n=low,high,
for (i=1,length(C),
zp++;
zeros[zp]=C[i];
zeron[zp]=n;
);
);
ip=0;
print(zp);
for (i=1,zp,
ip++;
zerov[ip]=truncs(islopetaylor(zeros[i],zeron[i]),7);
if (imag(zeros[i])<>0,
ip++;
zerov[ip]=conj(zerov[ip-1]);
);
);
}
evaln(z)={local(y,s,i);
y=vector(ip*2);
for (i=1,ip,
s=subst(zerov[i],x,z);
y[i*2-1]=real(s);
y[i*2]=imag(s);
);
return(y)
}
maketheplot(w) = local(t,z); { ploth(t=0,2*Pi,z=exp(t*I);evaln(z),1); }
Mandelbrot plot using bulbs/cardioids from period 1..20, bigger than 0.004. I truncated the Taylor series to seven terms, and plotted 215 different bulbs and cardioids; 112 are unique if not counting complex conjugates twice. For the randomperiodn, I used (n,0.004,350). It took 350 random tries to get both of the period(15) sub-bulbs on the period(5) bulb; some bulbs are still missing. Directly searching for bulbs near $\frac{k\pi i}{n}$ and other algorithms would probably work better; I'm pretty new at this.
• Thanks again. Are these algorithms based on theory from the book you recommended? – PMay Sep 22 '13 at 19:36
• The first answer used Devaney's article's description of how to calculate the hyperbolic center of the bulbs and cardiods. The "nearby period 2n" algorithm, which I posted in the first answer was my own. Another good resource is John Milnor's "Dynamic's in one complex variable"; preprint on the web, math.sunysb.edu/preprints/ims90-5.pdf . The second answer used generic complex dynamics ideas, boundary |slope|=1, plus some numerical techniques I developed to solve analytic tetration. I haven't seen the equations mapping the bulbs/carioids to the unit cirlce, except for the c=0 and c=-1 cases. – Sheldon L Sep 23 '13 at 2:34
• Anyway, I'm not claiming any original work here, it just seemed like a fun challenge that captured my imagination, and I knew I had the tools to find the Taylor series; so I did. I found this link on the web, for the period3 bulbs/cardioid. ams.org/journals/proc/1995-123-12/S0002-9939-1995-1301497-3/… For n>3 one would have to solve hopelessly complicated algebraic equations to find the fixed points; so numeric approximations would be all that is possible. I haven't seen the answer I gave posted before, but again, I'm not claiming original work. – Sheldon L Sep 23 '13 at 2:39
Edit: I answered the question of where the centers of the biggest bulbs are. I think the OP wants an approximation for the boundaries of the bulbs, which may require a more exact radius (size/2), and a better approximation of size; see my edit below.
All bulbs are periodic. If n is the period, than $f^{n}(0)=0$, where $f^{n}=(f^{n-1})^2+c$, and $f^{0}=0$. So the center of all of the bulbs (and cardiods), are all roots of algebraic equations. For n=2, you the two centers you mentioned: $c^2+c=0$, whose two roots are $c=0$ and $c=-1$. For all of these equations, you get the trivial root c=0, which can be trivially factored out. So, for n=3, after factoring out c=0, you get a cubic equation, $x^3+2x^2+x+1$, with one real root, and two complex roots. For n=4, you get a 8th order equation. Factoring out the trivial zeros, c=0 and c=-1, you get a sixth order equation, $c^6 + 3c^5 + 3c^4 + 3c^3 + 2c^2 + 1$, with two real roots, and two pairs of complex conjugate roots. This process can be continued ad nauseam, to find all values of c which are centers of bulbs. One way to find the approximate bulb size, is to say that if $f^{n}=0$, than $f^{2n}=0$ as well, but there is also another nearby zero for $f^{2n}=0$, which gives the approximate size of the bulb.
I implemented the algorithm above. Here are the results, for period(n)=2 through 19, for a size of 0.01. The size of the bulb is printed first, followed by the hyperbolic center for that bulb, which is a zero of the algebraic equation described above. The pari-gp code is posted below these results. There aren't that many big bulbs, and I didn't find any with a period>14! The algorithm uses the approximation that the size of a bulb is $1.6(z^n-z^{2n})$, where $z^{2n}$ is the nearest period 2n zero not equal to the $z^n$ zero. Edit $1.647(z^n-z^{2n})$ might be more accurate, based on the Feigenbaum constant, although that is a limiting bifurcation value, so maybe 1.61 would be better. I used an interesting routine, to estimate where the nearest 2n zero is with a cubic polynomial that factors into a quadratic, and then used Newton's method to get an exact location of the nearest 2n bulb. In the pari-gp code, if $f^n(c)$ is a zero, than "y=estim2nzero(c,n); y=invzero(y,n*2);" returns the nearest 2n zero.
You can run with smaller radius, as an optional second parameter in the periodnzero(n,l) routine. The periodnzero routine is exact, but I haven't run it for n>10 due to memory requirements. The randomperiodn routine matches the periodnzero routine results for n=5 through n=10, for a radius of 0.01, when used with 50 iterations. The randomperiodn routine only searches near the border of the main bug and the 2n bug, so it cannot find all of the smaller bulbs. For example, there is a period(6) mini-mandelbrot at -1.723, with a size of 0.0086, that the randomperiodn(6,50,0.005) doesn't find, that periodnzero(6,0.005) does find.
periodnzero(2)
0.497124226138932614 -1.00000000000000000
periodnzero(3)
0.0288243794158896630 -1.75487766624669276
0.185709338183664371 -0.122561166876653620 + 0.744861766619744237*I
periodnzero(4)
0.113351748952365738 -1.31070264133683288
0.0129942224028632946 -0.156520166833755062 + 1.03224710892283180*I
0.0868986952447806464 0.282271390766913880 + 0.530060617578525299*I
periodnzero(5)
0.0782244856281390343 -0.504340175446244000 + 0.562765761452981964*I
0.0471208943401874649 0.379513588015923745 + 0.334932305597497587*I
periodnzero(6)
0.0511889369361296355 -1.13800066665096451 + 0.240332401262098302*I
0.0282242861365737314 0.389006840569771235 + 0.215850650870819108*I
0.0433219915118591584 -0.113418655949436572 + 0.860569472501573055*I
periodnzero(7)
0.0426479226891188707 -0.622436295041293588 + 0.424878436475629184*I
0.0181617636688071151 0.376008681846767560 + 0.144749371321632865*I
0.0345136362674653694 0.121192786105906486 + 0.610611692210754212*I
periodnzero(8)
0.0246366004359986754 -1.38154748443206147
0.0294627655353046600 -0.999442387206567375 + 0.265387532468407285*I
0.0318189851964300712 -0.359102390112449160 + 0.617353453398826859*I
0.0208607361162561978 0.324819701465459679 + 0.563815622140333520*I
0.0123381391559926738 0.359031062836614452 + 0.100934876864297563*I
periodnzero(9)
0.0265592163668188400 -0.672333174859118535 + 0.337714901737740193*I
0.0177260936911318770 0.328902947296482145 + 0.410912590892970139*I
0.0185193079849810855 -0.0315529748129236492 + 0.790783175406400223*I
0.0203944122420907310 -0.210705526779016425 + 0.804635638170098617*I
randomperiodn(10,50)
0.0191394909327384273 -0.919278545244102030 + 0.247048121273991288*I
0.0203674647559298651 -1.21039963741797970 + 0.152874772077754334*I
0.0182000934472523787 0.0502718348233550644 + 0.630468552293935962*I
0.0195932658053389602 -0.533089681450721463 + 0.602309758248200881*I
0.0115562613503280188 0.408518185116081373 + 0.340038064137432468*I
randomperiodn(11,50)
0.0179922313096332877 -0.697838195122424838 + 0.279304134101365780*I
0.0101876737961521636 0.376030292897131701 + 0.266467605276236516*I
0.0142926479408153391 0.172311886095790515 + 0.570759959055939233*I
0.0170058824618214608 -0.293902905530376044 + 0.632861961802271055*I
randomperiodn(12,50)
0.0116593892097745116 -1.15193862345878364 + 0.269129812348473540*I
0.0134209900466310253 -0.871266416358976791 + 0.222317914834670266*I
0.0155654577554219361 -0.562959829953243594 + 0.471465232041119193*I
0.0116746885689485088 -1.34419625941643939 + 0.0549457739045995046*I
randomperiodn(13,50)
0.0129307773546223782 -0.712577592671273148 + 0.237792568209519750*I
0.0110764626149245331 0.0109444639810614271 + 0.638228893362735870*I
0.0129409256506876911 -0.407104083085098332 + 0.584852842868102271*I
randomperiodn(14,50)
0.0109194116271260426 -1.22997158539723289 + 0.110671419951984272*I
0.0109394099532368402 -1.05674050872754759 + 0.248827389019942445*I
0.0104598592537462953 -0.257053859166882025 + 0.639339268776504382*I
0.0111324718829115060 -0.644083434218572230 + 0.440431179466495531*I
randomperiodn(15,50)
randomperiodn(16,50)
randomperiodn(17,50)
randomperiodn(18,50)
randomperiodn(19,50)
This is the pari-gp program that generated the above results.
default(format,"g0.18");
z=1.0;
precis=precision(z);
plim=10^(-precis/2); /* precision limit */
periodnzero(j,l) = {
print("periodnzero("j")");
C0=croot(j);
for (n=1,length(C0),estimz2(C0[n],j,l));
}
/* return all the imag>0 roots of zn, iterating zn<=zn^2+x */
croot(m) = {
local(zn,Cr,n,i,v);
zn=x;
for (n=1,m-1, zn=zn^2+x; );
Cr=polroots(zn);
i=0;
v=vector(length(Cr));
for (n=1,length(Cr),
z=Cr[n];
if (((real(z)<>0) || (imag(z)<>0)) && (imag(z)>=0),
if (imag(z)==0, z=real(z));
i++;
v[i]=z;
);
);
return(v);
}
estimz2(c,n,l) = {
local(z,y,i);
z=0;
y=0;
if (l==0,l=0.01);
/* check to make sure that the z_n root is not repeating by a factor of n */
for (i=1,n-1,z=z^2+c;if (abs(z)<plim,y=i));
if (y==0,
y=estim2nzero(c,n); /* estimate for 2n zero */
y=invzero(y,n*2); /* refinement for estimate */
/* 1.6(zn-z2n) is the approximate size of the bug or bulb */
if (abs(c-y)>l/1.6,
print(1.6*abs(c-y)" "c );
return(y);
);
return(0);
,
return(0);
);
}
/* use newton's method, centered at c */
/* iterating z=z^2+x+c with z only needing a0 and a1 terms */
invzero(c,n)={
local(z,i,a1);
z=1;
while ((abs(z)>plim) && (abs(z)<2),
z=0;
i=0;
while ((i<n) && (abs(z)<2), a1=2*a1*z+1;z=z^2+c;i++);
c=c-z/a1;
);
if (abs(z)>plim, print("invzero covergence error "c" "c+z));
return(c);
}
/* find nearest 2n zero, assuming that n is a zero */
estim2nzero(c,n) = {
local(z1,z2,y,i,a0,a1,a2,a3);
a0=0;a1=0;a2=0;a3=0;
/* iterating 2n times, centered at c, */
/* starting with z=0; z<=z^2+x+c, truncating to x^3 terms */
for (i=1,2*n,
a3=2*a3*a0+2*a1*a2;
a2=2*a2*a0+a1^2;
a1=2*a1*a0+1;
a0=a0^2+c;
);
/* assume zn=0, so z2n=0, so a0=0, */
/* divide by x, solve quadratic root closest to zero */
a0=a1;
a1=a2;
a2=a3;
y=a1^2-4*a0*a2;
z1=(-a1+sqrt(y))/(2*a2);
z2=(-a1-sqrt(y))/(2*a2);
if (abs(z1)>abs(z2),z1=z2);
return(c+z1);
}
shortperiodn(nt,nc,cnt,il) = {
local(z);
if (il==0,il=0.01,j);
z=0;for (i=1,nt,z=truncs(z^2-0+x+nc,cnt+1));
v=polroots(z);
for (n=1,cnt,
z=v[n]+nc;
y=abs(gfunc(z,nt));
if (y<0.04,
z=invzero(z,nt);
if (abs(imag(z))<1E-21, z=real(z));
if (((real(z)<>0) || (imag(z)<>0)) && (imag(z)>=0),
j=1;
while (abs(oj[j]-z)>1E-21 && (oj[j]<>0), j++);
if (oj[j]==0,
if (estimz2(z,nt,il)<>0, oj[j]=z);
);
);
);
)
}
randomperiodn(nt,n,il)= {
local(i,j);
oj=vector(128);
print("randomperiodn("nt","n")");
for (i=1,n,
shortperiodn(nt,cfromz(goldr^i),16,il); /* main cardiod */
shortperiodn(nt,-1+0.25*(goldr^i),16,il); /* period2 bulb */
);
}
/* truncate series */
truncs(z,n)={
local(y,i);
y=0;
if (n==0,n=8);
for (i=0,n-1,y=y+polcoeff(z,i)*x^i);
return(y);
}
gfunc(c,n)={local(i,z);z=0;for (i=1,n,z=z^2+c);z}
cfromz(z) = {z=z/2-1/2;z=-(z^2-1/4);}
lfromc(c) = {1/2-sqrt(1/4-c);}
gold=2/(sqrt(5)+1);
goldr=exp(2*Pi*I*gold);
• So, I wrote some code that implemented this algorithm, and it works. However, its not practical for n>8, because its apparently really memory intensive to find the roots of polynomials with more than 128 terms, although you can bump the memory on pari-gp to 1/2 a gig, and than get to 10 terms or so. So, I was going to post the roots, with radius>0.01, that I found, but the author may not be that interested. I'm exploring more random routines to find more roots for larger values of n=13, or 14 or arbitrarily large. Will post if there is interest. – Sheldon L Sep 13 '13 at 22:12
• Thanks for the insight. Is there a good article on this that is not too advanced? I am interested in the algorithm, can you post? Thanks. – PMay Sep 14 '13 at 2:31
• Posted. you could also upvote me, and give me credit for answering your question.... – Sheldon L Sep 14 '13 at 9:24
• He couldn't upvote you, having 1 rep. – Ruslan Sep 14 '13 at 9:42
• A classic book from 1990 that I own is, "Complex Dynamical Systems: The Mathematics Behind the Mandelbrot and Julia Sets", with a wonderful introductury section by Robert Devaney. You can see some of his papers online, at math.bu.edu/people/bob/papers.html – Sheldon L Sep 14 '13 at 12:24
The algebraic solutions one can find : John Stephenson : "Formulae for cycles in the Mandelbrot set", Physica A 177, 416-420 (1991); "Formulae for cycles in the Mandelbrot set II", Physica A 190, 104-116 (1992); "Formulae for cycles in the Mandelbrot set III", Physica A 190, 117-129 (1992)
Example image using this technique is here :
• Hey Adam; thanks for your reply. I didn't do much research on prior answers to how to parametrize the individual hyperbolic components. I started reading Jung's paper which looks really interesting; citeseerx.ist.psu.edu/viewdoc/… However, I may not know enough Galois theory to understand the paper. Anyway, I guess I did ok, to answer the op's question with an accurate algorithm for the Taylor series for n=3..14 for components with diameter>0.01 – Sheldon L Sep 23 '13 at 21:57
• Hi Sheldon, thanks for your work on writing the code. I have perused the literature, and I cannot find an explanation of how to numerically derive Taylor series approximations to the bulbs. Can you provide pseudo-code of your code your posted here? The Taylor series, is it a power series of angle theta, or is it in a power series of e^i*theta? – PMay Mar 10 '15 at 2:24
• – Adam Mar 12 '15 at 17:51 | 2020-02-22 20:33:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8528926968574524, "perplexity": 7963.961768362864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145713.39/warc/CC-MAIN-20200222180557-20200222210557-00355.warc.gz"} |
https://socratic.org/questions/how-do-you-predict-and-balance-acid-base-reactions | # How do you predict the products in acid-base reactions?
Feb 19, 2014
You predict the products of acid-base reactions by pairing each cation with the anion of the other compound.
#### Explanation:
Acid-base neutralization is a special case of double displacement reaction.
We can use the same techniques to predict the products.
In acids, the cation is always ${\text{H}}^{+}$.
Let's use the neutralization of stomach acid ($\text{HCl}$) by milk of magnesia [${\text{Mg(OH)}}_{2}$] as an example.
"HCl + Mg(OH)"_2 → ?
(a.) Divide the acid into ${\text{H}}^{+}$ ions and negative ions.
$\text{HCl" → "H"^+ + "Cl"^"-}$
(b.) Divide the base into positive ions and $\text{OH"^"-}$ ions.
$\text{Mg(OH)"_2 → "Mg"^"2" + "OH"^"-}$
(c.) For products, combine the ${\text{H}}^{+}$ and $\text{OH"^"-}$ ions to make $\text{H"_2"O}$.
Then combine the positive and negative ions to make a salt.
Remember to balance the positive and negative charges in the salt.
In this case, $\text{Mg"^"2+}$ and $\text{Cl"^"-}$ make ${\text{MgCl}}_{2}$.
$\left(\text{H"^+ + "Cl"^"-") + ("Mg"^"2+" + "OH"^"-") → ("Mg"^"2+" + "Cl"^"-") + ("H"^+ + "OH"^"-}\right)$
$\text{HCl + Mg(OH)"_2 → "MgCl"_2 + "H"_2"O}$
(d.) Balance the equation.
$\text{2HCl" + "Mg(OH)"_2 → "MgCl"_2 + 2"H"_2"O}$
EXAMPLE
Complete and balance an equation for the reaction between $\text{HBr}$ and ${\text{Al(OH)}}_{3}$.
Solution
$\text{HBr" + "Al(OH)"_3 → "?}$
$\left(\text{H"^+ + "Br"^"-") + ("Al"^"3+" + "OH"^"-") → ("Al"^"3+" + "Br"^"-") + ("H"^+ + "OH"^"-}\right)$
$\text{HBr + Al(OH)"_3 → "AlBr"_3 +"H"_2"O}$
$\text{3HBr + Al(OH)"_3 → "AlBr"_3 + "3H"_2"O}$ | 2022-08-19 23:07:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 23, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5775656700134277, "perplexity": 2667.718445166422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573849.97/warc/CC-MAIN-20220819222115-20220820012115-00611.warc.gz"} |
https://mammothmemory.net/maths/maths-basics/multiplication-tables/7-times-table.html | # 7 Times table
The seven times table can be remembered because of the other multiplier and multiplication works both ways.
i.e.
2times7 = see 2 times table
3times7 = see 3 times table
4times7 = see 4 times table
5times7 = see 5 times table
6times7 = see 6 times table
7times7 = see multiplication by itself mnemonic = 49
8times7 = which is 7times7=49 then 49+7=56
9times7 = see 9 times table
10times7 = see 10 times table
11times7 = see 11 times table
12times7` = see 12 times table | 2021-09-19 13:06:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8938964009284973, "perplexity": 7311.009695085088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056890.28/warc/CC-MAIN-20210919125659-20210919155659-00143.warc.gz"} |
https://physics.stackexchange.com/questions/168035/orbital-velocity-question | # Orbital Velocity Question
I have a satellite in a stable trajectory around Earth, of a Mass $m$. I know that its velocity is:
$$v = \sqrt{mG/r}\,$$
But now it begins accelerating directly against the gravity vector (i.e. away from the center of the earth) by an acceleration, $a$. How do I describe the velocity of the satellite under a constant acceleration away from Earth?
Since the force is radial, you are not changing the angular momentum, but you are adding potential energy: this tells you what must happen to the tangential velocity (decreases) and radial velocity (increases). I will leave it up to you to figure out by how much.
• how tangential velocity goes down if the radial goes up? You speak of direction? Or you speak of magnitude? – Sofia Mar 2 '15 at 22:38
• @Sofia - conservation of angular momentum says that if you end up in a wider orbit without changing your angular momentum, your tangential velocity must be lowered so $v_{\theta} r$ is constant. – Floris Mar 2 '15 at 22:41
• yes, it's clear but in the beginning I took you "goes down and goes up", literally. – Sofia Mar 2 '15 at 22:43
• I see how that might be confusing. I changed the wording to "decreases" and "increases" - is that better? – Floris Mar 2 '15 at 22:57
• Yeah, I've got the general idea that the tangential velocity would decrease. I was hoping to get something more quantifiable, like a formula. – Quarkly Mar 2 '15 at 23:20
I have an answer, but I was hoping to see confirmation from another source. I used the centripetal force formula to come up with:
$$v = \sqrt{Gm/r-ar}$$ Where v is the tangential velocity of the orbit and a is the radial acceleration away from Earth. Is this right?
• What is the direction of this velocity? Is it the total velocity, or the radial component? Is it sufficient for you to have the answer as a function of $r$ (rather than say $t$)? You might want to show your entire derivation if you would like us to comment – Floris Mar 2 '15 at 23:22
• Sorry, I thought it was understood. This is the tangential velocity of the orbit. – Quarkly Mar 2 '15 at 23:24
• Why does the tangential velocity depend on $a$. Why is it not simply $v = \sqrt{GM/R_0}\frac{R_0}{r}$ (conservation of angular momentum)? – Floris Mar 2 '15 at 23:25 | 2019-10-23 12:42:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7186732888221741, "perplexity": 342.3691484989812}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987833766.94/warc/CC-MAIN-20191023122219-20191023145719-00472.warc.gz"} |
http://www.ask.com/question/how-fast-does-the-earth-turn | # How Fast Does the Earth Turn?
The earth turns 15 degrees every hour. The earth would make one full turn, or 360 degrees, every 24 hours or once every day, 365 days a year. | 2014-03-16 12:39:03 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8970276713371277, "perplexity": 1134.4147693954183}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678702690/warc/CC-MAIN-20140313024502-00022-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://brilliant.org/practice/euclidean-algorithm/ | ###### Waste less time on Facebook — follow Brilliant.
×
Back to all chapters
# Greatest Common Divisor / Lowest Common Multiple
What is the largest number that can divide two numbers without a remainder? What is the smallest number that is divisible by two numbers without a remainder?
# Euclidean Algorithm
Use the Euclidean Algorithm to calculate $$\gcd( 26187, 1533).$$
Use the Euclidean Algorithm to calculate $$\gcd( 51414, 2123).$$
Use the Euclidean Algorithm to calculate $$\gcd( 574662, 51843).$$
Use the Euclidean Algorithm to calculate $$\gcd( 223460, 151360).$$
Use the Euclidean Algorithm to calculate $$\gcd( 41425, 2425).$$
×
Problem Loading...
Note Loading...
Set Loading... | 2017-06-24 06:49:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45555731654167175, "perplexity": 1113.3996418503577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320227.27/warc/CC-MAIN-20170624064634-20170624084634-00449.warc.gz"} |
https://www.edaboard.com/threads/how-to-use-rtc-in-microcontrollers.336017/ | # [AVR]How to use RTC in microcontrollers?
Status
Not open for further replies.
#### abdi110
##### Member level 2
Could any one please let me know how to use RTC in AVR?
I would like to have a calendar in my program in order to my program or a part of it be executed in a certain date. Is it possible? and How?
Last edited:
abdi110
### abdi110
Points: 2
#### KlausST
##### Super Moderator
Staff member
Hi,
After power loss the date and time is lost.
Therefore i use external rtc with backup battery.
******
Internal rtc:
Isr for a fixed isr frequency. First counter for parts of second (depends on isr frequency)
Then counter 0...59 for seconds (on every counter reset)
Then counter 0..59 for minutes(on every second counter reset)
Then counter 0..23 for hours ( on every minutes reset)
...
And so on.
Take care withmonth, because they have variabke number of days.
Other things to take care: leap year, dalight saving...
Klaus
Points: 2
### jeet_asic
Points: 2
#### hobbyckts
Make sure the RTC is having the back-up battery supply.
abdi110
### abdi110
Points: 2
#### KlausST
##### Super Moderator
Staff member
Hi,
Make sure the RTC is having the back-up battery supply.
This would mean to supply the whole AVR with a backup battery.
But the AVR maybe needs 100 time the current of a dedicated RTC IC.
An thus the battery needs 100 times the capacity of an RTC IC battery.
Using the internal RTC in my eyes makes sense:
* If the AVR is continously supplied with power. (stove, microwave ofen, hifi equippment ..)
* or if the AVR has the posibility to adjust its intenal RTC after power loss (internet, DCF77...)
Klaus
abdi110
### abdi110
Points: 2
#### hobbyckts
I meant to use the external RTC with battery backup to provide the timing correct
#### abdi110
##### Member level 2
Hi,
This would mean to supply the whole AVR with a backup battery.
But the AVR maybe needs 100 time the current of a dedicated RTC IC.
An thus the battery needs 100 times the capacity of an RTC IC battery.
Using the internal RTC in my eyes makes sense:
* If the AVR is continously supplied with power. (stove, microwave ofen, hifi equippment ..)
* or if the AVR has the posibility to adjust its intenal RTC after power loss (internet, DCF77...)
Klaus
Thanks every one
I decided to use an external RTC IC because of the power loss case.
#### bigdogguru
I decided to use an external RTC IC because of the power loss case.
In that case, you may want to consider an RTC like the Maxim DS3234, there are breakout board readily available for the developer and open source libraries for AVR and Arduino.
Maxim DS3234 Datasheet
lib-ds3234, A library for interfacing the DS3234 (SPI) RTC to an AVR microcontroller.
Arduino DeadOn RTC – DS3234 wiring example and tutorial.
SparkFun DeadOn RTC Breakout - DS3234
Several device manufacturers offer RTCs along with Maxim, Microchip, NXP, ST, TI and others:
Maxim Real-Time Clocks (RTC) ICs
Microchip Real-Time Clock/Calendar
NXP Real-Time Clocks (RTC)
ST Real-Time Clock (RTC) ICs
TI Real-time Clocks (RTC)
BigDog
Points: 2
### hardik.patel
Points: 2
#### abdi110
##### Member level 2
So many thanks dear friend for your complete; information I was thinking about ds1307 now I can consider both to select one.
Thank you.
- - - Updated - - -
Could you please help me about the battery Circuit? I mean the battery charger circuit and type of battery which is good for this purpose.
- - - Updated - - -
Based on the data sheet the battery is connected directly to the chip. I wanted to ask if the battery charger is implemented in the chip? Or I have to make it externally?
#### hobbyckts
For DS1307 you need to connect the battery externally and regarding the battery charger circuit you need to use rechargeable battery for that first so select that first. Then will work charging circuit
V
Points: 2
#### bigdogguru
So many thanks dear friend for your complete; information I was thinking about ds1307 now I can consider both to select one.
One advantage the DS3234 has over the DS3107 is the DS3234 has an integrated temperature compensated crystal oscillator (TCXO) and crystal, therefore no external crystal is required. The DS3234 is quite a bit more stable, ±2ppm, than the DS3107, enabling it to retain a more accurate time.
Could you please help me about the battery Circuit? I mean the battery charger circuit and type of battery which is good for this purpose.
Based on the data sheet the battery is connected directly to the chip. I wanted to ask if the battery charger is implemented in the chip? Or I have to make it externally?
Typically these types of devices utilize a small coin lithium cell, like the CR1225. The DS3234 only draws a few uA, an average of 1.0uA to 3.0uA, while on battery backup, therefore the life of a small lithium cell is quite reasonable.
Of course you could, utilize a recharge lithium based cell, however you would need to implement an external charging circuit into the design and I'm not sure a cost/benefit analysis would justify it. Some of the Maxim RTCs do incorporate a trickle charger into their design, DS12R885/DS12R887 and DS1340, I'm sure there are other manufactures offering similar devices.
Power Considerations for Accurate Real-Time Clocks
Trickle Charging Lithium Batteries with the ISL1208 and ISL1209 RTC Devices
Selecting a Backup Source for Real-Time Clocks
Battery Charging and Management Solutions
The above appnote details some of the power considerations.
What is the specific end application?
BigDog
V
Points: 2
#### KlausST
##### Super Moderator
Staff member
Hi,
i´d avoid rechargable batteries for clock backup power supply.
* the clock need only a few uA of current.
* a lithium coin cell lasts for years
* rechargable batteries often have a bigger self discharge than those coin cells
* if you use rechargabe batteries and need to recharge it - lets say - every three month, it is likely you forget them to charge.
* as long as your application is powered it needs no current from the battery
Klaus
abdi110
### abdi110
Points: 2
#### hobbyckts
Hi,
i´d avoid rechargable batteries for clock backup power supply.
* the clock need only a few uA of current.
* a lithium coin cell lasts for years
* rechargable batteries often have a bigger self discharge than those coin cells
* if you use rechargabe batteries and need to recharge it - lets say - every three month, it is likely you forget them to charge.
* as long as your application is powered it needs no current from the battery
Klaus
Then in that case we can use a simple coin battery as a backup supply for RTC DS1307.
abdi110
### abdi110
Points: 2
#### bigdogguru
As both Klaus and I have alluded to, a simple coin lithium cell should be able to effectively provide backup power to a RTC for quite a long time. The following appnote should provide insight into predicting cell life in a typical RTC backup application:
Lithium Coin-Cell Batteries: Predicting an Application Lifetime
BigDog
abdi110
### abdi110
Points: 2
#### abdi110
##### Member level 2
One advantage the DS3234 has over the DS3107 is the DS3234 has an integrated temperature compensated crystal oscillator (TCXO) and crystal, therefore no external crystal is required. The DS3234 is quite a bit more stable, ±2ppm, than the DS3107, enabling it to retain a more accurate time.
Typically these types of devices utilize a small coin lithium cell, like the CR1225. The DS3234 only draws a few uA, an average of 1.0uA to 3.0uA, while on battery backup, therefore the life of a small lithium cell is quite reasonable.
Of course you could, utilize a recharge lithium based cell, however you would need to implement an external charging circuit into the design and I'm not sure a cost/benefit analysis would justify it. Some of the Maxim RTCs do incorporate a trickle charger into their design, DS12R885/DS12R887 and DS1340, I'm sure there are other manufactures offering similar devices.
Power Considerations for Accurate Real-Time Clocks
Trickle Charging Lithium Batteries with the ISL1208 and ISL1209 RTC Devices
Selecting a Backup Source for Real-Time Clocks
Battery Charging and Management Solutions
The above appnote details some of the power considerations.
What is the specific end application?
BigDog
So many thanks dear friend your information is very useful for me. So I use a non rechargeable battery and therefore no need to any charging circuit.
My customer asks me to make a circuit for him to work as a trial sample (for his end customer) and after a exact time his customer pay him the cost of circuit otherwise the circuit stops working. So I need a calendar in the circuit in order that stop the circuit after an exact date.
Thank you very much again.
- - - Updated - - -
Hi,
i´d avoid rechargable batteries for clock backup power supply.
.
* the clock need only a few uA of current.
* a lithium coin cell lasts for years
* rechargable batteries often have a bigger self discharge than those coin cells
* if you use rechargabe batteries and need to recharge it - lets say - every three month, it is likely you forget them to charge.
* as long as your application is powered it needs no current from the battery
Klaus
Thanks dear friend so I will use a non rechargeable battery it is easier and witj more advantages thanks
#### milan.rajik
##### Banned
Using timer interrupts make a software clock. Also use eeprom to store number of days used. Read this value into day variable on start up. Create delays as required like 180 days, 360 days... When that much days are over then the device stops functioning.
No need for an RTC if clock is used for trial time expire check. I always do such things. When I give some .hex file for testing I always give code which works for 30 days. Days count is stored in eeprom. The device can be erased and reprogrammed and used for another x days.
Code:
if(days < TRIAL_LIMIT) {
//do everything
}
abdi110
### abdi110
Points: 2
#### abdi110
##### Member level 2
Using timer interrupts make a software clock. Also use eeprom to store number of days used. Read this value into day variable on start up. Create delays as required like 180 days, 360 days... When that much days are over then the device stops functioning.
No need for an RTC if clock is used for trial time expire check. I always do such things. When I give some .hex file for testing I always give code which works for 30 days. Days count is stored in eeprom. The device can be erased and reprogrammed and used for another x days.
Code:
if(days < TRIAL_LIMIT) {
//do everything
}
Thanks I do not know how to store some thing in eeprom for example in at mega 8
Last edited:
#### milan.rajik
##### Banned
If you can use mikroC PRO AVR Compiler then I can give you an example of software timer setting trial time for the device.
abdi110
### abdi110
Points: 2
#### bigdogguru
Thanks but what happens after a power loss or after a turning and off? I think in power loss and resetting the AVR everything goes to first and therefore the time days variable will be set to zero and never reaches to the time limit so the user can turn off and on the system and everything will start from the beginning isn't it?
Actually Milan suggested storing the day count in EEPROM which is nonvolatile, the stored data is retained regardless of power availability. If the day count was stored in SRAM, you would be correct, once power is removed from the device all data, variables, etc, stored in SRAM are lost.
The difficulty maybe detecting repeated removal of the battery from the RTC, thereby resetting it back to a default date. You may need to store the last known date and time of the RTC in EEPROM as well, if the date or time suddenly reverts to the past, then RTC has been reset and a proper course of action can be taken, like expiring the demo. You would need to regularly update the stored date and time with a routine that is scheduled to run at certain intervals.
What AVR will you be using in your design? Disregard, an ATMega 8, which has 512 bytes of EEPROM storage.
BigDog
#### abdi110
##### Member level 2
If you can use mikroC PRO AVR Compiler then I can give you an example of software timer setting trial time for the device.
- - - Updated - - -
Actually Milan suggested storing the day count in EEPROM which is nonvolatile, the stored data is retained regardless of power availability. If the day count was stored in SRAM, you would be correct, once power is removed from the device all data, variables, etc, stored in SRAM are lost.
The difficulty maybe detecting repeated removal of the battery from the RTC, thereby resetting it back to a default date. You may need to store the last known date and time of the RTC in EEPROM as well, if the date or time suddenly reverts to the past, then RTC has been reset and a proper course of action can be taken, like expiring the demo. You would need to regularly update the stored date and time with a routine that is scheduled to run at certain intervals.
What AVR will you be using in your design? Disregard, an ATMega 8, which has 512 bytes of EEPROM storage.
BigDog
So thanks
Based on what Milan said do I need the RTC yet?
I think I have to change the AVR to one with more eeprom memory. How much memory is proper in your opinion for this application?
Status
Not open for further replies.
Replies
3
Views
2K
Replies
7
Views
8K
Replies
0
Views
1K
Replies
14
Views
4K
Replies
1
Views
2K | 2021-05-17 01:02:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24458584189414978, "perplexity": 5332.4086385024475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991921.61/warc/CC-MAIN-20210516232554-20210517022554-00332.warc.gz"} |
https://kurianbenoy.com/ml-blog/fastai/fastaicourse/2022/07/28/fastai55.html | ## What is important for an ML practitioner
Most of the time as a practitioner, your job is to connect a set of inputs to the sets of outputs you want with machine learning algorithm together in a framework. According to Jeremy what is important is how you tweak on first layer and last layer of neural network. The middle layer is usually not that important.
## Remind yourself these concepts
Before getting started with this lesson, let's remind ourselves What is matrix & vector?
In this chaper, you may stumble into terms like matrix-vector multiplication, matrix matrix products etc. So it's a good idea to remind yourself with the concept of matrix multiplication and broadcasting.
In this chapter three notebooks where covered, so it's bit more hectic compared to the previous chapters to be honest. The notebooks covered were:
For the course there was close to one month gap between fifth and sixth lesson, because of exams in University of Queenzland.
## Linear model & neural network from scratch notebook
In this notebook first few sections covers on data cleaning and feature engineering with pandas. A few notes which I jotted down, when I started looking into the lesson at first.
• In pandas never delete columns
• You can replace missing values using mode of column
• We can have multiple modes, so choose the first element as 0
• In first baseline model, don't do complicated things at the start.
• for categorical variables we can set dummy variables for Pclass with pd.get_dummies
Then the notebook progresses first into building:
1. Linear models
2. Neural networks
3. Deep Learning
Note: This notebook is a pre-requisite for lesson 7 when we are covering collabrative filtering also.
## Why you should use a framework?
This notebook, does some interesting feature engineering followed by building models with fastai framework. It also shows how to use ensembling with fastai library and to get in the top 25% of accuracy.
I have seen this cliche argument that for learning ML, you need to go into details and using frameworks is a step down. Jeremy emphasises always use good frameworks on top of it. Rather than re-inventing from scratch. Lot of the success of fast.ai comes from it not asking practitioners to go into details. One of the reasons I like frameworks like blurr, Icevision is also because of that and it's helping users who are familiar with fastai to easily build complex computer vision and NLP models.
During a conversation with Icevision core-developer, Dickson Neoh:
In icevision, within 10 minutes I can train an object detection model with any dataset. It may not be most accurate, yet I can iterate so quickly.
## How random forests really work?
Jeremy was know as the random forest guy before he became know as the Deep learning person. One of the cool things about random forest is it's very hard to get something wrong unlike logistic regression.
Random forests are really intereptables, and helps in getting good accuracy. He also covered about gradient boosted trees during this lesson.
## Homework dataset
To practise these techniques, I feel a good place to start is by participating in Kaggle Tabular Playground dataset competition or previous tabular competitions in Kaggle. | 2022-08-18 14:56:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25800955295562744, "perplexity": 1921.0049657611214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573197.34/warc/CC-MAIN-20220818124424-20220818154424-00012.warc.gz"} |
https://www.nature.com/articles/s41598-019-41484-8?error=cookies_not_supported&code=254f31f8-bc15-40d3-ac94-7871c32f1521 | Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
# A fruit firmness QTL identified on linkage group 4 in sweet cherry (Prunus avium L.) is associated with domesticated and bred germplasm
## Abstract
Fruit firmness is an important market driven trait in sweet cherry (Prunus avium L.) where the desirable increase in fruit firmness is associated with landrace and bred cultivars. The aim of this work was to investigate the genetic basis of fruit firmness using plant materials that include wild cherry (syn. mazzard), landrace and bred sweet cherry germplasm. A major QTL for fruit firmness, named qP-FF4.1, that had not previously been reported, was identified in three sweet cherry populations. Thirteen haplotypes (alleles) associated with either soft or firm fruit were identified for qP-FF4.1 in the sweet cherry germplasm, and the “soft” alleles were dominant over the “firm” alleles. The finding that sweet cherry individuals that are homozygous for the “soft” alleles for qP-FF4.1 are exclusively mazzards and that the vast majority of the bred cultivars are homozygous for “firm” alleles suggests that this locus is a signature of selection. Candidate genes related to plant cell wall modification and various plant hormone signaling pathways were identified, with an expansin gene being the most promising candidate. These results advance our understanding of the genetic basis of fruit firmness and will help to enable the use of DNA informed breeding for this trait in sweet cherry breeding programs.
## Introduction
Sweet cherry (Prunus avium L.) is an important fruit crop in temperate regions and fresh fruit is highly valued. Sweet cherries for the fresh market are hand harvested, often mechanically sorted and frequently in transit for several weeks to distant markets. Because of the harvesting, handling and marketing practices, fresh market sweet cherries need to be firm when harvested. Consumers also prefer fresh market sweet cherries that are firm1. Sour cherry (Prunus cerasus L.), a tetraploid relative of the diploid sweet cherry, is primarily used for processed products such as jam, juice and pie filling, and therefore neither the supply chain nor the consumer requires the level of firmness necessary for sweet cherry. In addition, the softer texture of sour cherry is a positive attribute for their use in cooked products and beverages. However, very soft sour cherries are rejected by the processors as mechanical pit removal is problematic.
The sweet cherries grown for fruit production were domesticated from wild cherry (syn. mazzard), which is believed to have originated around the Caspian and Black Seas and subsequently spread throughout Europe and south to Iran2. Mazzards have extremely small fruit and are grown for their high value lumber. The three mazzards used in this study also have very soft fruit. Increases in fruit size and firmness are the main fruit traits associated with the domestication of sweet cherry from its wild relatives. Sour cherry was formed from the hybridization between sweet cherry and the tetraploid ground cherry (P. fruticosa Pall.), a wild bush species native to Eastern Europe2. Sour and ground cherry have fruit that is significantly softer than that of the commercial sweet cherry.
The importance of fruit firmness for sweet cherry can be traced back to early records of cherry cultivation. For instance, sweet cherry cultivation in the Jerte Valley, Extremadura, Spain, reported for the first time in 1352, was based on farmer’s selection of local cultivars with improved quality attributes, including firmer fruit3. Cherry cultivation in this region grew significantly during the XIXth century, and was mainly based on four very firm cultivars harvested stemless, which were traditionally known as ‘Picotas’ including ‘Ambrunés’, ‘Pico Negro’, ‘Pico Limon Negro’ and ‘Pico Colorado’. At a period when no modern transportation systems existed in Spain, cherries were transported with mules from this valley to the country’s capital, Madrid, a several day journey, and consumer demand suggested that the fruit still kept acceptable quality. Another example of a relatively old firm cultivar is ‘Bing’, which was selected from a seedling of ‘Black Republican’ in 1875 in Oregon, USA4. Today, ‘Bing’ still remains the most important cultivar of the Pacific Northwest, USA, and has also been fundamental in the extremely fast development of sweet cherry cultivation in Chile, which is a country that directs most of its production to long-distance export markets. One of the first goals of sweet cherry breeding programs was to select hybrids with large and firm fruits. As an example, the INRA breeding program released during the 1980’s the cultivar ‘Fercer’ (Arcina®), obtained from an open pollination of ‘Stark Hardy Giant’, which was one of the first cultivars producing very large fruit (up to 15 g) with a high level of firmness as well. Subsequently, this cultivar was heavily used as a parent in the INRA breeding program, leading to many new cultivars, such as ‘Folfer’, ‘Ferdouce’, ‘Fertille’ or ‘Ferdiva’5.
Little is known about the genetic control of fruit firmness in either sweet or sour cherry. In a study of two F1 populations derived from three sweet cherry cultivars (‘Regina’ × ‘Lapins’, ‘Regina’ × ‘Garnet’), the phenotypic data of the progeny fit normal distributions suggesting that the trait was quantitatively inherited6. Multiple quantitative trait loci (QTLs) for firmness were identified, the largest one on linkage group (LG) 5, but none of the QTL explained more than 24.1% of the phenotypic variance. A second QTL study was done using the sweet cherry population ‘Ambrunés’ × ‘Sweetheart’ that also exhibited a continuous distribution for firmness7. In this case, a previously undetected QTL was identified on LG 1 along with a previously identified QTL on LG 6. No QTL for fruit firmness have yet to be identified in sour cherry.
Genes controlling fruit firmness have been well investigated in many species including tomato, peach and apple. In these species, the physiological modifications of the cell wall organization were considered important components of tissue firmness8. Many enzymes, which are capable of altering cell wall texture, have been proposed9,10. For example, endopolygalacturonase (endoPG), encoded by a multiple-gene family, is well established as one of the major enzymes involved in pectin disassembly in tomato and kiwifruit11,12. In apple, endoPG was also shown to be involved in fruit softening process and its regulation was found to be ethylene dependent13. Copy number variation of a gene cluster encoding endoPG was also found to mediate flesh texture in peach14. In sweet cherry, genome-wide transcriptional dynamics from developing fruit between flowering and maturity at 14 time points were investigated and the results suggested tight developmental regulation of genes functioning in diverse processes such as sugar transport, lipid metabolism and cell wall rearrangement related to changes in fruit firmness15. To date, no genes have been identified that control the variation for fruit firmness in sweet cherry, but candidate genes underlying the firmness QTL identified on LG 5 in sweet cherry have been proposed6. In sour cherry, expansin genes were found to be upregulated during ripening (also the period of fruit softening)16.
The objectives of this study were to (1) identify and characterize the QTL(s) for fruit firmness segregating in an F1 sweet cherry population, (2) explore whether the QTL identified is associated with the firmness that accompanied sweet cherry domestication and breeding, and the presence of softer fruit exhibited by sour cherry, and (3) identify candidate genes for fruit firmness within the QTL region.
## Results
### Phenotypic variation for fruit firmness
The progeny in the ‘Fercer’ × ‘X’ population exhibited a wide range of fruit firmness; however, the distribution was bimodal with more individuals exhibiting soft fruit (Table 1, Figs 1a and S1). ‘X’ was assigned as the paternal parent of this population since the recorded paternal parent was found incorrect based on genotype data and the correct parent is unknown. ‘Fercer’ had a multi-year fruit firmness mean of ~67 g/mm2 (Min 56, Max 82) which is aligned with the firm-fruited progeny group. A wide range of variation for fruit firmness was also observed for individuals from the INRA sweet cherry germplasm collection and RosBREED germplasm (Fig. 1b,c). In the INRA sweet cherry germplasm collection, the majority of soft-fruited individuals were characterized as landraces as opposed to bred cultivars. In the RosBREED germplasm, the majority of soft-fruited individuals were either mazzards or hybrids with mazzards. When the two INRA populations were compared for firmness for the one year that they were both phenotyped, the sweet cherry germplasm collection exhibited a wider phenotypic distribution compared to the F1 population (Fig. 1). Within the three populations, the ANOVA analysis revealed highly significant effects for the different genotypes (Table 1). In all three populations, the broad-sense heritabilities were high (0.73–0.97), indicating that much of the phenotypic variation in these populations is genetically controlled (Table 1). The highest heritability for fruit firmness (0.97) was obtained from the multi-year data for the ‘Fercer’ × ‘X’ population, likely because the environmental variation was low among years compared to the genetic variation as suggested by the bimodal phenotypic distribution.
The firmness of the RosBREED sweet cherry materials and sour cherry materials could be directly compared as they were phenotyped using the same instrumentation. Fruit from the sour cherry individuals exhibited a smaller range of firmness (98–197) compared to sweet cherry (137–397) (Figs 1 and S2). The fruit firmness of 95% of the sour cherry individuals was less than 170, indicating that almost all the sour cherries were softer than the sweet cherry landraces, bred cultivars, and their offspring.
### QTL analysis
‘Fercer’ × ‘X’: The two parental maps constructed consisted of 110 SNPs for ‘Fercer’ and 87 SNPs for ‘X’ with an average coverage of one marker every 6.7 and 7.5 cM, respectively. QTL analysis from the ‘Fercer’ × ‘X’ population identified four QTL in the multi-year analysis (LGs 4, 5, 6 and 8); however, the major stable QTL was located on LG 4 (Table 2, Fig. 2 and Table S1). This QTL segregating from the ‘X’ parent was significant in all seven years evaluated and across years, and the percentage of variation explained by the QTL ranged from 54.0% to 84.6%. The QTL confidence interval based on multiple years’ analysis was small (33.1–36.0 cM) and estimated to only cover about one Mbp (10,335,393–11,216,807 bp) based on the peach physical map v.2. The peak position of this QTL was located at 34.5 cM, which was estimated to be approximately 10.76 Mbp on the peach physical map v.2 (Table 2). One QTL was identified segregating from ‘Fercer’, but this QTL was only significant over multiple years, explaining 20.1% of the phenotypic variance. As the QTL confidence intervals overlapped, the QTLs derived from ‘X’ and ‘Fercer’ were considered to be the same and this QTL was named qP-FF4.1.
INRA Sweet Cherry Germplasm Collection: In the INRA sweet cherry germplasm collection, variation in fruit firmness significantly associated with SNPs located on chromosome four (Fig. 3a). The SNP most significantly associated with fruit firmness, ss490552928, has a peach physical map position (v.2) of 11,472,398 bp. Fruit firmness was significantly different among the three SNP genotypes, as illustrated for ss490552928 and the second most significant SNP on chromosome 4, ss490552906, located at 10,880,163 bp (Fig. 3b,c). For ss490552928, mean fruit firmness for the SNP genotype AB, was intermediate to that of BB and AA, with increased firmness associated with BB (Fig. 3b). For ss490552906, mean fruit firmness for the SNP genotype AB was significantly less than that of the most firm genotypic class (AA) and not significantly different from the softest class (BB) (Fig. 3c). In addition to the SNPs on chromosome 4, an additional SNP on chromosome 1 (ss490546759, 23,455,434 bp) was significantly associated with fruit firmness (Fig. 3a). This SNP is within a region where a fruit firmness QTL was previously identified in four of six years in a ‘Regina’ × ‘Lapins’ population6.
RosBREED Sour cherry germplasm: The sour cherry germplasm was also segregating for ss490552928; however, no sour cherry individuals had the BBBB genotype (Fig. S3a). The majority of the individuals were AAAB, followed by AAAA, AABB and then ABBB. The association of SNP genotype and fruit firmness was investigated within two segregating sour cherry populations and found to be not significant (Fig. S3b,c). This is in contrast to the INRA sweet cherry germplasm collection where BB was the most prevalent genotype for this SNP and BB individuals had significantly firmer fruit than AB or AA individuals (Fig. 3b).
RosBREED sweet cherry germplasm: Two fruit firmness QTLs were identified in both years for the RosBREED pedigree germplasm, one small effect QTL on LG 2 and one large effect QTL on LG 4 (Table 3, Fig. 4 and Table S1). The QTL on LG 4 explained 16.4% and 83.5% of the variation for fruit firmness in 2011 and 2012, respectively. This difference is probably due to the high number of missing values in 2011 compared to 2012 (219 vs. 126, respectively). The peak genetic map position of this QTL was 33 cM and the peak physical map position was estimated to be ~10.8 Mbp. As this is similar to the peak position of the QTL identified in the ‘Fercer’ × ‘X’ population, this QTL was considered to be qP-FF4.1. Predictions for the genotypes of qP-FF4.1 for the RosBREED germplasm were calculated by FlexQTL, as QQ, Qq and qq, where Q and q represent the “firm” and “soft” alleles, respectively. FlexQTL used a bi-allelic model denoted by Q and q to estimate the QTL genotypes. The only individuals in this germplasm set predicted to be qq (and therefore soft) were three mazzard accessions, MIM 17, MIM 23, and NY 54 (Table S2). Likewise the only individuals in this germplasm set predicted to be Qq were offspring from these three mazzard accessions plus ‘Moreau’ and ‘Cristobalina’, old landrace cultivars, and their offspring. All other individuals were predicted to be QQ. This suggests that homozygosity for the firm Q allele at qP-FF4.1 is a signature of selection exhibited by domesticated and bred sweet cherries.
### Haplotype analysis
To further trace and evaluate the allele effects of qP-FF4.1, five SNPs that span the peak physical map QTL location were chosen for haplotype (allele) construction (Fig. 5). These five SNPs spanned a ~1.23 Mbp and ~0.3 cM region of LG 4. Using these five SNPs, 13 haplotypes (H1 to H13) were identified in the RosBREED sweet cherry germplasm and an additional three haplotypes were identified in sour cherry (H14 to H16) (Tables S3 and S4). As the QTL haplotypes were based on SNP marker composition spanning the QTL region and not variation in underlying genes, it is possible that the haplotypes identified over-represent the number of functional alleles. Of the thirteen haplotypes exhibited by sweet cherry, four were only identified in the mazzards (H8, H11, H12 and H13) (Tables S2, S3). The haplotypes most frequent in the RosBREED sweet cherry germplasm and also not present in any mazzards were H4 (49.1%) and H1 (28.2%), suggesting that these haplotypes are associated with firm fruit possibly due to the influence of human selection and breeding. However, as only three mazzards were used in this study, it is possible that other mazzards might also possess a “firm” allele. Two of the commercially dominant cultivars notable for their fruit firmness, ‘Bing’ and ‘Ambrunés’, are H1H1 and H4H4, respectively. The haplotypes most frequent in the sour cherry selections were H3 (46.9%) followed by H11 (18.8%) and H10 (12.8%).
The qP-FF4.1 genotypes (diplotypes) for ‘Fercer’ and ‘X’ were H1H2 and H1H3, respectively (Fig. 6a). As ‘Early Burlat’ is the only sweet cherry founder known to have H3, it is likely that ‘Early Burlat’ is an ancestor of ‘X’. When the fruit firmness of the ‘Fercer’ × ‘X’ progeny were compared based on their qP-FF4.1 diplotypes, those progeny that were H1H1, had significantly firmer fruit than progeny that were H1H3 or H1H2 (Fig. 6a). This is consistent with the high relative frequency of H1 in bred germplasm. Furthermore, it suggests that H1 is recessive to H3 and H2. In other words, for this QTL, firm fruit appears to be recessive to soft fruit. H1 and H2 were deduced to be “firm” and “soft” alleles, respectively, as H1H2 individuals had significantly softer fruit than H1H1 individuals. The inheritance of H3, uniquely present in ‘Early Burlat’ and not present in any other RosBREED germplasm, was further followed through breeding using this germplasm. Five U.S. cultivars have ‘Early Burlat’ in their ancestry and all five inherited the ‘Early Burlat’ H6, and not H3 (Fig. S4).
The effects on fruit firmness associated with 14 qP-FF4.1 diplotypes were compared for the RosBREED germplasm. These 14 diplotypes, representing 10 haplotypes (H1, H2, H4 to H6, H8, H10 to H13), each consisted of firmness data from six to 104 individuals (Fig. 6b). Fruit firmness ranged from a mean of 275 down to 162 for the softest fruit. The four diplotypes that had the firmest fruit all had one or two copies of H1, paired either with itself or with H4, H5 or H6. This suggests that in addition to H1 and H4, H5 and H6 can also be considered “firm” alleles. However, when H1 or H4 were paired with H2, the mean fruit firmness was reduced significantly. This is consistent with the dominant ‘soft’ effect of H2 observed in the ‘Fercer’ × ‘X’ progeny where H1H2 progeny had significantly softer fruit than H1H1 progeny. Progeny with H8 and H10-13, only present in the mazzards, had significantly softer mean fruit firmness than the majority of progeny homozygous for the firm diplotypes. These effects were based on pairings with the “firm” haplotypes H1 or H4, indicating that these “soft” haplotypes present in wild cherry are dominant to the “firm” haplotypes that are found in bred cultivars. It was not possible to determine if the three haplotypes identified in sour cherry (H14-16) were associated with firm or soft fruit due to the dominance of soft compared to firm fruit (Table S4).
### In silico candidate genes
The qP-FF4.1 interval identified from both the INRA F1 and RosBREED pedigreed populations was used for candidate gene identification. This ~ 1.8 Mbp interval was between SNPs located at 10,156,468 and 11,956,655 bp on chromosome 4 of the peach genome v2.0 and the same SNPs located between 12,928,603 and 14,860,789 bp on the sweet cherry genome (Fig. 7). In this region, 241 genes were predicted in the sweet cherry genome (Table S5). From these genes, 25 were selected as candidate genes based on their potential to be involved in the control of fruit firmness (Table 4, Fig. 7). The most promising candidate gene identified was Pav_sc0002828.1_g410.1.mk which encodes an expansin protein related to plant cell wall metabolism. This gene is very close to the QTL peak and an expansin gene with homology to Pav_sc0002828.1_g410.1.mk was found to be expressed in sour cherry fruit and associated with tissue softening16. Of the three expansin genes identified that were upregulated during softening in sour cherry fruit, the expansin gene PcEXP4 had the highest similarity to the expansin gene in the sweet cherry genome as evidenced by their placement on a distal lineage, relative to the other sour cherry expansins (Fig. S5a,b). The candidate expansin gene contains two functional domains (Expansin EG45 and Expansin CBD) and three encoded signal peptide regions (H, N, and C) located on the N-terminus region (Fig. S5c). Nine other candidate genes were predicted to encode plant cell wall modifying enzymes which have been found to be potentially involved in regulating fruit firmness in peach and apple17,18. Fourteen candidate genes were included as they are potentially involved in various plant hormone signaling pathways well known to be involved in fruit maturation and ripening in non-climacteric and climacteric fruits and in sweet cherry firmness19,20. Among these candidate genes, two are predicted to be NAC (NAM/ATAF1, 2/CUC2) transcription factors involved in the ethylene signaling pathway. Of these two genes, Pav_sc0000029.1_g070.1.mk, is a homolog of the peach NAC gene ppa008301m that has been predicted to control maturity date21,22. The final candidate gene, Pav_sc0000975.1_g210.1.mk, was predicted to be a Squamosa promoter-Binding Protein which has been found to be associated with fruit ripening in tomato23.
## Discussion
### Genetic determinism and signature of selection for fruit firmness in sweet cherry
The bimodal segregation for fruit firmness in the ‘Fercer’ × ‘X’ population provided the opportunity to identify a major QTL for fruit firmness in sweet cherry that was also identified in a wide range of genetic backgrounds represented by the INRA sweet cherry germplasm collection and the RosBREED pedigreed population. This is the first report of a major QTL for fruit firmness identified on LG 4 in sweet cherry. However, due to the small size of the ‘Fercer’ × ‘X’ population, the QTL interval would be affected by potential errors in phenotyping and genotyping as well as the environmental conditions. Despite this population size limitation, the QTL region estimated for all seven years was stable and consistent, possibly due to the large effect of this QTL in this population as suggested by the bimodal phenotypic distribution. However, future fine mapping is needed to more precisely define the QTL interval. In a prior study of two sweet cherry populations between bred cultivars, ‘Regina’ × ‘Lapins’ and ‘Regina’ × ‘Garnet’, QTLs for fruit firmness detected in at least three of the six years of study, were identified on LG 1, 2 and 56. On LG 4, a QTL was detected in only two of the six years analyzed in one of the two populations (‘Regina’ × ‘Lapins’), and this QTL was located on the upper region of chromosome 4 in a region that does not overlap with that for qP-FF4.1. It is possible that the LG 4 QTL, qP-FF4.1, was not identified in these two populations, because all three parents only had “firm” alleles for this locus. Indeed, all three haplotypes present in ‘Regina’ (H4H5) and ‘Lapins’ (H1H4) were identified as “firm” alleles in this study. In contrast, the plant materials used in this study resulted in the identification of qP-FF4.1 because of the presence of “soft” alleles in the plant materials.
The finding that sweet cherry individuals that are homozygous for the “soft” alleles for qP-FF4.1 are exclusively mazzards and that the vast majority of bred cultivars are homozygous for “firm” alleles suggests that this locus was a signature of selection during domestication and modern breeding. In addition, three of the old cultivars included in this study, ‘Moreau’, ‘Cristobalina’ and ‘Early Burlat’, have relatively soft fruit and their qP-FF4.1 genotypes include one “soft” and one “firm” allele. In sour cherry, all of the germplasm are soft, as firmness comparable to sweet cherry has not been a critical trait. This QTL region is the second region in sweet cherry that has been shown to have been under selection, the first being a QTL region on LG 2 that contains a major QTL for fruit size24.
The results from the ‘Fercer’ × ‘X’ population and the RosBREED germplasm further indicate that the “soft” alleles present in the mazzard accessions are dominant, or at least partially dominant over the “firm” alleles present in bred cultivars. This is consistent with the findings in sour cherry where no individuals exhibited the firmness of bred sweet cherry cultivars. No sour cherry individual was homozygous for the “firm” allele at qP-FF4.1, and therefore all the sour cherry individuals had at least one “soft” allele for qP-FF4.1. These results are also consistent with the results from progeny from a cross between a sweet cherry cultivar ‘Emperor Francis’ and the mazzard accession NY 54. A major QTL for fruit size associated with domestication was identified in this F1 population25; however, QTL for fruit firmness could not be identified from this population because all the progeny had soft fruit. Given that the qP-FF4.1 diplotypes for ‘Emperor Francis’ and NY 54 are H1H1 and H13H13, respectively, and the conclusion that soft fruit is dominant over firm fruit, this result would be expected.
In peach, a major QTL for fruit firmness was also identified on LG 4, first reported by Dettori et al.26. This locus, termed F-M, controls both peach fruit firmness and flesh adhesion to the endocarp, with soft (melting) fruit dominant to firm (non-melting) fruit. Two genes encoding endopolygalacturonase (endoPG) are considered to be the causal genes at this locus. The peach physical map (v.2.0) positions of these two genes are as follows: ppa006839m 19046344-19049605 bp, and ppa006857m 19081325-19083984 bp. Using the peach genome as a proxy for cherry, this places the endoPGs and the F-M locus, ~ 8 Mbp distal to the qP-FF4.1. In contrast to peach, no studies in cherry have associated endoPG with flesh firmness, nor were any endoPG genes been identified in the qP-FF4.1 region. Peach and cherry also differ in their ethylene requirement for ripening. Peach is a climacteric fruit meaning that it has a strong requirement for ethylene to ripen, while cherry is a non-climacteric fruit. Taken together, these results suggest that the genetic control of fruit firmness in cherry evolved separately from that of peach. This conclusion is consistent with qP-FF4.1 being associated with domesticated and bred germplasm.
### QTL hotspot of qP-FF4.1 region
The scope of the work presented herein is limited to fruit firmness; however, the qP-FF4.1 region is an important QTL “hotspot” for cherry breeders because major QTL for other traits map to this region. LG 4 loci for two phenology traits, bloom and maturity date, have been conserved across multiple Prunus species27. For bloom time, the major locus named Lb, was first reported in almond by Ballester et al.28 and subsequently identified in multiple Prunus species29,30,31,32. In sour cherry, the peak peach genome v2.0 position for the bloom time QTL on chromosome 4 was ~10.8 Mbp33. For maturity date, a major QTL termed qMD4.1, was identified first in peach34,35 and subsequently in cherry36,37. The most likely candidate gene for the peach QTL qMD4.1 is ppa008301m, which is believed to be an NAC transcription factor. It maps to ~11.106 Mbp on the peach genome sequence v1.038 which is equivalent to ~11.117 Mbp on the peach genome sequence v2.039. In a recent study, Isuzugawa et al.36 found that two sweet cherry candidate genes, homologous to the NAC transcription factors identified in peach, also mapped within the maturity date QTL on LG 4. In addition, QTL for fruit weight and soluble solids content have also been reported in this qP-FF4.1 region in peach34,35. An analysis of maturity date for the ‘Fercer’ × ‘X’ population used in our study identified a QTL that explained on average 50% of the phenotypic variance for maturity time37. This QTL was detected in the same region as the one for firmness; however, the peak detected with the multi-year analysis was at 36.1 cM, which is almost 2 cM downstream from the peak for the firmness QTL.
The clustering of these QTLs is in agreement with the correlations observed in the populations studied among the two fruit traits: firmness and maturity date. Indeed, within the ‘Fercer’ × ‘X’ population, early maturing individuals bear smaller and softer cherries than the late maturing ones. In the RosBREED germplasm, a prior study of this germplasm identified a QTL for maturity date that overlapped with qP-FF4.140. In this prior study, ss490552928 was associated with maturity date with the ‘A’ and ‘B’ alleles associated with “early” and “late” maturity, respectively; however, one of the haplotypes that contained the ‘A’ allele was associated with late maturity40. A similar result was obtained from the INRA sweet cherry germplasm collection population where the ‘A’ for ss490552928, was also associated with early maturity and soft fruit. The phenomenon of early maturing cultivars which tend to be softer than late maturing ones could probably result from a developmental impossibility of having a firm fruit in a short period of time between blooming and maturity. This would be in particular the case of cultivar ‘Early Burlat’, which has one of the shortest developmental periods between blooming and maturity. Hence, the recent method of developing sweet cherry cultivars which come to maturity at the same period as ‘Early Burlat’, but exhibit a significantly higher firmness, was to use genitors with an extra-early blooming time41. In sour cherry, none of the individuals evaluated had the genotype BBBB for ss490552928; therefore, the ‘A’ allele was always present. This is consistent with “soft” fruit alleles for this locus being dominant to “firm” fruit alleles. In sour cherry, bloom and maturity time are also correlated, however all individuals whether early or late maturing have soft fruit compared to the firm fruit associated with bred sweet cherries.
The multiple QTLs in the qP-FF4.1 region should be taken into consideration when performing breeding selection in both sweet and sour cherry. Hence, it is of utmost importance to disentangle the genetic determinism of the traits’ variation within this QTL; in particular, it would be helpful for breeders to know whether maturity date and firmness are controlled by the same pleiotropic locus or by two closely linked genes. Fine mapping initiatives might be conducted in order to search for recombinants within this narrow genetic interval. More specifically, using cultivars such as ‘Early Burlat’ and ‘Fercer’ might be highly informative. Indeed, the predicted diplotypes for both cultivars are H3H6 and H1H2, respectively; that is, each would have one “firm” and one “soft” haplotype. However, ‘Fercer’ is known to be a significantly firmer cultivar as compared to ‘Early Burlat’. Multi-year data from INRA indicate a mean firmness, as measured by Durofel, of 67 and 49 for ‘Fercer’ and ‘Early Burlat’, respectively. This shows the complexity of the genetic determinism of fruit firmness, as already demonstrated by Campoy et al.6 and might suggest as well the existence of epistatic interactions. To test the hypothesis that maturity time and firmness are controlled by distinct genes in this LG 4 “hotspot”, breeders will need to produce very large progeny populations when crossing ‘Early Burlat’ with other firmer but also later ripening cultivars in order to obtain recombinants between the two hypothesized closely linked genes. Finally, the fact that among the founders used in modern breeding, the haplotype H3 was only found in ‘Early Burlat’ agrees with the ‘originality’ of this cultivar in terms of developmental cycle; as already stated, ‘Early Burlat’ has a rather intermediate bloom time but is one of the earliest maturing cultivars.
### Candidate genes controlling fruit firmness
The available sweet cherry genome sequence provided the opportunity to identify agronomically important candidate genes for qP-FF4.142. In our study, the identification of candidate genes was employed across species including sweet cherry, peach and tomato. Peach and sweet cherry are closely related Prunus species and share a high level of synteny43. Therefore, prior to the publication of the sweet cherry genome sequence, the peach genome was used as a proxy for candidate gene prediction in sweet cherry6,27. Tomato was included as fruit firmness has been extensively studied in this species44,45, and like cherry and peach, tomato is a fleshy carpel. Although fleshy fruits are physiologically classified as climacteric (tomato and peach) and non-climacteric (cherry), these fruits share some common characteristics such as the role of plant hormones and their interplay related to changes in firmness during fruit softening19,44,46. For example, all fruits appear to respond to abscisic acid (ABA) and ethylene; but, in non-climacteric fruit, even if ABA has a more dominant role, the fruit still exhibit characteristics of ethylene-dependent ripening19. Moreover, recent studies seem to indicate that the classification of fruits as either climacteric or non-climacteric is an oversimplification44. Some fruits, like melons, can display both climacteric and non-climacteric behaviors47 while kiwi fruit can display non-climacteric behaviors in the first stage of ripening and climacteric behaviors and in the second stage of ripening48. The fruit softening process involves the physiological modification of the cell wall and during the ripening phase of the fruit, plant hormones play important roles in this process49. Therefore, genes related to plant cell wall metabolism or various hormone signaling pathways were considered as candidates in this study. Much of tissue firmness work in tomato also focused on characterizing the potential role of cell wall–modifying genes and the transcription factors involved in hormone signaling pathways11,23. Although a cross-species strategy could help identify more candidate genes for fruit firmness, it should be noted that mechanisms controlling firmness in these three species are possibly different. For example, endoPG, the major enzyme associated with softening in peach, was not identified in the qP-FF4.1 region.
Among the candidate genes identified, an expansin gene was considered as the most promising candidate for several reasons. Firstly, expansin genes have been thought to contribute to fruit softening by weakening noncovalent interactions between cellulose microfibrils and hemicellulose components50. In tomato, expansin genes have been shown to be associated with fruit ripening and firmness51,52. Secondly, the candidate expansin gene in sweet cherry has sequence homology to the expansin gene PcEXP4 previously reported to be upregulated during tissue softening in sour cherry16. Thirdly, the expansin gene in sweet cherry was predicted to contain two functional domains; one of them was commonly found in pollen allergens which were proposed as cell wall-loosening agents to induce extension of the plant cell wall53. Lastly, this gene is very close to the peak of the qP-FF4.1 region. Among other candidate genes, transcription factors, such as NAC domain protein, MADS-box protein and Squamosa promoter-Binding Protein could also play roles in regulating fruit firmness as they have been shown to be involved in fruit ripening process21,23,54. However, future work is needed to fine map this region and ultimately identify and characterize the genes and their alleles that underlie these QTL. The haplotypes and their germplasm sources described in this study provide a genetic framework for this future discovery.
In conclusion, we identified a major QTL for fruit firmness in three sweet cherry populations that is associated with domesticated and bred germplasm. As all commercial sweet cherry cultivars must meet consumer demands for firm fruit, knowledge of the desirable alleles at this QTL would be targets for marker-assisted breeding. For example, it would be especially useful to select against “soft” alleles in cases where wild germplasm is being use as breeding parents. Candidate genes for this fruit firmness QTL were proposed; however, future fine mapping and transcriptomic analysis is needed to enable the identification of the underlying gene(s).
## Methods
### Plant materials
Three sweet cherry populations were used in this study. These were: (1) an INRA bi-parental F1 population, (2) the INRA sweet cherry germplasm collection, and (3) RosBREED (www.rosbreed.org) pedigreed population. The INRA bi-parental F1 population consisted of a progeny of 67 individuals derived from a cross between cultivar ‘Fercer’ and an unknown parent called ‘X’. The INRA sweet cherry germplasm collection, maintained by INRA’s Prunus Genetic Resources Center at Bourran, France, included 193 sweet cherry accessions collected from France and other 15 countries of America, Asia and Europe55. RosBREED pedigreed populations consisted of a set of 65 elite sweet cherry and mazzard clones, and 463 unselected F1 seedlings from 86 crosses from the Washington State University sweet cherry breeding program (Table S2). The germplasm of RosBREED pedigreed population, spanning six generations, was considered representative of U.S. public breeding germplasm for this crop24. For sour cherry, a total of 338 individuals including parents, ancestors and offspring from five bi-parental F1 populations, were used. These individuals were grown at the Michigan State University Clarksville Research Station, Clarksville, Michigan. A detailed description of these sour cherry plant materials is presented in Cai et al.33.
### Phenotyping and phenotype modeling
Fruit were harvested from the field when ripe based on a subjective assessment of skin color, texture and taste56,57, placed in coolers for transport back to the laboratory, and evaluated the same day. Fruit firmness (g/mm2) was evaluated using different methods for INRA’s populations and RosBREED’s sweet and sour cherry pedigreed populations. For the two INRA populations (F1 population and the sweet cherry germplasm collection), fruit firmness was measured using a Durofel (Setop Giraud Technologie, Cavaillon, France) texture analyzer on the day of harvest. A 3-mm probe was applied at two points on the fruit equator, the movement of the probe was recorded and the average of the two measures on ten fruits was used. Data were collected from seven years (2009–2013, 2015 and 2016) for the F1 population and two years for the germplasm collection (2014 and 2015). For the RosBREED sweet cherry and sour cherry individuals, fruit firmness was measured from 25 fruits that were at room temperature using the compression test of BioWorks FirmTech 2 (Wamego, KS, USA). Compression was performed from the fruit cheek and with the stems still on the fruit. The mean value of 25 measures were used. The data for the sweet and sour cherry individuals were collected for two years (sweet cherry, 2011 and 2012; sour cherry, 2011 and 2013). Since different methods were used to measure firmness in the RosBREED and INRA populations, the data for these populations were analyzed separately.
The phenotypic data were analyzed using SAS version 9.13 (SAS Institute Inc.) and the model PROC MIXED was used to obtain the variance components. PROC CORR was performed to calculate the correlation coefficients of fruit firmness among different years. Broad-sense heritability (H2) was calculated using estimates for the individuals based on the following random linear model:
$${{\rm{Y}}}_{{\rm{ij}}}=\mu +{{\rm{y}}}_{{\rm{i}}}+{{\rm{g}}}_{{\rm{j}}}+{{\rm{e}}}_{{\rm{ij}}}$$
where Yij is the phenotypic value of the jth individual in ith year; µ is the mean value of fruit firmness; yi is the random effect of the ith year on the phenotype; gj is the random genotypic effect of jth individual; and eij is the model residual. H2 was calculated using the following equation: H2 = σ2g/(σ2g + σ2e/n), where σ2g is the genetic variance, σ2e is the residual error, and n is the number of years.
### Genotyping and genetic map
All individuals from the three sweet cherry populations were genotyped with the RosBREED Illumina Infinium cherry SNP array of 5,696 SNP markers58 and SNP genotypes were scored using the Genotyping Module of GenomeStudio Data Analysis software59. For the ‘Fercer’ × ‘X’ F1 population, a total of 724 SNP markers were polymorphic and segregating in this population. A linkage map was constructed using JoinMap 4.060 and Kosambi’s mapping function was used to convert recombination frequency into map distance. The two resulting parental maps consisted of 110 and 87 SNP markers, polymorphic for ‘Fercer’ and ‘X’, respectively. For the INRA sweet cherry germplasm collection, marker data curation was described in Campoy et al.55. A total of 1,215 SNP markers were retained after removing the following four SNP types: (1) SNPs failing to generate clear genotype clusters; (2) SNPs with missing genotypes greater than 5%; (3) SNPs showing high distortion for Hardy-Weinberg equilibrium (>0.0001); and (4) SNPs with minor allele frequencies lower than 5%. For RosBREED pedigreed population, marker data curation was described in Cai et al.24. A total of 1,617 SNPs were identified as robust markers and therefore used for QTL analysis. Genetic positions for these markers were determined by aligning and integrating these physical positions (based on Peach Genome v2.0)39 with the sweet cherry ‘Regina’ × ‘Lapins’ SNP linkage map61. The sour cherry plant materials were also genotyped using the RosBREED Illumina Infinium cherry SNP array58. The generation of the sour cherry genetic data, including haplotype reconstruction was described in Cai et al.33.
### QTL analysis
QTL analyses were performed for all three sweet cherry populations using different mapping softwares. For ‘Fercer’ × ‘X’, QTL mapping was carried out using MultiQTL v2.6 software with the multiple interval mapping (MIM) approach used (MultiQTL Ltd, Haifa, Israel, 2005, www.multiQTL.com). Different types of analyses were performed for single year and multiple years, respectively. Each year was first analyzed independently in order to examine the stability of the QTLs. An analysis combining all years was performed using the multiple environment option with increase of the accuracy of the QTL detection. The detailed QTL mapping methodology was as described in Castède et al.27. The graphical presentation of QTL locations on the linkage groups was obtained using the MapChart software version 2.262.
A genome-wide association analysis was done for the INRA sweet cherry germplasm collection. This analysis tested the association between fruit firmness and the SNP markers on the chromosome where the firmness QTL was located. A total of 1215 SNP markers across the sweet cherry genome were used in the analysis and the SNPs associated with fruit firmness were identified according to a mixed linear model (MLM) using the software TASSEL version 5.2.6163. Corrections for population structure and kinship were adopted in the model. Population structure was described in the study of Campoy et al.55. The relative kinship matrix was calculated using SPAGeDi64. The genome-wide significance cutoff was set at 4 × 10−5 (0.05/1215).
QTL analysis in RosBREED sweet cherry pedigreed population was done using FlexQTL software that is designed for use with multiple pedigree-connected families65. Markov Chain Monte Carlo (MCMC) simulation, implemented in FlexQTL software, was applied to obtain samples from the joint posterior distribution of the model. A total of 1,000 samples (500,000 iterations with thinning value of 500) were stored for each simulation and then used for statistical inference. The inference on the number of QTLs was based on a pairwise comparison of models differing from each other by one QTL. The Bayes factor parameters (2lnBF) were interpreted as non-significant (0–2), positive (2–5), strong (5–10) or decisive (>10) evidence for the presence of QTL. The inference on the QTL position was based on posterior intensity and the inference on the QTL contribution was based on the posterior mean estimates of the QTL effect size. Both additive and dominant genetic models were tested using a maximum number of 10 QTLs. Prior number of QTL was set to 1 or 3 and genome-wide analyses were performed twice for each prior using different seed numbers to test the robustness of the analysis.
### Haplotype (diplotype) analysis
Haplotypes (i.e. alleles) for the fruit firmness QTL were identified for the INRA F1 population and the RosBREED sweet and sour cherry pedigreed germplasm. Five SNPs covering the consensus QTL region identified from the sets of plant materials were chosen for haplotype construction. Phased SNP marker information was obtained for each individual from the RosBREED sweet cherry germplasm from the FlexQTL output. Haplotypes were assigned using the PediHaplotyper software66. The SNP phasing for the sour cherry haplotypes was described in Cai et al.33. Unique haplotypes were named from H1 to H16 randomly. Statistical analyses for the association between haplotype (diplotype) and fruit firmness were performed using the software R version 3.1.367. QTL genotypes (QQ, Qq, qq) predicted by FlexQTL were used to deduce if a haplotype was associated with soft or firm fruit. As ‘Q’ is assigned to the higher phenotypic value, in this case increased firmness, QQ, Qq, and qq genotypes correspond to two “firm” alleles, one “firm” and one “soft” allele, and two “soft” alleles, respectively. Haplotypes were also deduced to be associated with soft or firm fruit based on comparison of diplotypes.
### In silico Candidate Genes
The list of genes within the QTL interval and their functional annotations in sweet cherry were obtained from the sweet cherry genome42 available on the Genome Database of Rosaceae website (GDR, https://www.rosaceae.org). The corresponding predicted cherry protein sequences obtained from GDR were blasted against the National Center for Biotechnology Information (NCBI) database to obtain gene ontology terms using BLASTP in the program Blast2GO68 with an E-value cutoff of 0.001. The sequences of the genes within the QTL interval were also blasted to the peach genome v2.039 available at GDR and the tomato genome ITAG 2.4069 available at Sol genomics Network (https://solgenomics.net), and their best gene matches and annotations were extracted. Overall, three annotations (sweet cherry, peach and tomato) and gene ontology results from Blast2GO were considered for the identification of candidate genes for fruit firmness. Candidate genes considered were those genes predicted to be involved in plant cell wall metabolism or various hormone (Ethylene, Brassinosteroid, Auxin, Gibberellin, and ABA) signaling pathways associated with fruit ripening. A circular plot of the mapping and candidate gene data was prepared with the Circos plotting tool70. Alignment and analysis of the expansin genes was performed with ClustalW v2.1 multiple sequences alignment tool. A dendrogram of the four genes was constructed using the Jukes-Cantor genetic distance model, and the neighbor-joining tree building method within the GeneiousV.11.1.4 GUI software71,72.
## Data Availability
Genotypic data for the INRA bi-parental population and RosBREED pedigreed populations is available at the Genome Database for Rosaceae (www.rosaceae.org/publication_datasets) under accession number tfGDR1037. All other data generated or analyzed during this study are included within this article and its Supplementary Information Files.
## References
1. 1.
Yue, C. et al. U.S. growers’ willingness to pay for improvements in rosaceous fruit traits. Agric. Resour. Econ. Rev. 46, 103–122 (2017).
2. 2.
Olden, E. J. & Nybom, N. On the origin of Prunus cerasus L. Hereditas 59, 327–345 (1968).
3. 3.
Flores del Manzano, F. La vida tradicional en el Valle del Jerte, Biblioteca Nacional, Sección de Manuscritos. Correspondencia de Tomás López, Párroco de Jerte, Mérida, Spain (1992).
4. 4.
Bargioni, G. Sweet cherry scions. Characteristics of the principal commercial cultivars, breeding objectives and methods in Cherries: Crop physiology, production and uses (eds Webster, A. D. & Looney, N. E.) 73–112 (CABI, 1996).
5. 5.
Quero-García, J. et al. Breeding sweet cherries at INRA-Bordeaux: from conventional techniques to marker-assisted selection. Acta Hortic. 1161, 1–14 (2017).
6. 6.
Campoy, J. A., Le Dantec, L., Barreneche, T., Dirlewanger, E. & Quero-García, J. New insights into fruit firmness and weight control in sweet cherry. Plant Mol. Bio. Rept. 33(4), 783–796 (2015).
7. 7.
Balas, F. et al. Firmness QTL mapping using an ‘Ambrunés’בSweetheart’ sweet cherry population. Acta Hortic. (In press).
8. 8.
Harker, F. R., Redgwell, R. J., Hallett, I. C., Murray, S. H. & Carter, G. Texture of fresh fruit. Hortic. Rev. 20, 121–224 (1997).
9. 9.
Brownleader, M. D. et al. Molecular aspects of cell wall modifications during fruit ripening. Crit. Rev. Food Sci. 39, 149–164 (1999).
10. 10.
Brummell, D. A., Dal Cin, V., Crisosto, C. H. & Labavitch, J. M. Cell wall metabolism during maturation, ripening and senescence of peach fruit. J. Exp. Bot. 55, 2029–2039 (2004).
11. 11.
Sitrit, Y. & Bennett, A. B. Regulation of tomato fruit polygalacturonase mRNA accumulation by ethylene: a re-examination. Plant Physiol. 116, 1145–1150 (1998).
12. 12.
Atkinson, R. G., Schröder, R., Hallett, I. C., Cohen, D. & MacRae, E. A. Overexpression of polygalacturonase in transgenic apple trees leads to a range of novel phenotypes involving changes in cell adhesion. Plant Physiol. 129, 122–133 (2002).
13. 13.
Costa, F. et al. QTL dynamics for fruit firmness and softening around an ethylene-dependent polygalacturonase gene in apple (Malus × domestica Borkh.). J. Exp. Bot. 61(11), 3029–39 (2010).
14. 14.
Gu, C. et al. Copy number variation of a gene cluster encoding endopolygalacturonase mediates flesh texture and stone adhesion in peach. J. Exp. Bot. 67, 1993–2005 (2016).
15. 15.
Alkio, M., Jonas, U., Declercp, M., van Nocker, S. & Knoche, M. Transcriptional dynamics of the developing sweet cherry (Prunus avium L.) fruit: sequencing, annotation and expression profiling of exocarp-associated genes. Hortic. Res. 1, 11 (2014).
16. 16.
Yoo, S. D., Gao, Z., Cantini, C., Loescher, W. H. & van Nocker, S. Fruit ripening in sour cherry: Changes in expression of genes encoding expansins and other cell-wall-modifying enzymes. J. Amer. Soc. Hort. Sci. 128, 16–22 (2003).
17. 17.
Cao, K. et al. Genome-wide association study of 12 agronomic traits in peach. Nat. Commun. 7, 13246 (2016).
18. 18.
Duan, N. et al. Genome re-sequencing reveals the history of apple and supports a two-stage model for fruit enlargement. Nat. Commun. 8, 249 (2017).
19. 19.
McAtee, P., Karim, S., Schaffer, R. & David, K. A dynamic interplay between phytohormones is required for fruit development, maturation, and ripening. Front. Plant Sci. 4, 79 (2013).
20. 20.
Kondo, S. & Tomiyama, A. Changes of free and conjugated ABA in the fruit of’Satonishiki’ sweet cherry and the ABA metabolism after application of (s)-(+)-ABA. J. Hortic. Sci. Biotech. 73(4), 467–472 (2015).
21. 21.
Pirona, R. et al. Fine mapping and identification of a candidate gene for a major locus controlling maturity date in peach. BMC Plant Biol. 13, 166 (2013).
22. 22.
Nuñez-Lillo, G. et al. Identification of candidate genes associated with mealiness and maturity date in peach [Prunus persica (L.) Batsch] using QTL analysis and deep sequencing. Tree Genet. Genomes 11, 86 (2015).
23. 23.
Manning, K. et al. A naturally occurring epigenetic mutation in a gene encoding an SBP-box transcription factor inhibits tomato fruit ripening. Nat. Genet. 38, 948–952 (2006).
24. 24.
Cai, L., Voorrips, R. E., van de Weg, E., Peace, C. & Iezzoni, A. Genetic structure of a QTL hotspot on chromosome 2 in sweet cherry indicates positive selection for favorable haplotypes. Mol. Breed. 37, 85 (2017).
25. 25.
Zhang, G. et al. Fruit size QTL analysis in an F1 population derived from a cross between a domesticated sweet cherry cultivar and a wild forest sweet cherry. Tree Genet. Genomes 6, 25–36 (2010).
26. 26.
Dettori, M. T., Quarta, R. & Verde, I. A peach linkage map integrating RFLPs, SSRs, RAPDs, and morphological markers. Genome 44, 783–790 (2001).
27. 27.
Castède, S. et al. Genetic determinism of phenological traits highly affected by climate change in Prunus avium: flowering data dissected into chilling and heat requirements. New Phytol. 202(2), 703–715 (2014).
28. 28.
Ballester, J., Socias, I., Company, R., Arus, P. & de Vicente, M. C. Genetic mapping of a major gene delaying blooming time in almond. Plant Breed. 120(3), 268–270 (2001).
29. 29.
Quilot, B. et al. QTL analysis of quality traits in an advanced backcross between Prunus persica cultivars and the wild relative species P. davidiana. Theor. Appl. Genet. 109, 884–897 (2004).
30. 30.
Fan, S. et al. Mapping quantitative trait loci associated with chilling requirement, heat requirement and bloom date in peach (Prunus persica). New Phytol. 185, 917–930 (2010).
31. 31.
Campoy, J. A. et al. Inheritance of flowering time in apricot (Prunus armeniaca L.) and analysis of linked quantitative trait loci (QTLs) using simple sequence repeat (SSR) markers. Plant Mol. Biol. Rep. 29, 404–410 (2011).
32. 32.
Dirlewanger, E. et al. Comparison of the genetic determinism of two key phenological traits, flowering and maturity dates, in three Prunus species: peach, apricot and sweet cherry. Heredity 109, 280–292 (2012).
33. 33.
Cai, L. et al. Identification of bloom date QTLs and haplotype analysis in tetraploid sour cherry (Prunus cerasus). Tree Genet. Genomes 14, 22 (2018).
34. 34.
Eduardo, I. et al. QTL analysis of fruit quality traits in two peach intraspecific populations and importance of maturity date pleiotropic effect. Tree Genet. Genome 7, 323–335 (2011).
35. 35.
Hernández Mora, J. R. et al. Integrated QTL detection for key breeding traits in multiple peach progenies. BMC Genomics 18(1), 404 (2017).
36. 36.
Isuzugawa, K. et al. QTL analysis and candidate gene mapping for harvest day in sweet cherry (Prunus avium L.). Acta Hortic. (In press).
37. 37.
Quero-García, J. et al. Present and future of marker-assisted breeding in sweet and sour cherry. Acta Hortic. (In press).
38. 38.
Verde, I. et al. The high-quality draft genome of peach (Prunus persica) identifies unique patterns of genetic diversity, domestication and genome evolution. Nat. Genet. 45(5), 487–494 (2013).
39. 39.
Verde, I. et al. The Peach v2.0 release: high-resolution linkage mapping and deep resequencing improve chromosome-scale assembly and contiguity. BMC Genomics 18(1), 225 (2017).
40. 40.
Sandefur, P. Enhancing efficiency in tree-fruit breeding by developing trait-predictive DNA tests. PhD Thesis, Wash. State Univ. (2016).
41. 41.
Quero-García, J., Schuster, M., López-Ortega, G. & Charlot, G. Sweet Cherry Cultivars and Improvement in Cherries: Botany, Production and Uses (eds Quero-García, J., Iezzoni, A., Pulawska, J. & Lang, G.) 60–94 (CABI, 2017).
42. 42.
Shirasawa, K. et al. The genome sequence of sweet cherry (Prunus avium) for use in genomics-assisted breeding. DNA Res. 24(5), 499–508 (2017).
43. 43.
Dirlewanger, E. et al. Comparative mapping and marker-assisted selection in Rosaceae fruit crops. Proc. Natl. Acad. Sci. USA 101, 9891–9896 (2004).
44. 44.
Chen, Y. et al. Ethylene receptors and related proteins in climacteric and non-climacteric fruits. Plant Sci. 276, 63–72 (2018).
45. 45.
Cruz, A. B. et al. Light, ethylene and auxin signaling interaction regulates carotenoid biosynthesis during tomato fruit ripening. Front. Plant Sci. 9, 1370 (2018).
46. 46.
Kumar, R., Khurana, A. & Sharma, A. K. Role of plant hormones and their interplay in development and ripening of fleshy fruits. J. Exp. Bot. 65(16), 4561–4575 (2014).
47. 47.
Fernández-Trujillo, J. P. et al. Climacteric and non-climacteric behavior in melon fruit: 2. Linking climacteric pattern and main postharvest disorders and decay in a set of near-isogenic lines. Postharv. Biol. Technol. 50, 125–134 (2008).
48. 48.
McAtee, P. A. et al. The hybrid non-ethylene and ethylene ripening response in kiwifruit (Actinidia chinensis) is associated with differential regulation of MADS-box transcription factors. BMC Plant Biol. 15, 304 (2015).
49. 49.
Brummell, D. A. et al. Modification of expansin protein abundance in tomato fruit alters softening and cell wall polymer metabolism during ripening. Plant cell 11(11), 2203–2216 (1999).
50. 50.
Cosgrove, D. J. Loosening of plant cell walls by expansins. Nature 407(6802), 321–326 (2000).
51. 51.
Rose, J. K., Lee, H. H. & Bennett, A. B. Expression of a divergent expansin gene is fruit-specific and ripening-regulated. Proc. Natl. Acad. Sci. USA 94(11), 5955–60 (1997).
52. 52.
Perini, M. A. et al. Overexpression of the carbohydrate binding module from Solanum lycopersicum expansin 1 (Sl-EXP1) modifies tomato fruit firmness and Botrytis cinerea susceptibility. Plant Physiol. Biochem. 113, 122–132 (2017).
53. 53.
Cosgrove, D. J., Bedinger, P. & Durachko, D. M. Group I allergens of grass pollen as cell wall-loosening agents. Proc. Natl. Acad. Sci. USA 94(12), 6559–6564 (1997).
54. 54.
Serra, O. et al. Genetic analysis of the slow-melting flesh character in peach. Tree Genet. Genomes 13, 77 (2017).
55. 55.
Campoy, J. A. et al. Genetic diversity, linkage disequilibrium, population structure and construction of a core collection of Prunus avium L. landraces and bred cultivars. BMC Plant Biol. 16, 49 (2016).
56. 56.
Chavoshi, M. et al. Phenotyping protocol of sweet cherry (Prunus avium L.) to facilitate an understanding of trait inheritance. J. Am. Pom. Soc. 68(3), 125–134 (2014).
57. 57.
Stegmeir, T., Sebolt, A. & Iezzoni, A. Phenotyping protocol for sour cherry (Prunus cerasus L.) to enable a better understanding of trait inheritance. J. Am. Pom. Soc. 68(1), 40–47 (2017).
58. 58.
Peace, C. et al. Development and evaluation of a genome-wide 6K SNP array for diploid sweet cherry and tetraploid sour cherry. PLoS ONE 7(12), e48305 (2012).
59. 59.
Illumina Inc. GenomeStudio genotyping modulev1.0, User Guide. Illumina Inc., Towne Centre Drive, San Diego, CA, USA (2010).
60. 60.
van Ooijen, J. W. JoinMap 4, Software for the calculation of genetic linkage maps in experimental populations, Wageningen, Netherlands: Kyazma B.V (2006).
61. 61.
Klagges, C. et al. Construction and comparative analyses of highly dense linkage maps of two sweet cherry intra-specific progenies of commercial cultivars. PLoS One 8(1), e54743 (2013).
62. 62.
Voorrips, R. E. MapChart: Software for the graphical presentation of linkage maps and QTLs. J. Hered. 93(1), 77–78 (2002).
63. 63.
Bradbury, P. J. et al. TASSEL: software for association mapping of complex traits in diverse samples. Bioinformatics 23, 2633–2635 (2007).
64. 64.
Hardy, O. J. & Vekemans, X. SPAGeDi: a versatile computer program to analyse spatial genetic structure at the individual or population levels. Mol. Ecol. Notes 2, 618–610 (2002).
65. 65.
Bink, M. C. A. M. et al. Bayesian QTL analyses using pedigreed families of an outcrossing species, with application to fruit firmness in apple. Theor. Appl. Genet. 127(5), 1073–1090 (2014).
66. 66.
Voorrips, R. E., Bink, M. C. A. M., Kruisselbrink, J. W., Koehorst-van Putten, H. J. & van de Weg, W. E. PediHaplotyper: software for consistent assignment of marker haplotypes in pedigrees. Mol. Breed. 36, 119 (2016).
67. 67.
R Core Team. R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria, http://www.R-project.org (2015).
68. 68.
Conesa, A. et al. Blast2GO: a universal tool for annotation, visualization and analysis in functional genomics research. Bioinformatics 21, 3674–3676 (2005).
69. 69.
Tomato Genome Consortium. The tomato genome sequence provides insights into fleshy fruit evolution. Nature 485, 635–641 (2012).
70. 70.
Krzywinski, M. et al. Circos: an information aesthetic for comparative genomics. Genome Res. 19, 1639–1645 (2009).
71. 71.
Larkin, M. A. et al. Clustal W and Clustal X version 2.0. Bioinformatics 23, 2947–2948 (2007).
72. 72.
Saitou, N. & Nei, M. The neighbor-joining method: a new method for reconstructing phylogenetic trees. Mol. Biol. Evol. 4(4), 406–25 (1987).
## Acknowledgements
This project was supported in part by the USDA-NIFA-Specialty Crop Research Initiative project, RosBREED: Enabling marker-assisted breeding in Rosaceae (2009-51181-05808) and RosBREED 2: Combining disease resistance with horticultural quality in new rosaceous cultivars (2014-51181-22378). This project was also supported by the CEP Innovation, the private partner of INRA breeding program. We thank Dr. José Antonio Campoy and Mr. Guillaume Lalanne-Tisné for the curation of SNP data of ‘Fercer’ × ‘X’ progeny and the construction of the ‘Fercer’ and ‘X’ parental maps. We also thank the technical staff of A3C team and UEA Unit for the management of trees as well as the phenotyping activities and the INRA’s ‘Prunus Genetic Resources Center’ for preserving and managing the sweet cherry germplasm collections.
## Author information
Authors
### Contributions
A.I. designed the experiments and provided financial support; L.C. and J.Q. analyzed the data; T.B. provided data for INRA sweet cherry germplasm collection; E.D. and C.S. provided candidate gene list; L.C., J.Q. and A.I. wrote the manuscript. All authors reviewed and approved the final manuscript.
### Corresponding author
Correspondence to Amy Iezzoni.
## Ethics declarations
### Competing Interests
The authors declare no competing interests.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Cai, L., Quero-García, J., Barreneche, T. et al. A fruit firmness QTL identified on linkage group 4 in sweet cherry (Prunus avium L.) is associated with domesticated and bred germplasm. Sci Rep 9, 5008 (2019). https://doi.org/10.1038/s41598-019-41484-8
• Accepted:
• Published:
• ### Fruit size and firmness QTL alleles of breeding interest identified in a sweet cherry ‘Ambrunés’ × ‘Sweetheart’ population
• Alejandro Calle
• , Francisco Balas
• , Lichun Cai
• , Amy Iezzoni
• , Margarita López-Corrales
• & Ana Wünsch
Molecular Breeding (2020)
• ### The cherry 6+9K SNP array: a cost-effective improvement to the cherry 6K SNP array for genetic studies
• Stijn Vanderzande
• , Ping Zheng
• , Lichun Cai
• , Goran Barac
• , Ksenija Gasic
• , Dorrie Main
• , Amy Iezzoni
• & Cameron Peace
Scientific Reports (2020)
• ### Construction of a High-Density Genetic Map and Mapping of Firmness in Grapes (Vitis vinifera L.) Based on Whole-Genome Resequencing
• Jianfu Jiang
• , Xiucai Fan
• , Ying Zhang
• , Xiaoping Tang
• , Xiaomei Li
• , Chonghuai Liu
• & Zhenwen Zhang
International Journal of Molecular Sciences (2020)
• ### RosBREED: bridging the chasm between discovery and application to enable DNA-informed breeding in rosaceous crops
• Amy F. Iezzoni
• , Jim McFerson
• , James Luby
• , Ksenija Gasic
• , Vance Whitaker
• , Nahla Bassil
• , Chengyan Yue
• , Karina Gallardo
• , Vicki McCracken
• , Michael Coe
• , Craig Hardner
• , Jason D. Zurn
• , Stan Hokanson
• , Eric van de Weg
• , Sook Jung
• , Dorrie Main
• , Cassia da Silva Linge
• , Stijn Vanderzande
• , Thomas M. Davis
• , Lise L. Mahoney
• & Cameron Peace
Horticulture Research (2020)
• ### A draft genome of sweet cherry (Prunus aviumL.) reveals genome‐wide and local effects of domestication
• Sara Pinosio
• , Fabio Marroni
• , Andrea Zuccolo
• , Nicola Vitulo
• , Stephanie Mariette
• , Gabriella Sonnante
• , Filippos A. Aravanopoulos
• , Ioannis Ganopoulos
• , Marino Palasciano
• , Michele Vidotto
• , Gabriele Magris
• , Amy Iezzoni
• , Giovanni G. Vendramin
• & Michele Morgante
The Plant Journal (2020) | 2021-05-06 07:19:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35553643107414246, "perplexity": 14108.12877172591}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988741.20/warc/CC-MAIN-20210506053729-20210506083729-00092.warc.gz"} |
https://socratic.org/questions/what-basic-trigonometric-identity-would-you-use-to-verify-that-sin-x-1-sin-x-1-c | # What basic trigonometric identity would you use to verify that (sin x+1)/sin x=1+csc x?
Feb 23, 2016
see explanation
#### Explanation:
To prove that the left side = right side . There is a choice.
(1) manipulate the left side into the form of the right side.
(2) manipulate the right side into the form of the left side.
(3) manipulate both sides until a point is reached where they are equal.
$\frac{\sin x + 1}{\sin} x = \sin \frac{x}{\sin} x + \frac{1}{\sin} x = 1 + \csc x = \text{ right side }$ | 2020-02-23 19:52:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 1, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4163564145565033, "perplexity": 784.2591239785252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145839.51/warc/CC-MAIN-20200223185153-20200223215153-00128.warc.gz"} |
https://socratic.org/questions/how-do-you-convert-from-moles-to-particles | How do you convert from moles to particles?
First, you need to know Avogadro's number, which is directly associated with molarity. The number is: $6.02 \cdot {10}^{23}$
Now, one mole = $6.02 \cdot {10}^{23}$ particles | 2021-01-23 12:09:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 2, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6552637219429016, "perplexity": 941.4767597791949}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703537796.45/warc/CC-MAIN-20210123094754-20210123124754-00569.warc.gz"} |
http://cvgmt.sns.it/paper/2975/ | # Global-in-time regularity via duality for congestion-penalized Mean Field Games
created by santambro on 30 Mar 2016
modified on 16 Jan 2017
[BibTeX]
Accepted Paper
Inserted: 30 mar 2016
Last Updated: 16 jan 2017
Journal: Stochastics
Year: 2017
Notes:
Proceedings of the International Conference on Stochastic Analysis and Applications, 2015, Hammamet.
Abstract:
After a brief introduction to one of the most typical problems in Mean Field Games, the congestion case (where agents pay a cost depending on the density of the regions they visit), and to its variational structure, we consider the question of the regularity of the optimal solutions. A duality argument, used for the first time in a paper by Y. Brenier on incompressible fluid mechanics, and recently applied to MFG with density constraints, allows to easily get some Sobolev regularity, locally in space and time. In the paper we prove that a careful analysis of the behaviour close to the final time allows to extend the same result including $t=T$. | 2018-03-20 21:35:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5079277157783508, "perplexity": 709.206126680616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647545.54/warc/CC-MAIN-20180320205242-20180320225242-00381.warc.gz"} |
https://kx.lumerical.com/t/two-different-reflectance-by-varying-source-and-monitor-distance/52069 | # Two different reflectance by varying source and monitor distance
Hello,
I am trying to solve absorptance of a single layer silver particles (by using periodicity). When I change the parameter (gap_si_source in code) that define distance between surface and source (by keeping the distance between source and monitor constant) I get two different solutions (1 and 2) as I present in the figure.
Distance (in nm) and corresponding result is like this
270 result ->2
260 result ->1
250 result ->2
240 result ->1
230 result ->2
225 result ->2
224 result ->2
223 result ->1
220 result ->1
219 result ->1
215 result ->1
210 result ->1
205 result ->1
203 result ->1
202 result ->2
200 result ->2
175 result ->2
173 result ->1
170 result ->1
168 result ->1
166 result ->1
165 result ->2
150 result ->2
130 result ->1
100 result ->2
90 result ->1
There is a behavior like 90 + 40*k (where k is integer)
I get two different result not an another result in between.
How can I make my solution independent of that distance?
Lumerica.zip (313.9 KB)
Hello @ref,
I have tried running your script with gap_si_source values of 90, 170, 200, 210, 220 and 260 nm, and for all of them I obtained result 2. Which version of FDTD Solutions are you using? And how are you calculating the absorptance?
My initial guess would be that this is an issue with how the mesh is aligning with your structures. You should try adding a mesh override region to the silica to make sure the mesh is constant between the different simulations. Also, you could try keeping the z span of the FDTD region constant while you vary gap_si_source.
1 Like
Thank you for the quick response.
Version is 2020a. To find absorption I use A=1-T-R.T is close to 0 so it is 1-R where R is find by monitor_1
I attach 2 new .fsp files. Both are created by the script I uploaded at my first post and by the same pc and same lumerica version. One of them gives the result 1 the other gives 2. You can just check T from monitor_1 without giving a run
silver_sphere_unit_.zip (514.4 KB)
I am still not sure what exactly is causing this issue, but I noticed that the results change quite a bit when I decrease the mesh spacing in the mesh override object over the sphere. Here is the reflection with a different mesh override spacings:
This indicates that your mesh is not fine enough to give reliable results. I would recommend you use convergence testing to make sure your mesh is fine enough, especially over the metal sphere. Let me know if your problem persists after you have performed this convergence testing.
By the way, to speed up your simulation you can apply antisymmetric BCs to the x boundaries and symmetric BCs to the y boundaries to reduce your simulation size by four (this will still represent a periodic structure in the x and y directions):
I hope this helps. Let me know if you have any questions.
In the uploaded files mesh size is 4 nm but normally I use 0.2 nm, which results a 28 hour execution time. With this fine mesh it is hard to get the length vs result table in the first post so I make the mesh coarse.
When I opened r1.fsp and r2 fsp files that looks same in lumerical editor, editplus show different codeblocks are at different places and also some minor changes at last lines. May be there is a bug arise due to custom material and/or addition order of the components like (rectangle, monitor etc). I do not know if the technical team find it is interesting to make a reverse engineering.
Thank you for the valuable information. I will use your recommendations about the BC.
.
I found the difference. Number of coefficients for the optical data fit is different. In one file it is set to 2, in the other it is 5 so curve fits are different.
Now the question is why it gives two different fit for same material data - model and how can I manually set those values from the script.
An another question. The BCs you provide (symmetrci and anti-symmetric) reduced execution time from 28h to 3.5h. Can I use these BCs if the source incident is not normal but 30 deg for example.
Good catch! You are right, it’s strange that the fit is changing between your simulations. Note that the number of coefficients is not set directly by the user. The maximum number of coefficients is set, then the fit uses as many coefficients as necessary to get the best fit. I noticed that the same fit is obtained for both simulations when I turn off the “improve stability” option under “Advanced settings”. Turning this off can lead to a better fit, but can lead to instability and divergence.
Try running your simulations with this option off and see if the fit stays constant between simulations. You can set the various material fit parameters via the script using commands with this format: setmaterial("material name", "material property", value);.
As for the BCs, the symmetric/anti-symmetric BCs require both the structure and the source to have the required symmetry. So if you tilt your plane wave source around the y axis, for example, you will lose the symmetry condition in the x direction.
Keep in mind that there are some added complications when you are using an angled plane wave source in broadband simulations. For example, you may want to use BFAST boundary conditions. You can take a look at this page for more information:
I hope that helps. Let me know if you have any questions.
Thank you for the information.
setmaterial(matName2,“make fit passive”, [1] );
setmaterial(matName2,“improve stability”, [1] );
From the codes, first line works (it also has a boolean data type), but second does not.
Error: line 76: The material’s improve stability property is not available.
I can change various material properties like tolerance, imaginary weight etc. but not “improve stability”
The command setmaterial(matName2, “improve numerical stability”, 1); should work. Generally the names of the variables match the names in the GUI, but occasionally they do not. You can print the names of the variables you can change via script using the command ?setmaterial(matName2);. | 2020-01-29 14:23:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5028106570243835, "perplexity": 1265.2735711571925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251799918.97/warc/CC-MAIN-20200129133601-20200129163601-00028.warc.gz"} |
https://mathoverflow.net/questions/352734/schrodinger-operator-with-magnetic-field-eigenvalues | # Schrodinger operator with magnetic field: eigenvalues
Consider the self-adjoint operator on $$L^{2}(\mathbb{R}^{N})$$,
$$H=-\frac{1}{2}(\nabla-iA)^{2}+V,$$ where $$A\in C^{\infty}(\mathbb{R}^{N}, \mathbb{R}^{N} )$$, $$V\in C^{\infty}(\mathbb{R}^{N})$$, $$V\geq 0$$ and $$V(x)\rightarrow\infty$$ as $$|x|\rightarrow\infty$$.
Does H have a purely discrete spectrum?
New contributor
user152385 is a new contributor to this site. Take care in asking for clarification, commenting, and answering. Check out our Code of Conduct.
• In the examples that immediately come to my mind, say $N=2$, $A=(y,-x)B/2$ (constant magnetic field), and $V=C(x^2 +y^2 )$ (harmonic oscillator), the spectrum is indeed discrete. I don't know where to point you for a general statement, though. What if $A$ is strong enough to overwhelm $V$ for $|x|\rightarrow \infty$? That might be a way to get a continuous part of the spectrum. It could be you need more conditions in that respect. – Michael Engelhardt Feb 14 at 22:48 | 2020-02-20 22:16:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6939130425453186, "perplexity": 378.96786244097973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145282.57/warc/CC-MAIN-20200220193228-20200220223228-00492.warc.gz"} |
https://gamedev.stackexchange.com/questions/129638/get-modified-key-with-sdl2 | # Get modified key with SDL2
If I poll keyboard events with SDL2 all I get is the pressed key and the modifiers. But I am using Neo and have some keys like Esc and arrow keys on a different layer which means that e.g. Mod4+i is the same as the left arrow key.
How can I get the modified key from SDL2?
The same problem occurs to me when I want to handle special chars which are located on different keys on different keyboard layouts (/ for example has a dedicated key on US layout while on German layout it is located at Shift+7).
I already found the text input mode, but this seams rather unfitting as one gets a string instead of a single key.
Edit: formatting
Edit 2: I just found another problem .. SDL doesn't recognize Mod4 so when pressing Mod4 plus another key, the mod field is still 0. Other modifiers like Shift work.
• Have you tried using scancodes? I feel like they might solve atleast some of your problems. – Tyyppi_77 Sep 8 '16 at 18:49
• But that way I have to handle different keyboard layouts different ... Edit: Also Mod4 is not recognized which doesn't allow me to handle this by hand. – Daniel Hauck Sep 8 '16 at 18:51
• I'm not at all familiar with Neo (the website is in German), but it does seem a little weird that SDL doesn't recognize the modifiers. Could you perhaps clarify how Neo works, and where in terms of event managing it does its magic? – Tyyppi_77 Sep 8 '16 at 18:56
• I can only speak for Linux ... there it is an official XServer keyboard map using the Xkbmap levels (I'm not really familiar with how the XServer handles different levels), so it should be no magic. – Daniel Hauck Sep 8 '16 at 19:06
• What's your SDL version? – Tyyppi_77 Sep 8 '16 at 19:14 | 2019-08-22 11:28:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23308229446411133, "perplexity": 2433.228832802303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317113.27/warc/CC-MAIN-20190822110215-20190822132215-00257.warc.gz"} |
https://www.physicsforums.com/threads/why-eigenvalue-specification-reduces-the-no-of-li-equations.284175/ | # Why eigenvalue specification reduces the no. of LI equations?
1. Jan 11, 2009
### neelakash
Hi everyone, I am stuck with the following for last couple of days.
Many books mention during the development in the idea of Eigenvalue problem: say, you have the equation
$$[\ A-\lambda\ I]\ X=\ 0$$ where A is an NxN matrix and X is an Nx1 vector.
The above consists of n equations.Say,all eigenvalues are non-degenerate.
If you specify one of the non-degenerate eigenvalues,the number of linearly independent equations will be (N-1).This is written in book.I am looking for the explanation.
The linear independence of the equations come from the vectors in the matrix $$[\ A-\lambda\ I]$$.Since,the matrix $$[\ A-\lambda\ I]$$ is singular,its rank can be at most (N-1).Means,the maximum number of the linearly independent vectors in the matrix A after specifying one of its eigenvalues is (N-1).At least one of the vectors can be expanded in terms of the (N-1) vectors.
I find it difficult to see how the specification of the eigenvalue results in this.It is clear that in specifying the eigenvalue,all the matrix elements become known to us.And we can readily calculate its deerminant=0.That way it is OK.But how do we know the rank is precisely (N-1) and not (N-2) or (N-3)...etc.?
-Please take part in the discussion so that the thing becomes clear.
Neel
Last edited: Jan 11, 2009
2. Jan 11, 2009
### lurflurf
I assume you have n eigenvalues. Your result only holds when they are all different. If x is an eigenvalue of (algebraic) multiplicity k (A-lambda*I)^k has rank N-k. It is also possible that (A-lambda*I)^l has rank N-k for some l=1,2,...,k-1. This helps to keep things interesting. We can see this because (A-x1*I)...(A-xN*I) has rank 0. When eigenvalues are distinct we conclude each (A-xk*I) has rank N-1.
example the matrix
{{x,1}
{0,x}}
The eigenvector x is multiplicity 2 (A-x*I) has rank 1 (A-x*I)^2 has rank 2
example the matrix
{{x,0}
{0,x}}
The eigenvector x is multiplicity 2 (A-x*I) has rank 2 | 2017-05-22 19:28:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8842588663101196, "perplexity": 1050.3877687250142}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607046.17/warc/CC-MAIN-20170522190443-20170522210443-00588.warc.gz"} |
https://wizedu.com/questions/22516/imagine-a-firm-that-only-uses-capital-k-and-labor | In: Economics
# Imagine a firm that only uses capital (K) and labor (L). Use an isocost / isoquant...
Imagine a firm that only uses capital (K) and labor (L). Use an isocost / isoquant diagram to illustrate the firm’s equilibrium input mix for given prices of capital and labor and a given rate of output. Now illustrate what happens if the price of labor falls, and the firm wants to produce the same rate of output. What happens to the cost of production? Compare the relative marginal products of labor and capital (the MRTS) at the two equilibria.
## Solutions
##### Expert Solution
When price of labor falls, isocost line becomes flatter and now isocost line is AC. Equilibrium output will increase as price of labor falls. And equilibrium occurs at e1 level, where isocost line AC and isoquant line IQ1 are tangent with each other
## Related Solutions
##### There is a firm who manufacturers and uses capital (K) and labor (L) to product output...
There is a firm who manufacturers and uses capital (K) and labor (L) to product output Q such that Q=10KL. The unit price for K and L are w = $15 and r =$5, respectively. 1).Does the firm’s production exhibit decreasing, constant, or increasing returns to scale? 2)What is the optimal input bundle (K*, L*) to produce 480 unit of output? 3)Derive the long run cost function.
##### A firm discovers that when it uses K units of capital and L units of labor...
A firm discovers that when it uses K units of capital and L units of labor it is able to produce q=4K^1/4 L^3/4 units of output. Continue to assume that capital and labor can be hired at $40 per unit for labor and$10 for capital. In the long run if the firm produces 600 units of output, how much labor and capital will be used and what is the LR Total cost of production?
##### A firm discovers that when it uses K units of capital and L units of labor...
A firm discovers that when it uses K units of capital and L units of labor it is able to produce q=4K^1/4 L^3/4 units of output. a) Calculate the MPL, MPK and MRTS b) Does the production function (q=4K^1/4 L^3/4) exhibit constant, increasing or decreasing returns to scale and why? c) Suppose that capital costs $10 per unit and labor can each be hired at$40 per unit and the firm uses 225 units of capital in the short run....
##### A firm produces output using capital (K) and labor (L). Capital and labor are perfect complements...
A firm produces output using capital (K) and labor (L). Capital and labor are perfect complements and 1 unit of capital is used with 2 units of labor to produce 1 unit of output. Draw an example of an isoquant. If wages and rent are $2 and$3, respectively, what is the Average Total Cost? A firm has a production function given by Q=4KL where K, L and Q denote capital, labor, and output, respectively. The firm wants to produce...
##### Suppose your firm uses 2 inputs to produce its output: K (capital) and L (labor). the...
Suppose your firm uses 2 inputs to produce its output: K (capital) and L (labor). the production function is q = 50K^(1/2)L^(1/2). prices of capital and labor are given as r = 2 and w = 8 a) does the production function display increasing, constant, or decreasing returns to scale? how do you know and what does this mean? b) draw the isoquants for your firms production function using L for the x axis and K for y. how are...
##### There are two kinds of factors of production, labor L and capital K, which are only...
There are two kinds of factors of production, labor L and capital K, which are only available in non-negative quantities. There are two firms that make phones, Apple and Banana. To make qA phones, Apple’s input requirement of (L,K) is given by production function f(L,K) = L0.6K0.2. To make qB phones, Banana’s input requirement of (L,K) is given by production function g(L,K) = L0.75K0.25. (a) (Time: 3 minutes) How many phones can Apple make with factor bundle (L1,K1) = (1,1)?...
##### Using a model of production isoquant curves and isocost curves explain how a firm with a...
Using a model of production isoquant curves and isocost curves explain how a firm with a Cobb-Douglas production function will meet its quota for producing a necessary level of output while minimizing costs. How would this firm choose among competing production technologies or change its production when it implements an improved technology (innovation)?
##### Please use Isoquant-isocost graph to explain. Thank You! remember: isoquant curve would not change, just need...
Please use Isoquant-isocost graph to explain. Thank You! remember: isoquant curve would not change, just need to generate Q1 level of electricity. here are asking about the changes in production cost •Suppose a power plant can use a mixture of coal and renewable resources to generate Q1 level of electricity. And the price of coal is Pc and price of renewable resources is Pr. Notice that the use of coal or renewable resources will subject to law of diminishing marginal...
##### The production function has two input, labor (L) and capital (K). The price for L and...
The production function has two input, labor (L) and capital (K). The price for L and K are respectively W and V. q = L + K a linear production function q = min{aK, bL} which is a Leontief production function 1.Calculate the marginal rate of substitution. 2.Calculate the elasticity of the marginal rate of substitution. 3.Drive the long run cost function that is a function of input prices and quantity produced. | 2022-11-29 05:39:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37151676416397095, "perplexity": 2868.479583284356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710685.0/warc/CC-MAIN-20221129031912-20221129061912-00468.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-6-systems-of-equations-and-inequalities-6-5-linear-inequalities-practice-and-problem-solving-exercises-page-394/24 | Algebra 1
The graph was made using graphing software. Since the inequality sign is a greater than sign, the graph is shaded to the right of the dotted line (which includes any values of $x$ greater than $x=-2$). | 2019-10-23 08:01:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7930592894554138, "perplexity": 240.69373032455502}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987829507.97/warc/CC-MAIN-20191023071040-20191023094540-00069.warc.gz"} |
https://tcevents.chem.uzh.ch/event/14/contributions/83/ | # FortranCon 2021
23-24 September 2021
Virtual
Europe/Zurich timezone
## Finish AST generation in LFortran
24 Sep 2021, 17:20
5m
ZOOM (Virtual)
### ZOOM
#### Virtual
Fortran-lang Communications
### Speaker
Thirumalai Shaktivel (KS Institute Of Technology)
### Description
LFortran has a Bison based parser implemented in the parser.yy file that can parse most of Fortran source code. The main objective of my GSoC project was to make sure that all the grammar rules defined are exposed to AST (Abstract Syntax Tree) level i.e., one has to systematically go over the parser file and make sure AST is always generated for all the grammar rules defined, resulting in Completion of the AST generation. AST.asdl contains a list of AST nodes that have already been implemented, this project aims at adding the missing node into the AST.asdl.
The steps for implementation are:
Define macros, use them in the parser file to expose these nodes to the AST level.
Add tests to make sure things work as expected.
The fmt sub-command(format) is used to convert AST back to Fortran source code. There were certain issues related to fmt and parser, this project solved those issues. As a result, LFortran would be able to convert every grammar rule defined into AST for further manipulation and if required back to Fortran source code.
### Primary author
Thirumalai Shaktivel (KS Institute Of Technology)
### Co-author
Ondřej Čertík (Los Alamos National Laboratory)
### Presentation Materials
FortranCon2021.pdf LFortran: Finish AST generation [Fortran-lang] [GSOC'21] video | 2022-09-30 05:40:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29861360788345337, "perplexity": 9179.734853038794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00453.warc.gz"} |
https://tryalgo.org/en/permutations/2016/11/11/counting-inversions/ | # Résolution de problèmes algorithmiques
Given a table, find for every entry how many elements there are to left that are larger and how many elements there are to right that are smaller. In other words find out how many swaps bubble sort will do on the table.
## A $O(n\log n)$ algorithm based on merge sort
Consider one step of the recursive merge sort applied on the table. In the merge step we are given two consecutive portions of the table which each are already sorted. A temporary table recieves the result of the merging of the two lists. We have two pointers i and j that progress in each of the tables and a pointer k in the temporary table.
At any moment we compare the elements pointed by i and j and move the smaller of them to the temporary table. In each of the cases we can identify some inversion pairs as depicted below.
In the implementation below, we do not sort the intial given table, but rather a vector rank containing indices to the table. This is necessary since otherwise the items would at some stage of the algorithm not be at their initial position anymore, making it impossible to increase the correct entries in the tables left and right | 2020-05-25 21:29:50 | {"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19086730480194092, "perplexity": 441.18427885442526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347389355.2/warc/CC-MAIN-20200525192537-20200525222537-00394.warc.gz"} |
http://blog.jgc.org/2010/01/how-i-got-50000-page-views-by-simply.html | ## Friday, January 08, 2010
### How I got 50,000 page views by simply being me
Today on Hacker News there's a post about a user who 'engineered' two postings that resulted in 60,000 page views. That post annoyed me because the last thing people like is to know that they've been manipulated.
The other day I blogged about being a geek with an Ikea train set and for some reason that post really captured the imagination of a certain part of the Internet.
I hadn't expected that post to be so popular, and I certainly didn't tailor it to any community. It was just me being me.
But within hours it was on the top of Hacker News, on the front page of Reddit and Wired, and being tweeted widely.
I try to make my blog genuine, if you follow it then you'll be getting a raw feed of me and that could cover all sorts of topics. On the other hand there are many blogs that pander (to varying degrees) to different communities. Part of the reason I follow very few RSS feeds is that much blog writing is vapid self-promotion.
Labels:
If you enjoyed this blog post, you might enjoy my travel book for people interested in science and technology: The Geek Atlas. Signed copies of The Geek Atlas are available.
<$BlogCommentBody$>
<$BlogCommentDateTime$> <$BlogCommentDeleteIcon$>
<$BlogBacklinkControl$> <$BlogBacklinkTitle$> <$BlogBacklinkDeleteIcon$>
<$BlogBacklinkSnippet$> | 2016-09-27 13:47:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19582624733448029, "perplexity": 2211.8734982644028}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661087.58/warc/CC-MAIN-20160924173741-00258-ip-10-143-35-109.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/inverse-laplace-transform-with-complex-inversion-theorm.341013/ | # Inverse laplace transform with complex inversion theorm
1. Sep 28, 2009
### oddiseas
1. The problem statement, all variables and given/known data
Find the inverse laplace transform of In(1+1/s)
2. Relevant equations
3. The attempt at a solution
Using the complex inversion theorm, and the sum of the residues.
The only residue is at s=0. and it is a simple pole of degree one.
Therefore lim(s approaches 0)= s*In(1+1/s)e^st and since e^st at s=0 becomes 1, i get:
lim(s approaches 0)=s*In(1+1/s)
Now i have no idea what to do next. Usually i can find f(t) easily at this point but not with this function.
(note: i have already found the answer using a power series and the derivative property,(1-e^-t)/t) i am trying to figure out how to proceed next with this method.)
= | 2018-02-24 07:12:52 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9235572218894958, "perplexity": 785.1396216845653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815435.68/warc/CC-MAIN-20180224053236-20180224073236-00269.warc.gz"} |
https://dobots.nl/2012/02/28/a-brief-story-of-slam | Feb 28 2012
# What is this SLAM?
SLAM (Simultaneous Localization And Mapping) tries to answer two key questions in Robotics:
• Robot Localization: “Where is the Robot?”
• Robot Mapping: “What does the world around the Robot look like?”
SLAM can help you solve the problem of building up a map within an unknown environment (without a priori knowledge), or to update a map within a known environment (with a priori knowledge from a given map), while at the same time keeping track of the current location of the robots in the environment.
## Some possible SLAM applications
• Oil pipeline inspection
• Ocean surveying and underwater navigation
• Mine exploration
• Coral reef inspection
• Military applications
• Crime scene investigation
• Earthquake surveillance procedures
## Requirements
In order to implement SLAM, you need a mobile robot and a range measurement device, such as a laser scanner, a sonar device or a camera (visual SLAM).
## How to implement SLAM
The goal of the SLAM process is to use the data obtained by the robot’s ranging sensors in order to update the position of the robot and the map of the world around it. A very basic localisation process consists of a numbers of steps:
1. Move
2. Estimate position (odometry)
3. Sense features
4. Map and localization update
This is indeed a naive approach, because robot odomety of an unknown environment is very error-prone and the result of such an approach would be an inconsistent map. Therefore as an corrected version, SLAM uses a probabilistic approach consisting of the following steps:
1. Move
2. Predict position (odometry)
3. Sense features
4. Recognize landmarks (data association) ⇒ loop closure
5. Correct position (probability theory)
In this new approach, the ranging sensor data and odometry data are combined to correct the perception of the robot about its location and the position of environmental feature.
This prediction and correction process is described with the aid of some figures below:
In the figures, the robot is represented by the triangle. The stars represent landmarks (environmental features). The robot initially measures the location of the landmarks, using its sensors (sensor measurements illustrated by the ‘lightning’).
The robot moves. Based on robot odometry, the robot thinks that is it located here.
Once again the robot measures its distance to the landmarks using its range measurement sensors. What if the odometry and sensor measurements don’t match? The answer is: The robot is not at the location where it thinks it is (odometry error).
The robot has more trust in sensor data than in odometry. So, the new location of the robot considering the landmarks distances is here. (The dashed triangle is where the robot thought it was using the odometry data).
The actual location of the robot is here. You can see that the sensors are not perfect but their measurements are more reliable than odometry. The lined triangle is the actual position of the robot, the dotted triangle is its estimated location based on the sensor data and the dashed triangle is its estimated location based on odometry.
## Visual SLAM
At DoBots we are mostly interested in projects involving Visual SLAM, where the range measurement sensor is simply a camera. Using camera as range measurement device has some advantages such as:
• It is fast.
• It has longer range than many other sensors.
The basic procedure of Visual SLAM is described in the following figures:
The camera localization $y_i^C$ depends on three parameters: camera orientation and camera position (which are given by odometry) and the range measurements (which is provided by the camera). The yiW
is the camera distance from the landscape from measurement $z_i$.
### Predictor-Corrector
At any moment(t) the robot localization can be predicted by the robot location at(t-1), the robot orientation and the robot position. This prediction will be corrected by the aid of range measurements as it is shown in below figure:
Three stages of the SLAM(Prediction, Observartion and Update) need to be done based on a recursive algorithm which can derive the Robot state from the noisy sensor data. One the best known algorithms which is implemented to solve the SLAM problem is Kalman filter.
In the following links you can see some SLAM projects realization, done by different research groups.
## Videos
The following YouTube videos give a nice insight into using visual SLAM: | 2018-03-19 18:11:00 | {"extraction_info": {"found_math": true, "script_math_tex": 2, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2919918894767761, "perplexity": 1875.8217600465528}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647044.86/warc/CC-MAIN-20180319175337-20180319195337-00258.warc.gz"} |
https://www.gamedev.net/forums/topic/191473-how-to-publishmy-game/ | • Announcements
Archived
This topic is now archived and is closed to further replies.
how to publish..my game????
Recommended Posts
well firstly assume that i''ve worked over 2 years and developed a rpg game that is similiar to diablo 2 but a little more attractive and complex one... and then secondly know that i did it alone as a lonewolf hunted his prey.and know nothing about how to publish it ... and at last show me a way to publish it.i am trying to make a decide.will there be a shareware version what will be its price (how can i make my money???)blaa bla blaa... but also gimme the net adresses of the companies that you offer me.how can i apply them? will i have to show my source code just to apply them? or will it be more profitable to publish it from net.when i publish it from net with which ways can i take the price from my costumers?? finally i would be grateful if you help me and be my pionner to publish my game last note:: do you know how much did blizzard earned from diablo2? well i have to make it my own way....
Share on other sites
There''s a web-site call eGameZone where you can submit your games for online publication. A shareware demo is made available for people to download to try out your game or they can buy the full version if they thinknit''s good enough:
http://www.egamezone.net/
Share on other sites
quote:
do you know how much blizzard made from diablo2?
easy to calculate. go to your favorite software retailer, take a look at the diablo2 box, which should have a sticker saying "3 million copies sold." Add 500,000 to that and multiply that by 30.00-40.00. Viola.
Blizzard didn''t make $35 on each copy sold. Firdt off the developer typically doesn''t get a big cut if they go through a standard publisher distribution scheme (which Diablo2 was). Next thing to consider is how many of those sales were full price and how many were as part of the bundles and discount versions. In any case, Blizzard made tons of money on it. Share this post Link to post Share on other sites diablo2 was 49.99 full price...so thats why i point it in the 30.00-40.00 range :-). and you''re right about the publishing thingy. but still...still ton of money Share this post Link to post Share on other sites If the games really done and you don''t need any funding, then find yourself a nice distributor (which differs from a publisher in that theirs no funding involved). I''d say they''d take about a 30% cut of the profits. Get your product nicely packaged and shipped out to retail stores. Share this post Link to post Share on other sites Also remember, even though diablo made oodles of dough in 1996 or whatever, if it were to come out right now, it probably wouldnt sell nearly as many copies simply because the standard of gaming keeps going up. So if you have a diablo clone (yes i realized you said it looks better), I wouldnt expect to become a millionaire, although I wish you the best of luck. Can we see a screenshot or three? Share this post Link to post Share on other sites He said assuming. The price tag$35. Was probably sold to the retailer for $25-$28 depending on how many the retailer bought in one shipment. Subtract away shipping costs, Cost of 3 CDs, manual(Forgot whether it had one), box packaging(The box was nicely done) and if i remember a poster and other costs to bring the product to the retailer. What you get is like $15-$20. Of this i can expect the publisher to take atleast 70% since they funded the game and marketed it(I believe marketing costs are often as high if not higher than the development costs in the US, i might be wrong though). So on average Blizzard made around $5 per copy sold. Makes you think distributing shareware games through the internet might be more effective. And i don't think the game was made cheap. All the Acts had highly detailed prerendered backgrounds, and the CGI videos made my jaw drop when i saw them. They might still be one of the best CGI's for videogames on the market(WC3 included). Sometimes i think the effort Blizzard puts into its videos might be more than what they spend on the game itself. Oh and lets not forget all the years Blizzard has been providing free Battle.NET services for all these years. Bandwidth might not be much of a price issue in the USA(Though it can add up to quite abit considering the number of people online at any 1 time * 8 years). But in Asia it becomes more expensive by a factor ranging from 2-20 depending on the country. [edited by - GamerSg on November 17, 2003 10:33:42 AM] Share this post Link to post Share on other sites i find it hard to believe you could be smart enough to make a game this good yet have no idea about the industry. Share this post Link to post Share on other sites of course i didnt made the game i just wanted to know how i will make ready my game to the first hand costumer( this is a streotyped word in my language "first hand costumer.") i dont know its synonymous in english"... well but they sold the game for 35-40 dollar and just take 5 dollar for each copy of the game this is terrifing..!! well i have to make it my own way.... okunu hedefinden uzaða atan okçu okunu hedefine atamýyan okçudan baþarýlý deðildir(turkish proverb) it means the archer who throwed his arrow beyond his target is not succesful more than the one who even didnt reached his arrow to this target.... Share this post Link to post Share on other sites quote: Original post by Anonymous Poster i find it hard to believe you could be smart enough to make a game this good yet have no idea about the industry. Why not? Knowledge of 3D graphics does not magically give you knowledge of the business procedures of publishing a game. James Simmons MindEngine Development http://medev.sourceforge.net Share this post Link to post Share on other sites quote: Original post by Cipher3D Viola. A small stringed instrument? Eh? I think you need this. Share this post Link to post Share on other sites quote: Original post by OrangyTang quote: Original post by Cipher3D Viola. A small stringed instrument? Eh? I think you need this. Thats a Typo, if you just switch 2 letter then you''ll have Voila and I believe you''ll find it it more appropriate Share this post Link to post Share on other sites quote: Original post by Turt99 Thats a Typo, if you just switch 2 letter then you''ll have Voila and I believe you''ll find it it more appropriate I know what he intended, but its a worrying trend that lots of americans don''t know the difference. o_O Share this post Link to post Share on other sites quite amusing...as I do play the Viola..haha$5 per copy? quite reasonable...they stiill made a lot of dough from it.
5*3 million ain''t bad guys...thats more than what my mom makes..(40,000 per year) :-D
Share on other sites
quote:
quote:
--------------------------------------------------------------------------------
Original post by Turt99
Thats a Typo, if you just switch 2 letter then you'll have Voila and I believe you'll find it it more appropriate
--------------------------------------------------------------------------------
I know what he intended, but its a worrying trend that lots of americans don't know the difference. o_O
How do you know he is American? This is an international forum. "Its" possible that he's English.
--
Dave Mikesell Software & Consulting
[edited by - dmikesell on November 18, 2003 9:34:13 AM]
Share on other sites
A distributor won''t take 70% of the profits, only a publisher will. Theres a big difference between the two.
Thats why, if your game is done and you require no funding, use a distributor.
Share on other sites
if i have a funding problem i must find a distrubutor .
but what is the diffrences betwwen the two what the publisher doing more than the distrbutor to take % 70 of the money???
well i have to make it my own way....
okunu hedefinden uzaða atan okçu okunu hedefine atamýyan okçudan baþarýlý deðildir(turkish proverb)
it means the archer who throwed his arrow beyond his target is not succesful more than the one who even didnt reached his arrow to this target....
Share on other sites
Ddevelopers make very little of the $44.99. I''m talking$5 or $10 max. This is partly the reason why Valve is implementing Steam, if they can self-publish their games, think of what that means to the industry (and what it means to the publishing industry :D). If Steam breaks though and does successfully (which it wont if they dont fix the bugs), it''ll be interesting to see how many other developers go the same route. By the way, doesn''t that game Savage self-publish their game? They had it on their website for$40 to download it, that''s awesome! So much easier than going to the store and getting all that crap that I usually throw away (box manual etc.).
As for the original poster, there''s a whole bunch of companies that might take on it, realarcade.com...and...., ok I''m drawing a blank, but there''s tons of places that publishes independent games.
Share on other sites
Blizzard is also a publisher. Even if it is not a publisher the rates will not be at %70, because Blizzard is a reliable producer( No one can think Blizzard will be unsuccessful) The %70 rates are only to the new established companies which has great possibility of being unsuccessful. The rates will drop with the decrease of risk.
Selam hoca...
• Forum Statistics
• Total Topics
628275
• Total Posts
2981743
• 10
• 11
• 17
• 10
• 9 | 2017-11-17 18:07:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21368670463562012, "perplexity": 3681.3690626544344}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934803848.60/warc/CC-MAIN-20171117170336-20171117190336-00739.warc.gz"} |
http://www.linzhoukai.com/?p=37 | delete\update where… in语句 导致死锁问题分析
1、问题描述
2、问题分析
delete from b where (a,b,c,d,e) in ((1,1,1,1,1));
delete from b where (a,b,c,d,e) in ((1,1,2,1,1));
delete from b where where a=1 and b=1 and c=1 and d=1 and e=1;
delete from b where where a=1 and b=1 and c=2 and d=1 and e=1;
3、解决办法
4、半一致性读仍会死锁问题
6、源码分析 | 2023-03-25 14:15:14 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9199772477149963, "perplexity": 3714.7871681329307}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945333.53/warc/CC-MAIN-20230325130029-20230325160029-00554.warc.gz"} |
https://global-sci.org/intro/article_detail/jcm/9779.html | Volume 34, Issue 1
Strong Predictor-Corrector Methods for Stochastic Pantograph Equations
J. Comp. Math., 34 (2016), pp. 1-11.
Published online: 2016-02
Preview Full PDF 953 5891
Export citation
Cited by
• Abstract
The paper introduces a new class of numerical schemes for the approximate solutions of stochastic pantograph equations. As an effective technique to implement implicit stochastic methods, strong predictor-corrector methods (PCMs) are designed to handle scenario simulation of solutions of stochastic pantograph equations. It is proved that the PCMs are strong convergent with order $\frac{1}{2}$. Linear MS-stability of stochastic pantograph equations and the PCMs are researched in the paper. Sufficient conditions of MS-unstability of stochastic pantograph equations and MS-stability of the PCMs are obtained, respectively. Numerical experiments demonstrate these theoretical results.
• Keywords
Stochastic pantograph equation, Predictor-corrector method, MS-convergence, MS-stability.
• AMS Subject Headings
60H10, 65C20.
fyxiao@mailbox.gxnu.edu.cn (Feiyan Xiao)
pwang@jlu.edu.cn (Peng Wang)
• BibTex
• RIS
• TXT
@Article{JCM-34-1, author = {Xiao , Feiyan and Wang , Peng}, title = {Strong Predictor-Corrector Methods for Stochastic Pantograph Equations}, journal = {Journal of Computational Mathematics}, year = {2016}, volume = {34}, number = {1}, pages = {1--11}, abstract = {
The paper introduces a new class of numerical schemes for the approximate solutions of stochastic pantograph equations. As an effective technique to implement implicit stochastic methods, strong predictor-corrector methods (PCMs) are designed to handle scenario simulation of solutions of stochastic pantograph equations. It is proved that the PCMs are strong convergent with order $\frac{1}{2}$. Linear MS-stability of stochastic pantograph equations and the PCMs are researched in the paper. Sufficient conditions of MS-unstability of stochastic pantograph equations and MS-stability of the PCMs are obtained, respectively. Numerical experiments demonstrate these theoretical results.
}, issn = {1991-7139}, doi = {https://doi.org/10.4208/jcm.1506-m2014-0110}, url = {http://global-sci.org/intro/article_detail/jcm/9779.html} }
TY - JOUR T1 - Strong Predictor-Corrector Methods for Stochastic Pantograph Equations AU - Xiao , Feiyan AU - Wang , Peng JO - Journal of Computational Mathematics VL - 1 SP - 1 EP - 11 PY - 2016 DA - 2016/02 SN - 34 DO - http://doi.org/10.4208/jcm.1506-m2014-0110 UR - https://global-sci.org/intro/article_detail/jcm/9779.html KW - Stochastic pantograph equation, Predictor-corrector method, MS-convergence, MS-stability. AB -
The paper introduces a new class of numerical schemes for the approximate solutions of stochastic pantograph equations. As an effective technique to implement implicit stochastic methods, strong predictor-corrector methods (PCMs) are designed to handle scenario simulation of solutions of stochastic pantograph equations. It is proved that the PCMs are strong convergent with order $\frac{1}{2}$. Linear MS-stability of stochastic pantograph equations and the PCMs are researched in the paper. Sufficient conditions of MS-unstability of stochastic pantograph equations and MS-stability of the PCMs are obtained, respectively. Numerical experiments demonstrate these theoretical results.
Feiyan Xiao & Peng Wang. (2019). Strong Predictor-Corrector Methods for Stochastic Pantograph Equations. Journal of Computational Mathematics. 34 (1). 1-11. doi:10.4208/jcm.1506-m2014-0110
Copy to clipboard
The citation has been copied to your clipboard | 2022-01-24 12:54:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22750116884708405, "perplexity": 4798.897711620819}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304570.90/warc/CC-MAIN-20220124124654-20220124154654-00006.warc.gz"} |
https://labs.tib.eu/arxiv/?author=S.%20Henderson | • ### Issues and R&D Required for the Intensity Frontier Accelerators(1409.5426)
Sept. 18, 2014 physics.acc-ph
Operation, upgrade and development of accelerators for Intensity Frontier face formidable challenges in order to satisfy both the near-term and long-term Particle Physics program. Here we discuss key issues and R&D required for the Intensity Frontier accelerators.
• ### Project X: Broader Impacts(1306.5024)
July 12, 2013 physics.acc-ph
Part-3 of "Project X: Accelerator Reference Design, Physics Opportunities, Broader Impacts". The proposed Project X proton accelerator at Fermilab, with multi-MW beam power and highly versatile beam formatting, will be a unique world-class facility to explore particle physics at the intensity frontier. Concurrently, however, it can also facilitate important scientific research beyond traditional particle physics and provide unprecedented opportunities in applications to problems of great national importance in the nuclear energy and security sector. Part 1 is available as arXiv:1306.5022 [physics.acc-ph] and Part 2 is available as arXiv:1306.5009 [hep-ex].
• ### The Advanced Superconducting Test Accelerator (ASTA) at Fermilab: A User-Driven Facility Dedicated to Accelerator Science \& Technology(1304.0311)
April 1, 2013 physics.acc-ph
Fermilab is currently constructing a superconducting electron linac that will eventually serve as the backbone of a user-driven facility for accelerator science. This contribution describes the accelerator and summarizes the enabled research thrusts. A detailed description of the facility can be found at [\url{http://apc.fnal.gov/programs2/ASTA_TEMP/index.shtml}].
• ### Background Rejection in the DMTPC Dark Matter Search Using Charge Signals(1301.5685)
The Dark Matter Time Projection Chamber (DMTPC) collaboration is developing a low pressure gas TPC for detecting Weakly Interacting Massive Particle (WIMP)-nucleon interactions. Optical readout with CCD cameras allows for the detection of the daily modulation of the direction of the dark matter wind. In order to reach sensitivities required for WIMP detection, the detector needs to minimize backgrounds from electron recoils. This paper demonstrates that a simplified CCD analysis achieves $7.3\times10^{-5}$ rejection of electron recoils while a charge analysis yields an electron rejection factor of $3.3\times10^{-4}$ for events with $^{241}$Am-equivalent ionization energy loss between 40 keV and 200 keV. A combined charge and CCD analysis yields a background-limited upper limit of $1.1\times10^{-5}$ (90% confidence level) for the rejection of $\gamma$ and electron events. Backgrounds from alpha decays from the field cage are eliminated by introducing a veto electrode that surrounds the sensitive region in the TPC. CCD-specific backgrounds are reduced more than two orders of magnitude when requiring a coincidence with the charge readout.
• The Proceedings of the 2011 workshop on Fundamental Physics at the Intensity Frontier. Science opportunities at the intensity frontier are identified and described in the areas of heavy quarks, charged leptons, neutrinos, proton decay, new light weakly-coupled particles, and nucleons, nuclei, and atoms.
• ### Dark Matter Time Projection Chamber: Recent R&D Results(1109.3270)
Sept. 19, 2011 astro-ph.IM
The Dark Matter Time Projection Chamber collaboration recently reported a dark matter limit obtained with a 10 liter time projection chamber filled with CF4 gas. The 10 liter detector was capable of 2D tracking (perpendicular to the drift direction) and 2D fiducialization, and only used information from two CCD cameras when identifying tracks and rejecting backgrounds. Since that time, the collaboration has explored the potential benefits of photomultiplier tube and electronic charge readout to achieve 3D tracking, and particle identification for background rejection. The latest results of this effort is described here.
• ### Background Rejection in the DMTPC Dark Matter Search Using Charge Signals(1109.3501)
The Dark Matter Time Projection Chamber (DMTPC) collaboration is developing low-pressure gas TPC detectors for measuring WIMP-nucleon interactions. Optical readout with CCD cameras allows for the detection for the daily modulation in the direction of the dark matter wind, while several charge readout channels allow for the measurement of additional recoil properties. In this article, we show that the addition of the charge readout analysis to the CCD allows us too obtain a statistics-limited 90% C.L. upper limit on the $e^-$ rejection factor of $5.6\times10^{-6}$ for recoils with energies between 40 and 200 keV$_{\mathrm{ee}}$. In addition, requiring coincidence between charge signals and light in the CCD reduces CCD-specific backgrounds by more than two orders of magnitude.
• ### DMTPC: Dark matter detection with directional sensitivity(1012.3912)
Dec. 17, 2010 astro-ph.CO, astro-ph.IM
The Dark Matter Time Projection Chamber (DMTPC) experiment uses CF_4 gas at low pressure (0.1 atm) to search for the directional signature of Galactic WIMP dark matter. We describe the DMTPC apparatus and summarize recent results from a 35.7 g-day exposure surface run at MIT. After nuclear recoil cuts are applied to the data, we find 105 candidate events in the energy range 80 - 200 keV, which is consistent with the expected cosmogenic neutron background. Using this data, we obtain a limit on the spin-dependent WIMP-proton cross-section of 2.0 \times 10^{-33} cm^2 at a WIMP mass of 115 GeV/c^2. This detector is currently deployed underground at the Waste Isolation Pilot Plant in New Mexico.
• ### First Dark Matter Search Results from a Surface Run of the 10-L DMTPC Directional Dark Matter Detector(1006.2928)
Dec. 9, 2010 hep-ex, astro-ph.IM
The Dark Matter Time Projection Chamber (DMTPC) is a low pressure (75 Torr CF4) 10 liter detector capable of measuring the vector direction of nuclear recoils with the goal of directional dark matter detection. In this paper we present the first dark matter limit from DMTPC. In an analysis window of 80-200 keV recoil energy, based on a 35.7 g-day exposure, we set a 90% C.L. upper limit on the spin-dependent WIMP-proton cross section of 2.0 x 10^{-33} cm^{2} for 115 GeV/c^2 dark matter particle mass.
• ### The case for a directional dark matter detector and the status of current experimental efforts(0911.0323)
Nov. 1, 2009 astro-ph.CO | 2021-03-04 07:15:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6619563698768616, "perplexity": 3684.0710421290587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178368608.66/warc/CC-MAIN-20210304051942-20210304081942-00478.warc.gz"} |
https://stats.stackexchange.com/questions/530041/how-to-recover-estimates-on-original-scale-in-log-linear-model | # How to recover estimates on original scale in log-linear model?
If fitting a linear model to a untransformed and log-transformed y variable, can anyone explain why the coefficients are different from the log-transformed model even after exponentiating the coefficients?
Here's a simple R example
library(tidyverse)
# Set seed
set.seed(1)
# Make data
x <- rnorm(100, 5, 1)
y <- x + rnorm(100, 5, 1)
data <- cbind.data.frame(y, x)
# Fit on original scale
summary(lm(y ~ x))
# Fit on log scale
summary(lm(log(y) ~ x))
I was expecting the exponentiated coefficients from the second model to exactly align with the coefficients from the first model and this isn't the case.
• Why? Your first model is of the form $\hat y=a+bx$ while your second is closer to $\hat y=c x^d$ if you exponentiate the coefficients. These are very different models – Henry 2 days ago
• Shouldn't it be easy to transform the parameters between models? I'm confused about why I can't log transform a variable and then use an exponent to get results back on the original scale. – andy_d 2 days ago
• Are you presuming that if $y = a + bx$ is fitted by your first lm() call, then $\ln y = \ln a + (\ln b) x$ is fitted by second call? Logarithms don't work like that. In addition, your first simulation makes negative or zero y\$ unlikely but not impossible, which is not compatible with taking logarithms, and if errors are normal on the original scale, the second call applies an inappropriate estimation method. – Nick Cox 2 days ago | 2021-06-12 10:53:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6632834672927856, "perplexity": 1581.311342786546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487582767.0/warc/CC-MAIN-20210612103920-20210612133920-00419.warc.gz"} |
http://mathhelpforum.com/discrete-math/228406-onto-composition-proof.html | # Math Help - onto and composition proof
1. ## onto and composition proof
I've got this review problem:
Let f: X -> Y and g: Y -> Z be functions such that gf: X -> Z is onto. Prove that g must be onto.
When I draw it out on paper, it seems quite intuitive: if g is not onto, then there is a "connection" that can't be made between X and some member of Z. But I'm having trouble thinking of how to formally express that...
2. ## Re: onto and composition proof
Originally Posted by infraRed
I've got this review problem:
Let f: X -> Y and g: Y -> Z be functions such that gf: X -> Z is onto. Prove that g must be onto.
When I draw it out on paper, it seems quite intuitive: if g is not onto, then there is a "connection" that can't be made between X and some member of Z. But I'm having trouble thinking of how to formally express that...
Suppose that g is not onto. Then $\exists z_0 \in Z \ni g(y) \neq z_0~\forall y \in Y$
Does $gf(x) = z_0$ for some $x \in X$ ? Can $gf(\cdot)$ thus be onto? | 2014-12-29 01:58:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.692872166633606, "perplexity": 503.9602555018984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447560397.30/warc/CC-MAIN-20141224185920-00040-ip-10-231-17-201.ec2.internal.warc.gz"} |
http://www.taylortree.com/2006/ | ## Sunday, December 24, 2006
### Happy Holidays!
Twas the night before Christmas, when all through the house
Not a creature was stirring, well, except for TaylorTree.
I wish you and your family a wonderful Holiday Season!
MT
## Sunday, December 17, 2006
### Quote of the Week - Know Thyself
Boy: "Do not try and bend the spoon. It is impossible. Instead, only try to realize the truth."
Neo: "What truth?"
Boy: "There is no spoon."
Neo: "There is no spoon?"
Boy: "Then you'll see that it is not the spoon that bends...it is only yourself."
-- from the Matrix (one of my all-time favorite movies)
Hope all is well. I'm as busy as a bee...buzz, buzz, buzz.
MT
## Wednesday, December 06, 2006
### Thread of the Week - Stock Distributions
Eric Crittenden shared an interesting study of stock distributions over at the Trading Blox Forum.
How many of you look at the Annual Compounded Returns graph and immediately think...man, I gotta get me some of those 3,000 plus 10% to 20% returns! If I can just find an edge, a better indicator, profit targets, something to capture them. Work it like a Casino, baby!
How many view the graph and have the 344 100% or more returns catch your eye? Or better yet...stare in amazement at the Terminal Wealth Relative graph and its 2,000 plus returns of 500% or more. Count me in that camp.
This study really confirms what the market is all about. Unlimited gains and limited losses. If you time the market or cap your profits in order to capture and/or protect those small gains...you'll...as Eric says...
"virtually guarantee to participate fully in the left side of the distribution and not in a positive way."
Really after giving this study more thought...it seems after you set yourself up for success via capturing the right side of the distribution...it is then just a matter of managing risk. Right? And not from the sense of your initial risk in the stock via volatility based position sizing. But, from maintaining a certain risk profile throughout the entirety of the trade. As these positions move further in your favor...I would assume their risk profiles could differ greatly from the original risk set forth.
Again, interesting study. Thanks Eric!
MT
## Monday, December 04, 2006
### Quote of the Week
Caston glared. "Observation selection effects are totally commonplace. At the supermarket, have you ever noticed how often you find yourself in the longer checkout lane? Why is that? Because those are the lines with the most people in them. Let's say I told you that Mr. Smith, about whom you knew nothing at all, was standing in one of those checkout lines, and you had to predict which one, based only on knowing how many people were in each line."
"There'd be no way to know."
"But inference is about probabilities. And the most probable outcome, obviously, is that he's in the line with the most people in it." Once you step back and consider yourself from an outsider's perspective, it becomes self-evident. The slowest traffic lane is the one with the most cars in it. The laws of probability say that any given driver is most likely to be in that lane. That means you. It's not bad luck or delusion that makes you think the other lanes of traffic are going faster. More often than not, they are going faster."
Great quote from a great book, The Ambler Warning by Robert Ludlum.
If you haven't read it...you should.
The book, while not about investing or the market, contains two fictional characters who fit well with characters in the investment world. One of the characters lives by gut feel alone. Instinct. The other...100% logic, statistics, probabilities, just the facts ma'm. Interesting to see the development of these characters and how they find common ground.
MT
## Friday, December 01, 2006
### First Snow!
Received our first Snow of the year and it is wonderful! For a Texas boy who has never been around snow before...I feel like a kid at Christmas. Fun stuff.
MT
## Friday, November 24, 2006
### Aquamarine Fund Diary, Buffett, and Owning a Business
You can tell from the volatility breakout in my blog posts...that I have some time on my hands. :)
Found some great posts by the Aquamarine Fund Diary. Here's a post on Warren Buffett and the Chicago Graduate School of Business - trip to Omaha. I really liked the following bullets:
• Associate with people who are better than you. Marry up, employ up,
work for your heroes. Associations rub off. Tell me your heroes, I’ll
tell you how you’ll turn out. People in the room (us) have IQ, energy,
and smarts to burn. No bad results will be due to deficiencies in this
area.
• Take one hour. Think of the one classmate who you’d like to own 10%
of for the rest of their life. 10% of all of their future income. What
do you think about? The person who others admire and want to work with.
Person who works hard and gives others credit. It’s simple. Select
those qualities for yourself.
• "Now the fun part” who would you want to short? The guy who turns other people off.
I also liked this interview of Tom Murphy in The Wisdom of Tom Murphy. In the full interview Murphy encourages
"people, particularly those who are young but also experienced enough to know what's going on, to try starting a business because the rewards of being your own boss are wonderful."
Okay, readers...if you own your own business...give up the goods on how you came to the realization you wanted to run a business instead of working for whatever company you were working for. Why did you feel the risk was worth taking? And what business did you choose...something you were familiar with? Something new?
And for those working for others...if you have thoughts of running a business someday...what type of business would you like to run? And why?
MT
## Thursday, November 23, 2006
### Quote of the Week - We're Just Ants!
"If you study an ant colony, you will find it has a life cycle — it’s robust, it’s adaptive. However, if you ask any individual ant what’s going on, they have no clue. They’re working with local information and local interaction. I think there’s a very clear parallel to markets. How do markets get to be efficient? The answer is it’s an interaction among a lot of diverse investors. The aggregation mechanism to bring the information together is the stock exchange, and then what emerges from that is the stock market.
The important takeaway is it’s impossible to understand the market by interviewing individual investors because each investor only has a partial piece of the picture. It’s the aggregation that allows the full picture to emerge. What the ant colonies teach us is that in markets, cause and effect are very difficult to pin down. Sometimes we like to think that the experts on TV or the pundits quoted in the Wall Street Journal know what’s going on. They’re really just ants." -- Michael Mauboussin
The above quote comes from Mauboussin's article, "Guppies, ants, and golf swings: Mental models for investors." This quote really defines the methodology I have adopted in trading. Forget how you feel about the stock. It doesn't matter. Forget how you feel about the market...it doesn't matter. Who cares if we're in a housing bubble, USD is going lower, inflation, deflation, yakkity-yak...don't come back. The only thing that matters is what the market thinks.
The market is really just a glorified voting system. You may believe Google, Starbucks or Milli Vanilli is road kill. But, if the vast majority of participants believe it's the next best thing...then it is. I know, I know...you know better....but you're just one vote...amongst millions of voters. Know matter how strongly you feel about something...it's just a drop in the bucket.
So, why fight it? I ask's ya's!?! Just go with it. Embrace your inner ant.
MT
### Happy Thanksgiving!!!
Just want to wish everyone a Safe and Happy Thanksgiving! This will be our first Thanksgiving in Missouri and we're really looking forward to it. Plus, it will be our son's first Thanksgiving.
The best part is I have 4 full days of complete rest ahead of me. Definitely time for catching a movie or two. And many other things I've put off for far too long.
I leave you with a rather interesting thread over at the Trading Blox Forum on Pre-emptive money management. Provides food for thought on volatility based position sizing. Is volatility predictable? If so, should we adjust our position size to anticipated changes in volatilty? Since our current position size is based on historical volatility.
This also begs the question as to the use of historical volatility in position sizing. Is past volatility a good measure of future volatility? Should we use a shorter time frame for measurement? Or longer? Or weight the average? Seasonality may be a poor choice to predict changes in price but what about changes in volatility?
Is all this getting way to complicated? Would we instead be better off just randomly choosing a number? And one more question...when do your best returns occur? During periods of high volatility? Or low? How about your worst returns?
As always, something to explore.
MT
## Wednesday, November 15, 2006
"There's a great story about a famous local trader at the Chicago Board of Trade (CBOT). One day, he was on the floor of the CBOT and a U.S. inflation number came out that was totally unexpected. Pure pandemonium ensued. When all the noise died down, he walked out of the pit having made $10 million and said, "By the way, what was the number?" -- humorous story shared by Dr. John Porter in the book, Inside the House of Money It's official...the new trading system is in production. A week earlier than the deadline. Much of the work was done in R. In fact, a few times in the project, I just don't know what I'd do without the fantastic little language. Python and Ruby was used as well. Along with Wealth-Lab. That's it from my part of the world...where I'm eagerly anticipating my first Missouri snowfall. And enjoying the cool vlog, WallStrip. It's the first stock market show I've seen where the highlighted stocks are chosen from a valid investing concept. Who'd a thunk it? Smart! Later Trades, MT ## Monday, November 06, 2006 ### Quote of the Week "Implied volatility is based on historical volatility, but who cares about historicals? They're irrelevant. The point is, things can happen for the first time that aren't in your distribution so they can't be priced. If it's never happened before, how can you hedge yourself? The only way to hedge the unknown is to cut off tail risk completely." -- Jim Leitner from the interview in Inside the House of Money MT ## Sunday, October 29, 2006 ### Quote of the Week "All these years I had been sustained by an illusion - happiness through victory - and now that illusion was blurred to ashes. I was no happier, no more fulfilled, for all my achievements. Finally I saw through the clouds. I saw that I had never learned how to enjoy life, only how to achieve. All my life I had been busy seeking happiness, not finding it." -- Dan Millman's character in the Way of the Peaceful Warrior. MT ## Thursday, October 19, 2006 ### Quote of the Week "In times of change, learners inherit the Earth, while the learned find themselves beautifully equipped to deal with a world that no longer exists." -- Eric Hoffer Sorry everyone for the lack of posts or response to emails. I'm trying to meet a November 1st deadline for a new trading system. And between that and the entire family being sick from a nasty little cold bug...well I haven't been up for much else. I do appreciate your patience...and hope to get back to the normal routine soon. In fact, once I push this trading system to production...I plan on taking a nice long break from the trading system turret. Catch a few breaths before my next run. Ha ha! Until then, hop on over to the StockTickr Blog and read Jon Tait's interview. One of the best interviews yet from StockTickr. Jon's a smart cookie and shares some great insights into system trading and market behavior. Later Trades, MT ## Tuesday, October 03, 2006 ### Quote of the Week "Many questions are unanswerable. Many answers are questionable." -- from a fortune cookie Wow, the above quote is so true. I have dug a little deeper into everything I have worked on the past several years. Calling into question my beliefs and attitudes towards the market. I was so wrong. It all started from a seed that Eric Crittenden planted into my head. "Sounds like an exercise in curve-fitting", he said, in reference to one of my system ideas. Then a friend introduced me to the concept of focusing on what you don't like to do and casting it aside in order to free yourself for the things you do like to do. The butterfly began to flutter... "It has been said something as small as the flutter of a butterfly's wing can ultimately cause a typhoon halfway around the world." - Chaos Theory Next, I watched the recent show on Sabermetrics where they discussed Bill James and many of the very cool things brought out in Moneyball. Flutter, flutter. Finally, my recent foray into the hazards and pennywinks of developing a trading platform has brought out a very interesting focus to my trading. What would I like in a platform? What am I really trying to test? How is a certain test helpful to my bottom-line? And all that has helped me to understand what I've been missing. I've been focusing on the wrong thing! So much of my time was spent on my next trade. Kinda like in Sabermetrics where they found too much focus was on RBI's or Homeruns. Bill James found Outs was where the focus should be. And I think in trading...the focus should be on the only fixed rule that I know exists: If you don't use margin...your losses are limited to 100%. But, as long you don't cap your profits in any major way...your gains are infinite. With that in mind, where should your focus be? And what kind of formulas and tools can we use to measure this new focus? For example, the smoothness the Sharpe Ratio tries to show becomes something of a throw-away...a tool/formula used to measure the wrong focus in your trading. Don't understand? Maybe this will help. Or maybe not: http://www.fooledbyrandomness.com/0603_coverstory.pdf Later Trades, MT ## Friday, September 22, 2006 ### Quote of the Week - Kaizen "The most important choice you make is what you choose to make important" -- Michael Neill I had coffee with a friend today that brought up an interesting topic. He said, instead of thinking about all the things you like to do or would like to do and persuing them. Step back a moment and think about all the things you do not like to do...and stop doing them. A lot to chew on for yours truly. First off, because it is very hard for me to think about what I don't like to do. Perhaps because I've spent so much time and effort in determining what I like to do? Or maybe I don't like to admit there are things I don't like to do? Reminds me of Kaizen. Eliminating activities that add cost and do not add value in an effort to continuously improve. We could all use that, right? So, here I am thinking about what I don't like about investing/trading. 1) I don't like nothingness. I don't mind drawdowns...at least something is happening. And of course, I love when I'm reaching new equity highs. But, I absolutely abhor nothingness. That period of time when your investments just sit there and do nothing. I don't like that. Which is a bad thing...since most of an investor's time is spent in nothingness. 2) I don't like gut feel investments. I want a precise method to follow that lets me know exactly when to buy and when to sell. Thus, the reason for developing trading systems. 3) I don't enjoy buying and selling stocks. I enjoy researching trading ideas and building systems around those ideas. But, the actual buying and selling of stocks is not fun for me. Would enjoy things much better if someone else traded my systems, so to speak. That's about it as far as my dislikes. Not too bad. One day I need to write what I like about investing/trading. Trading Platform Update: I've made lots of progress on the trading platform front. But, so much still to go. I'm spending equal time in Python and Ruby in this quest. My major roadblock right now is obtaining the most efficient way to process historical stock data against portfolio data. Most trading platforms process a symbol at a time. But, doing that prevents you from ranking all stocks triggered for a given day along with the currently held stocks and choosing the top 10, 20, etc. Because you'd have to read all symbols and all dates in order to get at a certain date for all symbols. So, I'm trying date processing instead. Spin through all the symbols for a given date instead of all dates for a given symbol. Doing this would enable me to rank, adjust, etc. prior to the next day of trade. But, going this route scares me due to performance issues. Maybe it won't be so bad. We will see. If anyone has ideas on this subject, please send my way. The main goal of the project is to avoid memory intensive methods. The reason? The trading systems I work with consists of all stocks in the US Markets going back 20 years or so. It crashes Wealth-Lab due to its memory method of position sizing despite 2GB of memory. I could buy more memory, but that would be too easy. :-) Later Trades, MT ## Friday, September 08, 2006 ### Quote of the Week - Dilbert? "I'm a great student of successful people, and usually at some point in their careers, they've had to take a huge risk. That used to cause a dull ache in my stomach. I still get it, but now I ignore it." -- Scott Adams, creator of Dilbert MT ### Fall Movies Wow, there is nothing like Fall movie lineups to get you going. And this season looks to be a good one. Here's just a few of the movies that caught my eye and will likely see... We Are Marshall & Facing the Giants - Nothing like football movies in the Fall. A Guide to Recognizing Your Saints - Returning home again? The Departed - Good Cop? Bad Cop? Fearless - Jet Li...Martial Arts...need I say more? Enjoy the weekend! MT ## Wednesday, September 06, 2006 ### Quote of the Week - Mindsets "Often when you mention risk, what people think of is the downside. Danger. That's not the entrepreneurial mind-set," she said. "The entrepreneurial mind-set is that risk is the heightened probability that there is a big range of possible outcomes." -- Heidi Roizen The above quote is from Money.com's recent series on What it takes to be rich. I love the story describing growth mindsets versus fixed mindsets. Dweck, the psychologist who studies growth mind-sets, created an experiment to demonstrate how persistence and the pursuit of knowledge leads to success. She posed a series of trivia questions to a group of people with fixed mind-sets and another with growth mind-sets. After each answer, one and a half seconds passed before the participants were told whether they were right or wrong, and, if they were wrong, another one and a half seconds lapsed before they were given the correct response. Their brains were monitored with electrodes the entire time. Dweck found that the people with fixed mind-sets cared a lot about whether they were right or wrong but not at all about what the right answer was. The growth-mind-set participants stayed interested until the correct answer was given, showing an interest in learning new information rather than in simply validating their intelligence. more from Carol Dweck... People with fixed mind-sets believe that they were born with a certain amount of intelligence, and they strive to convince the world of their brilliance so that no one finds out they're not actually geniuses. Growth-mind-set people believe that intelligence, knowledge and skill need to be "cultivated" by trial and error. Failing at something, they believe, is the best way to ensure they'll succeed at it the next time. This growth mindset versus fixed mindset sounds so interesting...I just might have to go out and read her new book: Follow along with the Money.com's series here... Lesson 1: Make your own luck Lesson 1, Corollary 1: Building 'social capital' often pays off in the end. Lesson 2: Failing at something is the best way to ensure success at it the next time. Lesson 2, Corollary 1: Successful people are always on the look out for new experiences that they can later build on. Lesson 2, Corollary 2: If you see an opportunity, take it. But that doesn't mean betting the ranch. Later Trades, MT ## Tuesday, August 29, 2006 ### Quote of the Week - I Love Ruby! "It is not the responsibility of the language to force good looking code, but the language should make good looking code possible." -- Yukihiro Matsumoto I just discovered the power of Ruby!!! More later, MT ## Tuesday, August 22, 2006 ### Quote of the Week - Programming "And don't write longer, more obtuse code because you think it's faster. Remember, hardware gets faster. MUCH faster, every year. But code has to be maintained by programmers, and there's a shortage of good programmers out there. So, if I write a program that's incredibly maintainable and extensible and it's a bit too slow, next year I'm going have a huge hit on my hands. And the year after that, and the year after that. If you write code with more code that's fast now, you're going to have a hit on your hands. And next year, you're going to have a giant mess to maintain, and it's going to slow you down adding features and fixing bugs, and someone's going to come along and eat your lunch." -- Wil Shipley Great quote! Read more on this topic here. MT ### Development 0.1 "Be careful about using the following code -- I've only proven that it works, I haven't tested it." -- Donald Knuth I have finally started my dynamic allocation of equity project. This is something I've stewed about for several weeks...okay...maybe months. But, after meeting with Jon for lunch this weekend...I finally got the motivation back to begin work on the project. Thanks Jon! And seeing as how I hardly ever write anything of significance on this blog...I figure I'd start documenting some of the steps I'm taking to get this project on the road. First thing was to find a better coding environment than what I was using. I have been using the PythonWin IDE for my trials and tribulations. I needed more oomph. Hopped over to Vim and have hunted and pecked my way around a bit. No flow joe yet. Before moving on...does anybody know of a windows or even linux distro of the EVE$EDITOR? Somebody? Anybody? Hello?
Just a week ago, I found out about the new Pydev extension to Eclipse. Pretty nice. It's still not perfect...but much closer to what I'm looking for. So, now that I've found an IDE that allows me to play in the sandbox a bit...on to the database choice.
I downloaded pytables due to their "designed to efficiently and easily cope with extremely large amounts of data" claim to fame. And then did nothing with it. It's not the relational type of storage I'm used to...so maybe that's why. Thought maybe a viewer would help, so downloaded the vitables viewer. It was nice...but still did nothing with it.
Okay, maybe I'm making this too hard. One of the python programmers I know mentioned Sqlite. Downloaded it. Found the python extension for it here. Explored documentation for working with it here and here. Now, I'm getting somewhere. Wrote a few python modules to test create, insert, drop, and fetch. Here they are:
Create Table in Python/Sqlite:
******Begin of Code***********************
from pysqlite2 import dbapi2 as sqlite
conn = sqlite.connect("TaylorTree")
cursor = conn.cursor()
SQL = """
create table MarketDaily
(
Symbol text,
Bar SQL_DATE,
Open float,
High float,
Low float,
Close float,
Volume float,
primary key (Symbol, Bar)
);
"""
cursor.execute(SQL)
******End of Code***********************
Insert into Table:
******Begin of Code***********************
from pysqlite2 import dbapi2 as sqlite
conn = sqlite.connect("TaylorTree")
cursor = conn.cursor()
SQL = """
insert into MarketDaily
(Symbol, Bar, Open, High, Low, Close, Volume, AdjClose)
values
(
"YHOO",
20060801,
20.00,
25.00,
19.00,
22.00,
50000,
22.00
);
"""
cursor.execute(SQL)
conn.commit()
******End of Code***********************
Fetch from Table:
******Begin of Code***********************
from pysqlite2 import dbapi2 as sqlite
conn = sqlite.connect("TaylorTree")
cursor = conn.cursor()
SQL = "select * from MarketDaily"
cursor.execute(SQL)
# Retrieve all rows as a sequence and print that sequence:
print cursor.fetchall()
cursor.close()
******End of Code***********************
Drop Table:
******Begin of Code***********************
from pysqlite2 import dbapi2 as sqlite
conn = sqlite.connect("TaylorTree")
cursor = conn.cursor()
SQL = "drop table MarketDaily"
cursor.execute(SQL)
******End of Code***********************
Not too bad. Not too hard. But, then I figured I'd make a module that would handle all this stuff for me. Some hard work began...all because I had no idea how to use symbolics in Python/SQL. Finally discovered the needle in a haystack...'%s'. Aha!
******Begin of Code***********************
from pysqlite2 import dbapi2 as sqlite
conn = sqlite.connect("TaylorTree")
cursor = conn.cursor()
def UpdatePrice(sym, b, o, h, l, c, v, ac):
SQL = """
insert into MarketDaily
(Symbol, Bar, Open, High, Low, Close, Volume, AdjClose)
values
(
'%s',
%s,
%s,
%s,
%s,
%s,
%s,
%s
);
""" % (sym, b, o, h, l, c, v, ac)
cursor.execute(SQL)
conn.commit()
******End of Code***********************
After spending a lot of time getting all that going...I then turn back to pytables. Maybe I need to dig deeper there. Found some very good documentation here. But, I'm still sitting here...nothing. Hey, someone give me some motivation on working with this bad boy! Anybody have any experience to share in regard to pytables? If so, bring it on! I need some mojo!
And that's where I am now. Oh...and of course, will begin working on spinning through TC2005's databank and load historics into Sqlite. How do I do that? That involves working with COM objects and Python makes it very easy for you. In fact...I'm amazed at how complicated it is to call a COM object from Microsoft's own languages like C#. In python...all you have to do in order to get to the TC2005 COM object is...
******Begin of Code********
import win32com.client
w=win32com.client.Dispatch("TC2000Dev.cTC2005")
******End of Code**********
2 lines. Now, I'm sure there is a much easier way to call a COM Object in C# that what I was trying to do. If anyone out there knows how...please leave a comment. I'm really interested to see how many lines it takes to connect.
One last thing...if C# is your thing...check out Microsoft's free version of Visual Studio, C#, and even SQL Server via the Express Editions. C# not your cup of tea? There is Visual Basic, Visual C++, and even Visual J++.
And that's it from here...where I'm hoping to catch up on some much needed sleep.
MT
## Sunday, August 13, 2006
### Quote and Thread of the Week
"One of the best attributes I know a trader to have is humility. The best traders I know admit to knowing very little about what the market will do or don't pretend to have any kind of secret method or style or edge that others don't have. They just go in to work everyday like a brick layer. Their goal is to lay bricks. One at a time. And hopefully at the end of their life they have built a solid foundation. That's all a trader can hope for." -- Maverick74
Found the great quote above perusing EliteTrader this weekend. The thread is titled, Writing Options for a Living, read here. You'll have to be patient because a lot of time is spent with posts from people still believing in the Easter Bunny. But, there are a few gems to be found...especially from Maverick74, riskarb, and a few others.
MT
## Monday, August 07, 2006
### Quote of the Week
" Becoming wealthy is like playing Monopoly.. the person who can accumulate the most assets wins the game."
-- Noel Whittaker
MT
## Sunday, July 30, 2006
### Thread of the Week - Birth of a Turtle
Came across a great thread this weekend regarding Curtis Faith and his Turtle background. Read here. I especially enjoyed the story of his initial programming experience converting trading systems. What a great first job. And of course, always hungry for insights and traits into what makes a successful trader. Here's Mr. Faith's take:
The ones who were successful had more emotional control. The ones who weren't successful were either too intellectually insecure and unable to commit to a strategy, too greedy, too emotionally invested in their financial success, too affected by the large swings in equity, or too averse to the risks required to trade well (probably due to a lack of confidence in themselves). One of the things that distinguished the good Turtles from the ones that were completely unsuccessful is their personalities. The traders with a more intellectual and systematic approach to life were much more successful than the emotional traders who really wanted to make a lot of money.
And finally, one of the most important insights Curtis makes:
...all successful people owe their success to the help of others. They therefore have an obligation and usually a desire to pass on the craft, to teach and help others.
I am thankful that such a thing is true. I owe many thanks to the people that have helped my programming and trading experience grow in the right direction. In a sense, we are all like those baby turtles Mr. Dennis refers to. Just trying to make it out to sea and swim with the big dogs. And avoid the many perils from beach to sea.
MT
## Saturday, July 29, 2006
### Quote of the Week
"There is no doubt in my mind that systems and styles which offer a rougher ride will hold up more over the long run because not as many traders and certainly almost no institutional money wants the ride.
You will make more money if you can take the pain. Unfortunately, you will make little or none if you think you can but it ends up that you can't."
- Curtis Faith -
MT
## Wednesday, July 26, 2006
### Quote of the Week
"Always swim, never sink" -- Yoji Harada
:)
MT
## Monday, July 24, 2006
### Interview of Programming Greats!
A really cool interview of several of the great programmers of our time: Linus Torvalds, Guido Van Rossum, James Gosling, etc. Read here.
They answer questions in regard to what they feel is the next big thing, what new technology they feel is worth learning, what makes programmers productive, etc. Really great interview. Check it out.
MT
## Saturday, July 22, 2006
Best part was the comments by Curtis Faith in regard to "the characteristics of markets over time." Curtis broke the markets into three classes:
1) Fundamental Driven Markets - cleanest trends and easiest to trade;
2) Speculator Driven Markets - perception driven and harder to trade;
3) Aggregated Derivative Markets - averaging out effect dilutes momentum.
Plus, I always enjoy it when Curtis shares his Turtles experience. His coffee story reminds me of a few trades from my Melba Toast story
Also, pay attention to Barli's mention of optimization and the effect lack of cash has on your results. This is a very hard lesson to learn. Most backtesting platforms will drop trades due to lack of cash. Thus, you only see a sample size of the actual results. There are a few solutions to this problem...but that's for another time.
MT
## Monday, July 17, 2006
### Quote of the Week
"Every now and then go away, even briefly, have a little relaxation, for when you come back to your work your judgement will be surer; since to remain constantly at work will cause you to lose power." -- Leonardo da Vinci
Well, I had a little break. And during that break I moved my family to Missouri! Yes, we are now in Missouri. Things are going good. Still have so much unpacking to do. But, was able to mow the yard (grass is different here than Texas) and find my grilling supplies for a good steak dinner with a corona or two.
The break did me good. No computer, time spent with the family, and change. Plenty of change. Change does the mind good. Breaks you out of your comfort zone. And that's a good thing...even though it doesn't always feel like it when undergoing the change. Here's more on breaking out of your comfort zone from Dr. Brett.
And that's the update for yours truly. Oh, and I start my new job this week. Very excited.
Have a great week!
MT
## Wednesday, July 05, 2006
### Quote of the Week
"If you've been pounding nails with your forehead for years, it may feel strange the first time somebody hands you a hammer. But that doesn't mean that you should strap the hammer to a headband just to give your skull that old familiar jolt." -- Wayne Throop
This quote rings so true. :)
Happy 4th everybody!
MT
## Thursday, June 29, 2006
### Beating the Market
Dan has posted some great insights into the zero-sum nature of beating the market. Read here.
Favorite quote from his post:
If the majority of investors
believe they will beat the market return by investing in fundamental
indexing, they will have to earn their above market return at the
expense of other market participants-- but those market participants
aren't anywhere to be had. Those abnormal returns exist because the
"market" has allocated funds in a particular way over the history of
the stock market. If the "market" were to no longer allocate funds that
way, perhaps we would have the indirect benefit of an overall better
functioning economic system, but directly, the market, as a whole,
cannot escape the market return. If everyone believes something to be true, you cannot earn abnormal returns off of it.
MT
## Wednesday, June 28, 2006
### Quote of the Week - Letting Go
"The difficulty lies, not in the new ideas, but escaping the old ones, which ramify, for those brought up as most of us have been, into every corner of our minds." -- John Maynard Keynes
This past weekend I wrapped up a new system I've been working on for several months now. The sad part is it replaces all the current systems I trade. So, I'm in the process of closing down my existing systems in order to begin trading this new one.
This kinda stuff is never easy. One in particular has been very hard to let go. It was the first system I developed back in 2001. Named it after my daughter. This new system has been named after my son. Go figure.
One important change I have made is trading from an end of week basis to an end of month basis. The backtesting has gone very well...but the forward testing is ongoing. If this works out well...I may even push out to a quarterly basis. Time will tell.
The interesting aspect of this system is it curtails nicely with the recent post by acrary here. While I'm not anywhere close to what acrary has discovered...I too have found certain slices of the market where specific strategies work well. And as embarassing as it is to say...all my systems I have built over the past five to six years are trying to capture the same market characteristic. So, this monthly system really is just a simplification of all my weekly systems targetted at a very specific market slice.
What are my next goals? Well, I have two...
1) Figure out a strategy for the other side of the market coin. The area I have yet to develop a viable system. This should hopefully increase my rate of return while reducing my risk. Heck, even if its a net loser...may still reduce my risk.
2) Begin designing a backtesting engine. I've done a lot of research over the past few days and had some help from a few technical gurus here at work. I believe I've got a platform framework in mind. Surprise, surprise...most of it will be done in Python. Still much design work to do and testing. Question for you Python guys and gals...any experience using Pytables? That's what I'm considering for the time series data store. Any feedback on Pytables would be much appreciated.
That's it here from a short-timer. Only have a few days left at my current job before I move away from the great state of Texas. There will be lots to miss but hopefully much to gain up in my new state of Missouri.
MT
## Monday, June 19, 2006
### Quote of the Week
"It is impossible for a man to learn what he thinks he already knows." -- Epictetus
MT
## Wednesday, June 07, 2006
### Bill Dance Video - Funny
I haven't laughed this hard in a long time...
Enjoy!
MT
## Tuesday, June 06, 2006
### Quote of the Week - Programming
The time at my current employer is coming to an end and my new job soon beginning. I'm currently in the process of gathering up all the systems I have designed and supported over the past 8 years and ensuring the documentation is complete and up-to-date and the code nice and tight. I'll be turning these kids of mine over to another programmer to adopt and support. The programmer taking over the systems is a great guy and will indeed treat them well. But, as I'm cross-checking user guides, code documentation, and data dictionaries...I find motivation in the quote below:
"Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live." -- M. Golding
I've always followed a similar mantra...Always design your systems to be supported by someone else even if it will only be supported by yourself. Because our main goal should be to let our code sail...
"A ship in port is safe, but that is not what ships are built for. I want all the youngsters to sail out to sea and be good ships." -- Grace Hopper
Speaking of software...what software tools do you use in your daily routine? Editors? Backtesters? Spreadsheets? Calculators? Here's a breakdown of my software tool set...
Wealth-Lab - Rapid Prototyping! I typically develop one or two trading systems over a 3 to 6 month time-frame. Each day I'll scribble ideas onto pieces of paper. Trying to find ways to improve the system and use Wealth-Lab to test those ideas out.
R Project - Great batch analysis of Wealth-Lab backtests. I'll run a Wealth-Lab simulation that generates a comma-delimited file of the trade output. Then analyze the CSV file with a batch R script that outputs to the terminal or to HTML. Couldn't live without this tool in backtesting and system studies.
ActiveState ActivePython - I can connect to the TC2005 database with Python and parse the securities anyway I please. Build portfolios by sector, exchange, etc. Oh, and ActiveState includes the Pythonwin IDE which is nice. Update: I also can connect to Wealth-Lab Developer with Python and run chartscripts against custom portfolios. Very cool when watching the Python script open and close the Wealth-Lab Chartscripts for each symbol in the list or table you're reading down.
gVim - This is my notepad replacement. I haven't used it very long...but so far so good. Also experimenting with jEdit. If only someone would develop an EVE Editor for Windows!
Excel - Hey, I know...pretty simple huh? Well, sometimes there's nothing better than Excel in dumping data quickly and testing out various scenarios.
Calcr - If you need to quickly calculate something...this website rocks! It can even handle assignment of variables. Such as x=2; x*2. Also the Google Search Bar always works in a crunch as shown in my Amortization Formula post.
MT
Technorati : , , , , , , ,
Del.icio.us : , , , , , , ,
## Sunday, June 04, 2006
### Testing Blog Editor
"Rest: the sweet sauce of labor" -- Plutarch
Testing new blog editor, Zoundry.
As you can see...taking it easy today. Actually taking a break before I begin more clean-up around the house. With putting my house up for sale, getting ready for my trip to Missouri, and completing a big project at my current job...I needed a rest! :)
The above picture is something my daughter and I drew a few weeks ago...a picture of her with her toy dog Danny. Just testing the picture insertion feature of this editor.
MT
### New Blog Editor and Fortress
Test of new blog editor, Qumana.
By the way, it's really cool to see the excitement surrounding Sun's new Fortress Language:
Deep Market - Fortress Programming Language for Scientific Computing
Wikipedia - Fortress Programming Language
Slashdot - Fortress: The Successor to Fortran?
Sig9 - Fortress
MT
Tags: , , ,
## Wednesday, May 31, 2006
### Quote of the Week
“All changes, even the most longed for, have their melancholy; for what we leave behind us is a part of ourselves; we must die to one life before we can enter another.” -- Anatole France
MT
## Wednesday, May 24, 2006
### Quote of the Week - Moving
"The moment one definitely commits oneself, then providence moves too. All sorts of things occur to help one that would never otherwise have occurred. A whole stream of events issues from the decision, raising in one's favor all manner of unforeseen incidents and meetings and material assistance, which no man could have dreamed would have come his way." -- Goethe
Much is happening here at TaylorTree. My family and I are moving to Missouri. As you can imagine, much to do. More to come later.
MT
## Sunday, May 14, 2006
### Quote of the Week
"All fixed set patterns are incapable of adaptability or pliability. The truth is outside of all fixed patterns." -- Bruce Lee
MT
## Saturday, May 13, 2006
### Hawk Picture
Hawk in my backyard, originally uploaded by TaylorTree.
Came home today and noticed this hawk checking out our backyard. My daughter and I couldn't believe how close we got before it flew off.
MT
## Monday, May 08, 2006
### Quote of the Week
“Your time is limited, so don't waste it living someone else's life. Don't be trapped by dogma - which is living with the results of other people's thinking. Don't let the noise of other's opinions drown out your own inner voice. And most important, have the courage to follow your heart and intuition. They somehow already know what you truly want to become. Everything else is secondary.” -- Steve Jobs
MT
## Saturday, May 06, 2006
### Interesting Stuff
Michael Covel pointed to a document from NorthCoast Asset Management here. Very interesting read on their dynamic portfolio allocation methods. From their site I found a BusinessWeek interview explaining a bit more about their techniques. Read here.
On a side note...I'd like to thank everyone at Memorial Hermann for making our labor & delivery a wonderful experience. The facility and people were the best I've ever encountered. Everyone went above and beyond the top-level of service and made our stay one to remember. There were two nurses in particular who were simply amazing and helped us through a very scary time in the middle of the night. So, to Memorial Hermann and their amazing staff...thank you from the bottom of my heart.
MT
## Thursday, May 04, 2006
### Baby Boy!
Well, I'll be out of commission for a few days...taking care of our new baby boy! Almost everything is okay...he just has some reflux issues that he's still getting tested for. Tomorrow they'll perform an ultrasound on his stomach to confirm their hunch on the problem. If they're correct...he'll require surgery. But, I'm hoping it's just a 24 hour "get used to the world" thing and he keeps showing improvement in the condition.
My wife did an incredible job and now on the road to recovery. Which is a tough road considering she labored for 10 hours before the little bugger got here.
Me? I'm tired but smart enough to know this is part of the deal. Mostly can't wait until all of our family can be well and together at home.
MT
## Sunday, April 30, 2006
### Quote of the Week
"Believe nothing, no matter where you read it,
or who said it, no matter if i have said it,
unless it agrees with your own reason
-- Buddha
Nice quote, huh? But, it could be better. Instead of following one's own reason and common sense....one should believe something only after careful observation and analysis. Then perhaps adjust your common sense to those findings.
Really sorry for the lack of system trading posts these past few weeks. Doesn't mean I've changed focus...just means I've been extremely busy in system development work. I'm building several tools to aid in my trading idea validations. Along with tools to aid in identifying the core components that lead to success in my current systems. Needless to say, it has been a learning experience. For one, this work has led me to understand more about the systems I trade. And secondly, has driven home the importance of keeping systems simple.
I see I'm not the only one reviewing trades and trying to uncover opportunities for improvement. Read TraderMike's Path to 100 R in Profits here. One suggestion I'd make in analyzing one's trades is to break your trade history into 3 groups:
Group 1 - The Great Performers
Group 2 - The Churners
Group 3 - The Lousy Losers
Spend time trying to understand Group 3's Lousy Losers. What caused those really awful losses?
But don't forget to check out Group 2's Churners. The trades that didn't do anything for your bottom line still have a cost...they tie up valuable capital and keep those brokers fat and happy.
And of course, don't forget to take a look or two at Group 1's Great Performers. That's where your Gordon Gekko personality needs to kick in and ask yourself...Could I have made more?
MT
## Wednesday, April 26, 2006
### Thread of the Week - Discipline
"Success is the sum of small efforts, repeated day in and day out." -- Robert Collier
Acrary posted a great topic on overcoming discipline problems here. Acrary really nailed it on the head with the following statements:
"To overcome my discipline problems, I've been programming my life to achieve the results I desire."
"Anytime I want to consciously achieve a goal, I figure out how I can setup a process so it would be hard to fail."
Much to learn...
MT
## Monday, April 24, 2006
### Quote of the Week
“The real voyage of discovery consists not in seeking new landscapes but in having new eyes.” -- Marcel Proust
How much time and effort do you spend on identifying the characteristics that produce winning trades? If you're like me...a lot! But, have you ever thought about increasing your time allocation to identifying the characteristics of losing trades? More importantly...the really awful ones?
Based on Pareto's Law and more specifically Sturgeon's Revelation:
If 90% of everything is crud then 100% of our investing returns come from 10% of the trades. And if 90% of our trades are indeed crud...then it follows that 90% of that is most likely crap. Which means a little over 80% of our total trades are full of crap. :)
Formula:
crap = crud * 0.90
% total crap = (crap / trades) * 100
Example:
crud = 100 * 0.90 = 90
crap = 90 * 0.90 = 81
% total crap = (81 / 100) * 100 = 81%
MT
## Friday, April 21, 2006
If you didn't know this...the Google search bar is also a calculator...and pretty good one I might add.
Here's an example amortization formula you can cut & paste into Google's search bar to obtain the loan's monthly payment amount:
20000 * ((6 / (12 * 100) / (1 - (1 + (6 / (12 * 100))) ^ -(5*12))))This will return a monthly payment of 386.656031 that corresponds to a $20,000.00 loan at 6% interest for 5 years in length. To get a better understanding of the loan amortization payment formula...see below: i = interest rate ex. 6 for 6% n = number of years ex. 5 for 5 years p = loan amount ex. 20000 for a$20,000 car loan
m = monthly payment
m = p * ((i / (12 * 100) / (1 - (1 + (i / (12 * 100))) ^ -(n*12))))
A big thanks to Hugh Chou for kindly supplying the amortization formula on his site. Please check out his site for further information regarding amortization formulas.
MT
Interesting article on Risk Homeostasis here.
"...human beings have a target level of risk with which they are most comfortable. When a given activity exceeds their comfort level, people will modify their behavior to reduce their risk until they are comfortable with their level of danger.....if a given person’s level of risk drops too far below their comfort level, they will again modify their behavior. This time though, they will increase their level of risk until they are once again in their target zone."
Can we create systems from this idea? The first question we'd have to answer is what constitutes risk for the average investor in the stock market? Is market volatility considered risk to an investor? I'm not sure many thought so at the time back in the late 90's. What if we examine only the downside portion of market volatility? Hmmmm...
The Five Truths About Code Optimization here. Great tips that relate to designing and more importantly optimizing your trading systems. Here are just a few:
"You are looking to answer two questions. First, did my change actually help? If the change did speed things up, is there now a new bottleneck? Some part of our program is always going to be the limiting factor -- otherwise your code would be infinitely fast. As you optimize things, it is quite likely that the part you sped up will fade into the background and some other section of the code will become the new bottleneck."
"I don't care if your idea is so brilliantly efficient that it can't possibly not speed things up. If Mother Nature doesn't agree, Take It Out."
"The trouble with optimization is there is no end to it."
And finally...check out the new Adam Sandler movie coming soon to a theatre near you: Click. I want one of those remotes! Ha ha.
That's it from here...where I'll be spending the weekend cleaning up the house in anticipation of the stork's delivery in the next few weeks.
MT
## Tuesday, April 18, 2006
What's the Thread of the Week you ask? Well, each week I'll try to post an interesting thread from one of the many trading forums out here on the wild & woolly Internet. The thread could be of value to your trading...or just a good old laugh. So, enjoy!
This week's thread is a very funny topic posted on the EliteTrader boards: "Altucher guesses: trend funds to disappear within the next 10 years..." You would think the thread would actually hold some value considering James Altucher and Victor Niederhoffer are some of the posters. But, the thread mostly ends up as an ideology debate similar to my football team is better than yours.
You do have to give the originator of the thread some credit...the opening post below sure did the job of drawing many traders into the fire:
"successful hedge fund manager and author, james althucher, states in his new book--"super cash"---- that the trend following funds will be history within the next 10 years. he cites the dismal performance of the major trend funds over the last several years, over leverage, and investors pulling out. his new book is fantastic reading into the cutting edge of hedge funds. definitely check it out!" -- marketsurfer
Even I had to post a few comments. See if you recognize which ones those were.
As a follow-up to the thread...check out Niederhoffer's post on his DailySpeculations site here. You'll have to search down for the following post, Comments on a Trend Following Discussion, dated 12-Apr-2006.
MT
## Monday, April 17, 2006
### Quote of the Week
"Man with one clock always know time. Man with two clocks never sure." -- Chinese Proverb
MT
## Tuesday, April 11, 2006
### Quote of the Week
"Aim for success, not perfection. Never give up your right to be wrong, because then you will lose the ability to learn new things and move forward with your life." -- Dr. David M. Burns
MT
## Tuesday, April 04, 2006
### Quote of the Week
"Why do they always teach us that it's easy and evil to do what we want and that we need discipline to restrain ourselves? It's the hardest thing in the world -- to do what we want. And it takes the greatest kind of courage. I mean, what we really want." - Ayn Rand
MT
## Friday, March 31, 2006
### The One Thing...
Curly: "I'll tell you the secret to life. This one thing. Just this one thing. You stick to that and everything else don't mean sh*t."
Billy: "What's the one thing?"
Curly: "That's what you've got to figure out."
City Slickers, the movie
I was a golfer growing up. A good one. Good enough to win a few tournaments in high school and be offered a full-ride in college. But, I burned out before I ever got there. Wanna know why?
I couldn't get to the next level...the pro-level. What do I mean by the pro-level? Well, I could outdrive anyone and post great scores...especially in the clutch (never lost a playoff match). But, I couldn't do it day after day. Know why? Because I thought there was a skill level that I could only achieve if I perfected my swing. I would read magazine articles, study the best player's swings, and practice 14 hour days in the East Texas humid summer heat. All in the hopes of finding that one thing that would take me to the next level. And sadly, I never found it.
The worst part...everyone else thought I was great...but I didn't. So, I gave up my talents and offers and began living life as a typical young person. Always keeping this failure in the back of my mind...the what if?
Isn't it amazing that it took trading to teach me that "magic" next level? In fact, learning to trade has been eerily similar to my golf experience. Reading trading books and studying the best charts for many endless nights than I care to share. Searching and searching for that one thing...that one edge that would take me to the next level.
I assumed that talent and a perfect edge is what would take me to the next level both in golf and now trading. Thankfully, I have finally found the one thing that can take you to the next level. And I'll even be so gracious to share it with you...
Find a strategy that gets the job done...might not belt out 50% annualized returns with 10% drawdowns...but works for you...and more importantly fits you. Don't worry about what anyone else is doing...just trade your strategy day in and day out. You'll never get to a point where the profits are easy and you can just print money at will. Realize that. The best you can hope for is you'll get to a place where you'll know your system and what it can and can't do...and you'll follow it. Simple as that. Some days...you'll look like an idiot...and other days a genius...and understand that's what it's all about. It took me all these years to figure that out. Crazy, isn't it?
This "one thing" can be applied to many aspects of trading. For example, in your backtests...do you optimize parameters on your entire trade set? If so, that's a perfect world that will never happen again. Throw out the best 5% - 10% of trades from the set before you begin tinkering. That way you're designing a system built on a bit more realistic data.
Same goes with golf...do you play that par 5 as something you can reach in 2 on your best day...everyday? Hmmm...
Side note:
Several years later after my burnout I did pick golf back up again...won several local tournaments...only to hit the wall again. And haven't really played since...that's been about 5 years ago.
MT
## Wednesday, March 29, 2006
### Cool New Blog Find: Deep Market
"Somewhere, something incredible is waiting to be known." -- Carl Sagan
Found a very cool blog a few days ago...the Deep Market blog. Check out the post covering Oversimplified Method for Finding Patterns in Stock Charts here. And the follow-up, Correlation Pattern Matching Explained, here. I have never thought to use the correlation function to find setup patterns. I have only used it in the traditional sense...comparing trading instruments and trading system equity curves. Very interesting.
Might be useful to take this idea and apply towards the Melba Toast logic. Hmmm....
MT
## Tuesday, March 28, 2006
### Optimal Risk with Ed Seykota & Dave Druz
"The biggest secret about success is that there isn't any big secret about it, or if there is, then it's a secret from me, too. The idea of searching for some secret for trading success misses the point." -- Ed Seykota
Found an interesting paper from Ed Seykota and Dave Druz written back in 2001. The team test what heat can do to a portfolio's return and drawdown. The test shows that drawdowns will eventually overtake returns if heat is increased too much. Nothing new or exciting...just a confirmation of what I've already found in my system testing. Read the paper here.
For more info on Ed Seykota...read the following interview here. My favorite quote from the interview is...
The idea of searching for some secret for trading success misses the point. It's like golf. Some golfers play to spend time outdoors. They hang out with their cronies, become one with nature, study the greens, reconnect with their muscles, drop into focused concentration and, incidentally, pick up a birdie or two. For others, it's an exercise in finding some new Holy Grail putter. Different strokes for different folks!
Also don't forget to review Donchian's Trading Guides in the back of the interview. Make note of #7 in Donchian's General Guides:
In a market in which upswings are likely to equal or exceed downswings, a heavier position should be taken for the upswings for percentage reasons; a decline from 50 to 25 will net only 50% profit, whereas an advance from 25 to 50 will net 100%.
For more on David Druz read here, here, and here. I like David's focus on designing a system to handle anything the market that throws at it...instead of designing something for just a particular market condition.
MT
## Monday, March 27, 2006
### Quote of the Week
"If I wasn't dyslexic, I probably wouldn't have won the Games. If I had been a better reader, then that would have come easily, sports would have come easily...and I never would have realized that the way you get ahead in life is hard work." -- Bruce Jenner
I can vouch for this. Being dyslexic makes everything hard. But, when you finally learn it and understand it the way you need to understand it...you know it better than anybody.
MT
## Sunday, March 26, 2006
John Henry discusses his trading system design philosophy here. Henry discusses the time period in which he developed his original trend-following system. And offers some great insights such as...
Every time we go through a bad period in our firm, whether it's for two months or for eight months, people ask me have the markets changed. And I always say the same thing. I say, "Yes, the markets are always changing; but people's reaction to change, more or less, remain the same."
I knew I could not predict anything, and that is why we decided to follow trends, and that is why we've been so successful. We simply follow trends. No matter how ridiculous those trends appear to be at the beginning, and no matter how extended or how irrational they seem at the end, we follow trends.
At JWH, we realize that not only is it impossible to foretell the future, it's not necessary. We rely on the fact that other investors are convinced that they can predict the future, and I believe that's where our profits come from.
We may take a small risk in placing a trade initially, but after we have a large profit we risk it, and that's a risk very much worth taking and one we gladly accept.
Suffice it to say that we embrace both volatility and risk and, for us, risk is that we're going to lose if we risk two-tenths of one percent on a particular trade. That is, to us, real risk. Giving back a profit to you probably seems like risk, to us it seems like volatility.
Enjoy the article.
MT
## Tuesday, March 21, 2006
### Interview with the Stock Bandit
Check out this really nice interview with Jeff White over on the Stocktickr blog. Read the interview here.
I like his KISS principles and the fact he doesn't look for the market to do this or that...just takes what the markets brings to him via his setups. Nice.
MT
## Monday, March 20, 2006
### Quote of the Week
"In fact, the ironic part of system design is if you want to maximize profits, you must be willing to give back a great deal of the profits you have already accumulated." -- Van K. Tharp
There is a fine line between giving away too much of your profits and giving too little room for your positions to grow.
MT
## Tuesday, March 14, 2006
### Quote of the Week
The Six Kase Behavioral Laws of Forecasting
Law Number One: Remember that the objective is profit, not ego-stroking.
Law Number Two: The objective is profitable trading, not proving a thesis or world view.
Law Number Three: When wrong, move on.
Law Number Four: Have confidence in your own intuition. Do not rely on the advice or opinion of others, no matter how well respected they might be.
Law Number Five: Do not read newspaper articles or watch newscasts that discuss the markets in which you have an interest.
Law Number Six: Plan your strategy when the market is closed - when you are rested and thinking clearly.
The above Quote of the Week comes from a new book I'm reading...Trading With The Odds: Using the Power of Probability to Profit in the Futures Market by Cynthia A. Kase.
No doubt, one of the all-time best books I've read on Trading...but I'll warn you...for the experienced system trader only. In other words, I would not have understood many of the fantastic insights offered in this book just a few short years ago.
In fact, while reading this book I was struck with how incredibly difficult it is to become a great system trader. Flourishing as a system trader requires two very different and conflicting mindsets:
#1) A Rule-Follower. Must be a logical thinker willing to break down the most complex of things into a set of rules to follow. And more importantly, be willing to follow the rules you have set. The latter being the hardest part for yours truly.
#2) A Rule-Breaker. In order to grow to higher levels in system trading...you must be willing to break conventional wisdom [rules] in regard to all things people including yourself take for granted. And this where the conflicting mindsets truly come into play. It's very hard to program a set of rules for a system and then allow yourself to see the ways rules can be broken to improve the system. Sounds easy...but very hard. Thinking about this one some more...I believe our true task as a trader is discovering the "real" rules versus the rules we traders have created and hold as "real".
That's what I believe Kase is uncovering in her book...the "real" rules.
Special thanks to Eric for pointing out the Variance Stop technique discussed in Kase's book. Eric's contribution has triggered several exit ideas that I'm currently testing across my systems.
MT
## Thursday, March 09, 2006
### Nassim Taleb Highlights
Active Trader Magazine interviews Nassim Taleb in the March issue. Here's a few items that Nassim shared:
If you owned an option that was 20 standard deviations out of the money - and I had plenty of those - how many cumulative months of time decay could you sustain if it moved into the money?...it was 67,000 months of time decay.
If you have a 24-sigma even on an option that's 24 standard deviations out of the money, your payoff is 750,000 times your bet.
We're not programmed to deal with variables that can take very large deviations. We tend to not pay at all for things when we don't have reason to pay for them, but overpay when we see a reason.
There's a bit more but for that you'll have to get the magazine. :)
I realize I haven't gone back to the Melba Toast system in quite awhile...it hasn't been forgotten...just been extremely busy. But there is good news...I have made some progress in capturing the dry toast pattern. At first I thought I'd have to use a bit of trig to capture the exact pattern...but from the initial tests it looks like a max/min range divided by ATR might do the trick. Hopefully, I'll get a chance to test this piece out soon and share the results with ya'll.
Until then...
MT
## Monday, March 06, 2006
### Quote of the Week & Robert Pardo Interview
"Being a scientist can sometimes be depressing. Surrounded by younger versions of yourself, you are constantly confronted by the mismatch between the dreams of youth and the facts of maturity." -- Emanuel Derman, author of My Life as a Quant
One of my favorite quotes and not only applicable to scientists and programmers...but everyone with several years of experience under their belts...and perhaps a few gray hairs to show for it. Heck, even relates to being a parent. Universal theme...I love it!
On to other things...this weekend I found a great interview with Robert Pardo, the author of Design, Testing, and Optimization of Trading Systems. Read the interview here. Some quick highlights:
When I first started getting into systems, I was persistent, objective, and analytical. I've always been willing to say what it is that I do know, and what it is that I don't know. If somebody said to me "this will work" I'd say, "well, why will it work?" What's the proof?"
Great thinking...I believe many of us could apply this type of thinking to our investing strategies.
And Pardo goes on to describe the great Art of Cherry Picking...
They call this sort of thing cherry picking now. So many people, when they're looking at an idea by hand will say, "oh, it worked here, it worked here, it worked there, and boy, did it work great!" They ignore the fact that it had seven losers before this big win, and three more losers before that big win. They're maybe small, but they do add up. They need to be included in the equation.
In a system, risk is uniform and constant. I re-optimize models periodically because conditions and volatility change. You have to adapt to that to get optimal returns. Generally, though, we're risking the same tomorrow that we are today. Most people not only will vary their risk a great deal, but they'll get very skittish when they actually get a profit.
There's a powerful strategy being expressed here. Something Basso mentioned in his Market Wizards interview.
Overall, a great interview and piques my curiosity as to the other interviews covered in the Market Beaters book. I guess another book to buy and read. :)
Also, don't forget...the new issue of Active Trader Magazine contains an interview of Nassim Taleb. Just bought the mag this weekend. So, I'll share some highlights of the interview sometime this week.
MT
## Friday, March 03, 2006
### TGIF
Some great quotes from acrary over on the EliteTrader Forum.
"Trading cannot be taught...it has to be caught. By that I mean you must have a perceptive nature. Without it, buy a system and execute it mechanically."
"I've had experience with this problem (self-sabotage). In short, I found if I had a goal that my self-concious believed was not doable, then I'd self-sabotage my trading. Once I realized this and changed my goals, the self-sabotage stopped."
"If you want to remain emotionless during trading, concentrate on the process and let the outcome happen."
** my favorite one **
Now, for some silly Friday quotes...
"Giant oaks do grow from little acorns. But first you must have an acorn."
"Behind every successful man stands a surprised mother-in-law." -- Hubert Humphrey
"Always program as if the person who will be maintaining your program is a violent psychopath that knows where you live." -- Martin Golding
"As soon as we started programming, we found to our surprise that it wasn't as easy to get programs right as we had thought. Debugging had to be discovered. I can remember the exact instant when I realized that a large part of my life from then on was going to be spent in finding mistakes in my own programs." -- Maurice Wilkes
And finally, the always funny Jack Handy...
He was a cowboy, mister, and he loved the land. He loved it so much he made a woman out of dirt and married her. But when he kissed her, she disintegrated. Later, at the funeral, when the preacher said, "Dust to dust," some people laughed, and the cowboy shot them. At his hanging, he told the others, "I'll be waiting for you in heaven--with a gun."
MT
### Does Trend Following Work on Stocks?
Check out this paper written by Eric Crittenden and Cole Wilcox of Blackstar Funds: Does Trend Following Work on Stocks? There's a lot of great information embedded in this paper. And for equity system traders...much to learn. In fact, so much to learn, that I've exchanged a few emails with one of the coauthors, Eric Crittenden. Before I begin...let it be said that Eric is a very sharp guy and truly understands the system trading world.
One of the great things I found in this paper was finally someone addressed survivorship bias in their system tests. And more importantly discussed the impact of dividend-adjustments. The really surprising point, especially after talking with Eric, was that survivorship-bias doesn't play as much of a role as I thought in backtesting long-term stock trading systems and dividend-adjusted data or lack thereof plays a much larger role than I expected. So much of a role that my first goal after reading the paper and talking with Eric is to obtain dividend-adjusted equities data.
Another dividend, if you will, of dividend-adjusted data is that your system signal's can be applied to a different time series despite the underlying stocks remaining the same. In other words, you may get more trades if you run your system against two sets of data...1) Non dividend-adjusted and 2) Dividend Adjusted. Some stocks that previously looked stale or non-trending may indeed show up in a long-term trending system with dividends factored in.
Re-entry of positions is another very interesting part of this paper. In my current systems I do not have re-entry criteria. If my trailing exit is hit...I'm out of that stock for good...or until my system model captures it again. In the paper you will see stock charts with stocks hitting the ATR trailing stop and then re-entered. This also has made me look to my own systems and possibly adding some type of re-entry logic.
And finally, for those still yearning for more Trailing Stop ideas...the paper provides plenty of discussion on the Average True Range trailing stop technique. Eric has even offered an alternative solution to the ATR trailing exit problem from my Innovating Exits post. His solution involves using the variance of the Average True Range in your trailing stop. I'll discuss more on this in another post.
Finally, I'd like to express my thanks to Eric for kindly responding to my questions and graciously sharing his thoughts and views on system trading. Maybe I can get an interview out of him to share on the site some day.
Until then...
MT
## Monday, February 27, 2006
### Quote of the Week
"The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts." -- Bertrand Russell
MT
## Wednesday, February 22, 2006
### Quote of the Week
"Any time you sincerely want to make a change, the first thing you must do is to raise your standards. When people ask me what really changed my life eight years ago, I tell them that absolutely the most important thing was changing what I demanded of myself. I wrote down all the things I would no longer accept in my life, all the things I would no longer tolerate, and all the things that I aspired to becoming." -- Anthony Robbins
MT
## Wednesday, February 15, 2006
Did a little digging on CXOAG's blog and found some interesting studies they've performed on the market. Enjoy!
Collective2: A Marketplace of Trading Systems
Culls through the number of systems in Collective2's site and breakdowns the performance of swing trading versus daytrading. Most interesting part? Only 24% of Collective2's systems average 1% or more per week yet all systems exceed winning percentages of 50%.
Update: Cramer Offers You His Protection?
Asks and answers the question, Does Cramer have an edge? Insights shared: There may be some edge in buying the Cramers sells during the immediate negative returns and holding longer than 6 months. And it seems part of Cramer's edge is issuing buys on a rather large number of stocks. This creates a thin red line where the more stocks issued as buys...take him further away from market beating returns.
End-of-Quarter Effect: Window Undressing?
Is there a tradeable event at the end of quarters? This is something I have tested in the past and my results match their findings...expect market strength after the quarter...not before.
A study is performed on the cane walkers of Wall Street. After reading this post...I thought why judge the decline absolutely? Judge against volatility instead?
An Out-of-Sample Test
Discusses James O'Shaughnessy's strategies now used by Hennessy Funds. Interesting the Growth strategy beat Value in out-of-sample testing.
MT
The article titled, The Use of Hurst and Effective Return in Investing by Andrew Clark, contains much that is over my head. But, that shouldn't dissuade me or you from diving in and learning what we can. Heck, any article that contains the following statement is definitely worth my time...
Ideally, a good performance measure should show high performance when the return on capital is high, when the equity/return curve increases linearly over time, and when loss periods (if any) are not clustered.
In the sentence above, Andrew Clark describes just exactly what all of us are looking for in designing, testing, and evaluating our trading systems.
Sorry for the lack of updates on the Melba Toast System. I've been very busy with other projects. But, haven't stopped dreaming up ways to capture the congestion. Here are just a few ideas that I will test as soon as I get the time:
• What if you count the number of weeks a stock closes above its mean and number of weeks closed below its mean? If the ratio of above to below is close to 1 then does that suggest a congestion range-bound area in the time series?
• Should we look for these congestion areas within a certain percentage from their all-time high? Or all-time low? Or both? Or maybe all-time high is too limited and we just need to look for a certain percentage from their 5-year high and low.
• Could using a stock's beta help identify congestion areas? Does the congestion area exhibit less beta than the market? Speaking of beta...has anyone ever attempted to create an indicator out of beta? Basically, the number of stocks with a beta above 1? If so, please share.
Well, that's it from here...where I'm looking forward to seeing Ricky Bobby on the big screen! Ha ha! | 2017-12-18 12:44:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19397710263729095, "perplexity": 2929.53710596984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948616132.89/warc/CC-MAIN-20171218122309-20171218144309-00184.warc.gz"} |
http://www.w3.org/Math/planet/ | # Planet MathML
The Planet MathML aggregates posts from various blogs that concern MathML. Although it is hosted by W3C, the content of the individual entries represent only the opinion of their respective authors and does not reflect the position of W3C.
## Last Call for Papers: Workshop on User Interfaces for Theorem Provers (UITP 2016 @ IJCAR), Coimbra, Portugal, Deadline May 17th *NEW* (was May 9th, 2016)
Source: www-math@w3.org Mail Archives • Serge Autexier (serge.autexier@dfki.de) • May 04, 2016 • Permalink
Last Call for Papers
UITP 2016
12th International Workshop on User Interfaces for Theorem Provers
in connection with IJCAR 2016
July 2nd, 2016, Coimbra, Portugal
http://www.informatik.uni-bremen.de/uitp/current/
* NEW Submission deadline: May 17th, 2016 *
----------------------------------------------------------------------
NEWS:
- Invited Speaker: Sylvain Conchon (LRI, France) giving a talk about
"AltGr-Ergo, a graphical user interface for the SMT solver Alt-Ergo"
- Submission deadline postponed by one week to May, 17th, 2016
----------------------------------------------------------------------
The User Interfaces for Theorem Provers workshop series brings
together researchers interested in designing, developing and
evaluating interfaces for interactive proof systems, such as theorem
provers, formal method tools, and other tools manipulating and
presenting mathematical formulas.
While the reasoning capabilities of interactive proof systems have
increased dramatically over the last years, the system interfaces have
often not enjoyed the same attention as the proof engines
themselves. In many cases, interfaces remain relatively basic and
under-designed.
The User Interfaces for Theorem Provers workshop series provides a
forum for researchers interested in improving human interaction with
proof systems. We welcome participation and contributions from the
theorem proving, formal methods and tools, and HCI communities, both
to report on experience with existing systems, and to discuss new
directions. Topics covered include, but are not limited to:
- Application-specific interaction mechanisms or designs for prover
interfaces Experiments and evaluation of prover interfaces
- Languages and tools for authoring, exchanging and presenting proof
- Implementation techniques (e.g. web services, custom middleware,
DSLs)
- Integration of interfaces and tools to explore and construct proof
- Representation and manipulation of mathematical knowledge or objects
- Visualisation of mathematical objects and proof
- System descriptions
UITP 2016 is a one-day workshop to be held on Saturday, July 2nd, 2016
in Coimbra, Portugal, as a IJCAR 2016 workshop.
** Submissions **
Submitted papers should describe previously unpublished work
(completed or in progress), and be at least 4 pages and at most 12
pages. We encourage concise and relevant papers. Submissions should be
in PDF format, and typeset with the EPTCS LaTeX document class (which
done via EasyChair at
https://www.easychair.org/conferences/?conf=uitp16
All papers will be peer reviewed by members of the programme committee
and selected by the organizers in accordance with the referee
reports.
At least one author/presenter of accepted papers must attend the
workshop and present their work.
** Proceedings **
Authors will have the opportunity to incorporate feedback and insights
gathered during the workshop to improve their accepted papers before
publication in the Electronic Proceedings in Theoretical Computer
Science (EPTCS - http://www.eptcs.org/).
** Important dates **
Workshop: July 2nd, 2016
** Programme Committee **
Serge Autexier, DFKI Bremen, Germany (Co-Chair)
Pedro Quaresma, U Coimbra, Portugal (Co-Chair)
David Aspinall, University of Edinburgh, Scotland
Chris Benzmüller, FU Berlin, Germany & Stanford, USA
Yves Bertot, INRIA Sophia-Antipolis, France
Gudmund Grov, Heriott-Watt University, Scotland
Zoltán Kovács, RISC, Austria
Christoph Lüth, University of Bremen and DFKI Bremen, Germany
Alexander Lyaletski, Kiev National Taras Shevchenko Univ., Ukraine
Michael Norrish, NICTA, Australia
Christian Sternagel, University Innsbruck, Austria
Enrico Tassi, INRIA Sophia-Antipolis, France
Laurent Théry, INRIA Sophia-Antipolis, France
Makarius Wenzel, Sketis, Germany
Wolfgang Windsteiger, RISC Linz, Austria
Bruno Woltzenlogel Paleo, TU Vienna, Austria
## FMath "HTML + MathML" for Chrome Solution
Source: FMath • Ionel Alexandru (noreply@blogger.com) • April 29, 2016 • Permalink
Hi,
I have created an extension for Google Chrome to display MathML inside HTML.
The solution is ONLY javascript.
Search for "MathML" keyword.
After you install the extension you can test by going on this page
http://www.fmath.info/plugins/chrome/test.html
The page is built using HTML and MathML. No other tricks.
enjoy
ionel alexandru
P.S. Let me know on www.fmath.info if you find bugs or you need more features.
## Math Accessibility Trees
Source: Murray Sargent: Math in Office • MurrayS3 • April 28, 2016 • Permalink
This post discusses some aspects of making mathematical equations accessible to blind people. Presumably equations that are simple typographically, such as E = mc², are accessible with the use of standard left and right arrow key navigation and with each variable and two-dimensional construct being spoken when the insertion point is moved to them. At any particular insertion point, the user can edit the equation using the regular input methods, perhaps based on the linear format and Nemeth Braille or Unified English Braille keyboards. But it can be hard to follow a more typographically complex equation, let alone edit it. Instead, the user needs to be able to navigate such an equation using a mathematical tree of the equation.
More than one kind of tree is possible and this post compares two possible kinds using the equation
We label each tree node with its math text in the linear format along with the type of node. The linear format lends itself to being spoken especially if processed a bit to say things like “a^2” as “a squared” in the current natural language. The first kind of tree corresponds to the traditional math layout used in documents, while the second kind corresponds to the mathematical semantics. Accordingly we call the first kind a display tree and the second a semantic tree.
More specifically, the first kind of tree represents the way TeX and Microsoft Office applications display mathematical text. Mathematical layout entities such as fractions, integrals, roots, subscripts and superscripts are represented by nodes in trees. But binary and relational operators that don’t require special typography other than appropriate spacing are included in text nodes. The display tree for the equation above is
Note that the invisible times between the leading fraction and the integral isn’t displayed and the expression a+b sinθ is displayed as a text node a+b followed by a function-apply node sinθ, without explicit nodes for the + and the invisible times.
To navigate through the a+b and into the fractions and integral, one can use the usual text left and right arrows or their braille equivalents. One can navigate through the whole equation with these arrow keys, but it’s helpful also to have tree navigation keys to go between sibling nodes and up to parent nodes. For the sake of discussion, let’s suppose the tree navigation hot keys are those defined in the table
Ctrl+→ Go to next sibling Ctrl+← Go to previous sibling Home Go to parent position ahead of current child End Go to parent position after current child
For example starting at the beginning of the equation, Ctrl+→ moves past the leading fraction to the integral, whereas → moves into the numerator of the leading fraction. Starting at the beginning of the upper limit, Home goes to the insertion point between the leading fraction and the integral, while End goes to the insertion point in front of the equal sign. Ctrl+→ and Ctrl+← allow a user to scan an equation rapidly at any level in the hierarchy. After one of these hot keys is pressed, the linear format for the object at the new position can be spoken in a fashion quite similar to ClearSpeak. When the user finds a position of interest, s/he can use the usual input methods to delete and/or insert new math text.
Now consider the semantic tree, which allocates nodes to all binary and relational operators as well as to fractions, integrals, etc.
The semantic tree has two drawbacks: 1) it’s bigger and requires more key strokes to navigate and 2) it requires a Polish-prefix mentality. Some people have such a mentality, perhaps having used HP calculators, and prefer it. But it’s definitely an acquired taste and it doesn’t correspond to the way that mathematics is conventionally displayed and edited. Accordingly the display tree seems significantly better for blind reading and editing, as well as for sighted editing.
Both kinds of trees include nodes defined by the OMML entities listed in the following table along with the corresponding MathML entities
Built-up Office Math Object OMML tag MathMl Accent acc mover/munder Bar bar mover/munder Box box menclose (approx) BoxedFormula borderBox menclose Delimiters d mfenced EquationArray eqArr mtable (with alignment groups) Fraction f mfrac FunctionApply func &FunctionApply; (binary operator) LeftSubSup sPre mmultiscripts (special case of) LowerLimit limLow munder Matrix m mtable Nary nary mrow followed by n-ary mo Phantom phant mphantom and/or mpadded Radical rad msqrt/mroot GroupChar groupChr mover/munder Subscript sSub msub SubSup sSubSup msubsup Superscript sSup msup UpperLimit limUpp mover Ordinary text r mrow
MathML has additional nodes, some of which involve infix parsing to recognize, e.g., integrals. The OMML entities were defined for typographic reasons since they require special display handling. Interestingly the OMML entities also include useful semantics, such as identifying integrals and trigonometric functions without special parsing.
In summary, math zones can be made accessible using display trees for which the node contents are spoken using in the localized linear format and navigation is accomplished using simple arrow keys, Ctrl arrow keys, and the Home and End keys, or their Braille equivalents. Arriving at any particular insertion point, the user can hear or feel the math text and can edit the text in standard ways.
I’m indebted to many colleagues who helped me understand various accessibility issues and I benefitted a lot from attending the Benetech Math Code Sprint.
## call for abstracts: Using sets of mathematical tools @ CADGME 2016
Source: www-math@w3.org Mail Archives • Paul Libbrecht (paul@hoplahup.net) • April 25, 2016 • Permalink
------------------------------------------------------------------------
Using Sets of Mathematical Tools
7--10 September 2016, Targu Mures, Romania
------------------------------------------------------------------------
Mathematical softwares are diverse and rich. Each software can perform
some tasks very well and others only with big efforts.
This session aims at exploring the possibility to teach mathematics and
mathematical tools when using them /together/: How can one use the
systems productively, keeping the best of each software's functionality
when teaching and using?
We invite contributions about the exchange between computing systems
such as the following:
* user reports (expectations, obtained results)
* teaching directions when working with several systems
* scenarios of teaching of exchanges that can be effective for the
education
* standards and their applicability in the exchanges
* technical tools that can facilitate the exchanges
There is just a week left till the end of the submission period for the
abstracts and posters at the CADGME conference (May 2nd).
* abstracts of a presentation are max 300 words big sketch the
intended presentation to be done at the conference
* posters will be presented in an expo environment so as to introduce
discussion
## Re: Change URL of sume translations about MathML
Source: www-math@w3.org Mail Archives • Xueyuan Jia (xueyuan@w3.org) • April 21, 2016 • Permalink
On 2016/4/16 20:38, 高村 吉一 wrote:
> Dear Translators
> And Dear W3C Math mailing list Members
>
> I change URL of sume translations about MathML into Japanese of the following document:
>
> (1) MathML for CSS Profile(http://www.w3.org/TR/2011/REC-mathml-for-css-20110607/)
> CSSに対応するMathMLの概要書(http://takamu.sakura.ne.jp/mathml-for-css-ja.html)
> Previous URL http://www3.fctv.ne.jp/~takamu/mathml-for-css-ja.html
>
> (2) XML Entity Definitions for Characters(http://www.w3.org/TR/2010/REC-xml-entity-names-20100401/)
> 文字に対するXML実体の定義(http://takamu.sakura.ne.jp/xml-entity-names-ja/index.html)
> Previous URL http://www3.fctv.ne.jp/~takamu/xml-entity-names-ja/index.html
Dear Yoshikazu,
Thanks for your information and now they are updated in the translation
database:
Cf.
<https://www.w3.org/2005/11/Translations/Query?rec=any&lang=ja&translator=Yoshikazu_Takamura&date=any&sorting=byTechnology&output=FullHTML&submit=Submit>
> (3) Units in MathML(https://www.w3.org/TR/2003/NOTE-mathml-units-20031110/)
> MathMLにおける単位(http://takamu.sakura.ne.jp/mathml-units-ja.htm)
> Previous URL http://www3.fctv.ne.jp/~takamu/mathml-units-ja.htm
Please note that, as of 2012, only translations of W3C Recommendations
will be added to the Translations Database. So the Group Note
translation will not be included in the DB according to the announcement:
https://lists.w3.org/Archives/Public/w3c-translators/2012JulSep/0038.html
Many thanks and all of your translations are much appreciated.
Best,
Xueyuan
> Sincerely
>
> 16.Apr.2016
>
> 高村 吉一(Yoshikazu Takamura)
>
## The Community Group ‘Getting math on Web pages’ launched
Source: W3C Math Home • April 19, 2016 • Permalink
The Community Group ‘Getting math on Web pages’ launched
## OpenType MATH in HarfBuzz
Source: Blog de Frédéric - Tag - mathml • fredw • April 16, 2016 • Permalink
TL;DR:
• Work is in progress to add OpenType MATH support in HarfBuzz and will be instrumental for many math rendering engines relying on that library, including browsers.
• For stretchy operators, an efficient way to determine the required number of glyphs and their overlaps has been implemented and is described here.
In the context of Igalia browser team effort to implement MathML support using TeX rules and OpenType features, I have started implementation of OpenType MATH support in HarfBuzz. This table from the OpenType standard is made of three subtables:
• The MathConstants table, which contains layout constants. For example, the thickness of the fraction bar of ab\frac{a}{b}.
• The MathGlyphInfo table, which contains glyph properties. For instance, the italic correction indicating how slanted an integral is e.g. to properly place the subscript in ∫D\displaystyle\displaystyle\int_{D}.
• The MathVariants table, which provides larger size variants for a base glyph or data to build a glyph assembly. For example, either a larger parenthesis or a assembly of U+239B, U+239C, U+239D to write something like:
(abcdefgh\left(\frac{\frac{\frac{a}{b}}{\frac{c}{d}}}{\frac{\frac{e}{f}}{\frac{g}{h}}}\right.
Code to parse this table was added to Gecko and WebKit two years ago. The existing code to build glyph assembly in these Web engines was adapted to use the MathVariants data instead of only private tables. However, as we will see below the MathVariants data to build glyph assembly is more general, with arbitrary number of glyphs or with additional constraints on glyph overlaps. Also there are various fallback mechanisms for old fonts and other bugs that I think we could get rid of when we move to OpenType MATH fonts only.
In order to add MathML support in Blink, it is very easy to import the OpenType MATH parsing code from WebKit. However, after discussions with some Google developers, it seems that the best option is to directly add support for this table in HarfBuzz. Since this library is used by Gecko, by WebKit (at least the GTK port) and by many other applications such as Servo, XeTeX or LibreOffice it make senses to share the implementation to improve math rendering everywhere.
The idea for HarfBuzz is to add an API to
1. 1.
Expose data from the MathConstants and MathGlyphInfo.
2. 2.
Shape stretchy operators to some target size with the help of the MathVariants.
It is then up to a higher-level math rendering engine (e.g. TeX or MathML rendering engines) to beautifully display mathematical formulas using this API. The design choice for exposing MathConstants and MathGlyphInfo is almost obvious from the reading of the MATH table specification. The choice for the shaping API is a bit more complex and discussions is still in progress. For example because we want to accept stretching after glyph-level mirroring (e.g. to draw RTL clockwise integrals) we should accept any glyph and not just an input Unicode strings as it is the case for other HarfBuzz shaping functions. This shaping also depends on a stretching direction (horizontal/vertical) or on a target size (and Gecko even currently has various ways to approximate that target size). Finally, we should also have a way to expose italic correction for a glyph assembly or to approximate preferred width for Web rendering engines.
As I mentioned at the beginning, the data and algorithm to build glyph assembly is the most complex part of the OpenType MATH and deserves a special interest. The idea is that you have a list of n≥1n\geq 1 glyphs available to build the assembly. For each 0≤i≤n-10\leq i\leq n-1, the glyph gig_{i} has advance aia_{i} in the stretch direction. Each gig_{i} has straight connector part at its start (of length sis_{i}) and at its end (of length eie_{i}) so that we can align the glyphs on the stretch axis and glue them together. Also, some of the glyphs are “extenders” which means that they can be repeated 0, 1 or more times to make the assembly as large as possible. Finally, the end/start connectors of consecutive glyphs must overlap by at least a fixed value omino_{\mathrm{min}} to avoid gaps at some resolutions but of course without exceeding the length of the corresponding connectors. This gives some flexibility to adjust the size of the assembly and get closer to the target size tt.
gig_{i}
sis_{i}
eie_{i}
aia_{i}
gi+1g_{i+1}
si+1s_{i+1}
ei+1e_{i+1}
ai+1a_{i+1}
oi,i+1o_{i,i+1}
Figure 0.1: Two adjacent glyphs in an assembly
To ensure that the width/height is distributed equally and the symmetry of the shape is preserved, the MATH table specification suggests the following iterative algorithm to determine the number of extenders and the connector overlaps to reach a minimal target size tt:
1. 1.
Assemble all parts by overlapping connectors by maximum amount, and removing all extenders. This gives the smallest possible result.
2. 2.
Determine how much extra width/height can be distributed into all connections between neighboring parts. If that is enough to achieve the size goal, extend each connection equally by changing overlaps of connectors to finish the job.
3. 3.
If all connections have been extended to minimum overlap and further growth is needed, add one of each extender, and repeat the process from the first step.
We note that at each step, each extender is repeated the same number of times r≥0r\geq 0. So if IExtI_{\mathrm{Ext}} (respectively INonExtI_{\mathrm{NonExt}}) is the set of indices 0≤i≤n-10\leq i\leq n-1 such that gig_{i} is an extender (respectively is not an extender) we have ri=rr_{i}=r (respectively ri=1r_{i}=1). The size we can reach at step rr is at most the one obtained with the minimal connector overlap omino_{\mathrm{min}} that is
∑i=0N-1(∑j=1riai-omin)+omin=(∑i∈INonExtai-omin)+(∑i∈IExtr(ai-omin))+omin\sum_{i=0}^{N-1}\left(\sum_{j=1}^{r_{i}}{a_{i}-o_{\mathrm{min}}}\right)+o_{% \mathrm{min}}=\left(\sum_{i\in I_{\mathrm{NonExt}}}{a_{i}-o_{\mathrm{min}}}% \right)+\left(\sum_{i\in I_{\mathrm{Ext}}}r{(a_{i}-o_{\mathrm{min}})}\right)+o% _{\mathrm{min}}
We let NExt=|IExt|N_{\mathrm{Ext}}={|I_{\mathrm{Ext}}|} and NNonExt=|INonExt|N_{\mathrm{NonExt}}={|I_{\mathrm{NonExt}}|} be the number of extenders and non-extenders. We also let SExt=∑i∈IExtaiS_{\mathrm{Ext}}=\sum_{i\in I_{\mathrm{Ext}}}a_{i} and SNonExt=∑i∈INonExtaiS_{\mathrm{NonExt}}=\sum_{i\in I_{\mathrm{NonExt}}}a_{i} be the sum of advances for extenders and non-extenders. If we want the advance of the glyph assembly to reach the minimal size tt then
SNonExt-omin(NNonExt-1)+r(SExt-ominNExt)≥t{S_{\mathrm{NonExt}}-o_{\mathrm{min}}\left(N_{\mathrm{NonExt}}-1\right)}+{r% \left(S_{\mathrm{Ext}}-o_{\mathrm{min}}N_{\mathrm{Ext}}\right)}\geq t
We can assume 0" class="ltx_Math" display="inline" id="p12.m1">SExt-ominNExt>0S_{\mathrm{Ext}}-o_{\mathrm{min}}N_{\mathrm{Ext}}>0 or otherwise we would have the extreme case where the overlap takes at least the full advance of each extender. Then we obtain
r≥rmin=max(0,⌈t-SNonExt+omin(NNonExt-1)SExt-ominNExt⌉)r\geq r_{\mathrm{min}}=\max\left(0,\left\lceil\frac{t-{S_{\mathrm{NonExt}}+o_{% \mathrm{min}}\left(N_{\mathrm{NonExt}}-1\right)}}{S_{\mathrm{Ext}}-o_{\mathrm{% min}}N_{\mathrm{Ext}}}\right\rceil\right)
This provides a first simplification of the algorithm sketched in the MATH table specification: Directly start iteration at step rminr_{\mathrm{min}}. Note that at each step we start at possibly different maximum overlaps and decrease all of them by a same value. It is not clear what to do when one of the overlap reaches omino_{\mathrm{min}} while others can still be decreased. However, the sketched algorithm says all the connectors should reach minimum overlap before the next increment of rr, which means the target size will indeed be reached at step rminr_{\mathrm{min}}.
One possible interpretation is to stop overlap decreasing for the adjacent connectors that reached minimum overlap and to continue uniform decreasing for the others until all the connectors reach minimum overlap. In that case we may lose equal distribution or symmetry. In practice, this should probably not matter much. So we propose instead the dual option which should behave more or less the same in most cases: Start with all overlaps set to omino_{\mathrm{min}} and increase them evenly to reach a same value oo. By the same reasoning as above we want the inequality
SNonExt-o(NNonExt-1)+rmin(SExt-oNExt)≥t{S_{\mathrm{NonExt}}-o\left(N_{\mathrm{NonExt}}-1\right)}+{r_{\mathrm{min}}% \left(S_{\mathrm{Ext}}-oN_{\mathrm{Ext}}\right)}\geq t
which can be rewritten
SNonExt+rminSExt-o(NNonExt+rminNExt-1)≥tS_{\mathrm{NonExt}}+r_{\mathrm{min}}S_{\mathrm{Ext}}-{o\left(N_{\mathrm{NonExt% }}+{r_{\mathrm{min}}N_{\mathrm{Ext}}}-1\right)}\geq t
We note that N=NNonExt+rminNExtN=N_{\mathrm{NonExt}}+{r_{\mathrm{min}}N_{\mathrm{Ext}}} is just the exact number of glyphs used in the assembly. If there is only a single glyph, then the overlap value is irrelevant so we can assume NNonExt+rNExt-1=N-1≥1N_{\mathrm{NonExt}}+{rN_{\mathrm{Ext}}}-1=N-1\geq 1. This provides the greatest theorical value for the overlap oo:
omin≤o≤omaxtheorical=SNonExt+rminSExt-tNNonExt+rminNExt-1o_{\mathrm{min}}\leq o\leq o_{\mathrm{max}}^{\mathrm{theorical}}=\frac{S_{% \mathrm{NonExt}}+r_{\mathrm{min}}S_{\mathrm{Ext}}-t}{N_{\mathrm{NonExt}}+{r_{% \mathrm{min}}N_{\mathrm{Ext}}}-1}
Of course, we also have to take into account the limit imposed by the start and end connector lengths. So omaxo_{\mathrm{max}} must also be at most min(ei,si+1)\min{(e_{i},s_{i+1})} for 0≤i≤n-20\leq i\leq n-2. But if rmin≥2r_{\mathrm{min}}\geq 2 then extender copies are connected and so omaxo_{\mathrm{max}} must also be at most min(ei,si)\min{(e_{i},s_{i})} for i∈IExti\in I_{\mathrm{Ext}}. To summarize, omaxo_{\mathrm{max}} is the minimum of omaxtheoricalo_{\mathrm{max}}^{\mathrm{theorical}}, of eie_{i} for 0≤i≤n-20\leq i\leq n-2, of sis_{i} 1≤i≤n-11\leq i\leq n-1 and possibly of e0e_{0} (if 0∈IExt0\in I_{\mathrm{Ext}}) and of of sn-1s_{n-1} (if n-1∈IExt{n-1}\in I_{\mathrm{Ext}}).
With the algorithm described above NExtN_{\mathrm{Ext}}, NNonExtN_{\mathrm{NonExt}}, SExtS_{\mathrm{Ext}}, SNonExtS_{\mathrm{NonExt}} and rminr_{\mathrm{min}} and omaxo_{\mathrm{max}} can all be obtained using simple loops on the glyphs gig_{i} and so the complexity is O(n)O(n). In practice nn is small: For existing fonts, assemblies are made of at most three non-extenders and two extenders that is n≤5n\leq 5 (incidentally, Gecko and WebKit do not currently support larger values of nn). This means that all the operations described above can be considered to have constant complexity. This is much better than a naive implementation of the iterative algorithm sketched in the OpenType MATH table specification which seems to require at worst
∑r=0rmin-1NNonExt+rNExt=NNonExtrmin+rmin(rmin-1)2NExt=O(n×rmin2)\sum_{r=0}^{r_{\mathrm{min}}-1}{N_{\mathrm{NonExt}}+rN_{\mathrm{Ext}}}=N_{% \mathrm{NonExt}}r_{\mathrm{min}}+\frac{r_{\mathrm{min}}\left(r_{\mathrm{min}}-% 1\right)}{2}N_{\mathrm{Ext}}={O(n\times r_{\mathrm{min}}^{2})}
and at least Ω(rmin)\Omega(r_{\mathrm{min}}).
One of issue is that the number of extender repetitions rminr_{\mathrm{min}} and the number of glyphs in the assembly NN can become arbitrary large since the target size tt can take large values e.g. if one writes \underbrace{\hspace{65535em}} in LaTeX. The improvement proposed here does not solve that issue since setting the coordinates of each glyph in the assembly and painting them require Θ(N)\Theta(N) operations as well as (in the case of HarfBuzz) a glyph buffer of size NN. However, such large stretchy operators do not happen in real-life mathematical formulas. Hence to avoid possible hangs in Web engines a solution is to impose a maximum limit NmaxN_{\mathrm{max}} for the number of glyph in the assembly so that the complexity is limited by the size of the DOM tree. Currently, the proposal for HarfBuzz is Nmax=128N_{\mathrm{max}}=128. This means that if each assembly glyph is 1em large you won’t be able to draw stretchy operators of size more than 128em, which sounds a quite reasonable bound. With the above proposal, rminr_{\mathrm{min}} and so NN can be determined very quickly and the cases N≥NmaxN\geq N_{\mathrm{max}} rejected, so that we avoid losing time with such edge cases…
Finally, because in our proposal we use the same overlap oo everywhere an alternative for HarfBuzz would be to set the output buffer size to nn (i.e. ignore r-1r-1 copies of each extender and only keep the first one). This will leave gaps that the client can fix by repeating extenders as long as oo is also provided. Then HarfBuzz math shaping can be done with a complexity in time and space of just O(n)O(n) and it will be up to the client to optimize or limit the painting of extenders for large values of NN…
## Change URL of sume translations about MathML
Source: www-math@w3.org Mail Archives • 高村 吉一 (soco__kankyo@hotmail.com) • April 16, 2016 • Permalink
Dear Translators
And Dear W3C Math mailing list Members
I change URL of sume translations about MathML into Japanese of the following document:
(1) MathML for CSS Profile(http://www.w3.org/TR/2011/REC-mathml-for-css-20110607/)
CSSに対応するMathMLの概要書(http://takamu.sakura.ne.jp/mathml-for-css-ja.html)
Previous URL http://www3.fctv.ne.jp/~takamu/mathml-for-css-ja.html
(2) XML Entity Definitions for Characters(http://www.w3.org/TR/2010/REC-xml-entity-names-20100401/)
文字に対するXML実体の定義(http://takamu.sakura.ne.jp/xml-entity-names-ja/index.html)
Previous URL http://www3.fctv.ne.jp/~takamu/xml-entity-names-ja/index.html
(3) Units in MathML(https://www.w3.org/TR/2003/NOTE-mathml-units-20031110/)
MathMLにおける単位(http://takamu.sakura.ne.jp/mathml-units-ja.htm)
Previous URL http://www3.fctv.ne.jp/~takamu/mathml-units-ja.htm
Sincerely
16.Apr.2016
## Announcing MathUI 2016 - the Mathematical User Interfaces Workshop
Source: www-math@w3.org Mail Archives • Paul Libbrecht (paul@hoplahup.net) • April 12, 2016 • Permalink
Call for Papers: MathUI'16
http://www.cicm-conference.org/2016/cicm.php?event=mathui
------------------------------------------------------------------------
10^th Mathematical User Interfaces Workshop 2016
------------------------------------------------------------------------
at the Conference on Intelligent Computer Mathematics
Bialystok, Poland
on Monday 25^th of July 2016
------------------------------------------------------------------------
SCOPE
MathUI is an international workshop to discuss how users can be best
supported when doing/learning/searching for/interacting with mathematics
using a computer.
* Is mathematical user interface design a design for representing
mathematics, embedding mathematical context, or a specific design
for mathematicians?
* How is mathematics for which purpose best represented?
* What specifically math-oriented support is needed?
* Does learning of math require a platform different than other
learning platforms?
* Which mathematical services can be offered?
* Which services can be meaningfully combined?
* What best practices wrt. mathematics can be found and how can they
be best communicated?
We invite all questions, that care for the use of mathematics on
computers and how the user experience can be improved, to be discussed
in the workshop.
TOPICS include
* user-requirements for math interfaces
* presentation formats for mathematical content
* mobile-devices powered mathematics
* cultural differences in practices of mathematical languages
* didactically sensible mathematical scenarios of use
* manipulations of mathematical expressions
This workshop follows a successful series of workshops held at the
Conferences on Intelligent Computer Mathematics since 11 years;
it features presentations of brand new ideas in papers selected by a
thorough review process, wide space for discussions,as well as a
software demonstration session.
SUBMISSIONS
The organizers invite authors to submit contributions of 6 to 12 pages
on the workshop-related topics in PDF format optionally illustrated by
DEADLINE for submissions: May 30^th 2015.
https://easychair.org/conferences/?conf=mathui16.
The submissions will be reviewed by the international programme
committee whose comments and recommendations will be
sent back by June 19^th requesting a final version no later than July 2^nd .
Moreover, MathUI will be concluded by an exhibit-like demonstration
session. Proposed demonstrations should be sent by email until June
20th, containing a URL to a software description, a title, a short
abstract of the demonstrated features, and the indication of hardware
expectations (own/lent laptop/tablet, internet access (speed?), power,
...). After a short elevator pitch, the demonstration session will run
for 1-3h, each demonstrating to interested parties.
REVIEW COMMITTEE
(to be confirmed)
* Finland
o Olga Caprotti, University of Helsinki
* France
o Jana Trgalova, Universite Claude Bernard Lyon 1
* Germany
o Andrea Kohlhase (organizer), Neu-Ulm University of Applied Sciences
o Peter Krautzberger, krautzource UG, Bonn
o Paul Libbrecht (organizer), University of Education of Weingarten
* Great Britain
o Chris Rowley
* Netherlands
o Jan Willem Knopper, Technische Universiteit Eindhoven
* Spain
o Daniel Marquès, wiris, Barcelona
* USA
o Deyan Ginev, Authorea, New York
o Elena Smirnova, Texas Instruments Inc. Education Technology
* Paul Libbrecht, paul@cermat.org or
* Andrea Kohlhase, Andrea.Kohlhase@hs-neu-ulm.de
## Re: MathML is a failed web standard (or not?)
Source: www-math@w3.org Mail Archives • William F Hammond (hammond@csc.albany.edu) • April 11, 2016 • Permalink
Going back to April 1, Andrew Robbins writes in part:
> In my humble opinion, the reason why MathML has failed
> isn't because of Content MathML, it's because of
> Presentation MathML, and it's not because it isn't
> accurate, or because it doesn't look good, it's because
> people prefer TeX over angle brackets. MathJax provides
> people the ability to show the same beautiful math
> expressions on web pages, that Presentation MathML
> promised but with many fewer keystrokes.
I believe that almost all extant MathML content that I've
seen originates, one way or another, with LaTeX or
LaTeX-like markup.
> superscripts. The only thing that interests me with
> regards to Content MathML, is the fact that it is, at a
> fundamental level, a LISP where symbols are selected from
> URI/RDF/XML/MathML namespaces. . . .
I think it's rather difficult to generate reliably useful
content MathML from LaTeX markup as commonly seen, for
example, at arXiv. On the other hand, I believe that adding
a LaTeX package for type declaration of mathematical symbols
would go a long way toward improving this. Even better
would be the use of a suitable LaTeX profile (such as I
spoke about at TUG 2010 and TUG 2014) with provision for
symbol type declarations.
> . . . As I said earlier, I agree that Presentation MathML
> has failed, but that's because it's a failed viewpoint.
> Math isn't symbols, it's semantics. From the beginning,
> MathML should have been about Content, not Presentation. I
> think if we had focused on Content all along, then we
> probably wouldn't be having this conversation now.
>From the beginning the development of MathML has had two
tracks with content MathML focused on content interchange
and presentation MathML designed for minimizing the amount
of work required for a web browser to provide a TeX-quality
rendering of mathematical content from an extension of HTML
markup.
I disagree with the assertion that presentation MathML has
failed as a web standard. It works quite well in Firefox
and other Gecko browsers, and one should not forget W3C's
Amaya. It is, of course, disappointing that, for the moment
and for most of the time since the beginning of MathML in
the late 1990s, three of the "big four" browsers have not
It's a failing in the market based on crass market thinking.
It was also disappointing that for the period from 2001 (if
not 1995 with the dropping of HTML v 3.0) to 2011 the W3C
banned any form of math content from the media type
"text/html".
Of course, there would be breakage of existing content if
web browsers that now render MathML ceased to do so.
It continues to be disappointing that search engines do not
cover mathematical content well.
The disappointments are not failures, but rather the
result of a world where a relatively small number of
individuals have any interest in mathematics.
Still, native browser rendering of MathML could happen.
Didn't Murray Sargent just say so? There is no reason
to stop wishing for it.
-- Bill
## Re: should MathML dictate a specific graphical rendering
Source: www-math@w3.org Mail Archives • Bruce Miller (bruce.miller@nist.gov) • April 09, 2016 • Permalink
On 04/09/2016 02:42 PM, William F Hammond wrote:
> Murray Sargent writes in part:
>
>> It's good to have this discussion. Clearly Presentation
>> MathML is used a lot for interchanging math zones between
>> programs. Also I haven't given up on the idea of the
>> browsers rendering MathML well natively. If IE ever
>> succeeds, it'll likely look like TeX, since both IE and
>> Edge use LineServices. And it should be way faster than
>> Java script code.
>
> This would be good to see.
Oh, this would more than "good", it would be...
Well, let's go with the ever popular Sports Analogies:
It would be a game changer. Moreover, knowing Murray,
I have no doubt that it'd reset the bar for math rendering
on the web.
Sigh. It's all the more frustrating in that MS has already done
the hard part (not to trivialize the integration & testing).
bruce
## Re: should MathML dictate a specific graphical rendering
Source: www-math@w3.org Mail Archives • William F Hammond (hammond@csc.albany.edu) • April 09, 2016 • Permalink
Murray Sargent writes in part:
> It's good to have this discussion. Clearly Presentation
> MathML is used a lot for interchanging math zones between
> programs. Also I haven't given up on the idea of the
> browsers rendering MathML well natively. If IE ever
> succeeds, it'll likely look like TeX, since both IE and
> Edge use LineServices. And it should be way faster than
> Java script code.
This would be good to see.
-- Bill
## RE: should MathML dictate a specific graphical rendering
Source: www-math@w3.org Mail Archives • Murray Sargent (murrays@exchange.microsoft.com) • April 08, 2016 • Permalink
The LineServices post<https://blogs.msdn.microsoft.com/murrays/2006/11/14/lineservices/> includes a description of an incredible afternoon several of us spent with Donald Knuth back in 2004. Among many things, he demonstrated how he tweaks<https://blogs.msdn.microsoft.com/murrays/2011/04/30/two-math-typography-niceties/> his TeX documents. We did automate some of these tweaks, notably creating “cut-ins” for subscript/superscript positioning. These cut-ins are part of the OpenType math spec<https://blogs.msdn.microsoft.com/murrays/2014/04/27/opentype-math-tables/>. Knuth explained that he didn’t want to go back and change TeX due to its archival usage. Naturally non-math concepts such as revision tracking and embedded objects along with international text had to be accommodated in our implementation. This was another reason for using OMML instead of MathML as the preferred math XML; you can embed other XMLs into OMML. In principle you can do that using the MathML <semantics> element, but it’d be somewhat cumbersome. The Office layout is essentially the same as TeX’s. It’ll be interesting to see how the two compare when the STIX font is finally released with full OpenType math table support. Tyro Typeworks<http://www.tiro.com/> is handling this and it also did Cambria Math.
Murray
From: Paul Topping [mailto:pault@dessci.com]
Sent: Friday, April 8, 2016 3:03 PM
To: Murray Sargent <murrays@exchange.microsoft.com>; Daniel Kinzler <daniel@brightbyte.de>; Moritz Schubotz <schubotz@tu-berlin.de>; www-math@w3.org; Peter Krautzberger <peter.krautzberger@mathjax.org>
Cc: Wikimedia developers <wikitech-l@lists.wikimedia.org>; wikidata-tech <wikidata-tech@lists.wikimedia.org>
Subject: RE: should MathML dictate a specific graphical rendering
Murray,
I guess I forgot about Appendix G of the TeXbook. Thanks for the correction. Did you find that it defined the rendering accurately enough for your line services implementation? Since it has OpenType ‘math’ table support, does it really render exactly as TeX? I guess one could say that the two implementations render the same modulo the fonts used. Did your line services improve on TeX math rendering at all or fill in any gaps in Appendix G. Were there any concessions made to compatibility with layout of non-math text?
I am not asking these questions to argue against your point. I’m just suggesting that while a reader may regard two renderings as being equal, there may still be unavoidable, or by-design, differences due to variations in rendering technology, software environment, and other considerations.
Paul
From: Murray Sargent [mailto:murrays@exchange.microsoft.com]
Sent: Friday, April 8, 2016 2:34 PM
To: Paul Topping <pault@dessci.com<mailto:pault@dessci.com>>; Daniel Kinzler <daniel@brightbyte.de<mailto:daniel@brightbyte.de>>; Moritz Schubotz <schubotz@tu-berlin.de<mailto:schubotz@tu-berlin.de>>; www-math@w3.org<mailto:www-math@w3.org>; Peter Krautzberger <peter.krautzberger@mathjax.org<mailto:peter.krautzberger@mathjax.org>>
Cc: Wikimedia developers <wikitech-l@lists.wikimedia.org<mailto:wikitech-l@lists.wikimedia.org>>; wikidata-tech <wikidata-tech@lists.wikimedia.org<mailto:wikidata-tech@lists.wikimedia.org>>
Subject: RE: should MathML dictate a specific graphical rendering
Paul commented "TeX doesn't specify its rendering in detail either except via the code itself. In other words, the only proper rendering of TeX is that done by TeX itself."
Actually Appendix G of The TeXbook describes how TeX lays out math. The Office math layout program<https://blogs.msdn.microsoft.com/murrays/2006/11/14/lineservices/> uses the algorithms therein, which is why the results look so much like TeX. The actual code is completely different from TeX’s, but the layout principles are generally the same.
It’s good to have this discussion. Clearly Presentation MathML is used a lot for interchanging math zones between programs. Also I haven’t given up on the idea of the browsers rendering MathML well natively. If IE ever succeeds, it’ll likely look like TeX, since both IE and Edge use LineServices<https://blogs.msdn.microsoft.com/murrays/2006/11/14/lineservices/>. And it should be way faster than Java script code.
My main complaints about Presentation MathML are 1) lack of an explicit n-ary element (for integrals, summations, products, etc.) and 2) lack of document level math properties<https://blogs.msdn.microsoft.com/murrays/2008/10/27/default-document-math-properties/>, like default math font. Also Presentation MathML depends too much on proper use of <mrow>, which wouldn’t even be needed if the elements were all “prefix” elements like <mfrac> and <mfenced>. But infix notation can be translated to prefix notation, a good example being conversion of the linear format<http://www.unicode.org/notes/tn28/UTN28-PlainTextMath-v3.pdf> to the OMML<https://blogs.msdn.microsoft.com/murrays/2006/10/06/mathml-and-ecma-math-omml/>-like internal format for LineServices. Similarly RichEdit’s MathML reader converts using the rich-text string stack<https://msdn.microsoft.com/en-us/library/windows/desktop/hh768736(v=vs.85).aspx> originally developed for the linear format.
The bottom line is that MathML isn’t perfect, but it’s a widely used standard and gets the job done. As such, it’s hardly a failure. And it’s nicely supported on the web thanks to MathJax.
Murray
## RE: should MathML dictate a specific graphical rendering
Source: www-math@w3.org Mail Archives • Paul Topping (pault@dessci.com) • April 08, 2016 • Permalink
Murray,
I guess I forgot about Appendix G of the TeXbook. Thanks for the correction. Did you find that it defined the rendering accurately enough for your line services implementation? Since it has OpenType ‘math’ table support, does it really render exactly as TeX? I guess one could say that the two implementations render the same modulo the fonts used. Did your line services improve on TeX math rendering at all or fill in any gaps in Appendix G. Were there any concessions made to compatibility with layout of non-math text?
I am not asking these questions to argue against your point. I’m just suggesting that while a reader may regard two renderings as being equal, there may still be unavoidable, or by-design, differences due to variations in rendering technology, software environment, and other considerations.
Paul
From: Murray Sargent [mailto:murrays@exchange.microsoft.com]
Sent: Friday, April 8, 2016 2:34 PM
To: Paul Topping <pault@dessci.com>; Daniel Kinzler <daniel@brightbyte.de>; Moritz Schubotz <schubotz@tu-berlin.de>; www-math@w3.org; Peter Krautzberger <peter.krautzberger@mathjax.org>
Cc: Wikimedia developers <wikitech-l@lists.wikimedia.org>; wikidata-tech <wikidata-tech@lists.wikimedia.org>
Subject: RE: should MathML dictate a specific graphical rendering
Paul commented "TeX doesn't specify its rendering in detail either except via the code itself. In other words, the only proper rendering of TeX is that done by TeX itself."
Actually Appendix G of The TeXbook describes how TeX lays out math. The Office math layout program<https://blogs.msdn.microsoft.com/murrays/2006/11/14/lineservices/> uses the algorithms therein, which is why the results look so much like TeX. The actual code is completely different from TeX’s, but the layout principles are generally the same.
It’s good to have this discussion. Clearly Presentation MathML is used a lot for interchanging math zones between programs. Also I haven’t given up on the idea of the browsers rendering MathML well natively. If IE ever succeeds, it’ll likely look like TeX, since both IE and Edge use LineServices<https://blogs.msdn.microsoft.com/murrays/2006/11/14/lineservices/>. And it should be way faster than Java script code.
My main complaints about Presentation MathML are 1) lack of an explicit n-ary element (for integrals, summations, products, etc.) and 2) lack of document level math properties<https://blogs.msdn.microsoft.com/murrays/2008/10/27/default-document-math-properties/>, like default math font. Also Presentation MathML depends too much on proper use of <mrow>, which wouldn’t even be needed if the elements were all “prefix” elements like <mfrac> and <mfenced>. But infix notation can be translated to prefix notation, a good example being conversion of the linear format<http://www.unicode.org/notes/tn28/UTN28-PlainTextMath-v3.pdf> to the OMML<https://blogs.msdn.microsoft.com/murrays/2006/10/06/mathml-and-ecma-math-omml/>-like internal format for LineServices. Similarly RichEdit’s MathML reader converts using the rich-text string stack<https://msdn.microsoft.com/en-us/library/windows/desktop/hh768736(v=vs.85).aspx> originally developed for the linear format.
The bottom line is that MathML isn’t perfect, but it’s a widely used standard and gets the job done. As such, it’s hardly a failure. And it’s nicely supported on the web thanks to MathJax.
Murray
## RE: should MathML dictate a specific graphical rendering
Source: www-math@w3.org Mail Archives • Murray Sargent (murrays@exchange.microsoft.com) • April 08, 2016 • Permalink
Paul commented "TeX doesn't specify its rendering in detail either except via the code itself. In other words, the only proper rendering of TeX is that done by TeX itself."
Actually Appendix G of The TeXbook describes how TeX lays out math. The Office math layout program<https://blogs.msdn.microsoft.com/murrays/2006/11/14/lineservices/> uses the algorithms therein, which is why the results look so much like TeX. The actual code is completely different from TeX’s, but the layout principles are generally the same.
It’s good to have this discussion. Clearly Presentation MathML is used a lot for interchanging math zones between programs. Also I haven’t given up on the idea of the browsers rendering MathML well natively. If IE ever succeeds, it’ll likely look like TeX, since both IE and Edge use LineServices<https://blogs.msdn.microsoft.com/murrays/2006/11/14/lineservices/>. And it should be way faster than Java script code.
My main complaints about Presentation MathML are 1) lack of an explicit n-ary element (for integrals, summations, products, etc.) and 2) lack of document level math properties<https://blogs.msdn.microsoft.com/murrays/2008/10/27/default-document-math-properties/>, like default math font. Also Presentation MathML depends too much on proper use of <mrow>, which wouldn’t even be needed if the elements were all “prefix” elements like <mfrac> and <mfenced>. But infix notation can be translated to prefix notation, a good example being conversion of the linear format<http://www.unicode.org/notes/tn28/UTN28-PlainTextMath-v3.pdf> to the OMML<https://blogs.msdn.microsoft.com/murrays/2006/10/06/mathml-and-ecma-math-omml/>-like internal format for LineServices. Similarly RichEdit’s MathML reader converts using the rich-text string stack<https://msdn.microsoft.com/en-us/library/windows/desktop/hh768736(v=vs.85).aspx> originally developed for the linear format.
The bottom line is that MathML isn’t perfect, but it’s a widely used standard and gets the job done. As such, it’s hardly a failure. And it’s nicely supported on the web thanks to MathJax.
Murray
## Re: MathML is dead, long live MathML
Source: www-math@w3.org Mail Archives • Roger Martin (mathmldashx@yahoo.com) • April 08, 2016 • Permalink
Hello, how many of us have github accounts?
On Friday, April 8, 2016 6:10 AM, Daniel Kinzler <daniel@brightbyte.de> wrote:
Am 07.04.2016 um 23:01 schrieb Paul Topping:
> I have no problem with that but are some of these lists members-only? I was
> told when I replied that my message would be reviewed by the moderator as I
> wasn't a member. Perhaps that was the W3C list.
Oh... both the Wikimedia lists are members only, I'm afraid. The W3C list
requires a 1-click agreement to their terms. That's easier, but less likely to
involve Wikimedia people.
## Re: MathML is dead, long live MathML
Source: www-math@w3.org Mail Archives • Daniel Kinzler (daniel@brightbyte.de) • April 08, 2016 • Permalink
Am 07.04.2016 um 23:01 schrieb Paul Topping:
> I have no problem with that but are some of these lists members-only? I was
> told when I replied that my message would be reviewed by the moderator as I
> wasn't a member. Perhaps that was the W3C list.
Oh... both the Wikimedia lists are members only, I'm afraid. The W3C list
requires a 1-click agreement to their terms. That's easier, but less likely to
involve Wikimedia people.
## Re: MathML is a failed web standard (or not?)
Source: www-math@w3.org Mail Archives • Peter Krautzberger (peter.krautzberger@mathjax.org) • April 08, 2016 • Permalink
Hi Peter and Andrew,
Thanks for those interesting statements. I'm not sure how they relate to
what I wrote (please let me know if I missed something) but I appreciate
having the opportunity to read them.
Regards,
Peter.
On Sat, Apr 2, 2016 at 3:42 AM, Andrew Robbins <andjrob@gmail.com> wrote:
> Dear MathML Subscribers,
>
> I must admit that I have not read every word of both posts, but I already
> know what this is about, because I have already encountered similar issues,
> with both Presentation and Content. I'm not too concerned with
> Presentation, because MathJax does an excellent job at that. What I am
> concerned with is Content (i.e. Semantics), and to quote the original
> article:
>
> "Content MathML is just not relevant." -- Peter Krautzberger
>
> I have been writing a set of tools for trans-language compilation for
> about 5 years now, (freely available at
> https://github.com/andydude/droxtools), and the only system I've found
> that is is open, non-commercial, and easily extensible for representing
> arbitrary concepts from every programming language ever invented, is
> Content MathML. This is the opposite of "not relevant", and Paul Topping
>
> In my humble opinion, the reason why MathML has failed isn't because of
> Content MathML, it's because of Presentation MathML, and it's not because
> it isn't accurate, or because it doesn't look good, it's because people
> prefer TeX over angle brackets. MathJax provides people the ability to show
> the same beautiful math expressions on web pages, that Presentation MathML
> promised but with many fewer keystrokes.
>
> I don't care about angle brackets. I don't care about superscripts. The
> only thing that interests me with regards to Content MathML, is the fact
> that it is, at a fundamental level, a LISP where symbols are selected from
> URI/RDF/XML/MathML namespaces. Granted, OpenMath/MathML namespaces are
> naturally defined to be equivalent, but to apply to XML/QNames as well,
> then you need a QName to URI mapping. I've seen two of these, the
> "{NS}NAME" method (think Java/Ruby/Python) and the "NS::NAME" method (think
> JavaScript/E4X), the first one fails to produce a valid URI, but the second
> method does produce a valid URI, so that's what I've been using in my
> tools. The point is that URIs are already a carefully controlled resource,
> and so they are much more open than LISP's traditional filesystem based
> package system, or any other system I've seen.
>
> Just in the interest of full disclosure, there are closed, commercial
> systems out there that do trans-language compilation, like the kind I'm
> currently developing. https://www.semanticdesigns.com/ is an example of a
> corporation involved in such a business. But I whole-heartedly believe that
> the future of open-source software depends on having such tools available
> as open source tools. This is starting to sound like a rant, so I will stop
> it here.
>
> Actually, I changed my mind. I still have to have a Content vs.
> Presentation debate. As I said earlier, I agree that Presentation MathML
> has failed, but that's because it's a failed viewpoint. Math isn't symbols,
> it's semantics. From the beginning, MathML should have been about Content,
> not Presentation. I think if we had focused on Content all along, then we
> probably wouldn't be having this conversation now.
>
> Regards,
> Andrew Robbins
>
>
> On Fri, Apr 1, 2016 at 7:21 PM, Peter Murray-Rust <pm286@cam.ac.uk> wrote:
>
>> I write as a chemist who has tried to do the same thing with Chemistry
>> (CML, Chemical Markup Language). I have been inspired by what I see as the
>> success of MathML and do not regard it as a failure. I am particularly
>> interested in Content MathML as computable maths.
>>
>> The reality seems to be that it takes a generation for many of these
>> ideas to be implemented. in 1998 SVG seemed to be the obvious way of doing
>> graphics, but after 5 years it looked close to death. After 15 years it's
>> become universal.
>>
>> CML is used by a small number of enthusiasts. The chemical software
>> manufacturers don't care because they only care about the pharma industry
>> and instruments. So we strugle on with a number of ad hoc broken
>> representations of chemistry, which are still primarily graphical. There is
>> almost no chemistry for blind people.
>>
>> The real problem is semantics. At the moment the world doesn't care. They
>> will have to in the future. IoT demands semantics. You cannot compute
>> pictures. Binding semantics to maths and chemistry is hard but it will have
>> to come. I'd guess that people will need semantic math in 5 years and
>> chemistry in 15.
>>
>> If you let the world be driven by browser manufacturers and publishers
>> you will get a sighted-human vision of maths and science. The IoT won't
>> need browsers.
>>
>> It WILL need semantic maths.
>>
>>
>> On Fri, Apr 1, 2016 at 11:10 PM, Paul Topping <pault@dessci.com> wrote:
>>
>>> Hi,
>>>
>>> Peter Krautzberger of MathJax fame, recently posted this on his own blog:
>>>
>>> MathML is a failed web standard
>>> https://www.peterkrautzberger.org/0186/
>>>
>>> Obviously, he presents some challenges to the MathML standard and its
>>> community. I felt that I had to respond:
>>>
>>> Response to Peter Krautzberger's "MathML is a failed web standard"
>>> http://bit.ly/1ZLfCF8
>>>
>>> I hope this exchange prompts some serious dialog.
>>>
>>> Paul Topping
>>>
>>> Design Science, Inc.
>>> "How Science Communicates"
>>> Makers of MathType, MathFlow, MathPlayer, Equation Editor
>>> http://www.dessci.com
>>>
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Peter Murray-Rust
>> Unilever Centre, Dep. Of Chemistry
>> University of Cambridge
>> CB2 1EW, UK
>> +44-1223-763069
>>
>
>
## Re: MathML is dead, long live MathML
Source: www-math@w3.org Mail Archives • Peter Krautzberger (peter.krautzberger@mathjax.org) • April 08, 2016 • Permalink
Hi Daniel,
Could you let me know once you've decided on a venue for discussion? I'd be
happy to join in.
Peter.
On Thu, Apr 7, 2016 at 8:05 PM, Daniel Kinzler <daniel@brightbyte.de> wrote:
> Am 07.04.2016 um 20:00 schrieb Moritz Schubotz:
> > Hi Daniel,
> >
> > Ok. Let's discuss!
>
> Great! But let's keep the discussion in one place. I made a mess by
> cross-posting this to two lists, now it's three, it seems. Can we agree on
> <wikitech-l@lists.wikimedia.org> as the venue of discussion? At least for
> the
> discussion of MathML in the context of Wikimedia, that would be the best
> place,
> I think.
>
> -- daniel
>
>
## Improved FMath MathML Formula and FMath Editor - Release 2.1.2
Source: FMath • Ionel Alexandru (noreply@blogger.com) • April 08, 2016 • Permalink
Hi,
I have a new free to use version for "FMath MathML Formula" components and for "FMath MathML Editor":
• Now it is possible to tweak the way the root is dislayed for "msqrt" and "mroot" elements. I added the attribute "thickness". Also I added the attribute "sqrtthickness" in "mstyle" element.
• 30% 50% 80% 100% 120%
• For java version I have migrate the xml reader to jdom2.0 library.
• If you need help to use or if you find bugs which you want to be solved, let me know.
regards
Ionel Alexandru
## Feeds
Planet MathML features:
If you own a blog with a focus on MathML, and want to be added or removed from this aggregator, please get in touch with Bert Bos at bert@w3.org. | 2016-05-05 16:18:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6120549440383911, "perplexity": 6072.726451680238}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860127870.1/warc/CC-MAIN-20160428161527-00043-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/rate-of-decrease.236197/ | # Rate of decrease
This question was in the exam I just sat, it was of a type I hadn't practiced, and I'd like someone to check my answer, as if it's wrong then I'm almost certain to have failed.
At 7 a.m. I made a cup of tea; after adding some milk it is about 90 C. When I left at 7:30 a.m. the tea is still drinkable at about 40 C. When I get back home at 8 a.m. the tea has cooled to 30 C.
Assume that the rate of cooling of the tea is proportional to the temperature difference of the teas and the temperature of the house. Assume also that the temperature of the house is constant.
What is the temperature of the house?
quantumdude
Staff Emeritus
Gold Member
How about showing your work? It would make it a lot easier for us to spot a wrong turn if you would show us the route you took!
Hmm I only got to take the question paper home, not the answer paper. I started with putting dT/dt = k(T-h) where h was the temperature of the house. Then I seperated the variables, integrated and took the exponentials which gave me (T-h) = ce^kt where c is a constant. Then I plugged in the values given in the question which gave me a set of silmultaneous equations to solve, and I ended up having to form a quadratic in e^k which gave me k = 0 and k = ln 1/5. I took k = 1/5 with the reason supplied that k < 0. This also supplied me with a value for c, namely 62 1/2. Plugging those values into my set of linear equations gave me h = -27.5 and from there I gave an argument centred around the fact that I should have been using |T-h| in place of (T-h) as to why h = 27.5
tiny-tim
Homework Helper
Plugging those values into my set of linear equations gave me h = -27.5 and from there I gave an argument centred around the fact that I should have been using |T-h| in place of (T-h) as to why h = 27.5
Hi Gwilim!
Yes, 27.5 is right!
You'll lose a few marks for the |T- h| stuff … it should have worked out fine with (T - h).
(And you could have got a linear equation in (90 - h) instead of a quadratic in k, if you'd just squared one of the e^kt equations and subtracted it from the other).
But you've definitely got most of the marks for that question!
I hope the others were as good! | 2022-05-25 23:36:14 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8045936822891235, "perplexity": 527.32002851471}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662594414.79/warc/CC-MAIN-20220525213545-20220526003545-00510.warc.gz"} |
https://www.taylorfrancis.com/chapters/mono/10.3109/9780203340943-11/introduction-blood-cancers-tariq-mughal-tariq-mughal-john-goldman-john-goldman-sabena-mughal-sabena-mughal | ## ABSTRACT
The simplest definition of hematological cancers is that they are cancers which arise from a single blood cell. Since all blood cells are produced by a process called hematopoiesis (“heme” comes from the Greek word haema or “ α ιc μα” for blood and “ π ο í ησιc ” or poiesis means creation or formation), cancers such as leukemias, lymphomas, and myelomas are often referred to as blood cancers. The cancer type usually refers to the organ or the specific type of cell where cancer originates. To distinguish blood cancer from other forms of cancers, they are sometimes referred to as “liquid tumors” since they typically do not form lumps or masses (“tumors”). In contrast, cancers arising from all other cells, which typically form masses, are called “solid tumors.” | 2022-08-15 15:49:55 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8982390761375427, "perplexity": 4986.478778789562}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572192.79/warc/CC-MAIN-20220815145459-20220815175459-00745.warc.gz"} |
https://eccc.weizmann.ac.il/report/2010/012/ | Under the auspices of the Computational Complexity Foundation (CCF)
REPORTS > DETAIL:
### Revision(s):
Revision #1 to TR10-012 | 2nd June 2010 02:28
#### Matching Vector Codes
Revision #1
Authors: Zeev Dvir, Parikshit Gopalan, Sergey Yekhanin
Accepted on: 2nd June 2010 02:28
Keywords:
Abstract:
An $(r,\delta,\epsilon)$-locally decodable code encodes a $k$-bit message $x$ to an $N$-bit codeword $C(x),$ such that for every $i\in [k],$ the $i$-th message bit can be recovered with probability $1-\epsilon,$ by a randomized decoding procedure that reads only $r$ bits, even if the codeword $C(x)$ is corrupted in up to $\delta N$ locations. Recently a new class of locally decodable codes, based on families of vectors with restricted dot products has been discovered. We refer to those codes as Matching Vector (MV) codes. Several families of $(r,\delta,\Theta(r\delta))$-locally decodable MV codes have been obtained. While codes in those families were shorter than codes of earlier generations, they suffered from having large values of $\epsilon=\Omega(r\delta).$ Codes with constant query complexity could only tolerate tiny amounts of error, and no MV codes of super-constant number of queries capable of tolerating a constant fraction of errors were known to exist. In this paper we develop a new view of matching vector codes and uncover certain similarities between MV codes and classical Reed Muller codes. Our view allows us to obtain a deeper insight into power and limitations of MV codes.
Specifically, we show that existing families of MV codes can be enhanced to
tolerate a nearly $1/8$ fraction of errors, independent of the value
of $r.$ Such enhancement comes at a price of a moderate increase in
the number of queries.
Our construction yields the first families of matching vector
codes of super-constant query complexity that can tolerate a
constant fraction of errors. Our codes are shorter than Reed Muller
LDCs for all values of $r\leq \log k / (\log \log k)^c,$ for some
constant $c.$
On the lower bound side we show that any MV code encodes messages of length $k$ to codewords of length at least $k2^{\Omega(\sqrt{\log k})}.$ Therefore MV codes do not improve upon Reed Muller locally decodable codes for $r\geq (\log k)^{\Omega(\sqrt{\log k})}.$
### Paper:
TR10-012 | 27th January 2010 12:47
#### Matching Vector Codes
TR10-012
Authors: Zeev Dvir, Parikshit Gopalan, Sergey Yekhanin
Publication: 27th January 2010 23:26
Keywords:
Abstract:
An $(r,\delta,\epsilon)$-locally decodable code encodes a $k$-bit message $x$ to an $N$-bit codeword $C(x),$ such that for every $i\in [k],$ the $i$-th message bit can be recovered with probability $1-\epsilon,$ by a randomized decoding procedure that reads only $r$ bits, even if the codeword $C(x)$ is corrupted in up to $\delta N$ locations. Recently a new class of locally decodable codes, based on families of vectors with restricted dot products has been discovered. We refer to those codes as Matching Vector (MV) codes. Several families of $(r,\delta,\Theta(r\delta))$-locally decodable MV codes have been obtained. While codes in those families were shorter than codes of earlier generations, they suffered from having large values of $\epsilon=\Omega(r\delta).$ Codes with constant query complexity could only tolerate tiny amounts of error, and no MV codes of super-constant number of queries capable of tolerating a constant fraction of errors were known to exist. In this paper we develop a new view of matching vector codes and uncover certain similarities between MV codes and classical Reed Muller codes. Our view allows us to obtain a deeper insight into power and limitations of MV codes.
Specifically, we show that existing families of MV codes can be enhanced to
tolerate a nearly $1/8$ fraction of errors, independent of the value
of $r.$ Such enhancement comes at a price of a moderate increase in
the number of queries.
Our construction yields the first families of matching vector
codes of super-constant query complexity that can tolerate a
constant fraction of errors. Our codes are shorter than Reed Muller
LDCs for all values of $r\leq \log k / (\log \log k)^c,$ for some
constant $c.$
On the lower bound side we show that any MV code encodes messages of length $k$ to codewords of length at least $k2^{\Omega(\sqrt{\log k})}.$ Therefore MV codes do not improve upon Reed Muller locally decodable codes for $r\geq (\log k)^{\Omega(\sqrt{\log k})}.$
ISSN 1433-8092 | Imprint | 2021-09-23 01:50:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4563687741756439, "perplexity": 935.8266971781442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057416.67/warc/CC-MAIN-20210923013955-20210923043955-00320.warc.gz"} |
https://pytorch.org/docs/stable/generated/torch.linalg.lstsq.html | # torch.linalg.lstsq¶
torch.linalg.lstsq(A, B, rcond=None, *, driver=None) -> (Tensor, Tensor, Tensor, Tensor)
Computes a solution to the least squares problem of a system of linear equations.
Letting $\mathbb{K}$ be $\mathbb{R}$ or $\mathbb{C}$, the least squares problem for a linear system $AX = B$ with $A \in \mathbb{K}^{m \times n}, B \in \mathbb{K}^{m \times k}$ is defined as
$\min_{X \in \mathbb{K}^{n \times k}} \|AX - B\|_F$
where $\|-\|_F$ denotes the Frobenius norm.
Supports inputs of float, double, cfloat and cdouble dtypes. Also supports batches of matrices, and if the inputs are batches of matrices then the output has the same batch dimensions.
driver chooses the LAPACK/MAGMA function that will be used. For CPU inputs the valid values are ‘gels’, ‘gelsy’, ‘gelsd, ‘gelss’. For CUDA input, the only valid driver is ‘gels’, which assumes that A is full-rank. To choose the best driver on CPU consider:
• If A is well-conditioned (its condition number is not too large), or you do not mind some precision loss.
• For a general matrix: ‘gelsy’ (QR with pivoting) (default)
• If A is full-rank: ‘gels’ (QR)
• If A is not well-conditioned.
• ‘gelsd’ (tridiagonal reduction and SVD)
• But if you run into memory issues: ‘gelss’ (full SVD).
cond is used to determine the effective rank of the matrices in A when driver is one of (‘gelsy’, ‘gelsd’, ‘gelss’). In this case, if $\sigma_i$ are the singular values of A in decreasing order, $\sigma_i$ will be rounded down to zero if $\sigma_i \leq \text{cond} \cdot \sigma_1$. If cond= None (default), cond is set to the machine precision of the dtype of A.
This function returns the solution to the problem and some extra information in a named tuple of four tensors (solution, residuals, rank, singular_values). For inputs A, B of shape (*, m, n), (*, m, k) respectively, it cointains
• solution: the least squares solution. It has shape (*, n, k).
• residuals: the squared residuals of the solutions, that is, $\|AX - B\|_F^2$. It has shape equal to the batch dimensions of A. It is computed when m > n and every matrix in A is full-rank, otherwise, it is an empty tensor. If A is a batch of matrices and any matrix in the batch is not full rank, then an empty tensor is returned. This behavior may change in a future PyTorch release.
• rank: tensor of ranks of the matrices in A. It has shape equal to the batch dimensions of A. It is computed when driver is one of (‘gelsy’, ‘gelsd’, ‘gelss’), otherwise it is an empty tensor.
• singular_values: tensor of singular values of the matrices in A. It has shape (*, min(m, n)). It is computed when driver is one of (‘gelsd’, ‘gelss’), otherwise it is an empty tensor.
Note
While X = A.pinv() @ B, this function computes the solution in a faster and more numerically stable way than performing the computations separately.
Warning
The default value of rcond may change in a future PyTorch release. It is therefore recommended to use a fixed value to avoid potential breaking changes.
Parameters
• A (Tensor) – lhs tensor of shape (*, m, n) where * is zero or more batch dimensions.
• B (Tensor) – rhs tensor of shape (*, m, k) where * is zero or more batch dimensions.
• rcond (float, optional) – used to determine the effective rank of A. If rcond= None, rcond is set to the machine precision of the dtype of A times max(m, n). Default: None.
Keyword Arguments
driver (str, optional) – name of the LAPACK/MAGMA method to be used. If None, ‘gelsy’ is used for CPU inputs and ‘gels’ for CUDA inputs. Default: None.
Returns
A named tuple (solution, residuals, rank, singular_values).
Examples:
>>> a = torch.tensor([[10, 2, 3], [3, 10, 5], [5, 6, 12]], dtype=torch.float)
>>> a.unsqueeze_(0)
>>> b = torch.tensor([[[2, 5, 1], [3, 2, 1], [5, 1, 9]],
[[4, 2, 9], [2, 0, 3], [2, 5, 3]]], dtype=torch.float)
>>> x = torch.linalg.lstsq(a, b).solution
>>> torch.dist(x, a.pinverse() @ b)
tensor(2.0862e-07)
>>> sv = torch.linalg.lstsq(a, driver='gelsd').singular_values
>>> torch.dist(sv, a.svd().S)
tensor(5.7220e-06)
>>> a[:, 0].zero_()
>>> xx, rank, _ = torch.linalg.lstsq(a, b)
>>> rank
tensor([2]) | 2021-09-16 22:12:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 11, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6218611598014832, "perplexity": 2814.6057990383224}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053759.24/warc/CC-MAIN-20210916204111-20210916234111-00489.warc.gz"} |
https://schoollearningcommons.info/question/find-lcm-of-1-upon-alpha-beta-1-upon-beta-gama-1-upon-gama-alpha-20911264-13/ | ## find lCM of 1 upon alpha + beta + 1 upon beta + gama + 1 upon gama + alpha
Question
find lCM of 1 upon alpha + beta + 1 upon beta + gama + 1 upon gama + alpha
in progress 0
1 month 2021-08-19T00:01:34+00:00 2 Answers 0 views 0 | 2021-09-19 23:31:19 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9343606233596802, "perplexity": 10343.416377384963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056902.22/warc/CC-MAIN-20210919220343-20210920010343-00643.warc.gz"} |
https://cdkkinase.com/background-computing-exact-multipoint-lod-results-for-expanded-pedigrees-rapidly-turns/ | # Background Computing exact multipoint LOD results for expanded pedigrees rapidly turns
Background Computing exact multipoint LOD results for expanded pedigrees rapidly turns into infeasible as the amount of markers and untyped individuals enhance. problem instances. Bottom line We conclude which the Cluster Variation Technique is really as accurate as MCMC and generally is normally better. Our method is normally a promising option to approaches predicated on MCMC sampling. History The purpose of hereditary linkage analysis is normally to hyperlink phenotype to genotype. Pedigrees are collected in which a disease or characteristic is thought to possess a genetic element. The individuals in the pedigree are genotyped for a genuine variety of GNG7 markers over the chromosome. The markers are in known comparative recombination frequencies, in order that in the genotypes a distribution over inheritances could be inferred. Linkage from the characteristic to a particular area in the marker map after that is normally quantified with the level to that your distribution over inheritances as inferred in the markers can describe the noticed phenotypes in the pedigree. Parametric linkage evaluation In this specific article we compute linkage likelihoods using the parametric LOD score (log odds percentage) proposed by Morton [1]. The LOD score is the log percentage of the likelihoods of the hypothesis that the disease locus is definitely linked to the marker loci at a specific location and the hypothesis that it is unlinked to the marker loci. The LOD score requires specification of the disease rate of recurrence and penetrance ideals and therefore falls into the category of parametric rating functions. Precise computations Several methods for precise computations are in use. Lander et al. [2] launched a Hidden Markov Model (HMM) where the meiosis indicators are the unobserved variables. This method is linear in the number of loci, but exponential in 2and that correspond to the paternally and maternally inherited allele, indicated by the superscript and indicate whether the paternal or the maternal allele of respectively the father and the mother is inherited. The nodes and take the values 1,…, |= (… Figure ?Figure9A9A is a graphical representation of the following conditional probability tables in the Bayesian network: = (1, 2), then the only non-zero probabilities are is specified with the penetrance values f = (… and and respectively. In this example, we have chosen the following clusters as the collection of marginal distributions (i.e. cluster marginals) that are normalized and satisfy 313254-51-2 supplier all constistency constraints between overlapping marginal distributions. Following [28], if the upper bound is 313254-51-2 supplier at least twice differentiable and satisfies the following properties: 1. for all through the relation is the conditional probability 313254-51-2 supplier table that defines the coupling between the meiosis indicators in the Bayesian network. As both of these terms are known, together they define
$Q~ MathType@MTEF@5@5@+=feaafiart1ev1aaatCvAUfKttLearuWrP9MDH5MBPbIqV92AaeXatLxBI9gBaebbnrfifHhDYfgasaacH8akY=wiFfYdH8Gipec8Eeeu0xXdbba9frFj0=OqFfea0dXdd9vqai=hGuQ8kuc9pgc9s8qqaq=dirpe0xb9q8qiLsFr0=vr0=vr0dc8meaabaqaciaacaGaaeqabaqabeGadaaakeaadaWfGaqaaiabdgfarnaaBaaaleaacqaHXoqyaeqaaaqabeaacqGG+bGFaaaaaa@3164@$
. We can now define a distribution over trait locus inheritance vectors as follows:
$Q ‘ ( v l , G l , v T 313254-51-2 supplier , v l + 1 , G l + 1 | T ) 313254-51-2 supplier Q ? (.$ | 2022-08-10 07:44:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8233184218406677, "perplexity": 1861.768214042494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571150.88/warc/CC-MAIN-20220810070501-20220810100501-00186.warc.gz"} |
https://because0fbeauty.wordpress.com/2014/01/19/its-better-with-theory/ | ## It’s better with ‘theory’
It seems to me that adding the word theory tends to increase my interest in a subject. Consider puddles, they’re fun…but ‘Puddle Theory’ what’s that? Sounds intriguing.
Definition: An n-puddle is a n-dimensional manifold with boundary such that the boundary is the union of a sky and ground. The ground is a n-1 manifold with boundary equal to the n-2 boundary of the sky. The sky boundary resides within a hyperplane of dimension n-1.
We can use standard geometric structures on $\mathbb{R}^n$ because in general an n-puddle will not be smooth.
Definition: Define a splash to be a family of functions $F_{\alpha}:P\to P$ indexed by $[0,1]$. $F_{\alpha}$ need not be continuous, in fact will seldom be so.
It’s possible for $P$ to become disconnected during the splash. Splashes can be divided into volume-preserving and non-volume-preserving splashes. These are called dry and wet respectably.
Definition: A splash is dry if $\lim_{\alpha\to 1} F_{\alpha}\to id_P$.
It’s possible to compose a series of splashes, $F^1_{\alpha_1}, F^2_{\alpha_2},...F^k_{\alpha_k},...$ $k\in\mathbb{N}$ such that $F_1^k = F_0^{k+1}$. It was discovered by A. Child that for any such sequence there exists an integer N such that for all $n>N$ $F^n_{\alpha}$ is the empty splash. This is known as Child’s Theorem. We leave this as an exercise. | 2017-07-22 00:31:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 11, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8512738347053528, "perplexity": 1150.0393962794437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423839.97/warc/CC-MAIN-20170722002507-20170722022507-00637.warc.gz"} |
https://www.physicsforums.com/threads/commutator-proof.390121/ | # Commutator proof
## Homework Statement
Show $$\left[x,f(p)$$$$\right)]$$ = $$i\hbar\frac{d}{dp}(f(p))\right.$$
## Homework Equations
I can use $$\left[x,p^{n}$$$$\right)]$$ = $$i\hbar\\n\right.$$$$p^{n}\right.$$
f(p) = $$\Sigma$$ $$f_{n}$$$$p^{n}$$ (power series expansion)
## The Attempt at a Solution
I started by expanding f(p) to the power series which makes
$$\left[x,\Sigma\\f_{n}\\p^{n}$$$$\right)]$$
and I know I must use the commutator identity [A, BC] = [A,B]C + B[A,C]
but the power series cannot be split up into two products(BC) ? So I'm not sure how to go on
Related Advanced Physics Homework Help News on Phys.org
Hurkyl
Staff Emeritus
Gold Member
and I know I must use the commutator identity [A, BC] = [A,B]C + B[A,C]
How do you know that?
In a text book it says it can be shown using that equation
Trying a different method:
[x, f(p)] = [x,$$\sum_{n}\\f_{n}p^{n}$$] = [x,fnpn + $$\sum_{n-1}\\f_{n}p^{n}$$]
using [A, B+C] = [A,B] + [A,C]
= [x, fnpn] + [x, $$\sum_{n-1}\\f_{n}p^{n}$$]
using [A, BC] = C[A,B] + B[A,C]
= fn[x, pn] + pn[x, fn] + [x, $$\sum_{n-1}\\f_{n}p^{n}$$]
using [x, pn] = i$$\hbar$$npn-1
= fni$$\hbar$$npn-1 + pn[x, fn] + [x, $$\sum_{n-1}\\f_{n}p^{n}$$]
[x, fn] = 0 as fn is a const.
= fni$$\hbar$$npn-1 + [x, $$\sum_{n-1}\\f_{n}p^{n}$$]
am I on the right track?
Last edited:
Hurkyl
Staff Emeritus
Gold Member
I'm curious why you used
[A, rC] = [A,r]C + r[A,C]
to pull out a scalar, rather than just using
[A, rC] = r [A,C]
I'm also curious why you stopped using
[A, B + C] = [A,B] + [A,C] | 2020-02-23 11:50:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35769277811050415, "perplexity": 11776.665159366463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145767.72/warc/CC-MAIN-20200223093317-20200223123317-00195.warc.gz"} |
http://analysisters.blogspot.com/2015/03/ | ## Monday, March 16, 2015
### Distributional Calculus Part 4: Properties of Distributions
So, in the previous post, we found that distributions gave an alternate way to characterize functions; that is, by mapping from a set of test functions instead of from a compact set $X$. Test functions turn out to be completely central to how operations are performed! In fact, I'll spoil the entire content of this post by saying any operation on $f$ can be 'moved' to apply on the set of test functions instead.
But before that, let's list some basic properties which are more evocative of elementary real analysis than anything else. For a distribution $\langle f, \phi\rangle$:
• Linearity, i.e. $f(a\phi_1+\phi_2)$ for any real constant $a$ and test functions $\phi_1,\ \phi_2$;
• There exists a sequence of test functions $\{\phi_n\}$ such that $\phi_n \to f$
All of these properties are necessary, but we'll be making the most use out of the second one. Recall the super-useful integral characterization of a distribution $$T_f(\phi)=\int_{\mathbb{R}}f(x)\phi(x)\,dx?$$ That can only be expressed if $f$ is a function with no weird generalized properties. Yet now, if we consider $f$ as the limit of a sequence of test functions, $\phi_n$ is a classically defined function for all $n$ and it is now possible to write $$T_f(\phi)=\lim_{n\to\infty}\int_{\mathbb{R}}\phi_n(x)\phi(x)\;dx$$ for any generalized function $f$.* Great! Now we can look at any and all distributions the easy way.
The real magic starts when we attempt to translate the distribution. Recall that any function can be translated $y$ units by taking $f(x-y)$ instead of $f(x)$; the same thing can be done for generalized functions by considering $\lim_{n\to\infty}\langle \phi_n(x-y),\phi(x)\rangle$. (Let's define the translation function tau as $\tau_y\phi(x)=\phi(x-y)$.) Using some simple $u$-substitution magic, \begin{align}\langle\tau_yT_f,\phi\rangle &=\lim_{n\to\infty}\langle \phi_n(x-y),\phi(x)\rangle\\&=\lim_{n\to\infty}\int_{\mathbb{R}}\phi_n(x-y)\phi(x)\;dx;\qquad u = x-y\\&=\lim_{n\to\infty}\int_{\mathbb{R}}\phi_n(u)\phi(u+y)\;du\\&=\langle T_f,\tau_{-y}\phi\rangle.\end{align} We have essentially found that any distribution can be translated by applying the opposite translation to every test function in $\mathcal{D}$. To reiterate:$$\langle\tau_yT_f,\phi\rangle =\langle T_f,\tau_{-y}\phi\rangle.$$ Hooray!
Differentiating a distribution works in much the same way as translation in that the operation gets pawned off onto the test function but with an extra minus sign. However, it does involve an extra technique: integration by parts. I assume that nobody who is reading this is unfamiliar with the practice, but, for the sake of cute mnemonics, a friend of my fiancé's refers to $$\int u\;dv = uv - \int v \;du$$as "sudv uv svidoo."
Let's take a moment to appreciate how adorable that is.
The actual fancy differentiation trick can be proved in essentially one integration-by-parts step:\begin{align}\left\langle \frac{d}{dx}T_f,\phi\right\rangle &= \lim_{n\to\infty}\int_\mathbb{R}\left(\frac{d}{dx}\phi_n(x)\right)\phi(x)\;dx\\&= \lim_{n\to\infty}-\int_\mathbb{R}\phi_n(x)\left(\frac{d}{dx}\phi(x)\right)\;dx \\ &=\left\langle T_f, -\frac{d}{dx}\phi(x)\right\rangle\end{align}(brownie points if you've already figured out what happened to the $uv$ term). This identity is essential for a crazy number of distributional calculus proofs.
For example, we can directly use this identity to prove the Dirac delta function is the distributional derivative of the Heaviside function in two seconds. Let $T_H= \int_0^\infty \phi(x)\,dx$ represent the Heaviside distribution. Now, from the above identity, we conclude $$\left\langle \frac{d}{dx}T_H(x),\phi\right\rangle=\left\langle T_H(x),\frac{d}{dx}\phi\right\rangle=-\int_0^\infty \phi(x)\,dx=\phi(0)-\phi(\infty)=\phi(0),$$ that is, because $\phi$ is zero at infinity. Yet $\phi(0)=\langle \delta, \phi\rangle$ by definition! We're done here.
As super awesome as that is, there should be some material on how all this pertains to weak solutions of DEs up on Thursday. Woooo! This is basically my definition of a party!
* The MCT happened here. Shhh.
## Wednesday, March 11, 2015
### Distributional Calculus Part 3: Distributions
Sorry for the delay, guys! I just started a rather demanding full-time job, so it may be a bit hard to keep up the quality of these posts. Let's hope it gets easier...
Today brings us to the most important definition in distributional calculus: the distributions themselves.
Here's the formal definition using the set of test functions $\mathcal{D}(\mathbb{R})$ we defined earlier:
Any linear functional $T: \mathcal{D}(\mathbb{R}) \to \mathbb{R}$ a distribution. In addition, for a locally integrable function $f(x):X\to\mathbb{R}$, a corresponding distribution can be defined by $$T_f(\phi)=\int_{\mathbb{R}}f(x)\phi(x)\;dx.$$We usually write $\langle T, \phi\rangle$ instead of $T(\phi)$ and call the set of all distributions of this type $\mathcal{D}'(\mathbb{R})$.
There are only two things needed to truly understand this definition; how to take the average of a continuous function and what test functions are. Check out the integrand. Multiplying the target function $f(x)$ by each individual test function $\phi(x)$ has the effect of scaling $f(x)$ at every point---in particular, the integrand zeros out outside the support of $\phi(x)$, while the other points are weighted depending on $\phi(x)$. Hence every individual component of the definition a weighted average of $f(x)$ over a compact set. (Strichartz directly compares this to finding the temperature of a room with a thermometer: it won't display the temperature at one point, rather the average temperature of some portion of the area.) If each of these weighted averages are known for every existing $\phi(x)$, that is what defines the distribution.
Defining distributions in this way lets us account for objects that we think look like functions, but actually aren't. The Dirac delta function is the perfect example---the infinite value at zero ruins anything, so it isn't really a function*. However, the integral of $\delta(x)$ is bounded no matter what test function we weight it by, so the 'average' exists over every possible range, meaning $\delta(x)$ is a distribution. In particular, $$\langle \delta,\phi\rangle = \phi(0).$$
It would be useful to go over a couple useful properties of distributions, starting with the issue of consistency. This was supposed to happen today! Unfortunately, I'm dead tired and need to go lie down forever. Let's leave the important properties for next week.
* The Dirac delta function is to functions what killer whales are to whales... a complete misnomer.
## Wednesday, March 4, 2015
### Distributional Calculus Part 2: Compact support and test functions
Our goal with this series is to provide a resource for basic distribution theory that includes all of the formal definitions, justifications and theorems with as little hand-waving as possible, while also fully explaining these definitions through appeals to intuition.The following is written assuming an audience who cares or wants to care about mathematical formality but needs some intuitive background in order to learn quickly.
A few minor definitions are needed to understand what distributions represent. We define a set $X$ and function $\phi: X\to\mathbb{R}$ for the rest of this post.
The first two definitions are very simple.
Definition: Suppose $\phi$ is in $L_p(\mathbb{R}^n)$ and $X$ is open. We say $\phi$ is locally integrable if, for all compact subsets $A$ of $X$,
$$\int_A |\phi(x)|\;dx< \infty.$$The space of all such functions is called $L_p^{loc}$.
The formal definition of compactness can be found here. For those who haven't studied real analysis, a subset of $\mathbb{R}^n$ is compact if and only if it is closed and bounded.
Definition: The support of $\phi$, written supp($\phi$), is the closure of the set of points in X where f is non-zero. That is,
$$\operatorname{supp}(\phi) = \{x\in X \,|\, \phi(x)\ne 0\}.$$(Topologists use a slightly different definition.)
From here, a slightly more specific property can be considered:
Definition: A function $\phi$ is said to have compact support if $supp(\phi)$ is compact.
It's hard to come up with a compactly supported function without specifying that the complement of the support is zero. As a result, most easily representable test functions, even the continuous and infinitely differentiable ones, are defined piecewise. We consider a few examples.
Note that compact support can also be interpreted as the function vanishing outside a compact set; continuous functions are always nonzero on an open set, so taking the closure in the definition of support is necessary.
One of the simplest examples of a compactly supported function is $\chi_A(x)$, where $A$ is a compact set and
$$\chi_A(x)=\left\{\begin{array}{ll}1&x\in A\\0& x \notin A.\end{array}\right.$$This is the identity on $A$ and zeros out everything else. In fact, the composition of $\chi_A(x)$ with any function on $x$ will have compact support as well. Here are a couple examples:
(Test yourself! Is H(x) from the previous post compactly supported? Are B-splines?)
This example leads well into the last definition.
Definition: A function $\phi$ is a test function if it has compact support and is infinitely differentiable (i.e., in $C^\infty$). We refer to the space of all test functions on a set $X$ as $\mathcal{D}(X)$.
This is a crucial definition! It's weird for a function to have compact support but to also be infinitely differentiable, so let's generate a couple examples. Consider
$$\psi(x)=\left\{\begin{array}{ll}e^{-\frac{1}{1-x^2}}&|x|<1\\0& |x|\geq 1.\end{array}\right.$$This is a lot smoother than the previous function, and looks like a bump:
A slightly more complicated example would be
$$u_A(x)=\int_{\mathbb{R}^n}\chi_A(x-y)u(y)\;dy$$where $u(x)$ is a locally integrable function in $\mathbb{R}^n$ (the technique used to generate this example is called Sobolev's mollification method). If you're familiar with convolution already, it should not be difficult to prove this function is compactly supported. It looks like someone built a sandcastle shaped like a regular $\chi$ function and a wave rolled over it:
Many operations, such as translation and scaling, preserve infinite differentiability and compact support. Linear combinations of test functions and products of test functions are also test functions themselves.
(Test yourself! Can test functions be analytic?)
So, our point---these definitions are necessary in order to understand what distributions are. We'll go into this in detail next week.
## Monday, March 2, 2015
### Distributional Calculus Pt. 1: What is it?
In high school, despite being told I was "good at math" for being able to perform simple algebra, I was terrified of calculus. It was a scary word---"calculus"---and I didn't want to be outed as an impostor who wasn't ever good at math at all. That's how I ended up enrolled in the easiest calculus course offered at my high school, a place where most people took AP Calc. That's also how I ended up bored with the slow pace and lack of formality of my first calculus course, and transferred to AP Calc halfway through the year. That's also when I developed the unmitigated desire to become a mathematician; the calculus floodgates had been opened, and the only cure was more calculus. Calculus was followed by real analysis. Real analysis was followed by functional analysis.
Which brings us here... to the ultimate form of calculus. But why? Why does such a thing exist?
The catalyst for developing a more general form of calculus came when some people, such as physicists and engineers, decided it was okay to consider derivatives of non-differentiable functions. We consider the Heaviside step function ($H(x)$) as the quintessential example: this function is constant and hence has a zero derivative everywhere except at the jump discontinuity, where the classical definition of the derivative breaks down. One could reason that, because the derivative at a point is the slope of the tangent line, and the tangent line at the jump is a vertical line with infinite slope, $H'(0)$ is infinity. We therefore understand the derivative of the Heaviside function to be zero everywhere except at the jump, where it's infinite. That's the Dirac delta function ($\delta(x)$)!
Generally---and I apologize for stereotyping here---generally, physicists and engineers are totally okay with this interpretation and accept it as fact, but mathematicians are upset by the hand-waving. It particularly bothered Sergei Sobolev and Laurent Schwartz, whose work lead to the first mathematical justification of these ideas. This formalization of the engineers' and physicists' approaches grew to be called distributional calculus.
Distributions (also called generalized functions) define a broad set of function-like objects including, but not limited to, classical functions (hence, generalized functions). Distributional calculus is the study of calculus on this larger class of objects. This certainly allows for a formal reimagining of the Heaviside example given above: the Heaviside function is nondifferentiable at a point, but its distribution is differentiable everywhere! It can also be used to describe "weak" solutions of DEs. So, if you're like me and can't get enough calculus, it's just... more. More calculus.
Distributional calculus is also a great demonstration of the central public-relations conflict of real/functional/complex analysis: it's both the coolest thing anyone has done, ever, but also completely inaccessible to laypeople. In particular, the notation gets very intimidating, very fast. (Converting any idea from functions to distributions requires several million extra symbols.)
Our goal with this series is to provide a resource for basic distribution theory that includes all of the formal definitions, justifications and theorems with as little hand-waving as possible, while also fully explaining these definitions through appeals to intuition. There are already great books that deal with the formal side of distribution theory (Haroske and Triebel, 2008; Friedlander and Joshi, 1998) and great books that eschew formality in order to be accessible to physicists and engineers (Strichartz, 2003). These books are much better than a series of blog posts---that's why the authors of the books get paid. However, we adopt a different approach for our audience: the first set of textbooks caters to analysts, the second to people who don't care for analysis, while we assume the audience cares or wants to care about mathematical formality but needs some intuitive background in order to learn quickly.
Without further exposition, here's the game plan for March:
• Week 1 & 2: Basic definitions (compact support, test functions, distributions, distributional derivatives, all that good stuff)
• Week 3: The big examples
• Week 4: A couple important theorems
• Week 5 (March 31st): Recent papers /books for suggested further reading
Lastly, especially if you're a non-mathematician who doesn't care about overt formality, I cannot recommend the Strichartz enough. It's hilarious! I definitely got something out of it despite being peeved at the lack of formal analysis.
[1] Haroske, Dorothee, and Hans Triebel. Distributions, Sobolev spaces, elliptic equations. European Mathematical Society, 2008.
[2] Friedlander, Friedrich Gerard, and Mark Suresh Joshi. Introduction to the Theory of Distributions. Cambridge University Press, 1998.
[3] Strichartz, Robert S. A guide to distribution theory and Fourier transforms. Singapore: World Scientific, 2003. | 2018-05-22 09:36:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.823692798614502, "perplexity": 523.9077167743006}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864657.58/warc/CC-MAIN-20180522092655-20180522112655-00584.warc.gz"} |
http://mathhelpforum.com/pre-calculus/118474-power-functions-homogeneity-print.html | # Power functions and homogeneity
• December 4th 2009, 09:13 AM
MathBane
Power functions and homogeneity
The question:
Under certain conditions, tsunami waves encountering land will develop into bores. A bore is a surge of water much like what would be expected if a dam failed suddenly and emptied a reservoir into a river bed. In the case of a bore traveling from the ocean into a dry river bed, one study shows that the velocity V of the tip of the bore is proportional to the square root of its height h. This is expressed in the formula below, where k is a constant.
$V = kh^{0.5}$
(a) A bore travels up a dry river bed. How does the velocity of the tip compare with its initial velocity when its height is reduced to one third of its initial height? (Round your answer to two decimal places.)
The velocity changes by a factor of [____].
(b) How does the height of the bore compare with its initial height when the velocity of the tip is reduced to one quarter of its initial velocity?
The height is reduced to [____] of the initial height.
(c) If the tip of one bore surging up a dry river bed is four times the height of another, how do their velocities compare? (Round your answer to two decimal places.)
The velocity of the first is [____] times that of the other.
(I'm not even sure what to do here. How do I use the homogeneity of power functions to help me find the answers when no values are given?)
• December 5th 2009, 04:03 AM
HallsofIvy
Quote:
Originally Posted by MathBane
The question:
Under certain conditions, tsunami waves encountering land will develop into bores. A bore is a surge of water much like what would be expected if a dam failed suddenly and emptied a reservoir into a river bed. In the case of a bore traveling from the ocean into a dry river bed, one study shows that the velocity V of the tip of the bore is proportional to the square root of its height h. This is expressed in the formula below, where k is a constant.
$V = kh^{0.5}$
(a) A bore travels up a dry river bed. How does the velocity of the tip compare with its initial velocity when its height is reduced to one third of its initial height? (Round your answer to two decimal places.)
The velocity changes by a factor of [____].
If the initial height is h, then $V_1= kh^{0.5}$. If the height is reduced to 1/3, then it is h/3 so $V_2= k(h/3)^{0.5}$. What is the ratio of $V_1$ to $V_2$?
Quote:
(b) How does the height of the bore compare with its initial height when the velocity of the tip is reduced to one quarter of its initial velocity?
The height is reduced to [____] of the initial height.
Now, you are given that $V= kh_1^{0.5}$ and that $V/4= kh_2^{0.5}$. What is the ratio of those two? What is the ratio of $h_1$ to $h_2$?
Quote:
(c) If the tip of one bore surging up a dry river bed is four times the height of another, how do their velocities compare? (Round your answer to two decimal places.)
The velocity of the first is [____] times that of the other. Really the same as (a) isn't it? $V_1= kh^{0.5}$ and $V_2= k(4h)^{0.5}$. What is the ratio of $V_2$ to $V_2$?
Quote:
(I'm not even sure what to do here. How do I use the homogeneity of power functions to help me find the answers when no values are given?) | 2015-08-02 02:41:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5452663898468018, "perplexity": 539.6033574168351}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988930.94/warc/CC-MAIN-20150728002308-00290-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://mailman.ntg.nl/pipermail/ntg-context/2011/060353.html | # [NTG-context] Missing space between digit and unit with the new unit-command
Fri Jul 1 08:15:09 CEST 2011
```Am 01.07.2011 um 00:33 schrieb yoraxe:
> Ok, thanks, for this minimal example it works. But what do I have to
>
> \unit{10^{-3} kilogram cubic meter}
>
> (or
> \unit{10^{-3} kgm²}
> )
>
> ? This does not work for me.
It’s “10e-3”, you can find a list a valid input in the manual [1] for the \digits
commands which is now included in \unit. | 2020-07-11 05:31:51 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8082917928695679, "perplexity": 10891.980789875752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655921988.66/warc/CC-MAIN-20200711032932-20200711062932-00378.warc.gz"} |
http://learnemc.com/shielding-theory | # Shielding Theory
Ask the average engineer on the street about controlling electromagnetic interference and the response will probably involve shielding. Virtually all high-speed electronic devices employ shielding in some form. Computers, cell phones, video games, industrial controls, automotive and avionic systems, etc., all typically come packaged in metal (or metalized) enclosures or have shields located directly over specific components on their printed circuit boards.
Shielded enclosures that are properly designed and installed can be a very effective means of attenuating radiated emissions and protecting products from external sources of interference. In fact, a metallic enclosure with no apertures, seams or cable penetrations can typically reduce radiated emissions and improve radiated immunity by 40 dB or more. In other words, even a poorly designed circuit board can meet EMC requirements if it is sealed in a metal box.
However, shielded enclosures are a poor substitute for good EMC design at the board level. Effective enclosures can add significant cost and weight to a product and a single breach of the enclosure (e.g. an unfiltered cable penetration) can completely eliminate any benefit the enclosure would otherwise provide. In many cases, a product in a poorly designed shielded enclosure will radiate more (or be more susceptible) than the same product without the enclosure.
Shields work by reflecting, absorbing or redirecting electric and/or magnetic fields. It is not always necessary for a shield to completely enclose a product in order to be effective. For example, partial shields are often utilized to redirect fields on or above a source circuit to isolate it from another circuit or to prevent coupling to cables or other unintentional antennas.
Choosing proper location, orientation and material for a shield requires a knowledge of the type of field being shielded and the objectives of the shield. The following sections will describe basic shielding theory and provide several examples of good shielding in various situations.
## Plane-Wave Shielding Theory
When an electromagnetic wave propagating in one material encounters another material with different electrical properties, some of the energy in the wave is reflected and the rest is transmitted into the new material. For example, consider the electromagnetic plane wave, Einc, incident upon an infinite slab of material as illustrated in Figure 1. The wave propagates in free space in the x direction until it strikes the material, which has intrinsic impedance, ηs.
Figure 1: Plane wave incident on a shielding material
The magnetic field in the plane wave is perpendicular to the electric field and has amplitude,
$|{H}_{inc}|=\frac{|{E}_{inc}|}{{\eta }_{0}}$ (1)
where ${\eta }_{0}=\sqrt{{\mu }_{0}}{{\epsilon }_{0}}}$ is the intrinsic impedance of free space (~377 ohms).
When the plane wave strikes the slab, a reflected wave, Eref, and a transmitted wave, Eslab, are created. The magnetic field in the shielding material is related to the electric field,
$|{H}_{slab}|=\frac{|{E}_{slab}|}{{\eta }_{s}}$ (2)
In addition, the boundary conditions on the surface at x=0 require that,
${E}_{x={0}^{-}}={E}_{x={0}^{+}}$ (3)
and
${H}_{x={0}^{-}}={H}_{x={0}^{+}}$ (4)
where the subscripts x=0- and x=0+ indicate the fields, just to the right or left of the x=0 surface. In order to satisfy Equations (1) through (4), the amplitude of the reflected field must satisfy the relation,
$|{E}_{ref}|=|{E}_{inc}|\text{\hspace{0.17em}}{\Gamma }_{E}$ (5)
where ΓE is the electric field reflection coefficient,
${\Gamma }_{E}=\frac{{\eta }_{s}-{\eta }_{0}}{{\eta }_{s}+{\eta }_{0}}$ (6)
The amplitude of the transmitted field, Eslab, is
$|{E}_{slab}|=|{E}_{inc}|\text{\hspace{0.17em}}{T}_{{E}_{1}}$ (7)
where
${T}_{{E}_{1}}=\frac{2{\eta }_{s}}{{\eta }_{s}+{\eta }_{0}}$ (8)
is the electric field transmission coefficient.
Note that as ηs gets closer to η0, the transmission coefficient increases and the reflection coefficient decreases. If ηs = η0, all of the incident field is transmitted.
If the material in Figure 1 is lossy, (i.e. σ≠0), the transmitted wave will decrease in amplitude as it propagates,
$|{E}_{slab}\left(x\right)|=|{E}_{slab}\left(x=0\right)|{e}^{-x}{\delta }}$ (9)
where δ is the skin depth of the material. For high-loss materials,
$\delta \approx 1}{\sqrt{\pi f\mu \sigma }}$ (10)
Figure 2: Plane wave incident on a finite thickness shielding material
Now consider the finite slab of shielding material illustrated in Figure 2. An incident field, Einc, strikes the surface of the shielding material. Some of the power in the field is reflected and some continues into the material. The part that penetrates into the material is attenuated before it strikes the second surface at x=t. At that point, once again some of the power is attenuated and some of the power is transmitted. If the attenuation is high, the power reflected at the second interface is absorbed and the field transmitted to the region of free space on the right of the slab is given by,
$|{E}_{trans}|=|{E}_{slab}\left(x=t\right)|\text{\hspace{0.17em}}{T}_{{E}_{2}}$ (11)
where
${T}_{{E}_{2}}=\frac{2{\eta }_{0}}{{\eta }_{0}+{\eta }_{s}}$ (12)
Combining (7), (8), (9), (11) and (12); we obtain an expression for the transmitted electric field in terms of the incident field,
$|{E}_{trans}|=|{E}_{inc}|\text{\hspace{0.17em}}\frac{2{\eta }_{s}}{{\eta }_{0}+{\eta }_{s}}\left(\frac{2{\eta }_{0}}{{\eta }_{0}+{\eta }_{s}}\right){e}^{-t}{\delta }}$ (13)
This expression applies to any shield material that is much thicker than a skin depth. Typically, the best plane-wave shields will be good conductors with a high conductivity, . For good conductors,
$\eta =\sqrt{\frac{j\omega \mu }{\sigma +j\omega \epsilon }}\approx \sqrt{\frac{j\omega \mu }{\sigma }}=\sqrt{\frac{\omega \mu }{\sigma }}{e}^{j\pi }{4}}$ (14)
For these materials, and Equation (13) reduces to,
$|{E}_{trans}|=|{E}_{inc}|\text{\hspace{0.17em}}\frac{4{\eta }_{s}}{{\eta }_{0}}{e}^{-t}{\delta }}$ (15)
If we define the shielding effectiveness of the slab to be,
$S.E.=20\mathrm{log}\frac{{E}_{inc}}{{E}_{trans}}$ (16)
then the shielding effectiveness of an infinite sheet of good conductor can be written in the form,
$S.E.=20\mathrm{log}\frac{{\eta }_{0}}{4{\eta }_{s}}\text{ }+\text{ }20\mathrm{log}{e}^{t}{\delta }}=R\left(\text{dB}\right)\text{ }+\text{ }A\left(\text{dB}\right)$ (17)
where the total shielding effectiveness is observed to consist of two terms. The reflection loss, R(dB), is the attenuation due to the reflection of power at the interfaces. The absorption loss, A(dB), is the attenuation due to power converted to heat as the wave propagates through the material. A web-based calculator for determining the plane-wave shielding effectiveness of various materials can be found at,
The reflection loss is independent of the thickness of the shield and depends entirely on the mismatch between the shield's intrinsic impedance and the intrinsic impedance of free space. The absorption loss is directly proportional to the thickness of the shield expressed in skin depths,
$A\left(\text{dB}\right)=20\mathrm{log}{e}^{-t}{\delta }}\approx 8.7\left(\frac{t}{\delta }\right)\text{\hspace{0.17em}}\text{dB}$ (18)
## Example 1: Calculating Shielding Effectiveness of Copper Foil
Calculate the shielding effectiveness of a sheet of 2-mil copper foil, σ = 5.7x107 S/m, at 100 MHz.
We start by calculating the skin depth in copper at 100 MHz,
${\delta }_{cu}=1}{\sqrt{\pi f\mu \sigma }}=1}{\sqrt{\pi \left({10}^{8}\right)\left(4\pi ×{10}^{-7}\right)\left(5.7×{10}^{7}\right)}}=6.7\text{\hspace{0.17em}}\text{μm}$ (19)
The material thickness (t = 2 mils = 50.8 μm) is clearly much greater than the skin depth so (17) can be used to calculate the shielding effectiveness. In fact, the absorption loss can be easily calculated as,
$A\left(\text{dB}\right)\approx 8.7\left(\frac{t}{\delta }\right)\text{\hspace{0.17em}}=8.7\left(\frac{50.8}{6.7}\right)=66\text{\hspace{0.17em}}\text{dB}$ (20)
To calculate the reflection loss, we need to determine the intrinsic impedance of copper at 100 MHz,
$|{\eta }_{cu@100\text{\hspace{0.17em}}MHz}|=\sqrt{\frac{2\pi f\mu }{\sigma }}=\sqrt{\frac{2\pi \left({10}^{8}\right)\left(4\pi ×{10}^{-7}\right)}{5.7×{10}^{7}}}=3.7×{10}^{-3}\text{\hspace{0.17em}}\Omega$ (21)
Then the reflection loss is quickly determined to be,
$R\left(\text{dB}\right)=20\mathrm{log}\frac{{\eta }_{0}}{4{\eta }_{s}}=20\mathrm{log}\frac{377}{4{\left(3.7×{10}^{-3}\right)}_{s}}=88\text{\hspace{0.17em}}\text{dB}$ (22)
The overall shielding effectiveness is the sum of the reflection loss and the absorption loss,
$S.E.=88\text{\hspace{0.17em}}\text{dB}+66\text{\hspace{0.17em}}\text{dB}\approx 154\text{\hspace{0.17em}}\text{dB}$ (23)
Note that virtually all of the incident power is reflected by the shield. 154 decibels is a very large ratio, suggesting that the transmitted power is smaller than the incident power by a factor of 1015. In practice, attenuations of this magnitude are neither realizable nor measurable. The largest realizable field strengths (without causing ionization of the air) are on the order of 106 V/m. The smallest detectable field strengths (using sensitive field probes) are on the order of 10-6 V/m. This represents a possible dynamic range of,
$20\mathrm{log}\frac{{10}^{6}}{{10}^{-6}}=240\text{\hspace{0.17em}}\text{dB}$ (24)
As a practical matter, most engineering test equipment has a maximum dynamic range of around 80 - 120 dB. Therefore any calculated attenuation or shielding effectiveness much higher than 100 dB implies the material is essentially impenetrable. A material with a calculated shielding effectiveness of 154 dB is essentially no better or worse than a material with a calculated value of 120 dB.
If the material in Fig. 2 is not thick relative to a skin depth, some of the energy that reflects off the second interface (at x=t) propagates back into the slab and is reflected off the inside of the first interface (at x=0+). This energy will then again strike the second interface and some fraction will be transmitted adding to the total energy transmitted and reducing the shielding effectiveness. The wave may bounce back and forth multiple times before attenuating to the point where it no longer contributes significantly to the transmitted field. If the absorption loss term in (17) is less than about 15 dB, the accuracy of the shielding effectiveness estimate is compromised by these multiple reflections.
For conductive materials that are electrically thin (i.e. t<<λ), we can adjust the expression for shielding effectiveness (17) by adding a third term to account for multiple reflections resulting in a general expression for plane-wave shielding effectiveness [1, 2],
$S.E.=20\mathrm{log}\frac{{\eta }_{0}}{4{\eta }_{s}}+20\mathrm{log}{e}^{t}{\delta }}+20\mathrm{log}|1-{e}^{-2t}{\delta }}|=R\left(\text{dB}\right)+A\left(\text{dB}\right)+B\left(\text{dB}\right)$ (25)
Note that the multiple reflection loss term has a negative value and reduces the overall shielding effectiveness. This term is sometimes used to make minor corrections to the expression in (17), but it may not be accurate for thin or low-loss materials (i.e. when (t < δ). It does however provide an indication of when the high-loss assumption used to derive (17) has been violated. If the multiple reflection loss factor is comparable to the reflection loss, then neither shielding effectiveness calculation [(17) or (25)] is accurate.
## Near-Field Shielding
Plane-wave shielding theory conveniently permits us to calculate a shielding effectiveness value for any shielding material based on its material properties and thickness. Unfortunately, practical shields are never located in the far-field of both the source and receptor circuits. Because of this, we are very unlikely to have plane wave propagation on both sides of the material and the calculated shielding effectiveness will not correspond to anything we are likely to measure (except in specially designed test fixtures).
In order to help understand how near-field shielding differs from plane-wave shielding, consider the configurations shown in Figure 3. In Figure 3(a), the incident plane wave has been replaced by a small electric dipole source and the shielding material is located in the near-field of the source. In Figure 3(b), the source is a magnetic dipole, represented by a small loop of electric current.
Figure 3: Shielding electric and magnetic dipole sources.
Recall that in the near field (r << λ), an electric dipole source has a strong electric field. The wave impedance in the near-field is approximately,
${Z}_{{W}_{E}}=\frac{|E|}{|H|}\approx \frac{1}{2\pi f{\epsilon }_{0}r}$ (26)
In the near field of a magnetic dipole source, the magnetic field dominates and the wave impedance is approximately,
${Z}_{{W}_{H}}=\frac{|E|}{|H|}\approx 2\pi f{\mu }_{0}r$. (27)
We can estimate the shielding effectiveness of the slab in Fig. 3, by substituting the wave impedance ( ${Z}_{W}={Z}_{{W}_{E}}\text{\hspace{0.17em}}\text{or}\text{\hspace{0.17em}}{Z}_{{W}_{H}}$ ) for the intrinsic impedance of free space, η0, in (25). This yields a new expression for the reflection loss term,
$R\left(dB\right)\approx 20\mathrm{log}\frac{{Z}_{W}}{4{\eta }_{s}}$. (28)
The expressions for absorption loss and multiple reflection loss are unchanged. Although this type of shielding effectiveness calculation is a simple approximation that does not correspond to any particular realizable test structure, it can provide a great deal of insight relative to the performance of various shielding materials in realistic situations. There is a near-field shielding effectiveness calculator based on these equations on the web at,
## Example 2: Shielding a Low-Frequency Magnetic Field Source
A transformer generating primarily a magnetic field is located 10 cm from a shielding structure. The shielding structure is made from a 1-cm thick sheet of copper. Estimate the shielding effectiveness of this structure at 1.5 kHz.
If we start by modeling the transformer as a magnetic dipole source, we can quickly estimate the wave impedance at the position of the shield to be,
${Z}_{{W}_{H}}\approx 2\pi f{\mu }_{0}r=2\pi \left(1.5×{10}^{3}\right)\left(4\pi ×{10}^{-7}\right)\left(0.10\right)=1.2×{10}^{-3}\text{\hspace{0.17em}}\text{Ω}$. (29)
The intrinsic impedance and skin depth of the copper are,
$|{\eta }_{cu@1.5\text{\hspace{0.17em}}kHz}|=\sqrt{\frac{2\pi f\mu }{\sigma }}=\sqrt{\frac{2\pi \left(1.5×{10}^{3}\right)\left(4\pi ×{10}^{-7}\right)}{5.7×{10}^{7}}}=14×{10}^{-6}\text{\hspace{0.17em}}\Omega$ (30)
${\delta }_{cu}=1}{\sqrt{\pi f\mu \sigma }}=1}{\sqrt{\pi \left(1.5×{10}^{3}\right)\left(4\pi ×{10}^{-7}\right)\left(5.7×{10}^{7}\right)}}=1.7\text{\hspace{0.17em}}\text{mm}$. (31)
The calculated shielding effectiveness is therefore,
$\begin{array}{c}S.E.=20\mathrm{log}\frac{0.0012}{4\left(14×{10}^{-6}\right)}+20\mathrm{log}{e}^{10}{1.7}}+20\mathrm{log}|1+{e}^{-2\left(10}{1.7}\right)}|\\ =26\text{\hspace{0.17em}}\text{dB}+51\text{\hspace{0.17em}}\text{dB}+~0\text{\hspace{0.17em}}\text{dB}\\ =77\text{\hspace{0.17em}}\text{dB}\end{array}$. (32)
Note that in this case the absorption loss plays an important role in the overall shielding effectiveness. Generally at low frequencies close to a magnetic field source, the wave impedance is low and therefore the reflection loss due to conductive shields is less significant. Absorption loss also decreases as frequencies get lower, but not as quickly as reflection loss.
## Shielding Effectiveness Measurements
### Plane-wave shielding effectiveness
As discussed in the previous section, the concept of plane-wave shielding effectiveness is convenient because it is a function of only the material properties and thickness of a shielding material. Attempts to measure the plane-wave shielding effectiveness generally involve launching a guided TEM wave in a coaxial test fixture containing a sample of the material, as illustrated in Fig. 4.
Figure 4. Shielding Effectiveness Test Fixture
The transmission line structure has a specific characteristic impedance (usually 50 ohms). The cross-sectional dimensions are scaled up in the mid-section of the test fixture in order to accommodate a reasonably sized material sample, which is disk-shaped with a hole in the center. The measured shielding effectiveness is simply calculated as,
. (33)
When measurements are made with a network analyzer, the shielding effectiveness can be conveniently express in terms of the s-parameters as,
$S.E.=20\mathrm{log}|{S}_{12}|$. (34)
Note that even though the characteristic impedance (ratio of $\frac{{V}^{+}}{{I}^{+}}$ of the test fixture is 50 ohms, the ratio of |E| to |H| is still determined by the intrinsic impedance of the medium ( ${\eta }_{0}\approx 377$ ohms in air).
### Other shielding effectiveness measurements
Of course, the effectiveness of a shielded enclosure may be very different from the plane-wave shielding effectiveness of the material from which the enclosure is made. Many factors influence the effectiveness of a shielded enclosure including the size and shape of the enclosure and the type and location of the source. Also, typically power escaping through apertures and seams in a real enclosure is much more significant than any power propagating directly through the enclosure walls.
For this reason, it is usually more practical to define the shielding effectiveness of an enclosure as follows,
. (35)
For example, suppose the measured radiated field from an electronic product was measured with no enclosure (or a plastic enclosure) and found to be 52 dB(μV/m). Then suppose that the same product were tested in exactly the same manner with a metallic enclosure and the measured field strength was 38 dB(μV/m). The shielding effectiveness of the enclosure in this particular configuration would then be reported as,
$S.E.=52\text{\hspace{0.17em}}\text{dB(μV/m)}-38\text{\hspace{0.17em}}\text{dB(μV/m)}=14\text{\hspace{0.17em}}\text{dB}$ (36)
This is probably a much lower value than the plane-wave shielding effectiveness, but it accounts for the leakage through apertures and seams. It also takes into account the fact that shielded enclosures generally interact with the enclosed sources and the enclosure itself becomes an integral part of the unintentional antenna path converting currents into radiated fields.
## Quiz Question
The shielding effectiveness of an enclosure made of a material that has a plane-wave shielding effectiveness of 60 dB is,
1. ~60 dB
2. always less than 60 dB
3. usually greater than 60 dB
4. sometimes less than 0 dB
Recalling the previous discussion of unintentional radiation sources and antenna efficiency, it should be clear that an inefficient radiation source (e.g. an electrically small circuit) can become many orders of magnitude more efficient by coupling to a larger conducting structure. Therefore it is not only possible, but common, for a shielding enclosure with apertures or seams to increase the radiated emissions due to inefficient sources enclosed. In other words, the shielding effectiveness of a shielded enclosure can easily be less than 0 dB (i.e. the enclosure amplifies the radiation) at some frequencies. Hopefully, the same enclosure also reduces the efficiency of the strongest sources so that the net effect is a reduction in the maximum radiated emissions. Nevertheless, it is not safe to assume that some shielding is better than no shielding. A discussion of practical shielding techniques for solving real-world EMC problems can be found in the tutorial on Practical Shielding.
## References
[1] H. Ott, Electromagnetic Compatibility Engineering, John Wiley & Sons, New York, 2009.
[2] C. R. Paul, Introduction to Electromagnetic Compatibility, 2nd Ed., Wiley Series in Microwave and Optical Engineering, 2006. | 2017-11-20 15:16:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 38, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.567703902721405, "perplexity": 966.9231441988618}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806070.53/warc/CC-MAIN-20171120145722-20171120165722-00749.warc.gz"} |
https://physics.stackexchange.com/questions/663821/real-image-of-virtual-object/663839#663839 | # Real image of virtual object
Are real images always inverted, even for a virtual object? I tried to make ray diagrams for such a situation in concave mirror, I am getting a real but erect and diminished image between pole and focus. Also, can I use the reversibility of path here? As when we take a real object between focus and pole we get a virtual, erect and enlarged image. I tried using this, by reversing the paths of light then also I am getting the same result. Where am I going wrong? | 2022-10-03 00:45:33 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9145103096961975, "perplexity": 583.9934233938005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337371.9/warc/CC-MAIN-20221003003804-20221003033804-00427.warc.gz"} |
https://jira.lsstcorp.org/browse/DM-27696 | # Fix Boost deprecation warning in afw
XMLWordPrintable
#### Details
• Type: Story
• Status: Done
• Resolution: Done
• Fix Version/s: None
• Component/s:
• Labels:
• Story Points:
3
• Sprint:
AP S21-3 (February)
• Team:
• Urgent?:
No
#### Description
Building afw with GCC emits the following warning:
In file included from tests/statisticsSpeed.cc:37:0: /software/lsstsw/stack_20200922/conda/miniconda3-py37_4.8.2/envs/lsst-scipipe/include/boost/timer.hpp:21:70: note: #pragma message: This header is deprecated. Use the facilities in instead. BOOST_HEADER_DEPRECATED( "the facilities in " )
These changes involve moving from an outdated API to a newer one. The code changes are straightforward (see attached branch), but I get linker errors for all new methods (Boost Timer is a compiled library). I'm therefore creating a separate ticket for investigating this.
#### Activity
Hide
Krzysztof Findeisen added a comment -
It looks like the Boost libraries to be linked against are defined in https://github.com/lsst/sconsUtils/tree/master/configs; i.e. they will become available for any C++ project, not just afw. That makes this more than just a trivial API migration.
Show
Krzysztof Findeisen added a comment - It looks like the Boost libraries to be linked against are defined in https://github.com/lsst/sconsUtils/tree/master/configs ; i.e. they will become available for any C++ project, not just afw . That makes this more than just a trivial API migration.
Hide
Brian Van Klaveren added a comment -
looks good, from the github conversation, boost timer is only used, and likely to only be used, during unit testing, and it was added as such.
Show
Brian Van Klaveren added a comment - looks good, from the github conversation, boost timer is only used, and likely to only be used, during unit testing, and it was added as such.
Hide
Krzysztof Findeisen added a comment -
Sorry, did you also look at afw#569? It seems to have been missed by Jira.
Show
Krzysztof Findeisen added a comment - Sorry, did you also look at afw#569 ? It seems to have been missed by Jira.
#### People
Assignee:
Krzysztof Findeisen
Reporter:
Krzysztof Findeisen
Reviewers:
Brian Van Klaveren
Watchers:
Brian Van Klaveren, Krzysztof Findeisen | 2023-03-21 06:00:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6296070218086243, "perplexity": 9612.365544103617}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943625.81/warc/CC-MAIN-20230321033306-20230321063306-00696.warc.gz"} |
https://graphworkflow.com/2019/05/28/tilting-scale/ | # Tilting scale in military spending
This post follows the analysis on the Imbalance in Military Spending. To understand the context, you must first read that analysis before you proceed below.
## Graph objective
The analysis on the Imbalance in Military Spending shows how Russia has consistently spent more on military than the European NATO during 1993-2017. Here is the resulting graph from that analysis:
The graph objective in this analysis is to further enhance the perception of the original message, by showing how Russia has ’tilted the scale’ on military spending to its favour. That is to say, I want to visually show a figurative ’tilting balance scale’, and do so in a scientific manner that preserves the data properties.
The data density in the graph above reminds me of an old-fashioned balance weighing scale that is heavily imbalanced. The 45 degree line resembles to a scale pointer that indicates the point of balance, and the two triangles on either side are the scale pans.
I wish to emphasize this perception by rotating the entire graph on its origin by 45 degrees to the left, so that what appears now as the 45 degree line becomes a vertical axis, the x-axis becomes the 45 degree line and the y-axis becomes the 135 degree line.
To achieve this result, I multiply the vectors holding the ordinate values (vertical axis) and the abscissa values (horizontal axis) with the following rotation matrix:
$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} cos \theta & -sin \theta \\sin \theta & cos \theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$
where x’ and y’ hold the transformed rotated set of coordinates. The rotation principle is based on simple Euclidean geometry, with θ indicating the angle of rotation. For the 45 degree rotation it holds that θ = 𝜋/4. To learn more about degrees and angles see the discussion on Unit Circle.
## Data management
The data source and data management process is reported analysis on the Imbalance in Military Spending. The only other data management step is to generate the rotated coordinates by 45 degrees as described in the Stata code provided at the end of this page.
## Visual implantations
As in Imbalance in Military Spending, I employ a connected type of scatter plot, i.e. using both the point and line implantations. The point implantation encodes the coordinates of the contrasting military spending for each year, and the line implantation connects the points in the order of year so that it encodes the yearly evolution.
## Retinal variables
The point implantation is encoded using small hollow navy colour circles, with the exception of the starting year (1993) and the ending year (2017) that are encoded using orange colour filled circles to assist decoding. The line implantation will be encoded using a relatively thick navy line.
## Graph identification
Internal identification labels the beginning and ending years next to the respective point implantations.
External identification includes a graph title that explains how the user should interpret the graph as a ’tilting scale’. The note to the graph that originally acknowledged the source of the data has been removed as I found that it detracted from decoding, but this information should still be disclosed in the text.
A critical part or external identification is the regular axes labels in order to enable detailed table look-up and an accurate contrast of military spending between the two jurisdictions.
The vertical reference line (what used to be the 45 degree line) is identified as the “balancing line of equal spending”, thus making it clear that Russia has heavily imbalanced limitary spending by comparison to European NATO.
## Graph enhancement
Graph enhancement is an important step. It is critical to maintain a square aspect ratio, 1:1, otherwise the angles and the lengths of line segments would be distorted. The vertical reference line is also an important piece of information, that also relies on setting the aspect ratio to 1:1.
The grids in both axes enable accurate table look-up, and the rotated axes titles explain in which side of the ‘balancing scale’ is either jurisdiction.
I suppress the lines surrounding the plot region, and reduce the overall visual prominence of the grids, ticks, labels and axes titles.
## Visual decoding/perception
Here is the proposed solution:
Although the graph is unusually arranged, it preserves all data properties. This is the same 2D Cartesian plane as the graph presented in imbalanced military spending just rotated by 45 degrees.
I find this graph to convey the message of imbalance more forcefully, particularly because it relies on the partition on the so-called felt axis that assists with the maximisation of visual stress. In the words of Donis Dondis (1973, A Primer of Visual Literacy):
In visual expression or interpretation, this process of stabilization imposes on all things seen and planned as a vertical axis with a horizontal secondary referrent which together establish the structural factors that measure balance. This visual [horizontal] axis is also called a felt axis which better expresses the unseen but dominating presence of the axis in the act of seeing. It is an unconscious constant” (p.23).
By rotating the plane and making the 45 degree line into a horizontal line, I effectively enforce the felt axis as a conscious constant. | 2022-09-30 20:06:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3636440634727478, "perplexity": 1467.1842995340191}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00790.warc.gz"} |
http://scratchpad.wikia.com/wiki/II.2.3 | # II.2.3
222,254pages on
this wiki
This problem has Hartshorne Height 1.
### Relevant Results
Prop-Def. II.1.2: For any presheaf $F$ there is a sheaf $F^+$ and a morphism $F \to F^+$ such that any sheaf $G$ and morphism $F \to G$ factors as $F \to F^+ \to G$ for a unique $F^+ \to G$
### HAPPY
Part a)
• Straightforward application of the definition stalk and thinking of elements of the stalk as equivalences classes $\langle s, U \rangle$ etc.
Part b)
• First show $X_{red}$ is a locally ringed space. One way of doing this is showing the following:
• $(\mathcal{O}_{X,p})_{red}$ is a local ring.
• $\mathcal{O}_{X_{red},p} \cong (\mathcal{O}_{X,p})_{red}$, i.e.
$\left( \varinjlim_{V \ni p} \mathcal{O}_X(V) \right)_{red} = \varinjlim_{V \ni p} \mathcal{O}_{X_{red}}(V)$
• one way to prove the above is to show one side has the universal property of the other, this involved using part a)
• Next, show for $\mbox{Spec}A \subset X$ that $\mbox{Spec } A_{red} \cong (\mbox{Spec }A)_{red}$ (as locally ringed spaces!) to conclude that $X$ is a scheme.
• this consists of a map on topological spaces (the identity) and a sheaf map $\mathcal{O}_{(\mbox{Spec }A)_{red}} \to \mathcal{O}_{ \mbox{Spec } A_{red} }$.
• define the sheaf map on distinguished affines and show its and isomorphism i.e. $(A_f)_{red} \cong (A_{red})_{\overline{f}}$.
• Define a map of schemes $X_{red} \to X$
• Map on topological spaces is the identity.
• For $U \subset X$ use the canonical projection $\mathcal{O}_X(U) \to \mathcal{O}_X(U)_{red}$ and the presheaf to sheaf map $\mathcal{O}_X(U)_{red} \to \mathcal{O}_{X_{red}}(U)$ to get the desired sheaf map.
• Check the map on local rings is $\mathcal{O}_{X,p} \to \mathcal{O}_{X,p}/\sqrt 0$ and hence a local homomorphism.
Part c)
• The desired factorization on topological spaces is clear.
• Show the desired factorization on the level of presheaves:
• Use that $\mathcal{O}_X(U)$ is reduced and the universal property of kernels to show there is a factorization $\mathcal{O}_Y(U) \to \mathcal{O}_Y(U)_{red} \to \mathcal{O}_X(U)$.
• Argue this gives a factorization as sheaves, giving a morphism of schemes
• Use Prop-Def II.1.2 to show there is the desired factorization on the level of sheaves and hence a factorization $X \to Y_{red} \to Y$ as locally ringed spaces.
• The above factorization gives a factorization $\mathcal{O}_{Y,f(p)} \to \mathcal{O}_{X,p}$ into $\mathcal{O}_{Y,f(p)} \xrightarrow{\alpha} \mathcal{O}_{Y_{red},p} \xrightarrow{\beta} \mathcal{O}_{X,p}$
• Use that $\alpha$ is local (and surjective!) and that $\beta \circ \alpha$ is local to conclude that $\beta$ is also local.
• Uniqueness follows essentially from the uniqueness of of the map in Prop-Def II.1.2. | 2017-02-27 20:30:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 30, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9894494414329529, "perplexity": 563.7142945526888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173761.96/warc/CC-MAIN-20170219104613-00445-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://mersenneforum.org/showthread.php?s=01bbaa5b10cd39681b8953ac5c272476&t=21555&page=2 | mersenneforum.org Crypto News
Register FAQ Search Today's Posts Mark Forums Read
2017-02-24, 04:54 #12 CRGreathouse Aug 2006 597910 Posts Very interesting! I was trying to ballpark how much this would cost on EC2 but it's not even clear which instance type to use...
2017-02-24, 07:37 #13
xilman
Bamboozled!
"𒉺𒌌𒇷𒆷𒀭"
May 2003
Down not across
1095610 Posts
Quote:
Originally Posted by ewmayer SHA-1 is officially unsafe - collaboration here was with CWI: Google Online Security Blog: Announcing the first SHA1 collision They could have just said "2^63 SHA1 computations in total", but nooo... o And in other news, a major browser/website-security hole has been reported w.r.to sites which use CloudFlare, which are alas legion. [Note my initial post incorrectly stated the Cloudflare issue was related to the SHA1 collision one.]
And here's a statement about the effect on Git, Mercurial, etc. from the Mercurial project.
If you're not already being extremely diligent about vetting your project's contributors and contributions, cryptography will provide very little defense.
Another one, by Roger Needham or Butler Lampson (each attributes it to the other) is that anyone who believes that security can be solved by the application of cryptography understands neither security nor cryptography.
2017-02-24, 20:41 #14
danaj
"Dana Jacobsen"
Feb 2011
Bangkok, TH
11100011012 Posts
Quote:
Originally Posted by CRGreathouse Very interesting! I was trying to ballpark how much this would cost on EC2 but it's not even clear which instance type to use...
From the paper, they indicate stage 2 uses p2.16xlarge EC2 instances. I haven't gone through math or current prices, but they state about $560k at normal prices or$110k "off-peak".
Stage 1 was a bit over 6500 core-years (~ Xeon E5-2650v3 cores). My numbers come out to about $1M using 3-year contract reserved, or roughly$300k if using optimal spot pricing. Or you could do what the headline reporters are doing and assume that Google will do all this work for you for free or that this part of the solution just drops in your lap.
2017-02-25, 03:56 #15
jwaltos
Apr 2012
24·52 Posts
Quote:
Originally Posted by xilman Another one, by Roger Needham or Butler Lampson (each attributes it to the other) is that anyone who believes that security can be solved by the application of cryptography understands neither security nor cryptography.
lol
2017-02-26, 01:16 #16 ewmayer ∂2ω=0 Sep 2002 República de California 266128 Posts More on the CloudFlare fubar: Everything You Need To Know About Cloudbleed, The Latest Internet Security Disaster | Gizmodo Australia Long story short: '==' in place of '>=' ==> buffer-overrun data-spewage badness. I pity the poor swdev-schlemiel who wrote that single wrong character, hard to say who is more at fault, the coder who committed said mistake or the folks whose QA-test infrastructure failed to catch such catastrophic data-leakage.
2017-02-26, 01:27 #17
retina
Undefined
"The unspeakable one"
Jun 2006
My evil lair
142148 Posts
Quote:
Originally Posted by ewmayer Long story short: '==' in place of '>=' ==> buffer-overrun data-spewage badness.
Technically, yes. But we also have to blame the basic design strategy. Having all that sensitive data available in the clear without sanitisation after using it is a bad design strategy. Allowing the system to be so fragile that just a single comparison can make it fail is a bad design strategy. Not separating the memory regions between tasks is a bad design strategy.
I'm sure it was all done to save costs to enrich the CEOs bank account. But short-cuts lead to long delays. [RIP JRRT]
2017-03-07, 23:19 #18
ewmayer
2ω=0
Sep 2002
República de California
2×3×29×67 Posts
Vault 7: CIA Hacking Tools Revealed | Wikileaks
Quote:
Today, Tuesday 7 March 2017, WikiLeaks begins its new series of leaks on the U.S. Central Intelligence Agency. Code-named “Vault 7” by WikiLeaks, it is the largest ever publication of confidential documents on the agency. The first full part of the series, “Year Zero”, comprises 8,761 documents and files from an isolated, high-security network situated inside the CIA’s Center for Cyber Intelligence in Langley, Virgina. It follows an introductory disclosure last month of CIA targeting French political parties and candidates in the lead up to the 2012 presidential election. Recently, the CIA lost control of the majority of its hacking arsenal including malware, viruses, trojans, weaponized “zero day” exploits, malware remote control systems and associated documentation. This extraordinary collection, which amounts to more than several hundred million lines of code, gives its possessor the entire hacking capacity of the CIA. The archive appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive. “Year Zero” introduces the scope and direction of the CIA’s global covert hacking program, its malware arsenal and dozens of “zero day” weaponized exploits against a wide range of U.S. and European company products, include Apple’s iPhone, Google’s Android and Microsoft’s Windows and even Samsung TVs, which are turned into covert microphones. Since 2001 the CIA has gained political and budgetary preeminence over the U.S. National Security Agency (NSA). The CIA found itself building not just its now infamous drone fleet, but a very different type of covert, globe-spanning force — its own substantial fleet of hackers. The agency’s hacking division freed it from having to disclose its often controversial operations to the NSA (its primary bureaucratic rival) in order to draw on the NSA’s hacking capacities. By the end of 2016, the CIA’s hacking division, which formally falls under the agency’s Center for Cyber Intelligence (CCI), had over 5000 registered users and had produced more than a thousand hacking systems, trojans, viruses, and other “weaponized” malware. Such is the scale of the CIA’s undertaking that by 2016, its hackers had utilized more code than that used to run Facebook. The CIA had created, in effect, its “own NSA” with even less accountability and without publicly answering the question as to whether such a massive budgetary spend on duplicating the capacities of a rival agency could be justified. In a statement to WikiLeaks the source details policy questions that they say urgently need to be debated in public, including whether the CIA’s hacking capabilities exceed its mandated powers and the problem of public oversight of the agency. The source wishes to initiate a public debate about the security, creation, use, proliferation and democratic control of cyberweapons. Once a single cyber ‘weapon’ is ‘loose’ it can spread around the world in seconds, to be used by rival states, cyber mafia and teenage hackers alike.
The NYT piece on the story predictably has a top reader comment which blames everything on the Evil Rooskies, and said article arguably buries the most important point deep down in paragraph 15:
“Another program described in the documents, named Umbrage, is a voluminous library of cyberattack techniques that the C.I.A. has collected from malware produced by other countries, including Russia. According to the WikiLeaks release, the large number of techniques allows the C.I.A. to mask the origin of some of its cyberattacks and confuse forensic investigators.”
Last fiddled with by ewmayer on 2017-03-12 at 04:14 Reason: sad -> said
2017-03-08, 00:57 #19
bgbeuning
Dec 2014
3×5×17 Posts
Quote:
Originally Posted by ewmayer Flaw in Intel chips could make malware attacks more potent | Ars Technica Specific side-channel exploit that was demoed used the Haswell branch predictor.
ASLR slows down buffer overflow attacks where hackers load code on the stack and then the function return jumps to the hacker code. New CPU have a memory management unit (MMU) bit that makes data pages non-executable (hardware calls it NX bit, Windows calls it DEP) and helps to block buffer overflow attacks by making thread stacks read-writable but non-executable pages. Older MMU only had read-only vs. read-write page protection.
2017-03-23, 13:23 #20 Nick Dec 2012 The Netherlands 22·19·23 Posts For anyone interested in lattice-based crypto, the slides of the Spring School at the Oxford Maths Institute are now publicly available: https://www.maths.ox.ac.uk/groups/cr...d-cryptography (scroll down to "Programme")
2017-05-17, 00:47 #21
ewmayer
2ω=0
Sep 2002
República de California
2×3×29×67 Posts
Apologies if this has been previously linked elsewhere on the forum:
A kilobit hidden SNFS discrete logarithm computation | Joshua Fried and Pierrick Gaudry and Nadia Heninger and Emmanuel Thomé
Quote:
Abstract: We perform a special number field sieve discrete logarithm computation in a 1024-bit prime field. To our knowledge, this is the first kilobit-sized discrete logarithm computation ever reported for prime fields. This computation took a little over two months of calendar time on an academic cluster using the open-source CADO-NFS software. Our chosen prime $p$ looks random, and $p-1$ has a 160-bit prime factor, in line with recommended parameters for the Digital Signature Algorithm. However, our $p$ has been trapdoored in such a way that the special number field sieve can be used to compute discrete logarithms in $\mathbb{F}_p^*$, yet detecting that $p$ has this trapdoor seems out of reach. Twenty-five years ago, there was considerable controversy around the possibility of backdoored parameters for DSA. Our computations show that trapdoored primes are entirely feasible with current computing technology. We also describe special number field sieve discrete log computations carried out for multiple weak primes found in use in the wild.
2017-05-18, 05:29 #22
Dubslow
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
3×29×83 Posts
Quote:
We also describe special number field sieve discrete log computations carried out for multiple weak primes found in use in the wild.
Yikes. Confirming that backdooring is possible is just as bad too.
Similar Threads Thread Thread Starter Forum Replies Last Post ElChapo Math 9 2017-06-10 03:26 plandon Lounge 0 2009-06-16 13:55 NBtarheel_33 Hardware 17 2009-05-04 15:52 R.D. Silverman Lounge 2 2007-08-08 20:24 MrHappy Lounge 0 2005-01-19 16:27
All times are UTC. The time now is 14:37.
Fri Oct 22 14:37:53 UTC 2021 up 91 days, 9:06, 1 user, load averages: 1.07, 1.35, 1.34
This forum has received and complied with 0 (zero) government requests for information.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.
A copy of the license is included in the FAQ. | 2021-10-22 14:37:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23178306221961975, "perplexity": 8191.644034213214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585507.26/warc/CC-MAIN-20211022114748-20211022144748-00672.warc.gz"} |
http://www.chegg.com/homework-help/questions-and-answers/estimate-annual-revenues-from-sulfur-sales-for-a-us-bureau-of-mines-citrate-fgd-process-th-q3318886 | ## FGD - annual revenue
Estimate annual revenues from sulfur sales for a U.S. Bureau of Mines citrate FGD process that is removing 90% of the SO2 from a 40%?efficient 400?MW power plant. The coal is 5% sulfur and has a heating value of 20,000 kJ/kg. Sulfur can be sold for $150/metric ton, and H2S can be purchased from a nearby refinery for$160/metric ton. Also, calculate the net reduction of operating costs for the system (in mills/kWh) due to the net sulfur revenues. | 2013-05-21 04:03:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5033097863197327, "perplexity": 7957.499278016464}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699684236/warc/CC-MAIN-20130516102124-00039-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://stats.stackexchange.com/questions/172003/integer-regression-coefficients-in-r/173730 | # Integer regression coefficients in R
I'd like to fit integer coefficients, e.g. summing to 10, to a regression equation. The absolute values of the coefficients (i.e. predicted y) aren't important, I just want to retain the appropriate relative values. The use case is for an easily interpretable scoring system.
For example, this regression yields the following coefficients (ignoring the intercept):
set.seed(0)
y <- rnorm(100)
x <- matrix(rnorm(300), ncol=3)
m <- lm(y ~ x)
(coef <- m$coefficients[-1]) # x1 x2 x3 # 0.12100965 0.05506511 0.14708549 Rounding with the below code yields a rounding error (sums to 11): round(10 * coef / sum(coef)) # x1 x2 x3 # 4 2 5 A method like this also doesn't guarantee maximally similar weights to the regression equation. This was asked here without satisfactory answers, and might be addressed in this paywalled research paper. Edit: looks like https://stackoverflow.com/questions/792460/how-to-round-floats-to-integers-while-preserving-their-sum may be able to help minimize the roundoff error. If my question is further specified as minimizing the error of a predicted (scaled) y, I'm unsure whether this is an equivalent optimization. • Have you considered integer linear programming? – Matthew Drury Sep 22 '15 at 21:14 • I haven't, that could be a good approach. I'll take a stab if I get a chance. – Max Ghenis Sep 22 '15 at 21:22 ## 1 Answer We can apply the answer to Round vector of numerics to integer while preserving their sum: smart.round <- function(x) { y <- floor(x) indices <- tail(order(x-y), round(sum(x)) - sum(y)) y[indices] <- y[indices] + 1 y } At that point similar logic gets a reasonable answer: # Setup from question set.seed(0) y <- rnorm(100) x <- matrix(rnorm(300), ncol=3) m <- lm(y ~ x) coef <- m$coefficients[-1]
# Answer
smart.round(10 * coef / sum(coef))
# x1 x2 x3
# 4 2 4
I don't know whether this also minimizes the error of a predicted y, but it does yield something feasible. | 2020-12-04 05:44:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7414440512657166, "perplexity": 2118.6161120052093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141733122.72/warc/CC-MAIN-20201204040803-20201204070803-00068.warc.gz"} |
https://en.wikiversity.org/wiki/Spectroscopy/Rotational_spectroscopy | # Spectroscopy/Rotational spectroscopy
Subject classification: this is a chemistry resource.
Type classification: this is a lesson resource.
• A molecule can store energy in its tumbling motion.
• This is only relevant in the gas phase where molecules are in continual motion and are free to rotate unhindered.
• Rotational motion at the molecular level is quantized in accordance with quantum mechanical theory.
• Transitions between discrete rotational energy levels give rise to the rotational spectrum of the molecule (microwave spectroscopy).
We will study:
• classical rotational motion, angular momentum, rotational inertia
• quantum mechanical energy levels
• selection rules and microwave (rotational) spectroscopy
• the extension to polyatomic molecules
## Classical rotational motion
Rotational energy is kinetic energy associated with a tumbling motion.
Consider a particle with mass m, rotating with angular velocity ω, at a distance R from a given axis.
• This particle has angular momentum J
• |J| = I ω where |J| is the magnitude of the angular momentum vector, and I is the moment of inertia (the rotational equivalent of mass).
Where do we use rotational spectroscopy? The 2.45 GHz frequency used by microwave ovens is the most ideal one for causing water molecules to rotate at their fastest possible rate. There is also the added bonus that this frequency is not used for communication, so microwave ovens do not interfere with cell phones, wireless internet, televisions and so on.
## Diatomic molecules: the rigid rotor
Remember that a molecule does not distort under rotation.
R = r1 + r2 (Eq. 1)
The axis passes through the centre of mass (C):
${\displaystyle m_{1}r_{1}=m_{2}r_{2}}$ (Eq. 2)
Moment of inertia:
${\displaystyle I=\sum _{i}m_{i}r_{i}^{2}=m_{1}r_{1}^{2}+m_{2}r_{2}^{2}}$ (Eq. 3)
Using equations 1, 2 and 3:
${\displaystyle I={\frac {m_{1}m_{2}}{(m_{1}+m_{2})}}R^{2}}$ ${\displaystyle I=\mu R^{2}}$ ${\displaystyle \mu ={\frac {m_{1}m_{2}}{(m_{1}+m_{2})}}}$
μ is the reduced mass.
Example: For 1H35Cl - ${\displaystyle \mu ={\frac {1{\mbox{u}}\times 35{\mbox{u}}}{(1{\mbox{u}}+35{\mbox{u}})}}={\frac {35}{36}}{\mbox{u}}=1.61\times 10^{-27}{\mbox{kg}}}$ (Remember that u = atomic mass unit = 1.661x10-27 kg)
## Rotational energy levels
Classical: ${\displaystyle E={\begin{matrix}{\frac {1}{2}}\end{matrix}}I\omega ^{2}={\frac {|J|^{2}}{2I}}}$
Quantum mechanical: solve the Schrödinger equation for the rigid diatomic rotor.
${\displaystyle E_{J}={\frac {h^{2}}{8\pi ^{2}I}}J(J+1)}$
• EJ is measured in Joules
• J is the rotational quantum number (= 0, 1, 2, ...)
Converted to wavenumbers: Rotational constant: ${\displaystyle \varepsilon _{J}={\frac {E_{J}}{hc}}=BJ(J+1)}$ ${\displaystyle B={\frac {h}{8\pi ^{2}Ic}}}$
• The units of εJ and B = cm-1
Below are the rotational energy levels for diazenylium, N2H+.
N2H+ J = 0 → εJ = 0 0 J = 1 → εJ = 2B cm-1 ≈3 cm-1 J = 2 → εJ = 6B cm-1 ≈10 cm-1
Important points to notice:
• The energy of the ground state (J = 0) is zero - because the molecule is not rotating.
• As J increases, the molecules rotate more and more quickly - as a result, the energy levels are more widely spaced apart.
• The energy level separations are compatible with the microwave region of the EM spectrum.
But what about spectroscopic transitions between energy levels? And what about the selection rules for microwave spectroscopy?
### Gross selection rule
Gross Selection Rule: The requirement for a permanent dipole.
For example, take a rotating HCl molecule. The fluctuation in its dipole component has an identical form to the fluctuation in the electric field of EM radiation (see the electromagnetic wave in Lesson 1).
As a result, the electric field of the EM wave exerts a torque on the dipole of the HCl molecule.
This means that energy can be absorbed or emitted, giving rise to a rotational spectrum.
Gross Selection Rule: molecules with permanent dipoles are microwave active (the molecule must be polar), e.g. heteronuclear diatomics - HCl, CO, NO, etc. Homonuclear diatomics are microwave inactive (e.g. O2, N2, etc.) In other words, a dipole must be present in the molecule for you to get a rotational spectrum.
### Transitions between energy levels
Specific Selection Rule: During a transition, the rotational quantum number must change by 1 unit only, i.e. ΔJ = ±1 (angular momentum is conserved)
In other words, only transitions between neighboring energy levels are possible.
Below is an example rotational spectrum.
${\displaystyle \Delta \varepsilon =\varepsilon _{J+1}-\varepsilon _{J}=B(J+1)(J+2)-BJ(J+1)}$ ${\displaystyle \Delta \varepsilon =2B(J+1){\mbox{cm}}^{-1}}$
The top part shows the rotational energy levels, εJ. The bottom part shows the microwave spectrum as observed from an experiment.
Each absorption (red arrow) complies with ΔJ = +1.
As a result of the equations in the box above, you get a series of spectral lines, each separated by 2B.
For CO, Δε = 3.86 cm-1. For HCl, Δε = 21.18 cm-1.
You'd get the same values for emission spectroscopy, except ΔJ = -1.
After obtaining a microwave spectrum from experiment, you measure nexp in cm-1. (The distance between spectral lines is Δnexp). From this you can work the rotational constant B, because Δvexp (this is the intensity of each spectral line) = 2B. Using the formulae in the green boxes further up this page you can work out B and I. Now you can calculate molecular properties (such as R, μ) to a high degree of accuracy.
## Rotational energy level population
Note that the most intense band is not the first line in the spectrum. Why?
Remember the 3 factors from Lesson 1:
1. Amount of sample
2. Population of energy states
3. Selection rules
We need to consider number 2 - the Boltzmann distribution AND degeneracy.
${\displaystyle {\frac {N_{f}}{N_{i}}}={\frac {g_{f}}{g_{i}}}\exp \left(-{\frac {\Delta E}{k_{B}T}}\right)}$
Nf and Ni are the population of molecules in energy levels ef and ei with degeneracy gfand gI.
From quantum mechanics, the degeneracy of each eJ level = 2J + 1
J = 0 1 (non-degenerate) J = 1 3 (3-fold degenerate) J = 2 5 (5-fold degenerate) J = 3 7 (7-fold degenerate)
Population of excited state relative to ground state:
Initial state: J = 0 ei = e0 = 0 gi = g0 = 1 Final state: J ef = eJ = BJ(J + 1) gf = (2J+1)
Δε = εJ - ε0 = BJ(J + 1) →
${\displaystyle {\frac {N_{J}}{N_{0}}}=(2J+1)\exp \left(-{\frac {BJ(J+1)}{k_{B}T}}\right)}$
Example for J = 1 (at T = 300 K):
B = 2 cm-1 Nf/Ni = 2.94 B = 10 cm-1 Nf/Ni = 2.70
The population of the J = 1 level decreases as the energy level separation (i.e. 2B) increases.
Degeneracy (the pre-exponential term) moves the maximum population away from J = 0. You can use calculus to determine the most populated level - the maximum which occurs when ${\displaystyle {\frac {dN_{J}}{dJ}}=0}$.
${\displaystyle {\begin{matrix}{\frac {dN_{J}}{dJ}}&=&N_{0}\left\{2e^{-{\frac {BJ(J+1)}{k_{B}T}}}+(2J+1)e^{-{\frac {BJ(J+1)}{k_{B}T}}}\left[-{\frac {B(2J+1)}{k_{B}T}}\right]\right\}\\\ &=&N_{0}e^{-{\frac {BJ(J+1)}{k_{B}T}}}\left[2+(2J+1)\left(-{\frac {B(2J+1)}{k_{B}T}}\right)\right]=0\\\ &\ &2+(2J+1)^{2}\left(-{\frac {B}{k_{B}T}}\right)=0\to (2J+1)^{2}={\frac {2k_{B}T}{B}}\\\ &\ &{\Bigg \Downarrow }\end{matrix}}}$
${\displaystyle J_{max}={\sqrt {\frac {k_{B}T}{2B}}}-{\frac {1}{2}}}$
For example, at 300K, B = 5 cm-1kBT = 208.5 cm-1Jmax = 4 (which corresponds to the 4→5 transition).
## Non-rigid rotor
Gray text is additional information provided for further reading only. This content will not be tested.
We have assumed so far that the bond length remains fixed during rotation of the molecule - this is the rigid rotor model. However, as the molecule rotates the atoms are subject to centrifugal forces which stretch the bonds - this is the non-rigid rotor model.
Hooke's law states for an elastic bond:
${\displaystyle F=-k(r-r_{eq})}$ ${\displaystyle k=4\pi ^{2}\omega ^{2}c^{2}m}$
• F = restoring force (N)
• r = bond length; req = equilibrium bond length (m)
• k = force constant (Nm-1)
• c = vibrational frequency (cm-1)
• m = reduced mass (kg)
Non-rigid rotor model for diatomic molecules:
Non-rigid rotor model for diatomic molecules Centrifugal distortion coefficient ${\displaystyle \varepsilon _{J}=BJ(J+1)-DJ^{2}(J+1)^{2}}$ ${\displaystyle D={\frac {h^{3}}{32\pi ^{4}I^{2}R^{2}kc}}}$
The first term is the rigid rotor model, and the second term is a correction for the centrifugal distortion. It is important to consider this for high values of J.
Centrifugal distortion leads to lowering of the given energy level (at high J). Consequently, spectral lines cluster together at high J and are no longer equally spaced.
## Polyatomic molecules
The majority of molecules are not diatomic - they can possess rotation around more than one axis.
The classical expression for energy of a body rotating about axis a is ${\displaystyle E_{a}={\begin{matrix}{\frac {1}{2}}\end{matrix}}I\omega ^{2}={\frac {J_{a}^{2}}{2I_{a}}}}$
With similar expressions for the other axes, it follows that:
${\displaystyle E_{total}={\frac {J_{a}^{2}}{2I_{a}}}+{\frac {J_{b}^{2}}{2I_{b}}}+{\frac {J_{c}^{2}}{2I_{c}}}}$
Polyatomic molecules are categorised according to their moments of inertia about 3 perpendicular axes:
• Linear - one moment of inertia equals zero: Ia = Ib, Ic = 0. Molecules include CO2, OCS, C2H2.
• Spherical rotor - three equal moments of inertia: Ia = Ib = Ic. Molecules include CH4, SiH4, SF6.
• Symmetric rotor - two equal moments of inertia: Ia = IbIc. Molecules include NH3, CH3Cl, CH3CN.
• Asymmetric rotor - three different moments of inertia: IaIbIc. Molecules include H2O, CH3OH, H2CO.
Calculation of B: Example
Calculate the value of the rotational constant B for a molecule of carbon monoxide. The bond length (R) of CO is 0.113 nm.
Lesson 3. Vibrational Spectroscopy | 2021-09-17 14:24:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 29, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7258465886116028, "perplexity": 2060.199797736851}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055645.75/warc/CC-MAIN-20210917120628-20210917150628-00330.warc.gz"} |
https://sources.gentoo.org/proj/pms.git/log/?showmsg=1 | summaryrefslogtreecommitdiff log msg author committer range
Commit message (Collapse)AuthorAgeFilesLines
* pms.cls: Do not define \e for \emphUlrich Müller10 days10-14/+13
| | | | | | | | This is only used a few times, so a shorthand is not needed. (We really should get rid of \i and \t as well, because redefining LaTeX internal commands sucks.) Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* pms.tex: Update copyright yearsUlrich Müller10 days1-1/+1
| | | | Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* pms.cls: Reinstate TeX4ht/hyperref workaroundUlrich Müller10 days1-14/+26
| | | | | | | This had been removed in commit 1a510e7, but apparently it is needed again with TeX Live 2021. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* ebuild-env-vars.tex: Clarify wording for profile IUSE injectionUlrich Müller2021-04-151-7/+4
| | | | | | | | | | | Subsume IUSE_REFERENCEABLE and IUSE_EFFECTIVE under a single conditional, which will clarify that these variables are equal if the feature is supported. Also the profile-iuse-inject featurelabel was misplaced (it didn't cover IUSE_REFERENCEABLE). Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* pms.cls: Silence hyperref messagesUlrich Müller2021-01-161-0/+4
| | | | Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* Revert "pms.cls: Remove some tex4ht conditionals."Ulrich Müller2020-09-281-2/+2
| | | | | | | | | The PSNFSS packages cause an issue with missing whitespace between normal and boldface text in HTML output. This partially reverts commit 9d681052334b8b581e0c1218a0fc0c4f6897d091. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* ebuild-vars.tex: Allow other tokens in PROPERTIES.Ulrich Müller2020-09-211-1/+1
| | | | | | | | The spec allows other tokens for RESTRICT but not for PROPERTIES. There appears to be not good reason to treat the variables differently in this respect. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* pms.tex: Fix an \includepdf warning in DVI mode.Ulrich Müller2020-09-211-2/+1
| | | | Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* pms.cls: Change line length to reflect what is actually used.Ulrich Müller2020-09-211-2/+2
| | | | | | fill-column (Emacs) and tw (Vim) set to 80. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* pms.cls: Remove some tex4ht conditionals.Ulrich Müller2020-09-211-2/+3
| | | | | | | mathptmx.sty and helvet.sty work just fine. (Apparently Helvetica isn't used, but leave the package in place.) Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* glossary: The term "slave repository" is not used anywhere else.Ulrich Müller2020-07-141-4/+4
| | | | | | So we need not explain it in the glossary. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* Cheat sheet: Update ESYSROOT, following PMS.Ulrich Müller2020-07-071-2/+2
| | | | Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* ebuild-functions.tex: Phase functions can write to temporary dirs.Ulrich Müller2020-07-051-5/+6
| | | | | | | | All package managers support that functions like pkg_pretend() write to temporary directories T, TMPDIR and HOME. This is also used in the tree, see for example bug 469210. Update the spec to match this. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* Revert "pms.cls: Another workaround for tex4ht."Ulrich Müller2020-06-291-2/+0
| | | | | | | | | The workaround for gitinfo2/eso-pic is no longer needed (and won't work anyway, as we no longer load gitinfo2 in the documentclasshook). This reverts commit e9536369d4c032f088683bd8fddfe30d12c3dcc8. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* Correct the definition of ESYSROOT as EPREFIX isn't always applicableJames Le Cuirot2020-06-282-4/+18
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | It was originally envisaged (but not stated in PMS) that SYSROOT would only ever need to equal / or ROOT as a distinct SYSROOT would have no benefit. A check was added to Portage to ensure this held. Myself, the ChromiumOS team, and others have since been caught out by this check when trying to bootstrap brand new systems from scratch. You cannot bootstrap with no headers at all! The check will therefore be adjusted to merely ensure that SYSROOT is / when ROOT is /. There were differing assumptions about how prefixes applied to the above. EPREFIX is traditionally something the user sets so some thought that it would be applied to SYSROOT, regardless of the latter's value. In order to honor the rule about there being no distinct SYSROOT, this would mean that if SYSROOT is / then EPREFIX would have to match BROOT. Despite that limitation, ESYSROOT was written into PMS with a fixed value of ${SYSROOT}${EPREFIX}. Being somewhat unfamiliar with prefix at the time, I didn't realise that this view didn't align with what I'd had in mind and it was only when I came to need a distinct SYSROOT that I realised there was a problem. crossdev toolchains are installed to ${EPREFIX}/usr/${CHOST} but have no further prefix appended and packages subsequently installed with cross-emerge are placed in this location by setting ROOT. Bug #642604 recently revealed that the build system's prefix was being erroneously duplicated on the end but I have now fixed this. What if we want to bootstrap a brand new prefixed system using the crossdev system as SYSROOT? This is the distinct SYSROOT case. The problem is that there is no distinct variable for SYSROOT's prefix and, as already stated, ESYSROOT is always ${SYSROOT}${EPREFIX}. We therefore cannot do it! If the crossdev prefix is blank then ROOT's must be blank too. I also never intended to have the aforementioned limitation where EPREFIX must match BROOT when SYSROOT is /. These are both entirely artificial restrictions. So how should it work instead? We originally intended for SYSROOT to equal either / or ROOT so I imagined the prefix would automatically be adjusted to match the prefix applicable at the matching location, namely BROOT or EPREFIX. This is obviously more flexible than forcing it to match EPREFIX. What about the distinct SYSROOT case? With no distinct variable, we have no way to explicitly set a prefix but this is likely only needed when bootstrapping against crossdev systems, which are unprefixed by nature. We therefore simply assume that the prefix is blank in this case. What about the cross-prefix case? Here, SYSROOT matches both / and ROOT so which prefix do we choose? The bootstrap-prefix.sh script sets flags to build against the target prefix so EPREFIX is used in this case. This happens to fit the current definition of ESYSROOT anyway. Legitimate concerns have been raised about building for a system with a different prefix to the one you're building against. The only binaries that leak from SYSROOT to ROOT are static libraries. Headers from SYSROOT will obviously also influence how ROOT's binaries are built. It is entirely possible that SYSROOT's prefix may leak through a header but grepping /usr/include on my own main system reveals only a few paths from a small handful of packages. pkg-config files invariably include paths but these are almost always used at build time, not runtime. A differing prefix would likely only occur in cases involving core packages like the libc and kernel headers anyway. Also consider that we have never prevented this from happening in the past. It has always been possible to do "EPREFIX=/foo emerge bar" from some system with a different prefix or no prefix at all. All we're doing here is including the prefix (if any) in the ESYSROOT variable. Should this warrant a new EAPI? I don't think so. All existing usage of ESYSROOT that I have seen still fits with this new definition and most of that usage has come from me. We're not even changing what the variable is used for, just loosening the constraints around what it can be set to. If you have doubts about whether this makes sense or actually works in practise, I have experimented with a prefixed system using all the different combinations I could think of, including cross-compiling, and it all worked as expected. Keep in mind that ESYSROOT is not magic and currently isn't used very much. As such, neither the toolchain nor pkg-config will use this sysroot if you don't explicitly tell them to. For the former, I find CC="${CHOST}-gcc --sysroot=${ESYSROOT}" works well. For the latter, crossdev installs a cross-pkg-config wrapper but it is completely lacking prefix support at the moment. I have fixes waiting on this change. Signed-off-by: James Le Cuirot <chewi@gentoo.org> [Replaced "/" by "empty", reworded table cell in ebuild-env-vars.tex] Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* pms.cls: Drop the caption package because it interferes with TeX4ht.Ulrich Müller2020-06-241-39/+28
| | | | | | | | | | | | | | | | The "caption" package already caused issues in the past (for example, see commits b35f619 and 467f1b7), and version v3.4h finally broke the list of tables in HTML output. TeX4ht upstream says that the package should be avoided: https://puszcza.gnu.org.ua/bugs/?313#comment8 Positioning the caption above the table is simple enough without using the package. So we only lose the boldface labels which is a very minor issue. As an added bonus, this allows removal of most workarounds that were necessary for TeX4ht. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* Makefile: Remove workaround for list of tables.Ulrich Müller2020-06-151-4/+0
| | | | | | | | This broke HTML output with TeX Live 2020. Without the workaround, formatting within the list of tables is consistent, but the list of tables is now inconsistent with the table of contents. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* Revert "(vapier 2.1.1) document why hyphens are not required in category names"Ulrich Müller2020-04-121-3/+0
| | | | | | | Specific category names are tree policy and don't belong in the spec. This reverts commit 5c9ee872cb8d953bc037c765e2ef154eb0b96e4a (svn r63). Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* glossary.tex: Move explanation of new-style virtuals to the appendix.Ulrich Müller2020-04-112-4/+4
| | | | | | | The term "new-style virtual" is not used in the spec, so we need not explain it in the glossary. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* ebuild-functions.tex: Update array detection code.Ulrich Müller2020-02-151-4/+4
| | | | | | | | | | | | | | | | | Remove the space after "declare -a" for matching "declare -p" output. With the update of the bash version to 4.2 in EAPI 6, variables can have other attributes in addition, which would make the test fail. For example, see https://bugs.gentoo.org/444832#c7. The implementation in Portage already omits the space. Replace grep by functionally equivalent code in bash. This is how the test is implemented in package managers, and follows the principle that external programs should not be called unnecessarily. Redirect stderr for "declare -p", because it outputs an error message if the PATCHES variable is not defined. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* Update copyright years.Ulrich Müller2020-01-171-1/+1
| | | | Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* dependencies.tex: Don't mention blocks on provided virtuals.Ulrich Müller2020-01-051-7/+2
| | | | | | | | This is a remnant of old-style virtuals and should have been removed long ago. Fixes: c8ab6b99bffa85bcd686284ba60a30f53c31c8b0 Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* ebuild-vars: Remove 'simple filename' mirror fetchingMichał Górny2019-12-301-4/+2
| | | | | | | | | | | The feature of using 'simple filename' to fetch from mirrors is ill-defined (which mirrors?). The Portage implementation works only if GENTOO_MIRRORS are explicitly set. It's not used in any ebuilds. Let's remove it retroactively from the specification. Bug: https://bugs.gentoo.org/695814 Signed-off-by: Michał Górny <mgorny@gentoo.org> Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* ebuild-vars.tex: Consistent order of *DEPEND variables.Ulrich Müller2019-12-231-1/+1
| | | | | | It is DEPEND, BDEPEND, RDEPEND, PDEPEND elsewhere. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* ebuild-functions.tex: Reorder phase functions in table.Ulrich Müller2019-11-252-5/+5
| | | | | | Ordering corresponding to their call order (except pkg_nofetch). Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* pms.bib: Add GLEP 74 to bibliography.Ulrich Müller2019-11-062-1/+9
| | | | | | | GLEP 74 has replaced GLEP 44, so use it as reference for the Manifest file. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* pms.bib: Use last date listed in Post-History for GLEPs.Ulrich Müller2019-11-061-8/+8
| | | | | | | This seems to be a closer substitute for the publication date than the "Created" field used until now. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* get_libdir: Clarify that it must be a shell function.Ulrich Müller2019-10-181-2/+2
| | | | | | It is implemented as a bash function in all package managers. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* doins, dodoc: Clarify how directories are created.Ulrich Müller2019-10-011-5/+6
| | | | | | | | | | | | | | | | | | | With the -r option, it was unspecified what the mode of any created directories is. Clarify that doins -r will create them as if dodir was called (i.e., respect diropts), and that dodoc -r will create them as if plain install -d was used. For doins, this agrees with package manager implementations. For dodoc, this agrees with historic Paludis behaviour. Portage behaviour has changed in the past, when dodoc was changed from a standalone helper to reusing parts of doins. Usage in the Gentoo repository indicates that no ebuilds call diropts specifically for installing of documentation. However, for several ebuilds dodoc -r is affected by diropts called previously for another directory, which looks like an unwanted side effect. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* profiles.tex: Wording and typographic fix.Ulrich Müller2019-08-251-1/+1
| | | | | | | The phrase "once again" referred to the preceding virtuals subsection, which was removed in commit c8ab6b99bffa85bcd686284ba60a30f53c31c8b0. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* tree-layout.tex: Typographic fix.Ulrich Müller2019-08-241-1/+1
| | | | Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* pkg-mgr-commands.tex: Fix indentation in einstall listing.Ulrich Müller2019-08-101-10/+10
| | | | Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* eapi-cheatsheet.tex: Specify --with-sysroot in econf correctly.Ulrich Müller2019-08-011-3/+2
| | | | | | Also reword the sentence, in order to prevent an overfull box. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* pkg-mgr-commands.tex: Specify --with-sysroot in econf correctly.Ulrich Müller2019-07-291-1/+1
| | | | | | | | | If ESYSROOT is empty, then econf must pass --with-sysroot="/" as option to configure. This agrees with the implementation in portage and pkgcore. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* ebuild-env-vars.tex: Clarify trailing slash rule.Ulrich Müller2019-07-291-2/+3
| | | | Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* Recognise "live" as token in PROPERTIES.Ulrich Müller2019-07-191-0/+2
| | | | | Bug: https://bugs.gentoo.org/690220 Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* pms.cls: Drop unused verbatim.sty.Ulrich Müller2019-07-101-1/+0
| | | | | | Only needed for \verbatiminput which is no longer used in the text. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* pkg-mgr-commands: Correct ver_cut and ver_rs to use ${PV} by defaultMichał Górny2019-07-081-2/+2 | | | | | | | | | | | | | | | | | | | | | | | Correct the description of ver_cut and ver_rs commands to indicate that they process${PV} when no version argument is specified, rather than wrongly \${PVR}. It seems that the latter was introduced as a typo, as it neither agrees with initial Bugzilla proposal of the function [1], pre-EAPI implementation in eapi7-ver.eclass [2] or EAPI 7 cheatsheet in PMS. Furthermore, it simply makes little sense as the common usage of those functions is to manipulate URLs and filenames, and those do not use ebuild revisions.. It is also how it was implemented in Portage, and initially in PkgCore (afterwards the PkgCore implementation changed to conform to PMS, with expectably breaking results). [1] https://bugs.gentoo.org/482170#c15 [2] https://gitweb.gentoo.org/repo/gentoo.git/commit/eclass/eapi7-ver.eclass?id=59a1a0dda7300177a263eb1de347da493f09fdee Bug: https://bugs.gentoo.org/689494 Signed-off-by: Michał Górny <mgorny@gentoo.org>
* ebuild-env-vars.tex: Allow SYSROOT & BROOT in pkg_setupMichał Górny2019-05-131-3/+4
| | | | | | | | Allow using SYSROOT & BROOT in pkg_setup(), when building from source. This follows the earlier change allowing for build-time dependencies in pkg_setup under the same circumstances. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* eapi-cheatsheet.tex: Fix colour of cross reference links.Ulrich Müller2019-04-071-0/+1
| | | | Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* pms.cls: Drop page references when processing with tex4ht.Ulrich Müller2019-04-072-10/+11
| | | | | | | They are meaningless in the HTML output. Delete \pageref in the text; it was used only once. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* pms.cls: Change bibliographystyle to unsrturl.Ulrich Müller2019-04-072-9/+10
| | | | | | | This will list references in order of citation. Also, use the url field to specify URLs. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* dependencies.tex: Reorder variables to match ebuild-vars section.Ulrich Müller2019-03-241-4/+4
| | | | Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* Refer to chapters as chapters.Ulrich Müller2019-03-2413-47/+42
| | | | | | | Also rename label prefixes, "ch:" for chapters, "sec:" for sections, as suggested by Michael Orlitzky. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* Promote "Package Dependency Specifications" to section.Ulrich Müller2019-03-241-5/+5
| | | | | | | | Its subsubsections "Operators", "Block operator", "Slot dependencies", and "2-style and 4-style USE dependencies" will become subsections, so the maximum section number depth of the document will be 2. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* ebuild-vars.tex: More precise cross references.Ulrich Müller2019-03-241-8/+9
| | | | | | | Where appropriate, refer to the "Dependency Specification Format" section, instead of the "Dependencies" chapter. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* Move some subsections out of the "Dependencies" chapter.Ulrich Müller2019-03-243-84/+86
| | | | | | | | | | SRC_URI, REQUIRED_USE, PROPERTIES, and RESTRICT are ebuild-defined variables. Move them to that chapter. Add reference to tab:uri-arrows-table in SRC_URI section. Otherwise, no change of wording. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* Makefile: Change encoding of HTML file to UTF-8.Ulrich Müller2019-03-111-8/+8
| | | | | | | | | | This will allow to drop the dependency on app-text/recode. Replace ligatures in tex4ht output by their components, because they would interfere with text search. Update sed expression for the list of tables workaround. Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* ebuild-functions.tex: S to WORKDIR fallback is conditional for src_test.Ulrich Müller2019-03-051-4/+6
| | | | | | | | | | | | | | | | | | | | Arguably, section 9.1.1 "Initial working directories" applies also to src_test, even if section 9.1.8 "src_test" doesn't refer back to 9.1.1. In src_test, a fallback from S to WORKDIR could only happen for an ebuild that: - Has no files in A to be unpacked. - Doesn't define any of the unpack, prepare, configure, compile or install phases (otherwise it would die in one of these phases). Since that scenario is very unlikely, fix the wording in section 9.1.8 retroactively for EAPI 4 and later. Note: Implementations also differ about this: portage will always fall back, while for pkgcore it is a conditional error. Closes: https://bugs.gentoo.org/652050 Signed-off-by: Ulrich Müller <ulm@gentoo.org>
* Cheat sheet: Whitespace.Ulrich Müller2019-03-031-1/+1
| | | | Signed-off-by: Ulrich Müller <ulm@gentoo.org> | 2021-05-10 05:29:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49754658341407776, "perplexity": 4629.756734590588}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989030.87/warc/CC-MAIN-20210510033850-20210510063850-00028.warc.gz"} |
http://acm.sdut.edu.cn/onlinejudge2/index.php/Home/Index/problemdetail/pid/1055.html | ### Tempus et mobilius. Time and motion
Time Limit: 1000 ms Memory Limit: 65536 KiB
#### Problem Description
Unfortunately, most commercially available ball clocks do not incorporate a date indication, although this would be simple to do with the addition of further carry and indicator tracks. However, all is not lost! As the balls migrate through the mechanism of the clock, they change their relative ordering in a predictable way. Careful study of these orderings will therefore yield the time elapsed since the clock had some specific ordering. The length of time which can be measured is limited because the orderings of the balls eventually begin to repeat. Your program must compute the time before repetition, which varies according to the total number of balls present.
Every minute, the least recently used ball is removed from the queue of balls at the bottom of the clock, elevated, then deposited on the minute indicator track, which is able to hold four balls. When a fifth ball rolls on to the minute indicator track, its weight causes the track to tilt. The four balls already on the track run back down to join the queue of balls waiting at the bottom in reverse order of their original addition to the minutes track. The fifth ball, which caused the tilt, rolls on down to the five-minute indicator track. This track holds eleven balls. The twelfth ball carried over from the minutes causes the five-minute track to tilt, returning the eleven balls to the queue, again in reverse order of their addition. The twelfth ball rolls down to the hour indicator. The hour indicator also holds eleven balls, but has one extra fixed ball which is always present so that counting the balls in the hour indicator will yield an hour in the range one to twelve. The twelfth ball carried over from the five-minute indicator causes the hour indicator to tilt, returning the eleven free balls to the queue, in reverse order, before the twelfth ball itself also returns to the queue.
#### Input
The input defines a succession of ball clocks. Each clock operates as described above. The clocks differ only in the number of balls present in the queue at one o'clock when all the clocks start. This number is given for each clock, one per line and does not include the fixed ball on the hours indicator. Valid numbers are in the range 27 to 127. A zero signifies the end of input.
#### Output
For each clock described in the input, your program should report the number of balls given in the input and the number of days (24-hour periods) which elapse before the clock returns to its initial ordering.
#### Sample Input
30
45
0
#### Sample Output
30 balls cycle after 15 days.
45 balls cycle after 378 days. | 2019-03-19 06:46:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6143943071365356, "perplexity": 1200.4175814114337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201904.55/warc/CC-MAIN-20190319052517-20190319074517-00316.warc.gz"} |
https://stats.stackexchange.com/questions/485464/how-does-a-small-sample-size-underpowered-study-affect-the-prevalence | # How does a small sample size (underpowered study) affect the prevalence?
I am doing a meta-analysis where I calculate an overall prevalence value for multiple studies. I have submitted my manuscript to a journal. I used sample size as one of the risk of bias criteria, where a low sample size represents an underpowered study.
I received the reviewers' comments and one comment mentioned that a low sample size does not invalidate the prevalence estimate (i.e., the prevalence remains valid even if the sample size is low). I am wondering, why is this the case?
• Could it be that the proportion estimator is unbiased, even if the uncertainty (standard error) is greater when the sample size is smaller?
– Dave
Aug 31, 2020 at 19:56
If one or more of the estimates from the primary studies is imprecise then that will be taken into account in the meta-analysis as the procedure used is inverse variance weighting which takes precision into account. If you use random effects models then the weights become more nearly equal so small studies do then have greater influence. In the limit when $$\tau^2$$ vastly exceeds the individual variances then the weights tend to equality. So as long as you trust the study in other respects including it is fine but it may not have much effect on your overall estimate anyway. | 2022-06-28 03:01:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6782936453819275, "perplexity": 578.3878789561954}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103347800.25/warc/CC-MAIN-20220628020322-20220628050322-00302.warc.gz"} |
https://www.semanticscholar.org/paper/Logical-operations-McGee/f2c661b39cbb70048410b6ea8fc2e086e5539b9d | # Logical operations
@article{McGee1996LogicalO,
title={Logical operations},
author={Vann McGee},
journal={Journal of Philosophical Logic},
year={1996},
volume={25},
pages={567-580}
}
• Vann McGee
• Published 1996
• Philosophy
• Journal of Philosophical Logic
Tarski and Mautner proposed to characterize the “logical” operations on a given domain as those invariant under arbitrary permutations. These operations are the ones that can be obtained as combinations of the operations on the following list: identity; substitution of variables; negation; finite or infinite disjunction; and existential quantification with respect to a finite or infinite block of variables. Inasmuch as every operation on this list is intuitively “logical”, this lends support to…
Logical Operations and Invariance
A notion of invariance under arbitrary surjective mappings for operators on a relational finite type hierarchy generalizing the so-called Tarski–Sher criterion for logicality is presented and the invariant operators are characterized as definable in a fragment of the first-order language.
Logic, Logics, and Logicism
An examination and critique of Tarski’s wellknown proposed explication of the notion of logical operation in the type structure over a given domain of individuals as one which is invariant with respect to arbitrary permutations of the domain and a new notion of homomorphism invariant operation over functional type structures is introduced.
Extensionality and logicality
This paper defines the logical terms of a language as those terms whose extension can be determined by their form, and defines purely logical languages as “sub-extensional”, namely, as concerned only with form, to obtain a wider perspective on both logicality and extensionality.
A completeness theorem for unrestricted first- order languages
• Philosophy
• 2003
Here is an account of logical consequence inspired by Bolzano and Tarski. Logical validity is a property of arguments. An argument is a pair of a set of interpreted sentences (the premises) and an
Invariance and Definability, with and without Equality
• Mathematics
Notre Dame J. Formal Log.
• 2018
This paper generalizes a correspondence due to Krasner between invariance under groups of permutations and definability in $\La_{\ infty\infty}$ so as to cover the cases (quantifiers, logics without equality) that are of interest in the logicality debates, getting McGee's theorem about quantifiers invariant under all permutation and Definability in pure $\La_infty$ as a particular case.
Logical Indefinites∗
• Philosophy
• 2014
The best extant demarcation of logical constants, due to Tarski, classifies logical constants by invariance properties of their denotations. This classification is developed in a framework which
LOGICALITY AND MODEL CLASSES
• Philosophy
The Bulletin of Symbolic Logic
• 2021
Abstract We ask, when is a property of a model a logical property? According to the so-called Tarski–Sher criterion this is the case when the property is preserved by isomorphisms. We relate this to
Which Quantifiers Are Logical? A Combined Semantical and Inferential Criterion
A combined semantical and inferential criterion for logicality is offered and it is shown that any quantifier that is to be counted as logical according to that criterion is definable in first order logic.
Logicality and Invariance
• D. Bonnay
• Philosophy
Bulletin of Symbolic Logic
• 2008
The standard arguments in favor of invariance under permutation, which rely on the generality and the formality of logic, should be modified and shown to support an alternative to Tarski's criterion, according to which an operation is logical iff it is invariant under potential isomorphism.
LOGICALITY AND MEANING
• Gil Sagi
• Philosophy
The Review of Symbolic Logic
• 2018
It is proposed that when a term is considered logical in model theory, what gets fixed is its intension rather than its extension, and it is shown that this leads to a graded account of logicality: the less structure a term requires in order to be fixed, the more logical it is. | 2022-06-26 11:48:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7800617218017578, "perplexity": 1692.4195609679398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103205617.12/warc/CC-MAIN-20220626101442-20220626131442-00221.warc.gz"} |
https://listserv.uni-heidelberg.de/cgi-bin/wa?A2=LATEX-L;42dd922d.0307&FT=&P=99649&H=A&S=b | LATEX-L@LISTSERV.UNI-HEIDELBERG.DE
Options: Use Classic View Use Monospaced Font Show Text Part by Default Condense Mail Headers Topic: [<< First] [< Prev] [Next >] [Last >>]
Sender: Mailing list for the LaTeX3 project <[log in to unmask]> From: Johannes Kuester <[log in to unmask]> Date: Thu, 10 Apr 1997 16:12:45 +0200 In-Reply-To: <[log in to unmask]> from "Ulrik Vieth" at Apr 10, 97 03:24:07 pm Reply-To: Mailing list for the LaTeX3 project <[log in to unmask]> Parts/Attachments: text/plain (85 lines) Commenting on Barbara Beeton and Ulrik Viet, regarding the inclusion of the upright "d" in the math core font: bb: > this is not arbitrary, and is there for the same reason that an > upright partial sign is included among the "extra greek-like > material" -- it is to represent the differential operator, which > is upright according to an iso standard for math notation (whose > reference number i don't remember at the moment). since that > standard was developed by engineers, not mathematicians, actual > practice in those two communities may differ, but the fact remains > that the upright "d" is standardized and the (more familiar to me) > italic "d" is not. Ulrik Viet: > NO!!! The upright "d" and upright \partial are despartely needed for > differentials, at least according to typesetting rules applicable in > physics. The reason to have them in the MC font is kerning: If you > run the math test of the MFbook testfont.tex program, you'll find that > there are numerous kern pairs between italics "d" and other italics or > greek letters. Kerning between upright "d" and other letters may not > be as critical as for the italics "d", but one shouldn't rule out this > possibility by banning the upright "d" from the MC font prematurely. > > BTW, from the point of view of physics requirements, I'd also like to > have an upright "i" (and perhaps also "j") in the MC font for kerning > reasons. I'm afraid that this suggestion somehow never made it into > Justin's report. Just look closely at I fully agree on Barbara Beeton. I use and like to use it myself and I don't like the italic "d", in mathematics or elsewhere, but there are other symbols to be set upright which aren't included. That was my point. So chosing "d" but not chosing "i" or "e" is arbitrary. Or "D", which is also used as a differential operator in certain cases. The upright partial (in fact a variant delta) is not available in other fonts, and so has a right to reside here (it has so anyway as a Greek letter, hasn't it?) But math italic should match the text font of a document, and so the upright sybols could well be taken from there, if available. The only problem here is kerning, and it seems impossible to come up with a font of 256 characters which could include all needed glyphs. (So we'll have to change to Omega...) ----- [Special Laplace symbol] > Very good idea! Reserving a slot for \Laplacian would allow the font > designer to choose between different styles for \Laplacian and \nabla, > either in Greek'' style (upright and inverted \Delta) or geometric > (upright and inverted triangle of uniform line thickness). Another > symbol in this grooup would be the d'Alembert operator (a.k.a. quabla), > usually just a geometric square, but it should match the size of > the Laplacian. I got the idea of creating a special \Lapla (to match my \Nabla as a control sequence) by the Duden, which in some cases shows glyph variant, the variant used in the DIN rules indicated by an asterisk. Now, it showed here such a triangle. Only later I realizes that DIN uses sans serif (which looks terrible for math), so may be this was just a sans serif Delta... Anyway, I think these symbols should be a little larger than a Delta, somewhere in between uppercase and things like \sum (in textmode) in height, thus "reigning" the following formula just as \sum does (and thus increasing readability). They don't need to be as large as \sum, as normally they don't have such a large scope (the parts following them an belonging to them are often composed only of a single symbol). > Question: Are these really math characters, or just greek text characters? > Remember, we want to do a math font not a greek text font, so if they are > never ever used in math, I'd suggest to get rid of them. At least they were used as number signs in historic Greece. But they don't have to reside in the core, they could be somewhere in the fourth or seventh symbol font... Johannes Kuester -- Johannes Kuester [log in to unmask] Mathematisches Institut der Technischen Universitaet Muenchen | 2023-02-05 11:41:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9070448279380798, "perplexity": 3084.4421798060253}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500251.38/warc/CC-MAIN-20230205094841-20230205124841-00601.warc.gz"} |
https://www.classiq.io/competition/hamiltonian | # Hamiltonian simulatioN
## Background
The Hamiltonian Simulation problem describes the evolution of quantum systems, such as molecules and solid state systems, by solving the Schrodinger equation. Quantum computers enable the simulation in a scalable manner, as described in [Lloyd96]. The most notable algorithm is the Trotterization-based product formula.
## The Problem
Generate a circuit, using no more than 10 qubits, that approximates the unitary $e^{-iH}$ where $H$ is the qubit hamiltonian of a LiH (lithium hydride) molecule. The LiH Hamiltonian is composed of 276 Pauli strings, and can be found HERE. The approximation error is defined in the next section, and should be less than 0.1. The circuit should be composed of the CX and single qubit gates only.
## Approximation Error
The “distance” between the approximated circuit and the real unitary operator is defined by the operator norm of the difference between the two operators
$error = \|U_{circuit} - e^{-iH}\|$.
The operator norm is defined as the maximal eigenvalue of the operator $\|A\| = max {|\lambda_i|}$ where $\lambda_i$ are the eigenvalues of $A$.
This error describes the worst inconsistency between the two operators for all possible input states.
Note that for unitary operators the error is bounded between 0 and 2.
## Metric
The winning solution will be the circuit with the minimal depth for which the error defined above is less then 0.1.
## Troterrization Algorithm
In this section we are going to describe the most common algorithm - Troterrization based product formula.
The algorithm relays on the Lie-Trotter formula
$$e^{-i(H_1+H_2+...+H_l)t}= \lim_{n\to\infty} (e^{-\frac{iH_1t}{n}}e^{-\frac{iH_2t}{n}}...e^{-\frac{iH_lt}{n}}) ^n.$$
This formula is especially useful in the context of quantum simulations since generally the Hamiltonian is given by a sum of Pauli terms, and a single Pauli term can be exponentiated as presented.
For a Pauli Z Hamiltonian, the unitary $e^{-iZZZt}$ can be presented using the following equivalent circuits:
An exponentiation of a general Pauli term (consist of I, X, Y, Z Pauli matrices), such as $e^{-iXZXt}$ is composed from $e^{-iZZZt}$ with basis change via single-qubit gates:
Next, we will observe a possibility to optimize the circuit for multi-term Hamiltonian. Note that for a multi-term Hamiltonian such as $H = H_1 + H_2$ the ordering of the terms within a single repetition of the trotter decomposition is irrelevant, i.e
$$e^{-i(H_1 + H_2)t} =\lim_{n\to \infty} (e^{-\frac{iH_1t}{n}}e^{-\frac{iH_2t}n}) = \lim_{n\to \infty} (e^{-\frac{iH_2t}{n}}e^{-\frac{iH_1t}{n}})$$
even if $H_1$ and $H_2$ are not commutative.
These schemes allow compiling of shorter circuits. For example, consider the Hamiltonian $H = ZZZ + ZZ$. Naively the single repetition would compile to:
Using a different implementation for single term evolution:
some CX gates cancel out to get:
## Questions?
Post your questions or see what others have asked on our Competition support site.
We look forward to reviewing your solution. Here are some things to know:
You are welcome to submit multiple solutions. If you submit something today, and find an improvement later on, you're welcome to submit the new solution as well.
Each problem requires slightly different information. Please make sure you carefully review the problem instructions.
The description of the approach is very helpful to the judges in understanding your work. Please include as much detail as you wish. We're interested in how you approached this problem, how your solution works, how long it took you to develop it, and whether there were particular problems that you overcame. We'd love to hear as much as you want to tell us. You can submit this in the 'description' field or include it as the 'Additional file (optional)' attachment.
The QASM code is how we can verify the 2-qubit gate count and/or circuit depth. It should include only CX and single-qubit unitary gates. This note on our Discourse support site might help.
If you created the circuit in something other than QASM, please submit the source code as well.
You will receive an email confirming the submission within a few minutes.
Submissions are closed. the results are here. Thank you!
0
DAYS
:
0
Hours
:
0
Minutes
:
0
Seconds
Oops! Something went wrong while submitting the form.
Submission
*QASM Code | 2023-03-23 04:03:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7001633644104004, "perplexity": 823.1286687893065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944996.49/warc/CC-MAIN-20230323034459-20230323064459-00694.warc.gz"} |
https://scicomp.stackexchange.com/questions/19383/help-understanding-the-so-called-spectral-method?noredirect=1 | # Help understanding the so-called “spectral method”
This is a follow-up question to an answer I read here. $$M$$ is some hermitian matrix and $$V$$ an vector.
Since the matrix is hermitian, you could use it as a hamiltonian to propagate it in imaginary time. That is, solve the following system of differential equations:
$$i\frac{d \vec{V}}{dt}=M\vec{V}$$
The general solution to this is:
$$V(t)=V_0e^{iMt}$$
Then you take your $$\vec{V(t)} \cdot \vec{V(0)}$$, fourier transform it, and the height and placement of the peaks will tell you the components along various eigenvectors and their associated eigenvalues. This is sometimes called "the spectral method" in ultrafast atomic physics.
I want to understand this method.
My thoughts on an intuitive level so far: If I take the second equation as given, $$V(t)$$ is a vector-valued process for $$t$$. The scalar product $$\left$$ encodes the matrix $$e^{iMt}$$ in a scalar-valued object. The fourier transform for a scalar $$e^{imt}$$ is the delta distribution $$\delta_{m}$$, up to possibly rescaling by $$2\pi$$. That is, the fourier transform decodes information of test functions $$m\mapsto e^{imt}$$. So the foruier transform of $$\left$$ might somehow decode information about the hermitian matrix $$M$$.
Is this intuition right so far? If so, any help on how to make this rigorous and how to understand what is going on? A small explicit example would be very helpful.
Any source such as a textbook, lecture notes etc. is most welcome. | 2020-07-02 07:17:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 15, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8224056363105774, "perplexity": 196.22717417898554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655878519.27/warc/CC-MAIN-20200702045758-20200702075758-00332.warc.gz"} |
http://math.stackexchange.com/questions/121943/showing-point-is-the-orthocenter | # Showing point is the orthocenter
Given a rectangle WXYZ, let R be a point on its circumscribed circle. Show that, out of the orthogonal projections of R onto WX, XY, YZ, and ZW; one out of these 4 points is the orthocenter of the triangle created by the other three.
-
Have you tried saying $W,X,Y,Z$ are the points $(\pm a,\pm b)$ and $R=(r\cos\theta,r\sin\theta)$ for $r=\sqrt{a^2+b^2}$? Then the projections are $R_{1,2}=(\pm a,r\sin\theta)$ and $R_{3,4}=(r\cos\theta,\pm b)$ and it's fairly straightforward to check that each of the three pairs of lines $\{R_aR_b,R_cR_d\}$ are perpenticular. Thus each of the $R_i$ lies on the altitudes of the triangle formed by the other three points, proving the result. Remember that perpendicularity means the slopes are negative reciprocals, or that their products are $-1$. – bgins Mar 19 '12 at 10:55
If you like complex numbers then the solution may be made very simple. WLOG we may assume that points $W, X, Y, Z$ all lie on the unit circle of the complex plane, and R is another point of the unit circle. We know that $WXYZ$ is a rectangle, so if, say, $W = a + i b$, then the remaining points are expressed as $X = \overline{W} = a - i b$, $Y = -W = - a - i b$, and $Z = - \overline{W} = -a + i b$.
Now let $R = p + i q$, and let us denote by $R_{A B}$ the projection of point $R$ onto line $AB$. Then by pure thought one can find that $R_{X Y} = p - i b$, $R_{Y Z} = - a + i q$, $R_{Z W} = p + i b$, and $R_{W X} = a + i q$.
We can now compute $R_{X Y} - R_{Y Z} = \overline{R} + \overline{W}$ and $R_{Z W} - R_{W X} = \overline{R} - \overline{W}$.
We need to observe that if points $A$ and $B$ lie on the unit circle, their sum $A + B$ and difference $A - B$ are orthogonal. Thus, line $(R_{X Y}, R_{Y Z})$ is perpendicular to line $(R_{Z W}, R_{W X})$
Finally, we find that line $(R_{X Y}, R_{Z W})$ is perpendicular to line $(R_{YZ}, R_{W X})$ by construction, so indeed point $R_{Z W}$ in the orthocenter of triangle $R_{XY}R_{YZ}R_{WX}$.
Similarly you may argue for the other points.
-
Continuing with the notation and analytic-geometric approach from my comment, \eqalign{ R_1R_2 \perp R_3R_4 \iff & \{y=r\sin\theta\} \perp \{x=r\cos\theta\} \\\\&\text{True (horizontal/vertical lines)} \\\\ R_1R_3 \perp R_2R_4 \iff & \overline{(a,r\sin\theta)(r\cos\theta,b)} \perp \overline{(-a,r\sin\theta)(r\cos\theta,-b)} \\\iff &-1= \frac{b-r\sin\theta}{r\cos\theta-a}\cdot\frac{-b-r\sin\theta}{r\cos\theta+a} =\frac{r^2\sin^2\theta-b^2}{r^2\cos^2\theta-a^2} \\&\qquad=\frac{a^2\sin^2\theta-b^2\cos^2\theta}{b^2\cos^2\theta-a^2\sin^2\theta} \\\\&\text{True} \\\\ R_1R_4 \perp R_2R_3 \iff & \overline{(a,r\sin\theta)(r\cos\theta,-b)} \perp \overline{(-a,r\sin\theta)(r\cos\theta,b)} \\\iff&-1= \frac{-b-r\sin\theta}{r\cos\theta-a}\cdot\frac{b-r\sin\theta}{r\cos\theta+a} \\\\&\text{...True?} } Is this approach accesible for you? Can you try the last one? The facts we need are: $$\matrix{ \cos^2\theta+\sin^2\theta=1\qquad&\text{true for all }\theta \\\\ a^2+b^2=r^2\qquad &\matrix{\text{do you see why I set}\\\text{it up this way above?}} \\\\ m_{PQ}=\frac{y_P-y_Q}{x_P-x_Q} &\text{slope formula} \\\\ -(s-t)=t-s &\text{basic algebra} \\\\ (s+t)(s-t)=s^2-t^2 &\text{basic algebra} \\\\ x=r\cos\theta &\text{is a circle with radius }r \\ y=r\sin\theta &\text{centered at the origin} }$$
-
Synthetic solution:
Let the projections of $R$ onto $WX, ZY, WZ$, and $XY$ be $A,B,C,D$, respectively. (In my diagram $R$ lies on the arc $XY$, for what it's worth). Let $E$ be the intersection of line $AD$ with line $CB$.
Since trivially $CD \perp AB$ (intersecting at $R$), it's good enough to show that $ADE \perp CB$.
Claim 1: $RE \perp WY$. Proof: Consider the Simson line of $XWY$ with respect to the point $R$. The projections onto $XW$ and $AY$ are $A$ and $D$, so the projection onto $CB$ must be $E$.
Claim 2: $WARE$ is cyclic. Proof: $WA \perp RA$ and $RE \perp WE$.
Claim 3: $WARC$ is cyclic. Proof: $WA \perp AR$ and $WC \perp RC$.
Claim 4: $WAEC$ is cyclic. Proof: $WARE$ and $WARC$ are cyclic.
Claim 5: $AE \perp CE$. Proof: $WACE$ is cyclic and $WA \perp CW$.
Claim 6: $C,E,$ and $B$ are collinear. Proof: Simson line of $WZY$.
So $AE \perp CEB$, and $D$ is the orthocenter of $ABC$.
- | 2015-01-25 18:48:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9532730579376221, "perplexity": 222.63035035115598}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115861305.18/warc/CC-MAIN-20150124161101-00104-ip-10-180-212-252.ec2.internal.warc.gz"} |
https://www.clutchprep.com/chemistry/practice-problems/99357/an-experiment-shows-that-a-114-ml-gas-sample-has-a-mass-of-0-171-g-at-a-pressure-1 | # Problem: An experiment shows that a 114 mL gas sample has a mass of 0.171 g at a pressure of 720 mm Hg and a temperature of 33 oC.What is the molar mass of the gas?
###### FREE Expert Solution
Using the ideal gas equation.
$\overline{){\mathbf{P}}{\mathbf{V}}{\mathbf{=}}{\mathbf{n}}{\mathbf{R}}{\mathbf{T}}}$
Isolate n (number of moles of gas):
###### Problem Details
An experiment shows that a 114 mL gas sample has a mass of 0.171 g at a pressure of 720 mm Hg and a temperature of 33 oC.
What is the molar mass of the gas? | 2020-07-06 10:11:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7037280201911926, "perplexity": 727.8215055325983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890157.10/warc/CC-MAIN-20200706073443-20200706103443-00202.warc.gz"} |
https://dataspace.princeton.edu/handle/88435/dsp01pk02cd90z?mode=full | Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01pk02cd90z
DC FieldValueLanguage
dc.contributor.authorLohry, Mark William
dc.contributor.otherMechanical and Aerospace Engineering Department
dc.date.accessioned2022-06-16T20:34:05Z-
dc.date.available2022-06-16T20:34:05Z-
dc.date.created2022-01-01
dc.date.issued2022
dc.identifier.urihttp://arks.princeton.edu/ark:/88435/dsp01pk02cd90z-
dc.description.abstractThe field of computational fluid dynamics is in many ways considered mature, in that CFD tools are regularly used in all fields of engineering analysis and design where they complement experiment and theory. As computing power has increased and algorithmic approaches have improved, the adoption of simulation tools have also increased. Traditional CFD methods have been applied to ever-broader problems at ever-greater resolution with generally good success, but with some notable weaknesses that are likely to be overcome only through further advances in efficient high-fidelity algorithms. While now-traditional second-order methods have been generally successful for simulating smooth attached flows, the simulation of vorticity- and turbulence-dominated flows still present many unmet challenges. As computing power increases with modern computer architectures composed of thousands of multicore processors and GPUs, there remains the need to develop further high-fidelity algorithms suitable for the simulation of high Reynolds number flows on the current generation of heterogeneous computing systems. This thesis describes the formulation, development, verification, and validation of a discontinuous Galerkin method for approximate solutions of the Navier-Stokes equations. The DG method holds promise, but its adoption in practice has been hindered by many issues such as lack of robustness and computational efficiency, as well as questionable evidence of superiority for purpose compared to second-order schemes. This work addresses some key elements that can enable the transition of high-order methods from basic research to a useful computational tool for the analysis of complex aerodynamic flows of importance. The subject ranges from the study of numerical properties of DG methods to their efficient implementation in software. The algorithmic portions of this work focus on the use of implicit time integration with black-box algebraic solvers. The verification study of this approach in subsequent portions suggests this is a viable method for the stable and efficient solution of high fidelity DG discretizations at large computational scale. The software portions of this work focus on the use of modern software development approaches for the reliable and efficient implementation of the underlying numerical methods. As CFD methodology becomes more complex, the challenges of software development grows. Verification of the approaches is performed using test cases ranging from simple scalar transport equations up to direct numerical simulation and large eddy simulations of the compressible Navier-Stokes equations on complex geometries. An implicit large eddy simulation over tandem spheres is performed at higher resolution than published elsewhere to date, as well as at Reynolds numbers not seen elsewhere. Following this is an implicit LES analysis over a high-lift aircraft configuration, a challenging problem meant to push the limits of existing high-fidelity CFD methods.
dc.format.mimetypeapplication/pdf
dc.language.isoen
dc.publisherPrinceton, NJ : Princeton University
dc.relation.isformatofThe Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: <a href=http://catalog.princeton.edu>catalog.princeton.edu</a>
dc.subject.classificationAerospace engineering
dc.subject.classificationMechanical engineering
dc.subject.classificationComputational physics
dc.titleThe Development, Verification, and Validation of a Discontinuous Galerkin Method for the Navier-Stokes Equations | 2023-03-24 01:16:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37992942333221436, "perplexity": 1140.897774071054}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945218.30/warc/CC-MAIN-20230323225049-20230324015049-00238.warc.gz"} |
https://blog.rossry.net/tag/harvard/ | IN WHICH Ross Rheingans-Yoo—a sometime artist, economist, poet, trader, expat, EA, and programmer—writes on things of interest.
# Reading Feed (last update: March 17)
A collection of things that I was glad I read. Views expressed by linked authors are chosen because I think they're interesting, not because I think they're correct, unless indicated otherwise.
### (17)
Blog: Marginal Revolution | The rise of the temporary scientist — relevant to my interests, naturally.
### (16)
Blog: Marginal Revolution | Has the Tervuren Central African museum been decolonized? — "In a word, no. They shut the place down for five years and spent \$84 million, to redesign the displays, and what they reopened still looks and feels incredibly colonial. That’s not an architectural complaint, only that the museum cannot escape what it has been for well over a century..."
Neat: Submarine Cable Map
# Chelsea Manning / HKS IOP / "Visiting Fellowship"
Here are some Harvard Crimson headlines from this week:
So here we go...
### (1)
One of the more annoying things about this affair has been the way the discussion has chased a dramatized, misleading version of the facts. To borrow a phrase, the commentators (in my social bubble) seem content with -- if not actively interested in -- framing the matter to produce heat, instead of light.
The easiest antidote for this is actually to read Dean Elmendorf's statement announcing and explaining the withdrawal of the IOP's Visiting Fellow appointment. I say this not because I agree with the decision or Elmendorf's justification, but because it at least explains what the decision was:
Some visitors to the Kennedy School are invited for just a few hours to give a talk in the School’s Forum or in one of our lecture halls or
# Mass Ave, Mt Auburn, and a Tale of Two Schools
Still, this report shows that Harvard could learn a lot from MIT about how to run a university.
Harry Lewis, "The Report Harvard Should Have Asked For", 2013
### (0)
Around the time I came to Harvard, both Mass Ave schools were dealing with the fallout of embarrassing, messy institutional mistakes. Both started with relatively small incidents, compounded by administrative decisions that were incredibly contentious during and after the fact.
Harvard's began with the Gov 1310 cheating scandal -- and it escalated when scandal erupted over the administration's search of faculty emails to find which sub-dean had spoken to the press, raising both privacy concerns and unease about the relationship between the faculty and the administration.
MIT's began with the arrest of Aaron Swartz for downloading academic articles from JSTOR -- and escalated over the Institute's complicity with the US Attorney's Office, which many members of the community felt betrayed the school's values.
That fall and spring, I was a freshman overburdened with courses that I could just barely
# Thoughts on "Be Reasonable" as a collaboration policy
I drafted this one back in May when the controversy described was live, but never quite got around to pushing it out the door. There haven't been any real developments in the news since then, and I still believe in the points I made, so I'm publishing it now before it gets any more stale.
A course I used to teach -- cs50 -- has seen some on-campus news (and editorial) coverage recently in the wake of a leak that 60 students in the course were reported to the Administrative Board on suspicions of academic dishonesty. I don't have much to say, not having any relationship with the course since 2016 and not having any particularly relevant inside information as a former Teaching Fellow. (As a relatively junior member of the course staff, I wasn't asked to work on any cases of academic dishonesty, and the revelations that I could give have already been shared with the press.) But there's one thing that's come up in the ensuing
# On “’till the stock of the Puritans die”
attention-conservation notice: Taking poetry seriously. Wholehearted, uncynical, unapologetic Harvardiana.
Today's the first time that many of Harvard's graduands will hear the little-known final verse of "Fair Harvard". So it seems as good a time as any to muse on the administration's decision to change that verse's final lyric.
It would be pretty natural to be outraged at the prospect, but after trying to start that blog post and failing for a while, I realized that I'm actually in favor of the change.
### (1a)
"Fair Harvard", as far as almae matres go, is actually quite good. Here are a few others for comparison:
Notre Dame, our Mother tender, strong, and true, proudly in the heavens, gleams thy gold and blue. Glory’s mantle cloaks thee; golden is thy fame and our hearts forever praise thee Notre Dame. MSU, we love thy shadows When twilight silence falls, glushing deep and softly paling o’er ivy covered halls; beneath the pines we'll gather to give our faith so
# Happy Housing Day!
(In which the author, through timely blogging, attempts to rekindle a fading feeling of connection to his alma mater.)
On a Thursday morning four years ago, upperclassmen pounded on the door of my friends' suite where I had slept over (again), and when we let them in, they popped a (well-shaken) bottle of champagne to welcome us to Eliot House. Over the next three years, I'd spend some of the best afternoons (and the most miserable all-nighters) in Eliot, and though I'd be stretching the truth to say that I became close with everyone in the house, I had a place that was home to come bck to, year after year. Of course, I had the best friends I could possibly have asked for, but for that I owe more thanks to the Freshman Dean's Office for throwing us all into Canaday than the housing lottery for giving us the best of all houses.
(My dad puts his arm around my shoulders and gestures at the courtyard, where the
# Thou then wert our parent, the nurse of our soul... (A thank-you post)
### (1)
To my parents, in whose footsteps I couldn't be prouder to follow.
To my grandmother and late grandfather, who have been working for this moment for the past fifty years.
To my grandmother, who sewed my dance costumes.
To Lucian, the first friend I met here, without whom I would not have found the path I did. I'm so glad we're doing this next thing together.
To Christina, without whom I would not have survived with sense of humor intact. Thank you, friend.
To Ava, who was always there to lend an ear and set my heart true. You're still a klutz and a derp.
To Julia, who could be counted on to be in the audience whenever her blockmates were sweating under the lights. I can't say in words how much I appreciated it.
To Miriam, the stalwart rock of our blocking group.
To Grace, who lit up the room. Smartypants.
To Claire, who had the kindest words. I'm glad I got to
# At What Price ‘Progress’?
Some people are ecstatic at the news. Some people are furious. It'll hit the national news cycle in about twelve hours.
Basically, it's another Friday at Harvard.
Every lunchtime conversation is about the same topic, in hushed tones. Friends measure their words, not quite sure whether what they're about to say will cause offense to their closest friends. One can't sit in the dining hall without overhearing tense, but hushed, conversations about it. "How about that President Faust?" is acceptable as a casual greeting between friends.
It's not just another Friday at all.
### (1a)
Today President Faust announced by email that she's accepting Dean Khurana's recommendations that:
1. For students matriculating in the fall of 2017 and thereafter: any such students who become members of unrecognized single-gender social organization will not be eligible to hold leadership positions in recognized student organizations or athletic teams. Currently enrolled students and those who are matriculating in the fall of 2016 will be exempt from these new policies.
2. ...any such students who become | 2019-07-21 05:36:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25553300976753235, "perplexity": 6153.574020318827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526888.75/warc/CC-MAIN-20190721040545-20190721062545-00510.warc.gz"} |
https://mathoverflow.net/questions/240220/researching-the-irrationality-of-a-number | # Researching the irrationality of a number [closed]
I am conducting a little research on checking if a number, written in positional numeral system is irrational.
Let $h^p_n$ be the most right non-zero digit of number $n!$ written in numeral system with base $p$. For example, in system with base $10$, $8!=40320\Rightarrow h_8^{10}=2.$ The question is following: for which $p$ number $H^p=0,h^p_1h^p_2\ldots h^p_n\ldots$ is irrational?
Obviously, $H^2$ is rational. I also know that $H^{10}$ is irrational, but have no idea on how to prove it and would really appreciate any kind of hint.
• There are recursive formulas for this rightmost nonzero digit. These recursive formulas can probably be used to show that the sequence is not periodic, hence this number will be irrational. Jun 1, 2016 at 17:20
• Have you seen home.wlu.edu/~dresdeng/papers/two.pdf ? Jun 1, 2016 at 17:23
• If $p$ is prime (so $(p-1)!=-1\pmod{p}$) and $n=\sum_id_ip^i$ in base $p$ then it works out that $h^p_n=\pm\prod_id_i!\pmod{p}$, and this is not periodic even if we ignore the $\pm$ signs. Jun 1, 2016 at 17:38
• Were it periodic with period $a = p^e b$ with $(b,p)=1$, by multiplying $b$ suitably (since it divides some $p^f - 1$), wlog $b = p^f - 1$, i.e. $a = p^{e+f} - p^e$, i.e. in base $p$ $a$ is a string of $(p-1)$'s followed by a string of $0$'s. Now take $n = 2p^{e+f-1}$, which has base $p$ digits $(0,2,0,\ldots,0,0,\ldots,0)$, and compare $h_n^p$ with the same for $n+a$, which has base $p$ digits $(1,1,p-1,\ldots,p-1,0,\ldots,0)$. One gets that $2! = 2\equiv \pm 1\pmod{p}$, which is false once $p>3$. Jun 1, 2016 at 20:15
• How, Michael, can you know $H^{10}$ is irrational, if you have no idea how to prove it? Jun 1, 2016 at 23:35 | 2022-08-14 22:42:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8088361620903015, "perplexity": 228.4179879410441}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572077.62/warc/CC-MAIN-20220814204141-20220814234141-00334.warc.gz"} |
https://thepythonguru.com/decoding-captchas-using-python/ | Updated on Oct 09, 2020
As everyone knows, captchas are those annoying things like "Enter the letters that you see on the image" on the registration or feedback pages.
CAPTCHA is designed so that a human could read the text without difficulty, while a machine can not. But on practice this usually does not work, because almost every simple text captcha posted on the site gets cracked after less then several months. Then comes ReCaptcha v2 which is way more complicated, but still it can be bypassed in automatic mode.
While this struggle between captcha makers and captcha solvers seems like endless, different people are interested in automatic captcha solution in order to maintain work of their software. Thats why in this particular article I will show how to crack text captchas using OCR method, as well as how to bypass complex Google ReCaptcha v2 with the help of real people.
All examples are written in Python 2.5 using the PIL library. It should also work in Python 2.6 and it was successfully tested in Python 2.7.3.
Python: www.python.org
Install them in the above order and you are ready to run the examples.
Also, in the examples I will rigidly set many values directly in the code. I have no goal of creating a universal captcha recognizer, but only to show how this is done.
CAPTCHA: what is it actually #
Mostly captcha is an example of one-way conversion. You can easily take a character set and get captcha from it, but not vice versa. Another subtlety - it should be easy for humans to read, but not amenable to machine recognition. CAPTCHA can be considered as a simple test such as "Are you human?" Basically, they are implemented as an image with some symbols or words.
They are used to prevent spam on many websites. For example, captcha can be found on the registration page of Windows Live ID.
You are shown the image and if you are a real person, then you need to enter its text in a separate field. Seems like a good idea that can protect from thousands of automatic registrations for spamming or distributing Viagra on forums, isn't it? The problem is that AI, and in particular image recognition methods, have undergone significant changes and are becoming very effective in certain areas. OCR (Optical Character Recognition) these days is pretty accurate and easily recognizes printed text. So captcha-makers decided to add a little color and lines to captchas to make them more difficult for the computer to solve, but without adding any inconvenience for users. This is a kind of arms race and, as usual, one group comes up with more powerful weapons for every defense made by another group. Defeating such a reinforced captcha is more difficult, but still possible. Plus, the image should remain fairly simple so as not to cause irritation in ordinary people.
This image is an example of a captcha that we will decrypt. This is a real captcha that is posted on a real site.
It's a fairly simple captcha, which consists of characters of the same color and size on a white background with some noise (pixels, colors, lines). You probably think that this noise on the background will make it difficult to recognize, but I will show how easy it is to remove it. Although this is not a very strong captcha, it is a good example for our program.
How to find and extract text from images #
There are many methods for determining the location of text on the image and its extraction. You can google and find thousands of articles that explain new methods and algorithms for locating text.
In this example I will use color extraction. This is a fairly simple technique with which I got pretty good results.
For our examples, I will use a multi-valued image decomposition algorithm. In essence, this means that we first plot a histogram of the colors of the image. This is done by obtaining all the pixels on the image grouped by color, and then counting is performed for each group. If you look at our test captcha, you can see three primary colors:
White (background)
Gray (noise)
Red (text)
In Python, this will look very simple.
The following code opens the image, converts it to GIF (which is easier for us to work, because it has only 255 colors) and prints a histogram of colors:
1 2 3 4 5 6 from PIL import Image im = Image.open("captcha.gif") im = im.convert("P") print im.histogram()
As a result, we get the following:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 , 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 2, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 2, 1, 0, 0, 0, 2, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0 , 1, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0, 1, 2, 0, 1, 0, 0, 1, 0, 2, 0, 0, 1, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 3, 1, 3, 3, 0, 0, 0, 0, 0, 0, 1, 0, 3, 2, 132, 1, 1, 0, 0, 0, 1, 2, 0, 0, 0, 0, 0, 0, 0, 15, 0 , 1, 0, 1, 0, 0, 8, 1, 0, 0, 0, 0, 1, 6, 0, 2, 0, 0, 0, 0, 18, 1, 1, 1, 1, 1, 2, 365, 115, 0, 1, 0, 0, 0, 135, 186, 0, 0, 1, 0, 0, 0, 116, 3, 0, 0, 0, 0, 0, 21, 1, 1, 0, 0, 0, 2, 10, 2, 0, 0, 0, 0, 2, 10, 0, 0, 0, 0, 1, 0, 625]
Here we see the number of pixels of each of the 255 colors on the image. You can see that white (255, the most recent) is found most often. It is followed by red (text). To verify this, we will write a small script:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 from PIL import Image from operator import itemgetter im = Image.open("captcha.gif") im = im.convert("P") his = im.histogram() values = {} for i in range(256): values[i] = his[i] for j,k in sorted(values.items(), key=itemgetter(1), reverse=True)[:10]: print j,k
And we get the following data:
Color Number of pixels
255 625
212 365
220 186
219 135
169 132
227 116
213 115
234 21
205 18
184 15
This is a list of the 10 most common colors on the image. As expected, white repeats most often. Then come gray and red.
Once we get this information, we create new images based on these color groups. For each of the most common colors, we create a new binary image (of 2 colors), where the pixels of this color are filled with black, and everything else is white.
Red has become the third among the most common colors, which means that we want to save a group of pixels with a color of 220. When I experimented, I found that the color 227 is pretty close to 220, so we will keep this group of pixels as well. The code below opens the captcha, converts it to GIF, creates a new image of the same size with a white background, and then goes through the original image in search of the color we need. If he finds a pixel with the color we need, then he marks that same pixel on the second image as black. Before shutting down, the second image is saved.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 from PIL import Image im = Image.open("captcha.gif") im = im.convert("P") im2 = Image.new("P",im.size,255) im = im.convert("P") temp = {} for x in range(im.size[1]): for y in range(im.size[0]): pix = im.getpixel((y,x)) temp[pix] = pix if pix == 220or pix == 227: _# these are the numbers to get_ im2.putpixel((y,x),0) im2.save("output.gif")
Running this piece of code gives us the following result.
Original Result
On the picture you can see that we successfully managed to extract the text from the background. To automate this process, you can combine the first and second script.
I hear you asking: "What if the text on the captcha is written in different colors?". Yes, our technology can still work. Assume the most common color is the background color and then you can find the colors of the characters.
Thus, at the moment, we have successfully extracted text from the image. The next step is to determine if the image contains text. I will not write code here yet, because it will make understanding difficult, while the algorithm itself is quite simple.
1 2 3 4 5 6 7 for each binary image: for each pixel in the binary image: if the pixel is on: if any pixel we have seen before is next to it: add to the same set else: add to a new set
At the output, you will have a set of character boundaries. Then all you need to do is to compare them with each other and see if they go sequentially. If yes, then it's a jackpot since you have correctly identified the characters that go next. You can also check the sizes of the received areas or simply create a new image and show it (applying the show () method to the image) to make sure the algorithm is accurate.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 from PIL import Image im = Image.open("captcha.gif") im = im.convert("P") im2 = Image.new("P",im.size,255) im = im.convert("P") temp = {} for x in range(im.size[1]): for y in range(im.size[0]): pix = im.getpixel((y,x)) temp[pix] = pix if pix == 220or pix == 227: # these are the numbers to get_ im2.putpixel((y,x),0) # new code starts here_ inletter = False foundletter=False start = 0 end = 0 letters = [] for y in range(im2.size[0]): _# slice across_ for x in range(im2.size[1]): _# slice down_ pix = im2.getpixel((y,x)) if pix != 255: inletter = True if foundletter == Falseand inletter == True: foundletter = True start = y if foundletter == Trueand inletter == False: foundletter = False end = y letters.append((start,end)) inletter=False print letters
As a result, we got the following:
[(6, 14), (15, 25), (27, 35), (37, 46), (48, 56), (57, 67)]
These are the horizontal positions of the beginning and end of each character.
AI and vector space for pattern recognition #
Image recognition can be considered the greatest success of modern AI, which allowed it to be embedded in all types of commercial applications. A great example of this is zip codes. In fact, in many countries they are read automatically, because teaching a computer to recognize numbers is a fairly simple task. This may not be obvious, but pattern recognition is considered an AI problem, albeit a very highly specialized one.
Almost the first thing that you encounter when meeting with AI in pattern recognition is neural networks. Personally, I have never had success with neural networks in character recognition. I usually teach it 3-4 characters, after which the accuracy drops so low that it would be higher then guessing the characters randomly. Fortunately, I read an article about vector-space search engines and found them an alternative method for classifying data. In the end, they turned out to be the best choice, because:
• They do not require extensive study.
• You can add / remove incorrect data and immediately see the result
• They are easier to understand and program.
• They provide classified results so you can see the top X matches.
• Can't recognize something? Add this and you will be able to recognize it instantly, even if it is completely different from something seen earlier.
Of course, there is no free cheese. The main disadvantage in speed. They can be much slower than neural networks. But I think that their advantages still outweigh this drawback.
If you want to understand how vector space works, then I advise you to read the Vector Space Search Engine Theory. This is the best I have found for beginners and I built my image recognition based on this document. Now we have to program our vector space. Fortunately, this is not at all difficult. Let's get started.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 import math class VectorCompare: def magnitude(self,concordance): total = 0 for word,count in concordance.iteritems(): total += count \*\* 2 return math.sqrt(total) def relation(self,concordance1, concordance2): relevance = 0 topvalue = 0 for word, count in concordance1.iteritems(): if concordance2.has\_key(word): topvalue += count \* concordance2[word] return topvalue / (self.magnitude(concordance1) \* self.magnitude(concordance2))
This is an implementation of Python vector space in 15 lines. Essentially, it just takes 2 dictionaries and gives a number from 0 to 1, indicating how they are connected. 0 means that they are not connected and 1 means that they are identical.
Training #
The next thing we need is a set of images with which we will compare our characters. We need a learning set. This set can be used to train any kind of AI that we will use (neural networks, etc.).
The data used can be crucial for the success of recognition. The better the data, the greater the chance of success. Since we plan to recognize a specific captcha and can already extract symbols from it, why not use them as a training set?
This is what I did. I downloaded a lot of generated captcha and my program broke them into letters. Then I collected the received images in a collection (group). After several attempts, I had at least one example of each character generated by the captcha. Adding more examples would increase recognition accuracy, but this was enough for me to confirm my theory.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 from PIL import Image import hashlib import time im = Image.open("captcha.gif") im2 = Image.new("P",im.size,255) im = im.convert("P") temp = {} print im.histogram() for x in range(im.size[1]): for y in range(im.size[0]): pix = im.getpixel((y,x)) temp[pix] = pix if pix == 220or pix == 227: # these are the numbers to get im2.putpixel((y,x),0) inletter = False foundletter=False start = 0 end = 0 letters = [] for y in range(im2.size[0]): _# slice across_ for x in range(im2.size[1]): _# slice down_ pix = im2.getpixel((y,x)) if pix != 255: inletter = True if foundletter == Falseand inletter == True: foundletter = True start = y if foundletter == Trueand inletter == False: foundletter = False end = y letters.append((start,end)) inletter=False # New code is here. We just extract each image and save it to disk with # what is hopefully a unique name count = 0 for letter in letters: m = hashlib.md5() im3 = im2.crop(( letter[0] , 0, letter[1],im2.size[1] )) m.update("%s%s"%(time.time(),count)) im3.save("./%s.gif"%(m.hexdigest())) count += 1
At the output, we get a set of images in the same directory. Each of them is assigned a unique hash in case you process several captchas.
Here is the result of this code for our test captcha:
You decide how to store these images, but I just placed them in a directory with the same name that is on the image (symbol or number).
Putting it all together #
Last step. We have text extraction, character extraction, recognition technique and training set.
We get an image of captcha, select text, get characters, and then compare them with our training set. You can download the final program with a training set and a small number of captchas at this link.
Here we just load the training set to be able to compare our captchas with it:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 def buildvector(im): d1 = {} count = 0 for i in im.getdata(): d1[count] = i count += 1 return d1 v = VectorCompare() iconset = ['0','1','2','3','4','5','6','7','8','9','0','a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z'] imageset = [] for letter in iconset: for img in os.listdir('./iconset/%s/'%(letter)): temp = [] if img != "Thumbs.db": temp.append(buildvector(Image.open("./iconset/%s/%s"%(letter,img)))) imageset.append({letter:temp})
And then all the magic is happening. We determine where each character is and check it with our vector space. Then we sort the results and print them.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 count = 0 for letter in letters: m = hashlib.md5() im3 = im2.crop(( letter[0] , 0, letter[1],im2.size[1] )) guess = [] for image in imageset: for x,y in image.iteritems(): if len(y) != 0: guess.append( ( v.relation(y[0],buildvector(im3)),x) ) guess.sort(reverse=True) print"",guess[0] count += 1
Now we have everything we need and we can try to launch our machine.
The input file is captcha.gif. Expected Result: 7s9t9j
1 2 3 4 5 6 7 python crack.py (0.96376811594202894, '7') (0.96234028545977002, 's') (0.9286884286888929, '9') (0.98350370609844473, 't') (0.96751165072506273, '9') (0.96989711688772628, 'j')
Here we can see the alleged symbol and the degree of confidence that it's it (from 0 to 1).
So, it seems that we really succeeded!
In fact, on test captchas this script will produce a successful result in about only 22% of cases.
1 2 3 4 5 python crack\_test.py Correct Guesses - 11.0 Wrong Guesses - 37.0 Percentage Correct - 22.9166666667 Percentage Wrong - 77.0833333333
Most of incorrect results are related to incorrect recognition of the digit "0" and the letter "O", which is not really unexpected, since even people often confuse them. Also we still have a problem with breaking captcha into characters, but this can be solved simply by checking the result of breaking and finding a middle ground.
However, even with such a not-so-perfect algorithm, we can correctly solve every fifth captcha and it will be faster than a real person could solve one.
Running this code on a Core 2 Duo E6550 gives the following results:
1 2 3 real 0m5.750s user 0m0.015s sys 0m0.000s
With our 22% success rate, we can solve about 432,000 captcha per day and get 95,040 correct results. Imagine using multithreading.
Well, things are more complicated here, since even if creating a CNN (Convolutional Neural Network) for solving ReCaptcha is possible, it would be extremelly expensive to develop and maintain such a project, since Google adds more image types to it on a regular basis.
That's why more efficient solution would be to use an online captcha-solving service as, for example, 2captcha.com
This particular service is really a good example, since it has its significant pros among others, such as:
• high speed of solution (17 seconds for normal (graphic and text) captchas and 33 seconds for ReCaptcha)
• ready libraries for many popular programming languages
• fixed price rates (which don't change along with increasing server's load)
• high accuracy (up to 99%, depending on captcha type)
• money-back guarantee for incorrect answers
• possibility to solve vast volume of captchas (more than 10,000 every minute)
• referal program for soft-developers, customers and workers, which allows to get up to 15% of all spendings of referred users.
The main idea is that you can solve ReCaptcha (as well as other complicated captchas) via simple API anytime and in any number.
1. The target site open credentials (recaptcha's "site key", site url, optional: proxy IP) are copied by you (client) and submitted to the 2captcha service. You find them using simple web developer tools.
2. A worker at the service's end solves reCaptcha with the provided credentials.
3. In 10-30 seconds you request an answer as a g-recaptcha-response token.
4. You use this g-recaptcha-response token inside of the target site [submit] form with recaptcha.
It would be important to tell that all these steps you can do without imitating a browser, but just by pure http GET and POST requests, and I'll show you how.
Get credentials #
2captcha service requires us to provide it with the following parameters:
Request parameter Value
key SERVICE_KEY (2 captchas service key)
So, we are going to the site page and inspect the recaptcha html code in web developer tools (hit F12). There we find and get the data-sitekey attribute value in the g-recaptcha block. Its value is a constant for a single site, the site_key value provided by Google.
We select it and right-click to copy.
Now we have gotten the googlekey parameter (google site_key for this particular site): 6Lf5CQkTAAAAAKA-kgNm9mV6sgqpGmRmRMFJYMz8
SERVICE_KEY for the following requests is taken from the 2captcha account settings.
Submit to service a request for recaptcha solution #
Now we make a GET or POST request to the 2captcha service (in.php endpoint) with the above-mentioned parameters:
http://2captcha.com/in.php?key=SERVICE_KEY&method=userrecaptcha&googlekey=6Lf5CQkTAAAAAKA-kgNm9mV6sgqpGmRmRMFJYMz8&pageurl=http://testing-ground.scraping.pro/recaptcha
1 2 3 4 5 6 7 8 9 10 11 12 import requests from time import sleep, time service\_key = 'xxxxxxxxxxxxxx'; # 2captcha service key google\_site\_key = '6LfxxxxxxxxxxxxxxxxxxxxxFMz856JY' pageurl = 'http://testing-ground.scraping.pro/recaptcha' url = "http://2captcha.com/in.php?key=" + service\_key +"&method=userrecaptcha&googlekey=" + google_site_key + "&pageurl=" + pageurl resp = requests.get(url) if resp.text[0:2] != 'OK': quit('Service error. Error code:' + resp.text) captcha_id = resp.text[3:]
The 2captcha service renders a response in the form of: OK|Captcha_ID where Captcha_ID – is the id of the recaptcha in the system.
Now we need to wait till a worker solves the recaptcha and Google returns a valid token to the service. To do this we make a request to the 2captcha service every 5 seconds until we get a valid token. Take a look on a request to res.php endpoint with all the necessary parameters:
1 2 3 4 5 6 7 8 9 10 fetch_url = "http://2captcha.com/res.php?key="+ service_key + "&action=get&id=" + captcha_id for i in range(1, 10): sleep(5) # wait 5 sec. resp = requests.get(fetch_url) if resp.text[0:2] == 'OK': break print('Google response token: ', resp.text[3:])
Submit google's token in form #
Now we submit the form with the g-recaptcha-response token.
This token is checked on a server of a target site. The site's script sends a request to Google to check the g-recaptcha-response token's validity: is it true or not, pertaining to that site or not, etc. At our Captcha testing ground this token is checked before the form submission. It is done by passing a token through ajax (xhr) request to proxy.php which, in turn, inquires of google if the site is verified and returns google's response.
proxy.php
1 2 3 4 5 header('Content-type: application/json'); $response =$_GET['response']; $secret = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx";$json = file_get_contents('https://www.google.com/recaptcha/api/siteverify?secret=' . $secret . '&response=' .$response); echo \$json;
Python code to send g-recaptcha-response to proxy.php for site verification by google #
1 2 3 verify_url = "http://testing-ground.scraping.pro/proxy.php?response=" + resp.text[3:] resp = requests.get(verify_url) print(resp.text)
The script should result in a json:
1 2 3 { "success": true, "challenge\_ts": "2016-09-29T09:25:55Z", "hostname": "testing-ground.scraping.pro"}
Python code of a form submitting with g-recaptcha-response: #
1 2 3 4 submit_url = "http://testing-ground.scraping.pro/recaptcha headers = {'user-agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'} payload = {'submit': 'submit', 'g-recaptcha-response': resp.test[3:] } resp = requests.post(submit_url, headers=headers, data=payload)
The whole code #
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 import requests from time import sleep, time start_time = time() # send credentials to the service to solve captcha # returns service's captcha_id of captcha to be solved url="http://2captcha.com/in.php?key=1069c3052adead147d1736d7802fabe2&method=userrecaptcha&googlekey=6Lf5CQkTAAAAAKA-kgNm9mV6sgqpGmRmRMFJYMz8&pageurl=http://testing-ground.scraping.pro/recaptcha" resp = requests.get(url) if resp.text[0:2] != 'OK': quit('Error. Captcha is not received') captcha_id = resp.text[3:] # fetch ready 'g-recaptcha-response' token for captcha_id fetch_url = "http://2captcha.com/res.php?key=1069c3052adead147d1736d7802fabe2&action=get&id=" + captcha_id for i in range(1, 20): sleep(5) # wait 5 sec. resp = requests.get(fetch_url) if resp.text[0:2] == 'OK': break print('Time to solve: ', time() - start_time) # final submitting of form (POST) with 'g-recaptcha-response' token submit_url = "http://testing-ground.scraping.pro/recaptcha" # spoof user agent headers = {'user-agent': 'Mozilla/5.0 Chrome/52.0.2743.116 Safari/537.36'} # POST parameters, might be more, depending on form content payload = {'submit': 'submit', 'g-recaptcha-response': resp.text[3:] } resp = requests.post(submit_url, headers=headers, data=payload)
Limitations #
The received g-recaptcha-response token (from 2captcha service) is valid for only 120 seconds (2 min), so you are responsible to apply it on the target site [submit] form within that time limit.
Other language solutions #
You might also look at other language options for how to apply 2captcha service:
C# code (code for the same testing-ground page)
Java example (with Russian comments only)
Afterword #
Here I showed you different approaches to solve captchas in automatic mode, which allows to perform wide variety of actions online. While the it is up to person — in what way to use this knowlege, a development of defensive methods agains unwanted online activity makes the appropriate development of cracking methods inevitable. | 2022-12-08 19:35:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2478393316268921, "perplexity": 1008.4321113324704}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711360.27/warc/CC-MAIN-20221208183130-20221208213130-00138.warc.gz"} |
https://www.gamedev.net/forums/topic/682747-how-the-information-i-send-via-raknetbitstream-is-converted-to-raknetpacket-when-received/ | • 10
• 10
• 12
• 12
• 14
# How the information I send via RakNet::BitStream is converted to RakNet::Packet when received?
This topic is 574 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
Guys, the question is very stupid, but I can't figure it out all morning that's why I decided to ask it here.
Basically I'm writing the server code for my paintball game, pure client-server model seems the easiest for me, that's why the server just receives gamestates from the clients and then sends some stuff based on what was received.
When I receive packets, I do something like this:
RakNet::Packet *packet = nullptr;
for( packet = peer->Receive(); packet; peer->DeallocatePacket( packet ), packet = peer->Receive() )
{
if( packet->data[ 0 ] == PLAYER_MOVE_FORWARD ) {send coords }
if( packet->data[ 0 ] == PLAYER_MOVE_BACKWARD ) { send coords }
if( packet->data[ 0 ] == PLAYER_MOVE_LEFT ) { send coords }
if( packet->data[ 0 ] == PLAYER_MOVE_RIGHT ) { send coords }
}
As you can see, when I receive packets, I use the RakNet::Packet structure which looks like this:
namespace RakNet
{
struct Packet
{
/// The system that send this packet.
/// A unique identifier for the system that sent this packet, regardless of IP address (internal / external / remote system)
/// Only valid once a connection has been established (ID_CONNECTION_REQUEST_ACCEPTED, or ID_NEW_INCOMING_CONNECTION)
/// Until that time, will be UNASSIGNED_RAKNET_GUID
RakNetGUID guid;
/// The length of the data in bytes
unsigned int length;
/// The length of the data in bits
BitSize_t bitSize;
/// The data from the sender
unsigned char* data;
} // Namespace
You can see that in the RakNet::Packet structure there is a systemAddress and some other stuff that I receive.
But the problem is I didn't send anything like this.
I do the packet sending like this:
RakNet::BitStream bsOut;
bsOut.Write( (RakNet::MessageID) gameState );
peer->Send( &bsOut, HIGH_PRIORITY, RELIABLE_ORDERED, 0, RakNet::UNASSIGNED_SYSTEM_ADDRESS, true );
So I send a bitstream that contains only one thing and that is the gamestate and nothing else, but then when I receive the message on the other side, I do packet->Receive(). How was my bitstream converted to RakNet::Packet and how does raknet know which data from my bitstream goes to what data in the Raknet::Packet structure when the conversion is made??? :blink:
And in the bitstream I just send one thing and that is the gameState variable, I don't send my IP or RaknetGUID, so the server shouldn't know who sent the packet. But in the RakNet::Packet there is a systemAddress field, there is RakNetGUID field, so how are these fields filled with data when I haven't send anything other that gameState in the bitstream? :huh:
Guys, thanks a lot for reading this. And, please, if you have the time and you know the reasons, try to explain this in the simplest way you can, because I'm kind of studying this stuff on my own, and I don't know what I don't know. :unsure:
Edited by codeBoggs
##### Share on other sites
The last parameter for "Send" method:
RakNet::BitStream bsOut;
bsOut.Write( (RakNet::MessageID) gameState );
peer->Send( &bsOut, HIGH_PRIORITY, RELIABLE_ORDERED, 0, RakNet::UNASSIGNED_SYSTEM_ADDRESS, true );
which you've set to "true" tells the peer to broadcast. So your server sends gameState to every connected client. If you want to send it only to particular one you should set it to "false" and instead of passing "RakNet::UNASSIGNED_SYSTEM_ADDRESS" just pass desired system identifier.
Now about question why client knows all this stuff:
struct Packet
{
/// The system that send this packet.
/// A unique identifier for the system that sent this packet, regardless of IP address (internal / external / remote system)
/// Only valid once a connection has been established (ID_CONNECTION_REQUEST_ACCEPTED, or ID_NEW_INCOMING_CONNECTION)
/// Until that time, will be UNASSIGNED_RAKNET_GUID
RakNetGUID guid;
/// The length of the data in bytes
unsigned int length;
/// The length of the data in bits
BitSize_t bitSize;
/// The data from the sender
unsigned char* data;
} // Namespace
https://en.wikipedia.org/wiki/Encapsulation_(networking)
https://en.wikipedia.org/wiki/OSI_model
https://en.wikipedia.org/wiki/User_Datagram_Protocol
When your client receives a packet it just "decapsulates" and reads extra information so it can feed "Packet" structure. That's all story.
Edited by 8artek0v0
##### Share on other sites
This cleared everything up for me. Thanks a lot, 8artek0v0. :cool:
Edited by codeBoggs | 2018-04-27 00:54:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2461482584476471, "perplexity": 4881.330933080446}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948738.65/warc/CC-MAIN-20180427002118-20180427022118-00110.warc.gz"} |
https://www.physicsforums.com/threads/finding-electric-field.353148/ | # Homework Help: Finding electric field
1. Nov 9, 2009
### du_uk
1. The problem statement, all variables and given/known data
A spherical hole is located inside a uniformly charged sphere of charge density p. The centre of the hole is at a distance a from the centre of the sphere, and the radii of the sphere and the hole are given by R and R' respectively. Determine the electric field strength E inside the hole.
2. Relevant equations
3. The attempt at a solution
I think I need to use Gauss's law the find the electric field around this red surface:
http://img20.imageshack.us/img20/6870/electroqy.jpg [Broken]
and since there is symmetry, integrate around 0 to 2(pi) wrt the extra (third dimensional) coordinate.
Then minus the electric field for a sphere outside the charge.
Thanks
1. The problem statement, all variables and given/known data
2. Relevant equations
3. The attempt at a solution
Last edited by a moderator: May 4, 2017
2. Nov 10, 2009
### gabbagabbahey
Hi du_uk, welcome to PF!
I'm not sure exactly what you mean here, but no, you are not going in the right direction. What exactly is the symmetry you are referring to here?
Instead, take advantage of the superposition principle...what happens if you place an object of charge density $-\rho$ inside a larger object of charge density $+\rho$?
Last edited by a moderator: May 4, 2017
3. Nov 10, 2009
### Phrak
More to the point, place a spherical charge density $-\rho$ inside a larger spherical charge density $+\rho$. What are the forces inside the smaller sphere? | 2018-07-18 11:20:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6884846687316895, "perplexity": 629.5848058165972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590127.2/warc/CC-MAIN-20180718095959-20180718115959-00219.warc.gz"} |
https://converter.ninja/volume/milliliters-to-us-dry-pints/480-ml-to-drypint/ | 480 milliliters in US dry pints
Conversion
480 milliliters is equivalent to 0.871759664898102 US dry pints.[1]
Conversion formula How to convert 480 milliliters to US dry pints?
We know (by definition) that: $1\mathrm{ml}\approx 0.00181616596853771\mathrm{drypint}$
We can set up a proportion to solve for the number of US dry pints.
$1 ml 480 ml ≈ 0.00181616596853771 drypint x drypint$
Now, we cross multiply to solve for our unknown $x$:
$x\mathrm{drypint}\approx \frac{480\mathrm{ml}}{1\mathrm{ml}}*0.00181616596853771\mathrm{drypint}\to x\mathrm{drypint}\approx 0.8717596648981008\mathrm{drypint}$
Conclusion: $480 ml ≈ 0.8717596648981008 drypint$
Conversion in the opposite direction
The inverse of the conversion factor is that 1 US dry pint is equal to 1.14710514866146 times 480 milliliters.
It can also be expressed as: 480 milliliters is equal to $\frac{1}{\mathrm{1.14710514866146}}$ US dry pints.
Approximation
An approximate numerical result would be: four hundred and eighty milliliters is about zero point eight seven US dry pints, or alternatively, a US dry pint is about one point one four times four hundred and eighty milliliters.
Footnotes
[1] The precision is 15 significant digits (fourteen digits to the right of the decimal point).
Results may contain small errors due to the use of floating point arithmetic. | 2020-10-30 08:11:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5911970138549805, "perplexity": 2334.6892675310824}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107909746.93/warc/CC-MAIN-20201030063319-20201030093319-00615.warc.gz"} |
https://ciqihibubag.bextselfreset.com/introduction-to-riemann-surfaces-book-28478eo.php | Last edited by Faezilkree
Monday, August 10, 2020 | History
2 edition of Introduction to Riemann surfaces. found in the catalog.
Introduction to Riemann surfaces.
George Springer
# Introduction to Riemann surfaces.
## by George Springer
Written in English
Edition Notes
ID Numbers Series Addison-Wesley mathematics series Open Library OL20320978M
An Introduction to Riemann Surfaces and Algebraic Curves: Complex 1-Tori and Elliptic Curves by Dr. T.E. Venkata Balaji, Department of Mathematics, . The point of the introduction of Riemann surfaces made by Riemann, Klein and Weyl (), was that Riemann surfaces can be considered as both a one-dimensional complex manifold and an algebraic curve. Another possibility is to study Riemann surfaces as two-dimensional real manifolds, as Gauss () had taken on the problem of taking a.
Download This volume provides an introduction to dessins d'enfants and embeddings of bipartite graphs in compact Riemann surfaces. The first part of the book presents basic material, guiding the reader through the current field of research. APA Citation (style guide). Springer, G. (). Introduction to Riemann surfaces. Reading, Mass.: Addison-Wesley Pub. Co. Chicago / Turabian - Author Date Citation.
Introduction to Riemann Surfaces 作者: George Springer 出版社: American Mathematical Society 出版年: 页数: 定价: USD 装帧: Hardcover ISBN: An Introduction to Riemann Surfaces. by Terrence Napier,Mohan Ramachandran. Cornerstones. Share your thoughts Complete your review. Tell readers what you thought by rating and reviewing this book. Rate it * You Rated it *.
You might also like
Millions can be saved by improving the productivity of state and local governments administering federal income maintenance assistance programs
Millions can be saved by improving the productivity of state and local governments administering federal income maintenance assistance programs
Beppo the conscript
Beppo the conscript
The soul of the camera
The soul of the camera
Medical disaster response
Medical disaster response
MANY LIVES OF UNDERFOOT CAT
MANY LIVES OF UNDERFOOT CAT
Elegy ; Some October notes
Elegy ; Some October notes
Oxford companion to music
Oxford companion to music
Continuation of the national emergency with respect to Sierra Leone and Liberia
Continuation of the national emergency with respect to Sierra Leone and Liberia
Local government policies for urban development
Local government policies for urban development
Preparing teacher-librarians
Preparing teacher-librarians
full dress
full dress
### Introduction to Riemann surfaces by George Springer Download PDF EPUB FB2
An Introduction to Riemann Surfaces (Cornerstones) th Edition by Terrence Napier (Author), Mohan Ramachandran (Author) ISBN ISBN Why is ISBN important. ISBN. This bar-code number lets you verify that you're getting exactly the right version or edition of a book.
Introduction to Riemann Surfaces book. Read reviews from world’s largest community for readers. This well-known book is a self-contained treatment of the /5(3). The book also contains treatments of other facts concerning the holomorphic, smooth, and topological structure of a Riemann surface, such as the biholomorphic classification of Riemann surfaces, the embedding theorems, the integrability of almost complex structures, the Schönflies theorem (and the Jordan curve theorem), and the existence of.
The second five chapters cover differentials and uniformization. For compact Riemann surfaces, there are clear treatments of divisors, Weierstrass points, the Riemann-Roch theorem and other important topics. Springer's book is an excellent text Introduction to Riemann surfaces.
book an introductory course on Riemann surfaces. An Introduction to Riemann Surfaces by Terrence Napier,available at Book Depository with free delivery worldwide. An introduction to Riemann surfaces, algebraic curves and moduli spaces This book gives an introduction to modern geometry. Starting from an elementary level the author develops deep geometrical concepts, playing an important role nowadays in contemporary theoretical physics.
He presents various techniques and viewpoints, thereby showing. Introduction to Riemann Surfaces | George Springer | download | B–OK. Download books for free. Find books. It also deals quite a bit with non-compact Riemann surfaces, but does include standard material on Abel's Theorem, the Abel-Jacobi map, etc.
I would also recommend Griffiths's Introduction to Algebraic Curves — a beautiful text based on lectures. $\endgroup$ – Ted Shifrin May 30 '13 at Although Riemann surfaces are a time-honoured field, this book is novel in its broad perspective that systematically explores the connection with other fields of mathematics.
It can serve as an introduction to contemporary mathematics as a whole as it develops background material from algebraic topology, differential geometry, the calculus of.
J.-B. Bost, Introduction to compact Riemann surfaces, Jacobians, and abelian varieties, in From number theory to physics (Les Houches, ), Springer, Berlin,pp.
It is clearly written, contains historical comments and a lot of mathematical gems. For compact Riemann surfaces, there are clear treatments of divisors, Weierstrass points, the Riemann-Roch theorem and other important topics. Springer's book is an excellent text for an introductory course on Riemann surfaces.
It includes exercises after each chapter and is illustrated with a beautiful set of by: This textbook presents a unified approach to compact and noncompact Riemann surfaces from the point of view of the so-called L2 $\bar{\delta}$-method.
This method is a powerful technique from the theory of several complex variables, and provides for a unique approach to the fundamentally different characteristics of compact and noncompact Riemann surfaces. This book grew out of lectures on Riemann surfaces which the author gave at the universities of Munich, Regensburg and Munster.
Its aim is to give an introduction to this rich and beautiful subject, while presenting methods from the theory of complex manifolds which, in the special case of one complex variable, turn out to be particularly elementary and transparent.
1 Riemann Surfaces 5 are holomorphic f1,2(z) = f2,1(z) = 1/z. To a large extent the beauty of the theory of Riemann surfaces is due to the fact that Riemann surfaces can be described in many completely different ways. Interrelations between these descriptions make up an essential part of the theory.
In mathematics, particularly in complex analysis, a Riemann surface is a one-dimensional complex surfaces were first studied by and are named after Bernhard n surfaces can be thought of as deformed versions of the complex plane: locally near every point they look like patches of the complex plane, but the global topology can be quite different.
Lee "An Introduction to Riemann Surfaces" por Terrence Napier disponible en Rakuten Kobo. This textbook presents a unified approach to compact and noncompact Riemann surfaces from the point of view of the so-ca. “The present book gives a solid introduction to the theory of both compact and non-compact Riemann surfaces.
While modern introductions often take the view point of algebraic geometry, the present book tries to also cover the analytical aspects. Author: Terrence Napier. Introduction. It is gratifying to learn that there is new life in an old field that has been at the center of one's existence for over a quarter of a century.
It is particularly pleasing that the subject of Riemann surfaces has attracted the attention of a new generation of mathematicians from (newly) adjacent fields (for example, those. From the reviews:"The present book gives a solid introduction to the theory of both compact and non-compact Riemann surfaces.
While modern introductions often take the view point of algebraic geometry, the present book tries to also cover the analytical aspects. Introduction to Riemann Surfaces מבוא למשטחי רימן Spring semester Ma - J Lecturer Bo'az Klartag RoomZiskind building Phone: e-mail: Classes Monday, -Ziskind TA sessions Wednesday, -Ziskind Syllabus.
Introduction to Riemann Surfaces. Supporting materials for a course on Riemann surfaces based on Simon Donaldson's book "Riemman Surfaces".
You can see an early draft of his notes here. Lecture 1 Exercises Solutions Lecture 2 Exercises Solutions Lecture 3 Exercises Solutions Lecture 4 Slides Exercises Solutions Lecture 5 Slides Exercises.nition and examples of Riemann Surfaces tand statement: S2 is unique genus 0 Riemann surface.
tand statement: All genus 1 surfaces are given as C. The moduli space is biholomorphic to C. 4. S2 is unique surface with a meromorphic function with exactly 1 pole of degree 1. 5. TODO: The C= are the only compact surfaces with a.This book grew out of lectures on Riemann surfaces which the author gave at the universities of Munich, Regensburg and Munster.
Its aim is to give an introduction to this rich and beautiful subject, while presenting methods from the theory of complex manifolds which, in the special case of one complex variable, turn out to be particularly elementary and transparent.4/5(1). | 2021-07-28 23:42:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5130792260169983, "perplexity": 1488.8217731765205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153803.69/warc/CC-MAIN-20210728220634-20210729010634-00428.warc.gz"} |
https://jacobzelko.com/03272020064312-exponential-smoothing/ | the cedar ledge
# Exponential Smoothing
Date: March 27 2020
Summary: An overview on how to use the exponential smoothing algorithm
Keywords: ##zettel #signalprocessing #noise #artifact #smoothing #window #julialang #archive
# Bibliography
Not Available
The exponential smoothing algorithm is a recursive algorithm and is one of the more simple smoothing methods commonly used to remove small noise and motion artifacts from a discrete time series signal. However, it can be considered a "manual" algorithm due to having to manually determine a smoothing factor for it to work properly.
Per conversation with Post-Doc Researcher, Fredrik Bagge Carlson, another definition for the smoothing factor is the "forgetting factor". A bigger value for the forgetting factor results in forgetting the memory built into the algorithm faster and focusing more on recent inputs.
Also, this method is classified as a moving average filter!
### Algorithm
The algorithm is very simple in which it is described as:
$s_1 = x_1$ $s_t = ax_t + (1 - \alpha)s_{t - 1} \space | \space t > 0$
The variables are defined as follows:
$\{x_t\}$
* The raw signal sequence
$\{s_t\}$
* The smoothed output signal sequence
$t$
* Time (where $t > 0$)
$\alpha$
* Smoothing factor (must be chosen such that $0 < \alpha <1$)
The weighted average in this case works when you take a portion of the current value x(t) from the original signal and a portion of the s(t -1) is summed together after being scaled by the forgetting factor. [Explanation thanks to Fredrik Bagge Carlson]
• Each term in the sequence, $\{s_t\}$, is counted as the weighted average of the current data point from the sequence $\{x_t\}$ and the prior smoothed statistic, $s_t$.
• There is no clear method for choosing the value of the smoothing factor
• $0 <<\alpha < 1$
yields a smaller smoothing effect and "value" updating values more highly
• $0 < \alpha << 1$
yields a greater smoothing effect but does not respond greatly to recent updates
### Example Implementation
using Plots # IMPORT FOR PLOTTING
using LaTeXStrings # IMPORT TO ENABLE LaTeX FORMATTING
gr()
let
# Choose Smoothing Factor, α, And Input Values Over Which To Calculate
# Choose α: 0 < α < 1
input = 0:0.001:1
α = 0.05
# Generate Generic Signal - In This Case Sin(2π)
signal = [sin(2 * pi * i) for i in input]
# Adding Random Noise To Function
noisy_signal =
[sin(2 * pi * i) + rand([-1, 1]) * round(rand(), digits = 2) for i in input]
# Filter The Signal Using An Exponential Smoothing Filter
exponential_signal::Array{Float32} = [noisy_signal[1]]
for i in 2:length(signal)
smooth_term = α * noisy_signal[i] + (1 - α) * exponential_signal[i-1]
append!(exponential_signal, smooth_term)
end
# Plot Signals
plot(
input,
noisy_signal,
label = "Noisy Signal",
title = "Example of Exponential Smoothing",
)
plot!(
input,
exponential_signal,
label = "Exponentially Smoothed Signal",
linewidth = 3
)
plot!(
input,
signal,
label = L"sin(2\pi)",
linewidth = 5
)
end
## How To Cite
Zelko, Jacob. Exponential Smoothing. https://jacobzelko.com/03272020064312-exponential-smoothing. March 27 2020.
## Discussion:
CC BY-SA 4.0 Jacob Zelko. Last modified: January 17, 2023. Website built with Franklin.jl and the Julia programming language. | 2023-01-30 08:30:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 9, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6831346750259399, "perplexity": 3829.9676210436005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499804.60/warc/CC-MAIN-20230130070411-20230130100411-00672.warc.gz"} |
https://mathleaks.com/study/graphing_linear_functions_in_standard_form/grade-3 | {{ item.displayTitle }}
navigate_next
No history yet!
Progress & Statistics equalizer Progress expand_more
Student
navigate_next
Teacher
navigate_next
{{ filterOption.label }}
{{ item.displayTitle }}
{{ item.subject.displayTitle }}
arrow_forward
{{ searchError }}
search
{{ courseTrack.displayTitle }}
{{ printedBook.courseTrack.name }} {{ printedBook.name }}
# Graphing Linear Functions in Standard Form
There are several different ways to graph a linear function. Sometimes, the way the rule of the function is written can dictate the simplest way to graph it. Below, the graphs of linear functions given in standard form will be explored.
Rule
## Standard Form of a Line
One way to write linear function rules is in standard form.
$Ax+By=C$
Here, $A,$ $B,$ and $C$ are real numbers and $A$ and $B$ cannot both equal $0.$ Several combinations of $A,$ $B,$ and $C$ can describe the same line, but representing them with the smallest possible integers is preferred.
fullscreen
Exercise
Graph the linear function given by the equation using a table of values. $4x-2y=7$
Show Solution
Solution
To graph the function, we can create a table of values giving different points on the line. To do this, we'll substitute arbitrarily-chosen $x$-values into the equation to find the corresponding $y$-values. Let's start with $x=0.$
$4x-2y=7$
$4\cdot {\color{#0000FF}{0}} - 2y=7$
$\text{-} 2y=7$
$y=\dfrac{7}{\text{-} 2}$
$y=\text{-} 3.5$
One point on the line is $(0,\text{-}3.5).$ We can use the same process for finding other points.
$x$ $4x-2y=7$ $y$
${\color{#0000FF}{1}}$ $4 \cdot {\color{#0000FF}{1}}-2y=7$ $\text{-}1.5$
${\color{#0000FF}{2}}$ $4 \cdot {\color{#0000FF}{2}}-2y=7$ $0.5$
${\color{#0000FF}{3}}$ $4 \cdot {\color{#0000FF}{3}}-2y=7$ $2.5$
${\color{#0000FF}{4}}$ $4 \cdot {\color{#0000FF}{4}}-2y=7$ $4.5$
To draw the graph of the function, we can plot all five points in a coordinate plane and connect them with a line.
Theory
## Graphing Linear Functions using Intercepts
A function's $x$- and $y$-intercepts are the points where the graph of a function intersects with the $x$- and $y$-axes, respectively. It's possible to use a linear function's intercepts to graph it.
Method
## Finding the Intercepts of a Graph
The intercepts of a graph share an important feature. For all $x$-intercepts, the $y$-coordinate is $0,$ and for all $y$-intercepts, the $x$-coordinate is $0.$ \begin{aligned} x\text{-int} &: (x,0) \\ y\text{-int} &: (0,y) \end{aligned} This can be used to find the intercepts of a graph when its rule is known. For example, consider the line given by the following equation. $2x+5y=10$
Method
### Finding the $x$-intercept
To find the $x$-intercept, $y=0$ can be substituted into the equation.
$2x+5y=10 \quad \Rightarrow \quad 2x+5\cdot {\color{#0000FF}{0}} =10$ Next, solve the equation for $x.$
$2x+5\cdot0 =10$
$2x=10$
$x=5$
The $x$-intercept is $(5,0).$
Method
### Finding the $y$-intercept
The $y$-intercept can be found in a similar way. Substitute $x=0$ into the equation and solve for $y.$
$2\cdot {\color{#0000FF}{0}}+5x =10$
$5y=10$
$y=2$
The $y$-intercept is $(0,2).$
fullscreen
Exercise
The amusement park ride "Spinning Teacups" has two different sizes of cups, large and small. Large cups fit $6$ people and small cups fit $4$ people. Maximum capacity for each ride is $48$ people. The equation $4x+6y=48$ models this situation, where $x$ is the number of small cups and $y$ is the number of large cups. Graph the situation and interpret the intercepts.
Show Solution
Solution
Example
### Finding the intercepts
To begin, we will find each of the intercepts. Starting with the $x$-intercept, we can substitute $y=0$ into the rule and solve for $x.$
$4x+6y=48$
$4x+6\cdot{\color{#0000FF}{0}}=48$
$4x=48$
$x=12$
The $x$-intercept is $(12,0).$ To find the $y$-intercept we can substitute $x=0$ and solve for $y.$
$4x+6y=48$
$4\cdot{\color{#0000FF}{0}}+6y=48$
Solve for $y$
$6y=48$
$y=8$
The $y$-intercept is $(0,8).$
Example
### Graphing the function
To graph the function, we can plot the intercepts in a coordinate plane, and connect them with a line.
Notice that the graph does not extend infinitely. This is because, since $x$ and $y$ represent the numbers of different cups, negative numbers should not be included.
Example
### Interpreting the intercepts
We can interpret the intercepts in terms of what $x$ and $y$ represent. The $x$-intercept is $(12,0).$ This means a ride with $12$ small cups can not have any large cups, because the maximum capacity of people has already been met. Similarly, the $y$-intercept of $(8,0),$ tells us that a ride with $8$ large cups will not allow for any small cups. | 2021-02-27 06:56:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 109, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8076605200767517, "perplexity": 664.8506807322909}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358203.43/warc/CC-MAIN-20210227054852-20210227084852-00471.warc.gz"} |
https://zbmath.org/authors/?q=ai%3Ajacob.birgit | # zbMATH — the first resource for mathematics
## Jacob, Birgit
Compute Distance To:
Author ID: jacob.birgit Published as: Jacob, B.; Jacob, Birgit
Documents Indexed: 76 Publications since 1995, including 3 Books Reviewing Activity: 71 Reviews
all top 5
#### Co-Authors
8 single-authored 24 Partington, Jonathan R. 17 Zwart, Hans J. 12 Pott, Sandra 6 Trunk, Carsten 5 Morris, Kirsten A. 3 Schwenninger, Felix L. 3 Wegner, Sven-Ake 2 Augner, Björn 2 Baroun, Mahmoud 2 Haak, Bernhard H. 2 Schnaubelt, Roland 2 Winkin, Joseph J. 2 Wintermayr, Jens 2 Wynn, Andrew 2 Wyss, Christian 1 Bátkai, András 1 Callier, Frank M. 1 Curtain, Ruth Frances 1 Damm, Tobias 1 Dragan, Vasile 1 Eisner, Tanja 1 Elbern, Hendrik 1 Kaiser, Julia T. 1 Laasri, Hafida 1 Langer, Matthias 1 Larsen, Mikael 1 Leblond, Juliette 1 Maniar, Lahcen 1 Marmorat, Jean-Paul 1 Mironchenko, Andrii 1 Möller, Sebastian 1 Nabiullin, Robert 1 Omrane, Abdennebi 1 Pritchard, Anthony J. 1 Ran, André C. M. 1 Reis, Timo 1 Rydhe, Eskil 1 Staffans, Olof Johan 1 Tretter, Christiane 1 Ünalmiş, Banu 1 Vogt, Hendrik 1 Voigt, Jurgen 1 Vorberg, Lukas A. 1 Winklmeier, Monika 1 Wirth, Fabian Roger 1 Wu, Xueran 1 Zuazua, Enrique
all top 5
#### Serials
12 SIAM Journal on Control and Optimization 8 Systems & Control Letters 8 Journal of Evolution Equations 5 Integral Equations and Operator Theory 4 MCSS. Mathematics of Control, Signals, and Systems 3 Journal of Mathematical Analysis and Applications 3 IEEE Transactions on Automatic Control 3 Journal of Differential Equations 3 Journal of Functional Analysis 3 Semigroup Forum 2 Mathematische Nachrichten 2 Oberwolfach Reports 2 Operators and Matrices 2 Operator Theory: Advances and Applications 1 International Journal of Control 1 Mathematical Methods in the Applied Sciences 1 Automatica 1 Proceedings of the Edinburgh Mathematical Society. Series II 1 Journal of Operator Theory 1 Journal of Integral Equations and Applications 1 Linear Algebra and its Applications 1 Journal of Mathematical Systems, Estimation, and Control 1 International Journal of Applied Mathematics and Computer Science 1 Complex Analysis and Operator Theory 1 PAMM. Proceedings in Applied Mathematics and Mechanics 1 Annals of the Academy of Romanian Scientists. Mathematics and its Applications 1 Evolution Equations and Control Theory
all top 5
#### Fields
56 Systems theory; control (93-XX) 38 Operator theory (47-XX) 10 Partial differential equations (35-XX) 9 Ordinary differential equations (34-XX) 9 Calculus of variations and optimal control; optimization (49-XX) 8 Functions of a complex variable (30-XX) 5 Functional analysis (46-XX) 4 Dynamical systems and ergodic theory (37-XX) 3 General and overarching topics; collections (00-XX) 3 Integral equations (45-XX) 2 Linear and multilinear algebra; matrix theory (15-XX) 2 Several complex variables and analytic spaces (32-XX) 2 Harmonic analysis on Euclidean spaces (42-XX) 2 Numerical analysis (65-XX) 1 Mechanics of particles and systems (70-XX) 1 Fluid mechanics (76-XX) 1 Biology and other natural sciences (92-XX)
#### Citations contained in zbMATH
64 Publications have been cited 460 times in 262 Documents Cited by Year
Linear port-Hamiltonian systems on infinite-dimensional spaces. Zbl 1254.93002
Jacob, Birgit; Zwart, Hans J.
2012
Infinite-dimensional input-to-state stability and Orlicz spaces. Zbl 1390.93661
Jacob, Birgit; Nabiullin, Robert; Partington, Jonathan R.; Schwenninger, Felix L.
2018
Admissibility of control and observation operators for semigroups: a survey. Zbl 1083.93025
Jacob, Birgit; Partington, Jonathan R.
2004
Counterexamples concerning observation operators for $$C_0$$-semigroups. Zbl 1101.93042
Jacob, Birgit; Zwart, Hans
2004
The Weiss conjecture on admissibility of observation operators for contraction semigroups. Zbl 1031.93107
Jacob, Birgit; Partington, Jonathan R.
2001
On controllability of diagonal systems with one-dimensional input space. Zbl 1129.93323
Jacob, Birgit; Partington, Jonathan R.
2006
Exact observability of diagonal systems with a finite-dimensional output operator. Zbl 0978.93010
Jacob, B.; Zwart, H.
2001
On Laplace-Carleson embedding theorems. Zbl 1267.46040
Jacob, Birgit; Partington, Jonathan R.; Pott, Sandra
2013
Admissible and weakly admissible observation operators for the right shift semigroup. Zbl 1176.47065
Jacob, Birgit; Partington, Jonathan R.; Pott, Sandra
2002
Optimal control for age-structured population dynamics of incomplete data. Zbl 1191.92033
Jacob, Birgit; Omrane, Abdennebi
2010
A resolvent test for admissibility of Volterra observation operators. Zbl 1118.45008
Jacob, Birgit; Partington, Jonathan R.
2007
Stability and stabilization of infinite-dimensional linear port-Hamiltonian systems. Zbl 1302.93173
Augner, Björn; Jacob, Birgit
2014
On continuity of solutions for parabolic control systems and input-to-state stability. Zbl 07036305
Jacob, Birgit; Schwenninger, Felix L.; Zwart, Hans
2019
Analyticity and Riesz basis property of semigroups associated to damped vibrations. Zbl 1156.47038
Jacob, Birgit; Trunk, Carsten; Winklmeier, Monika
2008
Minimum-phase infinite-dimensional second-order systems. Zbl 1366.93277
Jacob, Birgit; Morris, Kirsten; Trunk, Carsten
2007
A formula for the stability radius of time-varying systems. Zbl 0902.34039
Jacob, Birgit
1998
Admissible control and observation operators for Volterra integral equations. Zbl 1076.93025
Jacob, Birgit; Partington, Jonathan R.
2004
Zwart, Hans; Jacob, Birgit; Staffans, Olof
2003
$$C_{0}$$-semigroups for hyperbolic partial differential equations on a one-dimensional spatial domain. Zbl 1320.35199
Jacob, Birgit; Morris, Kirsten; Zwart, Hans
2015
Applications of Laplace-Carleson embeddings to admissibility and controllability. Zbl 1294.30098
Jacob, Birgit; Partington, Jonathan R.; Pott, Sandra
2014
On the Hautus test for exponentially stable $$C_0$$-groups. Zbl 1193.47044
Jacob, Birgit; Zwart, Hans
2009
On the boundedness and continuity of the spectral factorization mapping. Zbl 0994.47020
Jacob, Birgit; Partington, Jonathan R.
2001
Graphs, closability, and causality of linear time-invariant discrete-time systems. Zbl 1162.93312
Jacob, Birgit; Partington, Jonathan R.
2000
Equivalent conditions for stabilizability of infinite-dimensional systems with admissible control operators. Zbl 0945.93018
Jacob, Birgit; Zwart, Hans
1999
Admissibility and controllability of diagonal Volterra equations with scalar inputs. Zbl 1161.93005
Haak, Bernhard H.; Jacob, Birgit; Partington, Jonathan R.; Pott, Sandra
2009
Conditions for admissibility of observation operators and boundedness of Hankel operators. Zbl 1046.47036
Jacob, Birgit; Partington, Jonathan R.; Pott, Sandra
2003
Location of the spectrum of operator matrices which are associated to second order equations. Zbl 1128.47003
Jacob, Birgit; Trunk, Carsten
2007
Exact observability of diagonal systems with a one-dimensional output operator. Zbl 1031.93065
Jacob, Birgit; Zwart, Hans
2001
Spectrum and analyticity of semigroups arising in elasticity theory and hydromechanics. Zbl 1175.47036
Jacob, Birgit; Trunk, Carsten
2009
Zero-class admissibility of observation operators. Zbl 1161.93016
Jacob, Birgit; Partington, Jonathan R.; Pott, Sandra
2009
Interpolation by vector-valued analytic functions, with applications to controllability. Zbl 1137.46015
Jacob, Birgit; Partington, Jonathan R.; Pott, Sandra
2007
A constrained approximation problem arising in parameter identification. Zbl 1001.65067
Jacob, Birgit; Leblond, Juliette; Marmorat, Jean-Paul; Partington, Jonathan R.
2002
Corrections and extensions of “Optimal control of linear systems with almost periodic input” by G. da Prato and A. Ichikawa. Zbl 0914.49023
Jacob, Birgit; Larsen, Mikael; Zwart, Hans
1998
Infinite dimensional time-varying systems with nonlinear output feedback. Zbl 0839.93057
Jacob, B.; Dragan, V.; Pritchard, A. J.
1995
On the right multiplicative perturbation of non-autonomous $$L^p$$-maximal regularity. Zbl 1389.35191
Augner, Björn; Jacob, Birgit; Laasri, Hafida
2015
Weighted interpolation in Paley-Wiener spaces and finite-time controllability. Zbl 1206.47020
Jacob, Birgit; Partington, Jonathan R.; Pott, Sandra
2010
Properties of the realization of inner functions. Zbl 1019.93007
Jacob, Birgit; Zwart, Hans
2002
$$\beta$$-admissibility of observation operators for hypercontractive semigroups. Zbl 1398.30036
Jacob, Birgit; Partington, Jonathan R.; Pott, Sandra; Wynn, Andrew
2018
The weighted Weiss conjecture and reproducing kernel theses for generalized Hankel operators. Zbl 1300.30095
Jacob, B.; Rydhe, E.; Wynn, A.
2014
Hamiltonians and Riccati equations for linear systems with unbounded control and observation operators. Zbl 1254.47041
Wyss, C.; Jacob, B.; Zwart, H. J.
2012
Admissibility and observability of observation operators for semilinear problems. Zbl 1170.47037
Baroun, Mahmoud; Jacob, Birgit
2009
Observability of polynomially stable systems. Zbl 1111.93010
Jacob, Birgit; Schnaubelt, Roland
2007
Continuity of the spectral factorization on a vertical strip. Zbl 0948.93025
Jacob, Birgit; Winkin, Joseph; Zwart, Hans
1999
Time-varying infinite dimensional state-space systems. Zbl 0853.93004
Jacob, Birgit
1995
Well-posedness of systems of 1-D hyperbolic partial differential equations. Zbl 07062563
Jacob, Birgit; Kaiser, Julia T.
2019
Optimal control and observation locations for time-varying systems on a finite-time horizon. Zbl 1339.93127
Wu, Xueran; Jacob, Birgit; Elbern, Hendrik
2016
Desch-Schappacher perturbation of one-parameter semigroups on locally convex spaces. Zbl 1323.47050
Jacob, Birgit; Wegner, Sven-Ake; Wintermayr, Jens
2015
Semilinear observation systems. Zbl 1281.93027
Baroun, M.; Jacob, B.; Maniar, L.; Schnaubelt, R.
2013
Spectral properties of pseudo-resolvents under structured perturbations. Zbl 1248.93051
Curtain, Ruth F.; Jacob, Birgit
2009
Tangential interpolation in weighted vector-valued $$H^p$$ spaces, with applications. Zbl 1217.30034
Jacob, Birgit; Partington, Jonathan R.; Pott, Sandra
2009
What is the better signal space for discrete-time systems: $$\ell_2(\mathbb Z)$$ or $$\ell_2(\mathbb N_0)$$? Zbl 1101.93065
Jacob, Birgit
2005
Zeros of Fredholm operator valued $$H^p$$-functions. Zbl 0992.47002
Jacob, Birgit
2001
Well-posedness of a class of hyperbolic partial differential equations on the semi-axis. Zbl 07149146
Jacob, Birgit; Wegner, Sven-Ake
2019
Systems with strong damping and their spectra. Zbl 06986309
Jacob, Birgit; Tretter, Christiane; Trunk, Carsten; Vogt, Hendrik
2018
Perturbations of positive semigroups on AM-spaces. Zbl 06893038
Bátkai, András; Jacob, Birgit; Voigt, Jürgen; Wintermayr, Jens
2018
Variational principles for self-adjoint operator functions arising from second-order systems. Zbl 1356.47019
Jacob, Birgit; Langer, Matthias; Trunk, Carsten
2016
Asymptotics of evolution equations beyond Banach spaces. Zbl 1342.47059
Jacob, B.; Wegner, S.-A.
2015
Weighted multiple interpolation and the control of perturbed semigroup systems. Zbl 1285.47021
Jacob, Birgit; Partington, Jonathan R.; Pott, Sandra
2013
Second-order systems with acceleration measurements. Zbl 1369.93276
Jacob, Birgit; Morris, Kirsten
2012
Mini-workshop: Wellposedness and controllability of evolution equations. Abstracts from the mini-workshop held December 12th – December 18th. Zbl 1235.00035
Jacob, Birgit (ed.); Partington, Jonathan R. (ed.); Pott, Sandra (ed.); Zwart, Hans (ed.)
2010
Spectral factorization by symmetric extraction for distributed parameter systems. Zbl 1088.47009
Winkin, J. J.; Callier, F. M.; Jacob, B.; Partington, J. R.
2005
An operator theoretical approach towards systems over the signal space $$\ell_2(\mathbb Z)$$. Zbl 1024.93015
Jacob, Birgit
2003
On discrete-time linear systems with almost-periodic kernels. Zbl 0992.93079
Jacob, Birgit; Partington, Jonathan R.; Ünalmiş, Banu
2000
Destabilization of infinite-dimensional time-varying systems via dynamical output feedback. Zbl 0866.93073
Jacob, Birgit
1996
On continuity of solutions for parabolic control systems and input-to-state stability. Zbl 07036305
Jacob, Birgit; Schwenninger, Felix L.; Zwart, Hans
2019
Well-posedness of systems of 1-D hyperbolic partial differential equations. Zbl 07062563
Jacob, Birgit; Kaiser, Julia T.
2019
Well-posedness of a class of hyperbolic partial differential equations on the semi-axis. Zbl 07149146
Jacob, Birgit; Wegner, Sven-Ake
2019
Infinite-dimensional input-to-state stability and Orlicz spaces. Zbl 1390.93661
Jacob, Birgit; Nabiullin, Robert; Partington, Jonathan R.; Schwenninger, Felix L.
2018
$$\beta$$-admissibility of observation operators for hypercontractive semigroups. Zbl 1398.30036
Jacob, Birgit; Partington, Jonathan R.; Pott, Sandra; Wynn, Andrew
2018
Systems with strong damping and their spectra. Zbl 06986309
Jacob, Birgit; Tretter, Christiane; Trunk, Carsten; Vogt, Hendrik
2018
Perturbations of positive semigroups on AM-spaces. Zbl 06893038
Bátkai, András; Jacob, Birgit; Voigt, Jürgen; Wintermayr, Jens
2018
Optimal control and observation locations for time-varying systems on a finite-time horizon. Zbl 1339.93127
Wu, Xueran; Jacob, Birgit; Elbern, Hendrik
2016
Variational principles for self-adjoint operator functions arising from second-order systems. Zbl 1356.47019
Jacob, Birgit; Langer, Matthias; Trunk, Carsten
2016
$$C_{0}$$-semigroups for hyperbolic partial differential equations on a one-dimensional spatial domain. Zbl 1320.35199
Jacob, Birgit; Morris, Kirsten; Zwart, Hans
2015
On the right multiplicative perturbation of non-autonomous $$L^p$$-maximal regularity. Zbl 1389.35191
Augner, Björn; Jacob, Birgit; Laasri, Hafida
2015
Desch-Schappacher perturbation of one-parameter semigroups on locally convex spaces. Zbl 1323.47050
Jacob, Birgit; Wegner, Sven-Ake; Wintermayr, Jens
2015
Asymptotics of evolution equations beyond Banach spaces. Zbl 1342.47059
Jacob, B.; Wegner, S.-A.
2015
Stability and stabilization of infinite-dimensional linear port-Hamiltonian systems. Zbl 1302.93173
Augner, Björn; Jacob, Birgit
2014
Applications of Laplace-Carleson embeddings to admissibility and controllability. Zbl 1294.30098
Jacob, Birgit; Partington, Jonathan R.; Pott, Sandra
2014
The weighted Weiss conjecture and reproducing kernel theses for generalized Hankel operators. Zbl 1300.30095
Jacob, B.; Rydhe, E.; Wynn, A.
2014
On Laplace-Carleson embedding theorems. Zbl 1267.46040
Jacob, Birgit; Partington, Jonathan R.; Pott, Sandra
2013
Semilinear observation systems. Zbl 1281.93027
Baroun, M.; Jacob, B.; Maniar, L.; Schnaubelt, R.
2013
Weighted multiple interpolation and the control of perturbed semigroup systems. Zbl 1285.47021
Jacob, Birgit; Partington, Jonathan R.; Pott, Sandra
2013
Linear port-Hamiltonian systems on infinite-dimensional spaces. Zbl 1254.93002
Jacob, Birgit; Zwart, Hans J.
2012
Hamiltonians and Riccati equations for linear systems with unbounded control and observation operators. Zbl 1254.47041
Wyss, C.; Jacob, B.; Zwart, H. J.
2012
Second-order systems with acceleration measurements. Zbl 1369.93276
Jacob, Birgit; Morris, Kirsten
2012
Optimal control for age-structured population dynamics of incomplete data. Zbl 1191.92033
Jacob, Birgit; Omrane, Abdennebi
2010
Weighted interpolation in Paley-Wiener spaces and finite-time controllability. Zbl 1206.47020
Jacob, Birgit; Partington, Jonathan R.; Pott, Sandra
2010
Mini-workshop: Wellposedness and controllability of evolution equations. Abstracts from the mini-workshop held December 12th – December 18th. Zbl 1235.00035
Jacob, Birgit (ed.); Partington, Jonathan R. (ed.); Pott, Sandra (ed.); Zwart, Hans (ed.)
2010
On the Hautus test for exponentially stable $$C_0$$-groups. Zbl 1193.47044
Jacob, Birgit; Zwart, Hans
2009
Admissibility and controllability of diagonal Volterra equations with scalar inputs. Zbl 1161.93005
Haak, Bernhard H.; Jacob, Birgit; Partington, Jonathan R.; Pott, Sandra
2009
Spectrum and analyticity of semigroups arising in elasticity theory and hydromechanics. Zbl 1175.47036
Jacob, Birgit; Trunk, Carsten
2009
Zero-class admissibility of observation operators. Zbl 1161.93016
Jacob, Birgit; Partington, Jonathan R.; Pott, Sandra
2009
Admissibility and observability of observation operators for semilinear problems. Zbl 1170.47037
Baroun, Mahmoud; Jacob, Birgit
2009
Spectral properties of pseudo-resolvents under structured perturbations. Zbl 1248.93051
Curtain, Ruth F.; Jacob, Birgit
2009
Tangential interpolation in weighted vector-valued $$H^p$$ spaces, with applications. Zbl 1217.30034
Jacob, Birgit; Partington, Jonathan R.; Pott, Sandra
2009
Analyticity and Riesz basis property of semigroups associated to damped vibrations. Zbl 1156.47038
Jacob, Birgit; Trunk, Carsten; Winklmeier, Monika
2008
A resolvent test for admissibility of Volterra observation operators. Zbl 1118.45008
Jacob, Birgit; Partington, Jonathan R.
2007
Minimum-phase infinite-dimensional second-order systems. Zbl 1366.93277
Jacob, Birgit; Morris, Kirsten; Trunk, Carsten
2007
Location of the spectrum of operator matrices which are associated to second order equations. Zbl 1128.47003
Jacob, Birgit; Trunk, Carsten
2007
Interpolation by vector-valued analytic functions, with applications to controllability. Zbl 1137.46015
Jacob, Birgit; Partington, Jonathan R.; Pott, Sandra
2007
Observability of polynomially stable systems. Zbl 1111.93010
Jacob, Birgit; Schnaubelt, Roland
2007
On controllability of diagonal systems with one-dimensional input space. Zbl 1129.93323
Jacob, Birgit; Partington, Jonathan R.
2006
What is the better signal space for discrete-time systems: $$\ell_2(\mathbb Z)$$ or $$\ell_2(\mathbb N_0)$$? Zbl 1101.93065
Jacob, Birgit
2005
Spectral factorization by symmetric extraction for distributed parameter systems. Zbl 1088.47009
Winkin, J. J.; Callier, F. M.; Jacob, B.; Partington, J. R.
2005
Admissibility of control and observation operators for semigroups: a survey. Zbl 1083.93025
Jacob, Birgit; Partington, Jonathan R.
2004
Counterexamples concerning observation operators for $$C_0$$-semigroups. Zbl 1101.93042
Jacob, Birgit; Zwart, Hans
2004
Admissible control and observation operators for Volterra integral equations. Zbl 1076.93025
Jacob, Birgit; Partington, Jonathan R.
2004
Zwart, Hans; Jacob, Birgit; Staffans, Olof
2003
Conditions for admissibility of observation operators and boundedness of Hankel operators. Zbl 1046.47036
Jacob, Birgit; Partington, Jonathan R.; Pott, Sandra
2003
An operator theoretical approach towards systems over the signal space $$\ell_2(\mathbb Z)$$. Zbl 1024.93015
Jacob, Birgit
2003
Admissible and weakly admissible observation operators for the right shift semigroup. Zbl 1176.47065
Jacob, Birgit; Partington, Jonathan R.; Pott, Sandra
2002
A constrained approximation problem arising in parameter identification. Zbl 1001.65067
Jacob, Birgit; Leblond, Juliette; Marmorat, Jean-Paul; Partington, Jonathan R.
2002
Properties of the realization of inner functions. Zbl 1019.93007
Jacob, Birgit; Zwart, Hans
2002
The Weiss conjecture on admissibility of observation operators for contraction semigroups. Zbl 1031.93107
Jacob, Birgit; Partington, Jonathan R.
2001
Exact observability of diagonal systems with a finite-dimensional output operator. Zbl 0978.93010
Jacob, B.; Zwart, H.
2001
On the boundedness and continuity of the spectral factorization mapping. Zbl 0994.47020
Jacob, Birgit; Partington, Jonathan R.
2001
Exact observability of diagonal systems with a one-dimensional output operator. Zbl 1031.93065
Jacob, Birgit; Zwart, Hans
2001
Zeros of Fredholm operator valued $$H^p$$-functions. Zbl 0992.47002
Jacob, Birgit
2001
Graphs, closability, and causality of linear time-invariant discrete-time systems. Zbl 1162.93312
Jacob, Birgit; Partington, Jonathan R.
2000
On discrete-time linear systems with almost-periodic kernels. Zbl 0992.93079
Jacob, Birgit; Partington, Jonathan R.; Ünalmiş, Banu
2000
Equivalent conditions for stabilizability of infinite-dimensional systems with admissible control operators. Zbl 0945.93018
Jacob, Birgit; Zwart, Hans
1999
Continuity of the spectral factorization on a vertical strip. Zbl 0948.93025
Jacob, Birgit; Winkin, Joseph; Zwart, Hans
1999
A formula for the stability radius of time-varying systems. Zbl 0902.34039
Jacob, Birgit
1998
Corrections and extensions of “Optimal control of linear systems with almost periodic input” by G. da Prato and A. Ichikawa. Zbl 0914.49023
Jacob, Birgit; Larsen, Mikael; Zwart, Hans
1998
Destabilization of infinite-dimensional time-varying systems via dynamical output feedback. Zbl 0866.93073
Jacob, Birgit
1996
Infinite dimensional time-varying systems with nonlinear output feedback. Zbl 0839.93057
Jacob, B.; Dragan, V.; Pritchard, A. J.
1995
Time-varying infinite dimensional state-space systems. Zbl 0853.93004
Jacob, Birgit
1995
all top 5
#### Cited by 329 Authors
26 Jacob, Birgit 22 Partington, Jonathan R. 15 Zwart, Hans J. 8 Mironchenko, Andrii 8 Pott, Sandra 8 Weiss, George 7 Le Gorrec, Yann 7 Makila, Pertti M. 6 Chen, Jianhua 6 Goreac, Dan 6 Haak, Bernhard H. 6 Leblond, Juliette 6 Morris, Kirsten A. 6 Nguyen Huu Du 6 Schwenninger, Felix L. 5 Guo, Bao-Zhu 5 Mophou, Gisèle Massengo 5 Zheng, Jun 5 Zhu, Guchuan 4 Augner, Björn 4 Bounit, Hamid 4 Engel, Klaus-Jochen 4 Maschke, Bernhard M. 4 Matignon, Denis 4 Rydhe, Eskil 4 Schnaubelt, Roland 4 Trunk, Carsten 4 Waurick, Marcus 3 Benabdallah, Assia 3 Guiver, Chris 3 Hadd, Said 3 Hafdallah, Abdelhak 3 Karafyllis, Iasson 3 Krstić, Miroslav 3 Kucik, Andrzej Stanisław 3 Laasri, Hafida 3 Lhachemi, Hugo 3 Logemann, Hartmut 3 Mehrmann, Volker 3 Prieur, Christophe 3 Ramírez, Héctor C. 3 Staffans, Olof Johan 3 Thuan, Do Duc 3 Wang, Junmin 3 Wegner, Sven-Ake 3 Wirth, Fabian Roger 3 Wynn, Andrew 3 Wyss, Christian 3 Xu, Gen-Qi 2 Alazard, Daniel 2 Ammar-Khodja, Farid 2 Ayadi, Abdelhamid 2 Baratchart, Laurent 2 Beattie, Christopher A. 2 Berger, Thomas R. 2 Brugnoli, Andrea 2 Curtain, Ruth Frances 2 Do Duc Thuan 2 Driouich, Abderrahim 2 ElMennaoui, Omar 2 Fridman, Emilia 2 Gesztesy, Fritz 2 González-Burgos, Manuel 2 Grubišić, Luka 2 Holden, Helge 2 Idrissi, Abdelali 2 Iftime, Orest V. 2 Ito, Hiroshi C. 2 Kotyczka, Paul 2 Kurula, Mikael 2 Lefèvre, Laurent 2 Leiva, Hugo 2 Li, Donghai 2 Linh, Vu Hoang 2 Mei, Zhandong 2 Miller, Luc 2 Morancey, Morgan 2 Nabiullin, Robert 2 Nguyen Thu Ha 2 Opmeer, Mark R. 2 Ouhabaz, El Maati 2 Peláez, José Ángel 2 Peloso, Marco Maria 2 Peng, Jigen 2 Pommier-Budinger, Valerie 2 Ponce, Rodrigo F. 2 Rättyä, Jouni 2 Rebarber, Richard 2 Respondek, Jerzy Stefan 2 Sakhnovich, Alexander L. 2 Salvatori, Maura 2 Schmid, Jochen 2 Schönlein, Michael 2 Shorten, Robert N. 2 Sun, Bing 2 Tretter, Christiane 2 Tucsnak, Marius 2 Ünalmiş, Banu 2 Wu, Yongxin 2 Xiao, Ti-Jun ...and 229 more Authors
all top 5
#### Cited in 82 Serials
36 Systems & Control Letters 19 MCSS. Mathematics of Control, Signals, and Systems 16 Automatica 13 SIAM Journal on Control and Optimization 13 Journal of Evolution Equations 12 Journal of Mathematical Analysis and Applications 12 Journal of Differential Equations 10 International Journal of Control 10 Journal of Functional Analysis 7 Complex Analysis and Operator Theory 6 Integral Equations and Operator Theory 6 Semigroup Forum 5 European Series in Applied and Industrial Mathematics (ESAIM): Control, Optimization and Calculus of Variations 3 Archiv der Mathematik 3 Mathematische Nachrichten 3 Applied Mathematical Modelling 3 Applied and Computational Harmonic Analysis 3 European Journal of Control 3 Evolution Equations and Control Theory 2 Journal of Computational Physics 2 Mathematical Methods in the Applied Sciences 2 Mathematical Notes 2 Applied Mathematics and Optimization 2 Indagationes Mathematicae. New Series 2 St. Petersburg Mathematical Journal 2 Advances in Difference Equations 2 Mathematical Control and Related Fields 1 Analysis Mathematica 1 Applicable Analysis 1 International Journal of Systems Science 1 Ukrainian Mathematical Journal 1 Wave Motion 1 Annali di Matematica Pura ed Applicata. Serie Quarta 1 Applied Mathematics and Computation 1 Czechoslovak Mathematical Journal 1 Illinois Journal of Mathematics 1 International Journal of Mathematics and Mathematical Sciences 1 Journal of Approximation Theory 1 Journal of the London Mathematical Society. Second Series 1 Journal of Optimization Theory and Applications 1 Mathematische Annalen 1 Mathematics and Computers in Simulation 1 Memoirs of the American Mathematical Society 1 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 1 Proceedings of the American Mathematical Society 1 Transactions of the American Mathematical Society 1 Zeitschrift für Analysis und ihre Anwendungen 1 SIAM Journal on Matrix Analysis and Applications 1 Journal of Integral Equations and Applications 1 The Journal of Geometric Analysis 1 Geometric and Functional Analysis. GAFA 1 Journal de Mathématiques Pures et Appliquées. Neuvième Série 1 SIAM Review 1 SIAM Journal on Scientific Computing 1 Annales Mathématiques Blaise Pascal 1 Journal of Inverse and Ill-Posed Problems 1 Boletín de la Sociedad Matemática Mexicana. Third Series 1 Sbornik: Mathematics 1 Discrete and Continuous Dynamical Systems 1 Abstract and Applied Analysis 1 Positivity 1 ZAMM. Zeitschrift für Angewandte Mathematik und Mechanik 1 Discrete Dynamics in Nature and Society 1 International Journal of Applied Mathematics and Computer Science 1 Gravitation & Cosmology 1 Nonlinear Analysis. Real World Applications 1 Foundations of Computational Mathematics 1 Journal of Systems Science and Complexity 1 Journal of Numerical Mathematics 1 Mediterranean Journal of Mathematics 1 African Diaspora Journal of Mathematics 1 Science in China. Series F 1 Operators and Matrices 1 Asian-European Journal of Mathematics 1 International Journal of Differential Equations 1 Journal of Spectral Theory 1 Asian Journal of Control 1 Concrete Operators 1 European Series in Applied and Industrial Mathematics (ESAIM): Proceedings and Surveys 1 Ural Mathematical Journal 1 Pure and Applied Analysis 1 Annales Henri Lebesgue
all top 5
#### Cited in 31 Fields
164 Systems theory; control (93-XX) 90 Operator theory (47-XX) 50 Partial differential equations (35-XX) 29 Ordinary differential equations (34-XX) 23 Functions of a complex variable (30-XX) 18 Calculus of variations and optimal control; optimization (49-XX) 15 Functional analysis (46-XX) 14 Dynamical systems and ergodic theory (37-XX) 11 Integral equations (45-XX) 11 Numerical analysis (65-XX) 11 Mechanics of deformable solids (74-XX) 10 Harmonic analysis on Euclidean spaces (42-XX) 8 Probability theory and stochastic processes (60-XX) 8 Biology and other natural sciences (92-XX) 6 Mechanics of particles and systems (70-XX) 4 Linear and multilinear algebra; matrix theory (15-XX) 4 Approximations and expansions (41-XX) 2 Real functions (26-XX) 2 Abstract harmonic analysis (43-XX) 2 Integral transforms, operational calculus (44-XX) 2 Optics, electromagnetic theory (78-XX) 2 Operations research, mathematical programming (90-XX) 2 Information and communication theory, circuits (94-XX) 1 Combinatorics (05-XX) 1 Number theory (11-XX) 1 Potential theory (31-XX) 1 Several complex variables and analytic spaces (32-XX) 1 Difference and functional equations (39-XX) 1 Classical thermodynamics, heat transfer (80-XX) 1 Relativity and gravitational theory (83-XX) 1 Astronomy and astrophysics (85-XX) | 2021-04-12 13:24:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6354268789291382, "perplexity": 12528.276725938435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038067400.24/warc/CC-MAIN-20210412113508-20210412143508-00214.warc.gz"} |
https://puzzling.stackexchange.com/questions/93113/counting-with-towers | # Counting with towers
An ancient civilization has been discovered which uses an unusual counting system, based on stacks of blocks. However, most of the information on this civilization has been lost to time, and little remains about their counting system. In fact, the only information you have is about a single number.
Here are several different representations of the number 10:
In text format, if you prefer:
1 1 1 1 1 1 1 1 1 1
2 0 0 0 0 0 0 0 0
2 1 0 0 0 0 0 0
2 1 1 0 0 0 0
2 1 1 1 0 0
2 1 1 1 1
2 2 1 0
3 0 0
4 0
10
With this in mind, what is this number?
I believe that the answer is
175
because
they used the following numbering system (with $$k$$ stacks for example): $$1=(1,0,0,\dots,0)$$ $$2=(1,1,0,\dots,0)$$ $$\dots$$ $$k=(1,1,1,\dots,1)$$ $$k+1=(2,0,0,\dots,0)\ \mathrm{(emptying\ all\ subsequent\ stacks)}$$ $$k+2=(2,1,0,\dots,0)$$ $$\dots$$ $$2k=(2,1,1,\dots,1)$$ $$2k+1=(2,2,0,\dots,0)$$ $$\dots$$ $$3k-1=(2,2,1,\dots,1)$$ $$3k=(2,2,2,0,\dots,0)$$ $$\dots$$ After $$(2,2,2,\dots,2)$$ will follow $$(3,0,0,\dots,0)$$ etc. up to $$(3,3,3,\dots,3)$$, then $$(4,0,0,...,0)$$ etc.
So, to find what $$(5,4,3,2,1)$$ is, let's iterate through the process, To avoid mistakes (actually I did firstly the manual computation and obtained the wrong answer 120 instead of 175), I've written some Python code (Try it online!, sorry for poor variable naming and magic numbers):
x = [0, 0, 0, 0, 0]
i = 0
while x != [5, 4, 3, 2, 1]:
idx = 4
while idx and x[idx] == x[idx - 1]: idx -= 1
x[idx] += 1
for ind in range(idx + 1, 5): x[ind] = 0
i += 1
print(i, x)
P.S.
I believe that the general system has something to do with the triangular (as well as pyramidal etc.) numbers, but I didn't devise the generic principle.
• That's right! There is an easier way to convert numbers that doesn't involve counting by ones. – Woofmao Jan 26 at 19:27 | 2020-03-30 17:50:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 19, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5812316536903381, "perplexity": 369.19185661031366}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370497171.9/warc/CC-MAIN-20200330150913-20200330180913-00275.warc.gz"} |
https://www.econometricsociety.org/publications/econometrica/about/journal-news/2022/05/25/theoretical-economics-volume-17-number-2 | # Theoretical Economics Volume 17, Number 2 (2022) is online
Theoretical Economics
Volume 17, Number 2 (2022)
https://econtheory.org/ojs/index.php/te/issue/view/55
Articles
========
Pages: 507-519
Authors: Itzhak Gilboa, Larry Samuelson
Abstract: Decision theory can be used to test the logic of decision making---one may ask whether a given set of decisions can be justified by a decision-theoretic model. Indeed, in principal-agent settings, such justifications may be required---a manager of an investment fund may be asked what beliefs she used when valuing assets and a government may be asked whether a portfolio of rules and regulations is coherent. In this paper we ask which collections of uncertain-act evaluations can be simultaneously justified under the maxmin expected utility criterion by a single set of probabilities. We draw connections to the the Fundamental Theorem of Finance (for the special case of a Bayesian agent) and revealed-preference results.
Keywords: Decision theory, revealed preference, coherence, maxmin expected utility
JEL classification: D8
--------------------------------------------------
Pages: 521-537
Authors: Sean Horan, Yves Sprumont
Abstract: We propose a class of decisive collective choice rules that rely on a linear ordering to partition the majority relation into two acyclic relations. The first of these relations is used to pare down the set of the feasible alternatives into a shortlist while the second is used to make a final choice from the shortlist.
Rules in this class are characterized by four properties: two classical rationality requirements (Sen's expansion consistency and Manzini and Mariotti's weak WARP); and adaptations of two classical collective choice requirements (Arrow's independence of irrelevant alternatives and Saari and Barney's no preference reversal bias). These rules also satisfy some other desirable properties, including an adaptation of May's positive responsiveness.
Keywords: Majority rule, decisiveness, IIA, monotonicity, rational shortlist methods
JEL classification: D71, D72
--------------------------------------------------
Pages: 539-559
Authors: Sebastian Gryglewicz, Aaron Kolb
Abstract: We study dynamic signaling in a game of stochastic stakes. Each period, a privately informed agent of binary type chooses whether to continue receiving a return that is an increasing function of both her reputation and an exogenous public stakes variable or to irreversibly exit the game. A strong type has a dominant strategy to continue. In the unique perfect Bayesian equilibrium, the weak type plays a mixed strategy that depends only on current stakes and their historical minimum, and she builds a reputation by continuing when the stakes reach a new minimum. We discuss applications to corporate reputation management, online vendor reputation, and limit pricing with stochastic demand.
Keywords: Dynamic signaling, reputation building, history dependence, exit dynamics
JEL classification: C73, D82, D83
--------------------------------------------------
Pages: 561-585
Authors: Anton Kolotilin, Timofiy Mylovanov, Andriy Zapechelnyuk
Abstract: We consider a Bayesian persuasion problem where a sender's utility depends only on the expected state. We show that upper censorship that pools the states above a cutoff and reveals the states below the cutoff is optimal for all prior distributions of the state if and only if the sender's marginal utility is quasi-concave. Moreover, we show that it is optimal to reveal less information if the sender becomes more risk averse or the sender's utility shifts to the left. Finally, we apply our results to the problem of media censorship by a government.
Keywords: Bayesian persuasion, information design, censorship, media
JEL classification: D82, D83, L82
--------------------------------------------------
Pages: 587-615
Authors: Alfredo Di Tillio, Ehud Lehrer, Dov Samet
Abstract: The main purpose of this paper is to provide a simple criterion enabling to conclude that two agents do not share a common prior. The criterion is simple, as it does not require information about the agents'
knowledge and beliefs, but rather only the record of a dialogue between the agents. In each stage of the dialogue the agents tell each other the probability they ascribe to a fixed event and update their beliefs about the event. To characterize dialogues consistent with a common prior, we first study monologues, which are sequences of probabilities assigned by a single agent to a given event in an exogenous learning process. A dialogue is consistent with a common prior if and only if each selection sequence from the two monologues comprising the dialogue is itself a monologue.
Keywords: Learning processes, Bayesian dialogue, Bayesian monologue, Ratio variation, Joint fluctuation, Agreement
JEL classification: D83
--------------------------------------------------
Pages: 617-650
Authors: Begum Guney, Michael Richter
Abstract: We introduce a game-theoretic model with switching costs and endogenous references. An agent endogenizes his reference strategy and then, taking switching costs into account, he selects a strategy from which there is no profitable deviation. We axiomatically characterize this selection procedure in one-player games. We then extend this procedure to multi-player simultaneous games by defining a Switching Cost Nash Equilibrium (SNE) notion, and prove that (i) an SNE always exists; (ii) there are sets of SNE which can never be a set of Nash Equilibrium for any standard game; and (iii) SNE with a specific cost structure exactly characterizes the Nash Equilibrium of nearby games, in contrast to Radner's
(1980) $\varepsilon$-equilibrium. Subsequently, we apply our SNE notion to a product differentiation model, and reach the opposite conclusion of Radner
(1980): switching costs for firms may benefit consumers. Finally, we compare our model with others, especially K\"{o}szegi and Rabin's (2006) personal equilibrium.
Keywords: Switching cost Nash equilibrium, choice, endogenous reference, switching costs, epsilon equilibrium
JEL classification: D00, D01, D03, C72
--------------------------------------------------
Pages: 651-686
Authors: Efe A. Ok, Gerelt Tserenjigmid
Abstract: Among the reasons behind the choice behavior of an individual taking a stochastic form are her potential indifference or indecisiveness between certain alternatives, and/or her willingness to experiment in the sense of occasionally deviating from choosing a best alternative in order to give a try to other options. We introduce methods of identifying if, and when, a stochastic choice model may be thought of as arising due to any one of these three reasons. Each of these methods furnishes a natural way of making deterministic welfare comparisons within any model that is rationalized as such.
In turn, we apply these methods, and characterize the associated welfare orderings, in the case of several well-known classes of stochastic choice models.
Keywords: Stochastic choice, indifference, incomplete preferences, experimentation, the general Luce model, random utility, additive perturbed utility, individual welfare
JEL classification: D01, D11, D81, D91
--------------------------------------------------
Pages: 687-724
Authors: Laura Doval
Abstract: I introduce a stability notion, dynamic stability, for two-sided dynamic matching markets where (i) matching opportunities arrive over time,
(ii) matching is one-to-one, and (iii) matching is irreversible. The definition addresses two conceptual issues. First, since not all agents are available to match at the same time, one must establish which agents are allowed to form blocking pairs. Second, dynamic matching markets exhibit a form of externality that is not present in static markets: an agent’s payoff from remaining unmatched cannot be defined independently of what other contemporaneous agents’ outcomes are. Dynamically stable matchings always exist. Dynamic stability is a necessary condition to ensure timely participation in the economy by ensuring that agents do not strategically delay the time at which they are available to match.
Keywords: Dynamic stability, dynamic matching, stable matching, non-transferable utility, externalities, credibility, market design, dynamic arrivals, aftermarkets
JEL classification: D47, C78
--------------------------------------------------
Pages: 725-762
Authors: Lukasz Balbus, Pawel Dziewulski, Kevin Reffett, Lukasz Wozny
Abstract: We present a new approach to studying equilibrium dynamics in a class of stochastic games with a continuum of players with private types and strategic complementarities. We introduce a suitable equilibrium concept, called Markov Stationary Nash Distributional Equilibrium (MSNDE), prove its existence, and determine comparative statics of equilibrium paths and the steady state invariant distributions to which they converge. Finally, we provide numerous applications of our results including: dynamic models of growth with status concerns, social distance, and paternalistic bequests with endogenous preferences for consumption.
Keywords: Large games, distributional equilibria, supermodular games, comparative dynamics, non-aggregative games, social interactions
JEL classification: C62, C72, C73
--------------------------------------------------
Pages: 763-800
Authors: Rahul Deb, Matthew Mitchell, Mallesh M. Pai
Abstract: Motivated by markets for expertise,'' we study a bandit model where a principal chooses between a safe and risky arm. A strategic agent controls the risky arm and privately knows whether its type is high or low.
Irrespective of type, the agent wants to maximize duration of experimentation with the risky arm. However, only the high type arm can generate value for the principal. Our main insight is that reputational incentives can be exceedingly strong unless both players coordinate on maximally inefficient strategies on path. We discuss implications for online content markets, term limits for politicians and experts in organizations.
for politicians and experts in organizations.
JEL classification: D82, D86
--------------------------------------------------
Pages: 801-839
Authors: Jan Christoph Schlegel
Abstract: Several structural results for the set of competitive equilibria in trading networks with frictions are established: The lattice theorem, the rural hospitals theorem, the existence of side-optimal equilibria, and a group-incentive-compatibility result hold with imperfectly transferable utility and in the presence of frictions. While our results are developed in a trading network model, they also imply analogous (and new) results for exchange economies with combinatorial demand and for two-sided matching markets with transfers.
Keywords: Trading Networks, Full Substitutability, Imperfectly Transferable Utility, Competitive Equilibrium, Indivisible Goods, Frictions, Lattice, Rural Hospitals
JEL classification: C78, D47, D52, L14
--------------------------------------------------
Pages: 841-881
Authors: Stephan Lauermann, Asher Wolinsky
Abstract: This paper analyzes a common-value, first-price auction with state-dependent participation. The number of bidders, which is unobservable to them, depends on the true value. For participation patterns with many bidders in each state, the bidding equilibrium may be of a "pooling"
type---with high probability, the winning bid is the same across states and is below the ex-ante expected value---or of a "partially revealing"
type---with no significant atoms in the winning bid distribution and an expected winning bid increasing in the true value. Which of these forms will arise is determined by the likelihood ratio at the top of the signal distribution and the participation across states. We fully characterize this relation and show how the participation pattern determines the extent of information aggregation by the price.
Keywords: Auction theory, bargaining, competition
JEL classification: D44, D82
--------------------------------------------------
Title: Long information design
Pages: 883-927
Authors: Frederic Koessler, Marie Laclau, Jérôme Renault, Tristan Tomala
Abstract: We analyze information design games between two designers with opposite preferences and a single agent. Before the agent makes a decision, designers repeatedly disclose public information about persistent state parameters. Disclosure continues until no designer wishes to reveal further information. We consider environments with general constraints on feasible information disclosure policies. Our main results characterize equilibrium payoffs and strategies of this long information design game and compare them with the equilibrium outcomes of games where designers move only at a single predetermined period. When information disclosure policies are unconstrained, we show that at equilibrium in the long game, information is revealed right away in a single period; otherwise, the number of periods in which information is disclosed might be unbounded. As an application, we study a competition in product demonstration and show that more information is revealed if each designer could disclose information at a predetermined period. The format that provides the buyer with most information is the sequential game where the last mover is the ex-ante favorite seller.
Keywords: Bayesian persuasion, concavification, convexification, information design, Mertens-Zamir solution, product demonstration, splitting games, statistical experiments, stochastic games
JEL classification: C72, D82
--------------------------------------------------
Pages: 929-942
Authors: Abhigyan Bose, Souvik Roy
Abstract: Theorem 1 in Bhargava, Mohit et al. (2015) provides a necessary condition for a social choice function to be LOBIC with respect to a belief system satisfying top-set (TS) correlation. In this paper, we provide a counter example to that theorem and consequently provide a new necessary condition for the same in terms of sequential ordinal nondomination.
Keywords: Ordinal Bayesian incentive compatibility, correlated beliefs, sequential ordinal nondomination property
JEL classification: D71, D82
Publication Date:
Wednesday, May 25, 2022 | 2022-08-14 00:00:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4814694821834564, "perplexity": 3217.3995418195636}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571989.67/warc/CC-MAIN-20220813232744-20220814022744-00025.warc.gz"} |
http://mail.scipy.org/pipermail/scipy-user/2009-January/019245.html | # [SciPy-user] Can scipy resolve this problem?
Robert Kern robert.kern@gmail....
Wed Jan 7 02:15:19 CST 2009
On Wed, Jan 7, 2009 at 02:05, zhang chi <zhangchipr@gmail.com> wrote:
> hi
> I want to get the minimum value of a derivative free optimization
> problem. The function F(x1,x2) can't be given the expression, but the
> function can be realized using python language. Where x1 $\in$ [1,100], and
> x2 $\in$ [50,80]. Can scipy resolve this problem?
Use fmin_tnc, fmin_l_bfgs_b, or fmin_slsqp for plain bounds like this.
> I have tried the cobyla,
> but it cannot find the minimum value.
> By the way, the step of x1 and x2 is 1.
What do you mean by "the step"?
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco | 2014-09-03 01:15:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.701830267906189, "perplexity": 5174.381770443589}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535923940.4/warc/CC-MAIN-20140909030513-00095-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://nbviewer.jupyter.org/github/braddelong/long-form-drafts/blob/master/econ-105-lecture-notes.ipynb | # Smith, Marx, Keynes ¶
## 0. Introduction ¶
Sociologists assert that Adam Smith is one of a number of important thinkers between Niccolo Machiavelli around 1500 and Max Weber around 1900 who discovered "society". Historians vehemently deny that this is the case—say that things are more complicated, and that to claim that before 1500-1900 people did not understand that "society" was a thing is to make a false and misleading and oversimplified claim. The historians are right. But. There is a sense in which the sociologists are right too. So take the sentence "Adam Smith is one of a number of important thinkers between Niccolo Machiavelli around 1500 and Max Weber around 1900 who discovered 'society', and ask: what could that mean that is true?
A good place to start is with the highest eagle-eye view possible of the human economy over the past 70000 years: numbers of human beings, and the average level of prosperity at which they lived:
###### Source: https://www.icloud.com/numbers/0jaPx8AjooD2TDbNM4Og3Z2ow¶
From the year -3000 to the year 1500 we guess that average human living standards were roughly constant at perhaps what modern-day developoment economists would call 2.5 dollars a day—not quite extreme poverty, but close. We guess that average worker productivity levels roughly constant at about 1800 dollars a year. And we guess that over those 4500 years the human population grew from roughly 15 to 500 million. Why did popuation grow? Because human ingenuity generated better ideas about how to manipulate nature and organize production: we can make heroic assumptions and try to value the stock of useful human ideas about economic production, and if we do we can build an index $H$ that rises nearly sixfold from the year -3000 to the year 1500. Why didn't average living standards and productiviy levels increase? Because increasing populations generated greater resource scarcity: smaller average farm sizes and less pasture per herder eroded all the potential gains from better ideas. So over the 4.5 millennia from -3000 to 1500 human population grew at an average rate of 0.08% per year, or 2% in a typical generation.
70000 years ago—when the descendants of those who had been anatomically modern humans became behaviorally modern humans—there were fewer than 100,000 of us on the globe, perhaps many fewer than 100,000, perhaps shortly before (meaning a few thousands of years before) there had been only 1000 breeding pairs that have given nearly all of us nearly all of our genes. As a result, there is today more genetic diversity in a typical fifty-animal baboon troop than in the entire human race: we are all more closely one another's cousins than is the average baboon with his or her troopmates. Becoming behaviorally modern, however, gave us a large edge. Over the subsequent 20000 years we spread out within our common motherland of Africa. And starting perhaps 50000 years ago we launched ourselves across the Red Sea from the Horn of Africa to Yemen, and began to spread out over the entire world. By 10000 years ago we could be found nearly everywhere there was land that wasn't Antarctica.
By that moment—10000 years ago—there were perhaps 2.5 million of us on the globe. We were all then, still, as we had been since our dawn, gatherer-hunters. We had proven successful in a Darwinian sense: we had expanded into many more—indeed, into nearly all—land environments, and in so doing we had multiplied our populations at least 25-fold. But that is a very slow rate of average population growth: only 0.005% per year or 0.125% per generation during the long 60000 years of the behaviorally-modern gatherer-hunter epoch.
How ferocious was mortality to keep population growth so low? A pre-industrial nutritionally-unstressed human population with access to the technologies of settlement—building walls, roofs, and chimneys and weaving and sewing clothes—will roughly double in population every two generations. That is what the British settlers in America did in the generations after they hit the coast from Georgia to Maine. But human gatherer-hunter populations grew at an average rate of increase a rate of increase of 0.25% every two generations: two generations saw not twice as many people as its parent generation, but rather only a quarter of a percent more—one extra person for each 400. The rest had been carried off by the high mortality of the gatherer-hunter age.
Gatherer-hunter nutritional standards were adequate and diets were varied in large part because population densities were low and foraging territories relatively large. Population densities were low because mortality was ferocious. You got to watch your friends die, your spouse die, your comrades die, worst of all a large fraction of your children die, and then you died at a relatively young age.
By 10000 years ago we knew a lot more—about how to make tools, manipulate and cooperate with nature—and organize our societies to make our livelihoods, survive, and reproduce—than we had known 70000 years ago. Each band, after all, had to know about its own climate, geography, and ecology. We were able to build up our knowledge because our precious possession of language made us an anthology intelligence: what one of us knew or learned, pretty soon all who came within earshot or within earshot of someone who had once been within earshot knew. Our propensity to gossip about everything and anything is very strong, and odds are it provided a very powerful evolutionary edge. If we make truly heroic assumptions in order to construct a quantitative index of the effectiveness of our knowledge, we might guess that humans 10000 years ago collectively knew five times as much about nature, technology, and organization than their predecessors 70000 years ago had collectively known.
#### 2.1.2. Living Standards ¶
Even though we knew more, we did not live better. We guess that modern development economists would rate our average standard of living back in the gatherer-hunter era as the equivalent of about 3.5 dollars a day—an average economic productivity for the half of the population adult and working of something like 7 dollars a day, or 2500 dollars a year. That is not extreme poverty by today's standards: the United Nations counts extreme poverty as a living standard of less than 2 a day, and if you drop below that it becomes difficult to think about much other than how important it is to get more food and how tired even minor exertions make you.
Poor did not mean malnourished. Biomedically, our hunter-gatherer ancestors appear to have been about as healthy as we in the modern world are through early middle age—if they survived to early middle age, that is. Life expectancy at birth was twenty-five on a generous estimate. The average adult height of mesolithic—i.e., the period that ended 10,000 years ago—hunter-gatherers appears to have been about 5’8” for men and 5’5” for women, perhaps a hair less than average adult height in the rich postindustrial economies today. Our gatherer-hunter ancestors were, plausibly, better-nourished than we are today: even in the richest countries today diets are tilted toward high-caloric density carbohydrates—rice, wheat, corn, and potatoes—relative to nutritional requirements.
Thus as a gatherer-hunter you lived a well-nourished, physically-strenuous life that kept you fit. Life was also at least moderately interesting, in terms of the day-to-day cognitive puzzles that you had to solve. Gatherer-hunters avoided the mind-numbing boredom of doing the same thing over and over again to the next row of the same crop what Karl Marx called the “idiocy of rural life”, or the next item to come down the assembly line, or the next set of symbols to be copied into the next spreasheet row.
But even though life was not that of boring routinized repetitive labor, it was not what we would call comfortable: you spent a not-small part of your life hungry, cold (or too hot), or wet.
Why is it that people knew more yet lived no better 10000 years ago than 70000?
#### 2.1.3. A Malthusian Age ¶
To understand why the gatherer-hunter age from 70000 to 10000 years ago was one in which standards of living and productivity stagnated and in which population grew slowly, you need to keep three things in the front of your mind:
1. that the drive to make love is one of the very strongest of all human drives,
2. that the rate of technological and organizatonal progress from 70000 to 10000 yearw ago was, by our standards, glacial
3. that even small gatherer-hunter population densities put pressure on resources—and worker productivity falls when resources become less abundant.
Before the coming of abundant and relatively reliable means of artificial birth control at the end of the nineteenth century, making love is followed almost invariably if not immediately by children—over a lifetime, lots of children. And once humans have children that they survive and flourish becomes the most important thing for almost every parent for two reasons:
The first reason is that you love them almost as much as and in some cases more than yourself. Recall Hektor’s prayer for his son Astyanax (a prayer that Hera and Athene worked very hard to make certain that Zeus did not grant):
Zeus, grant that this my child may be, like me, first among the Trojans. Let him be not less excellent in strength. Let him rule Ilius with his might. And may the people say of him as he comes home from battle: “He is far better than his father!”...
The second reason is that if you survive into your old age you will need someone to take care of you, and the only people likely to be willing to take care of you are your descendants. With infant and child mortality rates of 50% and life expectancies of less than thirty years, lots of pregnancies is the only way to be reasonably sure that you will have a still-living child when you go blind and toothless.
Thus human populations—back before widespread female literacy enlarged the options open to women, back before the fall in infant mortality created the expectation that your children would survive to grow up, back before widespread artificial birth control allowed women to have the number of children they wanted and not more—tended to grow until something stopped them.
A number of things can stop fertility. Perhaps celibacy and abstention from reproduction is thought of as pleasing to God. Perhaps a prospective father-in-law might tell a prospective son-in-law that marriage will be delayed until he establishes a higher status, and makes that stick. But most often and to the greatest extent that “something” is poverty: women become too skinny to reliably ovulate, and populations become to weak to harvest food in strenuous ways. As Thomas Robert Malthus was the first to see, a population subject to slow growth in technology and organization will tend to have its population grow and average productivity and living standards decline until restrained fertility and perhaps elevated mortality are just so on average that the population grows on average at the rate warranted by the growth of the stock of useful ideas. Improvements in technology and organization then lead to larger human populations, and not to higher standards of living and productivity levels, on average.
### 2.2. The Agrarian Age: 10000 to 500 Years Ago ¶
#### 2.2.1. The Neolithic Revolution and Its Consequences ¶
Then, about 10000 years ago, comes an innovation miracle: the domestication of animals and the selective breeding of crops, and so the start of herding and of farming—the so-called Neolithic Revolution. Agriculture and herding quickly—in a couple of a thousand years—spread far. By the year -6000 farming and herding are nearly omnipresent in Eurasia and Africa, and the human population has nearly tripled, to perhaps 7 million.
Farming and herding are much more productive per unit of land than gathering and hunting. And the first few generations to adopt these technologies and social organizations experience a true bonanza. And yet when the dust settled—in the year -6000 or so—the more numerous agrarian-age humanity appeared poorer than humanity had been in the gatherer-hunter era: figure a standard of living that would be reckoned by today's development economists as roughly 2.5 rather than 3.5 dollars a day, and an average worker productivty level of not 2400 but 1800 dollars a year.
Why did these better nature-manipulation and social-organization technologies produce a poorer humanity? Somewhat paradoxically, because farm life and herding life is easier and less strenuous. Mortality is thus lower. And so, at the same living standard, population grows faster. In order to keep population growth to the rate warranted by the pace of idea invention and innovation, living standards need to fall. And the population boom from -8000 to -6000 that nearly tripled human numbers put enough scarcity pressure on natural resources to accomplish this.
A reasonable view of what we think of as “material well-being” classifies basic human needs and desires as sixfold:
• to have enough food that you are not too hungry,
• to have enough clothing that you are not too cold,
• to have enough shelter that you are not too wet,
• to have enough conceptual puzzles and diversions that you are not too bored, and
• to have enough status that you can spitefully gloat (at least in private) as you incite the envy of others.
By those yardsticks, the mass of humanity in the agrarian age was worse off than in the gather-hunter age. Relative status—is, alas!, conserved: you cannot generate it from some without taking it away from others, and so there we are stuck at an equal average level no matter what the society. The upper classes in the agrarian age may well have lived better, and increasingly better, than their gatherer-hunter age predecessors. (Indeed, it is not clear what one would mean by "upper class" in a gatherer-hunter society.) But -6000 to 1500 saw no greater life expectancy than -8000. Infant and adult mortality in agrarian societies is no lower than in hunter-gatherer ones. Mortality may well be higher for adults, because plagues and famines like dense human populations. Bacteria do not care (much) if their rapid growth kills their hosts as long as that happens only after they have found a new host to jump to. Denser populations terribly vulnerable to famine, either through blight or through weather—too hot or too cold, too wet or too dry—adverse to the growth of whatever the staple happens to be.
#### 2.2.2. Agrarian-Age Quality of Life ¶
Dense, agarian populations become giant culture dishes for endemic debilitating diseases or periodic epidemic mortal plagues. And so population growth ceases. Generation-to-generation the population jumped up and down as the spread of agricultural techniques produced an edge in food and more children survive, as plagues and wars devastated provinces, and as bounceback takes place in the aftermath of plagues and famines that left provinces depopulated, but the survivors with large and fertile farms—which induced rapid population growth, which pushed living standards back down to 2.5 dollars a day.
An agricultural cereal-heavy diet does not contain enough iron to avoid anemia. It does not contain enough calcium to avoid tooth loss and bone weakness. Rome’s legions were paid in bread and a little salt—that’s what “salary” means. Add to this whatever meat they could find and whatever greens and seasonings they could gather, and you had the diet of the legionaries, collectively at least the most powerful group of men of their age. They wear highly-skilled practitioners of violence. They were mean. They were also short. And they were, by what we would regard as early middle age, largely toothless.
Have we mentioned endemic hookworm, tapeworm, and other parasites yet? Or that agricultural and commercial labor likely involves heavy lifting-and-carrying labor that damages your spine? Or that the relatively high population densities create greater vulnerability to infectious diseases that debilitate even when they do not kill?
#### 2.2.3. Another Malthusian Age ¶
Up to 1500, and even later, agricultural and commercial societies people were short. Average adult male heights of 5’3” (and adult female heights averaging 5’0” or less) appear to have been the rule for humanity once we started to farm. This indicates extraordinary malnutrition by our standards. Were we today to feed our children a diet to produce such adult heights, Alameda County Child and Protective Services would take our children away. Like the gatherer-hunter age, the agrarian age was a Malthusian age: growth in population, but not sustained permanent growth in productivity levels (outside the upper classes) or living standards, as the benefits of technological and organizational progress were offset by the pressure of population on resources.
#### 2.2.4. Why Then the Transition to Agriculture? ¶
Comparing the lifestyle of hunter-gatherers ten-thousand to that of illiterate peasant farmers five-hundred years ago raises an obvious question: why would people ever become farmers? Jared Diamond claims that we should—even in the United States, even today—envy our gatherer-hunter ancestors. I don’t buy this: I do not, or at least I think we should not, envy them. (He does not either: Full Professors of Physiology at UCLA and of Economics at U.C. Berkeley have chosen a life far, far removed from that of our ancestors.) But there is an important kernel here: almost all of our agricultural and commercial-era ancestors between -8000 and 1500 and even later did have good reason to envy our common pre-industrial ancestors. We understand why the transition from hunting and gathering to pre-industrial agriculture is good for those at the top of the pyramid. But why do those not at the top of the socioeconomic pyramid go along?
Most important, is that the first generation to farm—or to adopt any of the many subsequent agricultural productivity-multiplying innovations—does live the life of Riley, off the fat of the land. If you can figure out how to do it, it is good for you and your children and your chldren’s children to farm. But a well-fed and well-nourished population multiplies. So farming population densities explode far beyond hunter-gatherer densities.
Some human populations did not pursue the agricultural road. Some settled into a halfway role as nomadic or transhumant herders following their flocks on land that was, for the time and given the available biotechnology, marginal for settled agriculture. Some remained hunter-gatherers for a while. But, eventually, somebody nearby had become farmers. And the population density of the farmers grew. Hunter-gatherers rarely exceed population densities of one per square mile. Farmers on land that is good for their particular version of agricultural technology can easily support many more than a thousand in the same space. The old “forty acres and a mule” for a family of six translates into a population density of roughly 100 per square mile. When those nearby who had become farmers decided that they wanted the hunter-gatherers’ or the herdsmen’s land, they took it: numbers of 100-to-1 or 1000-to-1 are not easy to argue with.
The upshot is that—unless you were part of the rich, literate upper classes—per capita standards of living were not that much higher in 1500 as they had been back in 8000 BC. Population, however, was much greater: 500 million people in 1500, compared to 2.5 million or so back in -8000: a 50-fold multiplication in 9500 years.
#### 2.2.5. The Pace of Agrarian-Age Progress ¶
Contrast that with the gatherer-hunter era average rate of population growth of 0.005% per year, or 0.125% per generation. Technological and organizational progress was thus more than ten times as fast in the agrarian age. Why?
A Scottish moral philosopher...
Dealing with the post-Medieval world...
Duby: The Three Orders...
Braudel: The Structures of Everyday Life and The Wheels of Commerce...
Gibbon:
A game-changing insight into how economies worked...
Adam Smith is not just another moral philosopher working within the tradition of moral philosophy. Adam Smith wants to do much more. Adam Smith wants to change the game. And Adam Smith succeeds...
Creating the science of economics...
The real recompense of labour... the necessaries and conveniences of life which it can procure to the labourer, has, during the course of the present century, increased.... Grain has become somewhat cheaper... other things from which the industrious poor derive an agreeable and wholesome variety of food have become a great deal cheaper. Potatoes, for example, do not at present, through the greater part of the kingdom, cost half the price which they used to do thirty or forty years ago. The same thing may be said of turnips, carrots, cabbages; things which were formerly never raised but by the spade, but which are now commonly raised by the plough. All sort of garden stuff, too.... The great improvements in the coarser manufactures of both linen and woollen cloth furnish the labourers with cheaper and better clothing; and those in the manufactures of the coarser metals, with cheaper and better instruments of trade, as well as with many agreeable and convenient pieces of household furniture.... The common complaint that luxury extends itself even to the lowest ranks of the people, and that the labouring poor will not now be contented with the same food, clothing, and lodging which satisfied them in former times, may convince us that it is not the money price of labour only, but its real recompense, which has augmented.... Is this improvement in the circumstances of the lower ranks of the people to be regarded as an advantage or as an inconveniency to the society? The answer seems at first sight abundantly plain. Servants, labourers, and workmen of different kinds, make up the far greater part of every great political society. But what improves the circumstances of the greater part can never be regarded as an inconveniency to the whole. No society can surely be flourishing and happy, of which the far greater part of the members are poor and miserable. It is but equity, besides, that they who feed, clothe, and lodge the whole body of the people, should have such a share of the produce of their own labour as to be themselves tolerably well fed, clothed, and lodged...
Inequality? A stoic-cynic-snarky response:
Snarky: That the wealth of the rich does not produce human flourishing...
Cynical: Humans are made to be sheered, because they are sheep...
Stoic: It is well that nature imposes on us in this way...
If inequality is of great concern in his heart-of-hearts, he hides it well in his public literary face...
The Wealth of Nations, Tribe said, could not be a book of economics because a book of economics had to be about the economy. And there was no such thing as the economy in 1776 for a book of economics to be about. What was there? There was the undifferentiated stuff of the mixed social-cultural-political-trading system that governed production and distribution: material life. There was the study of the management of public finances. This was conceived in a manner analogous to the domestic-economic management of household finances. Just as--to Robert Filmer and others--the King was the father of the people, so the King's household--which became the state--had to be properly and prudently managed.
In the words of James Steuart, who wrote his Principles of Political Oeconomy nine years before the Wealth of Nations, in 1767: "Oeconomy, in general, is the art of providing for all the wants of a family, with prudence and frugality. What oeconomy is in a family, political oeconomy is in a state." It is managing affairs to make the people prosperous and the tax collections ample by governing "in such a manner as naturally to create the reciprocal relations and dependencies between [inhabitants], so as to make their several interests lead them to supply one another with their reciprocal wants."
There wasn't, Tribe argued, an economy that an economist could write a book of economics about until the 1820s or so.
Strip Tribe's (and Foucault's) arguments of their rhetoric of apparent contradiction and you can understand that within the mystical shell there is a rational kernel. It is--or, at least, I read them as--an injunction to analyze a school of thought in more-or-less the following way:
Tribe applied this methodology to Adam Smith, his predecessors, contemporaries, and successors. What they were doing, before Ricardo, was Political Oeconomy--writing manuals of tactics and policy as advice to statesmen, although manuals restricted to what Adam Smith would have called (did call) a subclass of police: how to keep public order and create public prosperity. Hence for Adam Smith Book V of Wealth of Nations is the payoff: it tells British statesmen what they ought to do in order to make the nation prosperous, their tax coffers full, and thus the state well-funded. Book IV is a necessary prequel to Book V: it tells the statesmen in the audience why the advice that they are being given by others in other books of Political Oeconomy--by Mercantilists and Physiocrats. Book III is another necessary prequel: it teaches statesmen about the economic history of Europe and how political oeconomy of various kinds has been practiced in the past.
But Tribe's (and Foucault's) methodology collapses when we work back to Books II and I of the Wealth of Nations. For Adam Smith is not the prisoner of the discursive formation of Political Oeconomy. He is not the simple bearer of currents of thought and ideas that he recombines as other authors do in more-or-less standard and repeated ways. Adam Smith is a genius. He is the prophet and the master of a new discipline. He is the founder of economics.
Adam Smith is the founder of economics because he has a great and extraordinary insight: that the competitive market system is a remarkably powerful social calculating and organizing mechanism, and that the sophisticated division of labor to which a competitive market system backed up by secure and honest enforcement of property rights give rise is the key to the wealth of nations. Some others before had had this insight in part: Richard Cantillon writing of how once you have specified demands the market does by itself all the heavy lifting that a central planner would need to do; Bernard de Mandeville that dextrous management by a statesman can use the power of private greed to produce the benefit of public utility. But it is Smith who sees what the power of the "system of natural liberty" that is the market could be--and who follows the argument through to the conclusion that it forever upsets and overturns the previous intellectual moves made in and conclusions reached by the discursive formation of Political Oeconomy.
And once I had worked my way through to this conclusion, I could start to write my own thesis. I had broken the thralldom. Foucault's ideas of "discourse" and "archaeology" were not my masters, but my tools. And as I wrote it became very clear to me that between David Ricardo and even the later John Stuart Mill the discursive formation that was Classical Economics did not produce anybody like Adam Smith. There was nobody who made the intellectual leap--produced the epistemological break--that Smith had done that shattered Political Oeconomy and enabled the birth of Classical Economics. I could write my thesis about how the British Classical Economists never understood the Industrial Revolution that they were living through.
What Is Human Nature? Two Views from Adam Smith: Two models of human nature in one book.
In Books I and II of the Wealth of Nations, Adam Smith lays out how the economy works. People seek material comfort and are naturally sociable--have a predisposition to "truck, barter, and exchange." From this derives market exchange, the division of labor, specialization, high productivity, accumulation and investment, higher productivity, comfort, and material wealth. This process driven by human nature, Smith says, starts in the countryside with the expansion of productivity in making the necessities of life, moves to the towns with the subsequent expansion of productivity in making the conveniences of life and then shows itself at last with the development of long-distance international trade in luxuries. That, at least, is the "natural" history of the economy.
But in Book III things change. Humans are no longer naturally sociable beings with a propensity to trade seeking material comfort. Instead, they are creatures of "rapine and violence," desperate for "power and protection," vain and seeking luxury, unwilling to take pains to pay attention to smalls savings and small gains, loving to domineer, mortified at even the thought of having to persuade his inferiors.
This is a different "Adam Smith problem" than is usually posed. And, I think, it is in many ways more interesting than the standard Adam Smith problem:
Adam Smith, from Book III of the Wealth of Nations:
According to the natural course of things... capital of every growing society is, first, directed to agriculture, afterwards to manufactures, and last of all to foreign commerce.... But though this natural order of things must have taken place... it has, in all the modern states of Europe, been, in many respects, entirely inverted. The foreign commerce of some of their cities has introduced all their finer manufactures, or such as were fit for distant sale; and manufactures and foreign commerce together have given birth to the principal improvements of agriculture. The manners and customs which the nature of their original government introduced, and which remained after that government was greatly altered, necessarily forced them into this unnatural and retrograde order....
When the German and Scythian nations overran the western provinces of the Roman empire, the confusions which followed so great a revolution lasted for several centuries. The rapine and violence which the barbarians exercised against the ancient inhabitants interrupted the commerce between the towns and the country. The towns were deserted, and the country was left uncultivated, and the western provinces of Europe, which had enjoyed a considerable degree of opulence under the Roman empire, sunk into the lowest state of poverty and barbarism.... [T]he chiefs and principal leaders of those nations acquired or usurped to themselves the greater part of the lands of those countries....
This original engrossing of uncultivated lands... might have been but a transitory evil.... [But] primogeniture hindered them from being divided by succession: the introduction of entails prevented their being broke into small parcels by alienation. When land... is considered as the means only of subsistence and enjoyment, the natural law of succession divides it... among all the children... equally dear to the father.... But when land was considered as the means, not of subsistence merely, but of power and protection, it was thought better that it should descend undivided to one.... The security of a landed estate... the protection which its owner could afford to those who dwelt on it, depended upon its greatness. To divide it was to ruin it.... The law of primogeniture, therefore, came... in the succession of landed estates, for the same reason that it has generally taken place in that of monarchies....
In the present state of Europe, the proprietor of a single acre of land is as perfectly secure of his possession as the proprietor of a hundred thousand. The right of primogeniture, however, still continues to be respected, and... is still likely to endure for many centuries.... Entails are the natural consequences of the law of primogeniture. They were introduced to... hinder any part of the original estate from being carried out of the proposed line either by gift, or devise, or alienation; either by the folly, or by... misfortune.... Great tracts of uncultivated land were, in this manner, not only engrossed by particular families, but the possibility of their being divided again was as much as possible precluded for ever.
It seldom happens... that a great proprietor is a great improver. In the disorderly times which gave birth to those barbarous institutions... [h]e had no leisure to attend to the cultivation and improvement of land. When the establishment of law and order afforded him this leisure, he often wanted the inclination, and almost always the requisite abilities.... To improve land with profit, like all other commercial projects, requires an exact attention to small savings and small gains, of which a man born to a great fortune, even though naturally frugal, is very seldom capable.... The elegance of his dress, of his equipage, of his house, and household furniture, are objects which from his infancy he has been accustomed to have some anxiety about.... There still remain in both parts of the United Kingdom some great estates which have continued without interruption in the hands of the same family since the times of feudal anarchy. Compare the present condition of those estates with the possessions of the small proprietors in their neighbourhood, and you will require no other argument to convince you how unfavourable such extensive property is to improvement....
If little improvement was to be expected from such great proprietors, still less was to be hoped for from those who occupied the land under them. In the ancient state of Europe, the occupiers of land were... all or almost all slaves.... Whatever they acquired was acquired to their master, and he could take it from them at pleasure.... This species of slavery still subsists in Russia, Poland, Hungary, Bohemia, Moravia, and other parts of Germany. It is only in the western and south-western provinces of Europe that it has gradually been abolished altogether. But if great improvements are seldom to be expected from great proprietors, they are least of all to be expected when they employ slaves for their workmen.... A person who can acquire no property, can have no other interest but to eat as much, and to labour as little as possible. Whatever work he does beyond what is sufficient to purchase his own maintenance can be squeezed out of him by violence only, and not by any interest of his own....
The pride of man makes him love to domineer, and nothing mortifies him so much as to be obliged to condescend to persuade his inferiors. Wherever the law allows it, and the nature of the work can afford it, therefore, he will generally prefer the service of slaves to that of freemen. The planting of sugar and tobacco can afford the expence of slave-cultivation. The raising of corn, it seems, in the present times, cannot. In the English colonies, of which the principal produce is corn, the far greater part of the work is done by freemen.... In our sugar colonies, on the contrary, the whole work is done by slaves, and in our tobacco colonies a very great part of it.... Both can afford the expence of slave-cultivation, but sugar can afford it still better than tobacco. The number of negroes accordingly is much greater, in proportion to that of whites, in our sugar than in our tobacco colonies...
[...]
In some parts of Lancashire, it is pretended, I have been told, that bread of oatmeal is a heartier food for labouring people than wheaten bread, and I have frequently heard the same doctrine held in Scotland. I am, however, somewhat doubtful of the truth of it. The common people in Scotland, who are fed with oatmeal, are in general neither so strong nor so handsome as the same rank of people in England, who are fed with wheaten bread. They neither work so well, nor look so well; and as there is not the same difference between the people of fashion in the two countries, experience would seem to shew, that the food of the common people in Scotland is not so suitable to the human constitution as that of their neighbours of the same rank in England. But it seems to be otherwise with potatoes. The chairmen, porters, and coal-heavers in London, and those unfortunate women who live by prostitution, the strongest men and the most beautiful women perhaps in the British dominions, are said to be, the greater part of them, from the lowest rank of people in Ireland, who are generally fed with this root. No food can afford a more decisive proof of its nourishing quality, or of its being peculiarly suitable to the health of the human constitution...
[...]
Nobody ever saw a dog make a fair and deliberate exchange of one bone for another with another dog.... When an animal wants to obtain something either of a man or of another animal, it has no other means of persuasion but to gain the favour of those whose service it requires. A puppy fawns upon its dam, and a spaniel endeavours by a thousand attractions to engage the attention of its master who is at dinner, when it wants to be fed by him. Man sometimes uses the same arts with his brethren, and when he has no other means of engaging them to act according to his inclinations, endeavours by every servile and fawning attention to obtain their good will. He has not time, however, to do this upon every occasion. In civilised society he stands at all times in need of the cooperation and assistance of great multitudes, while his whole life is scarce sufficient to gain the friendship of a few persons....
Man has almost constant occasion for the help of his brethren, and it is in vain for him to expect it from their benevolence only. He will be more likely to prevail if he can interest their self-love in his favour, and show them that it is for their own advantage to do for him what he requires of them. Whoever offers to another a bargain of any kind, proposes to do this. Give me that which I want, and you shall have this which you want, is the meaning of every such offer; and it is in this manner that we obtain from one another the far greater part of those good offices which we stand in need of. It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest. We address ourselves, not to their humanity but to their self-love....
It is this same trucking disposition which originally gives occasion to the division of labour. In a tribe of hunters or shepherds a particular person makes bows and arrows, for example, with more readiness and dexterity than any other. He frequently exchanges them for cattle or for venison with his companions; and he finds at last that he can in this manner get more cattle and venison than if he himself went to the field to catch them. From a regard to his own interest, therefore, the making of bows and arrows grows to be his chief business, and he becomes a sort of armourer. Another excels in making the frames and covers of their little huts or movable houses. He is accustomed to be of use in this way to his neighbours, who reward him in the same manner with cattle and with venison, till at last he finds it his interest to dedicate himself entirely to this employment, and to become a sort of house-carpenter.... [T]he certainty of being able to exchange all that surplus part of the produce of his own labour, which is over and above his own consumption, for such parts of the produce of other men's labour as he may have occasion for, encourages every man to apply himself to a particular occupation, and to cultivate and bring to perfection whatever talent or genius he may possess for that particular species of business.
The difference of natural talents in different men is, in reality, much less than we are aware of; and the very different genius which appears to distinguish men... is not upon many occasions so much the cause as the effect of the division of labour. The difference between the most dissimilar characters, between a philosopher and a common street porter, for example, seems to arise not so much from nature as from habit, custom, and education... and widens by degrees, till at last the vanity of the philosopher is willing to acknowledge scarce any resemblance. But without the disposition to truck, barter, and exchange, every man must have procured to himself every necessary and conveniency of life which he wanted. All must have had the same duties to perform, and the same work to do, and there could have been no such difference of employment as could alone give occasion to any great difference of talents.... By nature a philosopher is not in genius and disposition half so different from a street porter, as a mastiff is from a greyhound, or a greyhound from a spaniel, or this last from a shepherd's dog....
Among men... the most dissimilar geniuses are of use to one another; the different produces of their respective talents, by the general disposition to truck, barter, and exchange, being brought, as it were, into a common stock, where every man may purchase whatever part of the produce of other men's talents he has occasion for...
[...]
Adam Smith: Smith: Wealth of Nations, Book I, Chapter 8: The liberal reward of labour, as it encourages the propagation, so it increases the industry of the common people.... A plentiful subsistence increases the bodily strength of the labourer, and the comfortable hope of bettering his condition, and of ending his days perhaps in ease and plenty, animates him to exert that strength to the utmost. Where wages are high, accordingly, we shall always find the workmen more active, diligent, and expeditious, than where they are low; in England, for example, than in Scotland; in the neighbourhood of great towns, than in remote country places. Some workmen, indeed, when they can earn in four days what will maintain them through the week, will be idle the other three. This, however, is by no means the case with the greater part.
Workmen, on the contrary, when they are liberally paid by the piece, are very apt to over-work themselves, and to ruin their health and constitution in a few years. A carpenter in London, and in some other places, is not supposed to last in his utmost vigour above eight years. Something of the same kind happens in many other trades, in which the workmen are paid by the piece; as they generally are in manufactures, and even in country labour, wherever wages are higher than ordinary. Almost every class of artificers is subject to some peculiar infirmity occasioned by excessive application to their peculiar species of work. Ramuzzini, an eminent Italian physician, has written a particular book concerning such diseases....
If masters would always listen to the dictates of reason and humanity, they have frequently occasion rather to moderate, than to animate the application of many of their workmen. It will be found, I believe, in every sort of trade, that the man who works so moderately, as to be able to work constantly, not only preserves his health the longest,
In cheap years, it is pretended, workmen are generally more idle, and in dear ones more industrious than ordinary. A plentiful subsistence, therefore, it has been concluded, relaxes, and a scanty one quickens their industry. That a little more plenty than ordinary may render some workmen idle, cannot well be doubted; but that it should have this effect upon the greater part, or that men in general should work better when they are ill fed than when they are well fed, when they are disheartened than when they are in good spirits, when they are frequently sick than when they are generally in good health, seems not very probable. Years of dearth, it is to be observed, are generally among the common people years of sickness and mortality, which cannot fail to diminish the produce of their industry.
In years of plenty, servants frequently leave their masters, and trust their subsistence to what they can make by their own industry. But the same cheapness of provisions, by increasing the fund which is destined for the maintenance of servants, encourages masters, farmers especially, to employ a greater number. Farmers upon such occasions expect more profit from their corn by maintaining a few more labouring servants, than by selling it at a low price in the market. The demand for servants increases, while the number of those who offer to supply that demand diminishes. The price of labour, therefore, frequently rises in cheap years.
In years of scarcity, the difficulty and uncertainty of subsistence make all such people eager to return to service. But the high price of provisions, by diminishing the funds destined for the maintenance of servants, disposes masters rather to diminish than to increase the number of those they have. In dear years too, poor independent workmen frequently consume the little stocks with which they had used to supply themselves with the materials of their work, and are obliged to become journeymen for subsistence. More people want employment than can easily get it; many are willing to take it upon lower terms than ordinary, and the wages of both servants and journeymen frequently sink in dear years.
Masters of all sorts, therefore, frequently make better bargains with their servants in dear than in cheap years, and find them more humble and dependent in the former than in the latter. They naturally, therefore, commend the former as more favourable to industry. Landlords and farmers, besides, two of the largest classes of masters, have another reason for being pleased with dear years. The rents of the one and the profits of the other depend very much upon the price of provisions. Nothing can be more absurd, however, than to imagine that men in general should work less when they work for themselves, than when they work for other people. A poor independent workman will generally be more industrious than even a journeyman who works by the piece. The one enjoys the whole produce of his own industry; the other shares it with his master. The one, in his separate independent state, is less liable to the temptations of bad company, which in large manufactories so frequently ruin the morals of the other. The superiority of the independent workman over those servants who are hired by the month or by the year, and whose wages and maintenance are the same whether they do much or do little, is likely to be still greater. Cheap years tend to increase the proportion of independent workmen to journeymen and servants of all kinds, and dear years to diminish it...
Division of Labor: Observe the accommodation of the most common artificer or day-labourer in a civilized and thriving country, and you will perceive that the number of people... employed in procuring him this accommodation, exceeds all computation. The woollen coat... is the produce of the joint labour of... [t]he shepherd, the sorter of the wool, the wool-comber or carder, the dyer, the scribbler, the spinner, the weaver, the fuller, the dresser, with many others....
How many merchants and carriers, besides, must have been employed in transporting the materials from some of those workmen to others who often live in a very distant part of the country! how much commerce and navigation in particular, how many ship-builders, sailors, sail-makers, rope-makers, must have been employed in order to bring together the different drugs made use of by the dyer, which often come from the remotest corners of the world! What a variety of labour too is necessary in order to produce the tools of the meanest of those workmen! To say nothing of such complicated machines as the ship of the sailor, the mill of the fuller, or even the loom of the weaver.... The miner, the builder of the furnace for smelting the ore, the feller of the timber, the burner of the charcoal to be made use of in the smelting-house, the brick-maker, the brick-layer, the workmen who attend the furnace, the mill-wright, the forger, the smith, must all of them join their different arts in order to produce them....
This division of labour, from which so many advantages are derived, is not originally the effect of any human wisdom, which foresees and intends that general opulence to which it gives occasion. It is the necessary, though very slow and gradual, consequence of a certain propensity in human nature... to truck, barter, and exchange one thing for another.
Whether this propensity be one of those original principles in human nature... it belongs not to our present subject to enquire. It is common to all men, and to be found in no other race of animals.... Nobody ever saw a dog make a fair and deliberate exchange of one bone for another with another dog....
When an animal wants to obtain something either of a man or of another animal, it has no other means of persuasion but to gain the favour of those whose service it requires. A puppy fawns upon its dam, and a spaniel endeavours by a thousand attractions to engage the attention of its master who is at dinner, when it wants to be fed by him. Man sometimes uses the same arts with his brethren, and when he has no other means of engaging them to act according to his inclinations, endeavours by every servile and fawning attention to obtain their good will.
He has not time, however, to do this upon every occasion. In civilized society he stands at all times in need of the cooperation and assistance of great multitudes, while his whole life is scarce sufficient to gain the friendship of a few persons.... [M]an has almost constant occasion for the help of his brethren, and it is in vain for him to expect it from their benevolence only. He will be more likely to prevail if he can interest their self-love... it is in this manner that we obtain from one another the far greater part of those good offices which we stand in need of. It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest. We address ourselves, not to their humanity but to their self-love, and never talk to them of our own necessities but of their advantages...
Extent of the Market: As it is the power of exchanging that gives occasion to the division of labour, so the extent of this division must always be limited by... the extent of the market. When the market is very small, no person can have any encouragement to dedicate himself entirely to one employment, for want of the power to exchange all that surplus part of the produce... for such parts of the produce of other men's labour as he has occasion for.
There are some sorts of industry... which can be carried on no where but in a great town.... In the lone houses and very small villages which are scattered about in so desert a country as the Highlands of Scotland, every farmer must be butcher, baker and brewer for his own family. In such situations we can scarce expect to find even a smith, a carpenter, or a mason, within less than twenty miles of another of the same trade. The scattered families that live at eight or ten miles distance from the nearest of them, must learn to perform themselves a great number of little pieces of work, for which, in more populous countries, they would call in the assistanc.... A country carpenter... is not only a carpenter, but a joiner, a cabinet maker, and even a carver in wood, as well as a wheelwright, a ploughwright, a cart and waggon maker.... It is impossible there should be such a trade as even that of a nailer in the remote and inland parts of the Highlands of Scotland....
As by means of water-carriage a more extensive market is opened to every sort of industry than what land-carriage alone can afford it, so it is upon the sea-coast, and along the banks of navigable rivers, that industry of every kind naturally begins to subdivide and improve itself...
When we consider the condition of the great, in those delusive colours in which the imagination is apt to paint it. it seems to be almost the abstract idea of a perfect and happy state. It is the very state which, in all our waking dreams and idle reveries, we had sketched out to ourselves as the final object of all our desires. We feel, therefore, a peculiar sympathy with the satisfaction of those who are in it. We favour all their inclinations, and forward all their wishes. What pity, we think, that any thing should spoil and corrupt so agreeable a situation! We could even wish them immortal; and it seems hard to us, that death should at last put an end to such perfect enjoyment. It is cruel, we think, in Nature to compel them from their exalted stations to that humble, but hospitable home, which she has provided for all her children.
"Great King, live for ever!" is the compliment, which, after the manner of eastern adulation, we should readily make them, if experience did not teach us its absurdity. Every calamity that befals them, every injury that is done them, excites in the breast of the spectator ten times more compassion and resentment than he would have felt, had the same things happened to other men…. To disturb, or to put an end to such perfect enjoyment, seems to be the most atrocious of all injuries. The traitor who conspires against the life of his monarch, is thought a greater monster than any other murderer. All the innocent blood that was shed in the civil wars, provoked less indignation than the death of Charles I.
A stranger to human nature, who saw the indifference of men about the misery of their inferiors, and the regret and indignation which they feel for the misfortunes and sufferings of those above them, would be apt to imagine, that pain must be more agonizing, and the convulsions of death more terrible to persons of higher rank, than to those of meaner stations.
Upon this disposition of mankind, to go along with all the passions of the rich and the powerful, is founded the distinction of ranks, and the order of society. Our obsequiousness to our superiors more frequently arises from our admiration for the advantages of their situation, than from any private expectations of benefit from their good-will…. Neither is our deference to their inclinations founded chiefly, or altogether, upon a regard to the utility of such submission, and to the order of society, which is best supported by it. Even when the order of society seems to require that we should oppose them, we can hardly bring ourselves to do it.
That kings are the servants of the people, to be obeyed, resisted, deposed, or punished, as the public conveniency may require, is the doctrine of reason and philosophy; but it is not the doctrine of Nature. Nature would teach us to submit to them for their own sake, to tremble and bow down before their exalted station, to regard their smile as a reward sufficient to compensate any services, and to dread their displeasure, though no other evil were to follow from it, as the severest of all mortifications. To treat them in any respect as men, to reason and dispute with them upon ordinary occasions, requires such resolution, that there are few men whose magnanimity can support them in it, unless they are likewise assisted by familiarity and acquaintance…
Observe the accommodation of the most common artificer or day-labourer in a civilized and thriving country, and you will perceive that the number of people of whose industry a part, though but a small part, has been employed in procuring him this accommodation, exceeds all computation. The woollen coat, for example, which covers the day-labourer, as coarse and rough as it may appear, is the produce of the joint labour of a great multitude of workmen. The shepherd, the sorter of the wool, the wool-comber or carder, the dyer, the scribbler, the spinner, the weaver, the fuller, the dresser, with many others, must all join their different arts in order to complete even this homely production.
How many merchants and carriers, besides, must have been employed in transporting the materials from some of those workmen to others who often live in a very distant part of the country! how much commerce and navigation in particular, how many ship-builders, sailors, sail-makers, rope-makers, must have been employed in order to bring together the different drugs made use of by the dyer, which often come from the remotest corners of the world! What a variety of labour too is necessary in order to produce the tools of the meanest of those workmen! To say nothing of such complicated machines as the ship of the sailor, the mill of the fuller, or even the loom of the weaver, let us consider only what a variety of labour is requisite in order to form that very simple machine, the shears with which the shepherd clips the wool. The miner, the builder of the furnace for smelting the ore, the feller of the timber, the burner of the charcoal to be made use of in the smelting-house, the brick-maker, the brick-layer, the workmen who attend the furnace, the mill-wright, the forger, the smith, must all of them join their different arts in order to produce them….
If we examine, I say, all these things, and consider what a variety of labour is employed about each of them, we shall be sensible that without the assistance and co-operation of many thousands, the very meanest person in a civilized country could not be provided, even according to what we very falsely imagine, the easy and simple manner in which he is commonly accommodated…
The difference between the most dissimilar characters, between a philosopher and a common street porter, for example, seems to arise not so much from nature as from habit, custom, and education. When they came into the world, and for the first six or eight years of their existence, they were perhaps very much alike, and neither their parents nor playfellows could perceive any remarkable difference. About that age, or soon after, they come to be employed in very different occupations. The difference of talents comes then to be taken notice of, and widens by degrees, till at last the vanity of the philosopher is willing to acknowledge scarce any resemblance.
But without the disposition to truck, barter, and exchange, every man must have procured to himself every necessary and conveniency of life which he wanted. All must have had the same duties to perform, and the same work to do, and there could have been no such difference of employment as could alone give occasion to any great difference of talents. As it is this disposition which forms that difference of talents, so remarkable among men of different professions, so it is this same disposition which renders that difference useful.
Many tribes of animals acknowledged to be all of the same species derive from nature a much more remarkable distinction of genius, than what, antecedent to custom and education, appears to take place among men. By nature a philosopher is not in genius and disposition half so different from a street porter, as a mastiff is from a greyhound, or a greyhound from a spaniel, or this last from a shepherd's dog. Those different tribes of animals, however, though all of the same species, are of scarce any use to one another.
The strength of the mastiff is not, in the least, supported either by the swiftness of the greyhound, or by the sagacity of the spaniel, or by the docility of the shepherd's dog. The effects of those different geniuses and talents, for want of the power or disposition to barter and exchange, cannot be brought into a common stock, and do not in the least contribute to the better accommodation ind conveniency of the species. Each animal is still obliged to support and defend itself, separately and independently, and derives no sort of advantage from that variety of talents with which nature has distinguished its fellows. Among men, on the contrary, the most dissimilar geniuses are of use to one another; the different produces of their respective talents, by the general disposition to truck, barter, and exchange, being brought, as it were, into a common stock, where every man may purchase whatever part of the produce of other men's talents he has occasion for.
Kieran Healy:
Kieran Healy writes:
Teaching Adam Smith: Sources of Sociological Theory.... After a crash course on the state of Europe and America prior to 1780 or so (100% guaranteed to make historians come out in at least hives, and possibly trigger fits), we've started reading Adam Smith. It's always a pleasure to teach Smith as a social theorist. For one thing, he's a clear enough writer (certainly compared to, e.g., Weber) and more importantly his central insight about the possibility of decentralized co-ordination always catches students by surprise. Even though students are all exposed one way or another to the rhetoric of free enterprise, free trade, market capitalism and what have you, in my experience even talented undergraduates have to work a bit to really see the power and elegance of Smith' vision of a complex, co-ordinated division of labor. I do a few classroom exercises (based on ideas from Mitch Resnick and Tom Schelling, amongst others) to bring out the problem of co-ordination, the many ways it can fail, and the distinctive qualities of markets as a solution. (Though, as Schelling notes, not all cases of distributed co-ordination are markets, just as not all ellipses are circles.) Although Smith is often presented as the champion of the individual, and opposed to thinkers who emphasize social structure or the state, it's immediately clear when you read him that Smith was as much a "discoverer of society"--that is, of the idea that the social world is a human product consisting of myriad interlocking relationships dependent on specific institutions and human capacities--as any of the other theorists typically recognized as founders of modern sociology. His treatment of the problem of the division of labor also provides a platform to understand the others. Marx is much easier to understand once you know a bit about Smith, of course, but so are Durkheim's ideas about social solidarity and the nonrational foundations of contractual exchange. And much of Weber's work on the origins of capitalism was conceived explicitly with Smith in mind...
From DAVID HUME
Lisle Street, Leicester Fields
April 12, 1759
Dear Smith,
I give you thanks for the agreeable present of your Theory [of Moral Sentiments]. Wedderburn and I made presents of our copie to such of our acquaintance as we thought good judges, and proper to spread the reputation of the book. I sent one to the Duke of Argyle, to Lord Lyttleton, Horace Walpole, Soames Jennyns, and Burke, an Irish gentleman, who wrote lately a very pretty treatise on the sublime. Millar desired my permission to send one in your name to Dr. Warburton.
I have delayed writing to you until I could tell you something of the success of the book, and could prognosticate with some probability whether it should be finally damned to oblivion, or should be registered in the temple of immortality. Tough it has been published only a few weeks, I think there appear already such strong symptoms, that I can almost venture to fortell its fate. It is, in short, this--
But I have been interrupted in my letter by a foolish impertinent visit of one who has lately come from Scotland. He tells me, that the Univerity of Glasgow intend to declare Rouet's office vacant upon his going abraod with Lord Hope. I question not but you will have our friend, Ferguson, in your eye, in case another project for procuring him a place in the University of Edinburgh should fail. Ferguson has very much polished and improved his treatise on refinement, and with some amendments it will make an admirable book, and discovers an elegant and singular genius.
The Epigoniad, I hope, will do; but it is sometimes uphill work. As I doubt not but you consult the reviews sometimes at present, you will see in the Critical Review a letter upon that poem; and I desire you to employ your conjectures n finding out the author. Let me see a sample of your skill in knowing hands by your guessing at the person.
I am afraid of Lord Kames's Law Tracts. A man might as well think of making a fine sauce by a mixture of wormwood and aloes as an agreeable composition by joining metaphysics and Scotch law. However the book, I believe, has merit; though few people will take the pains of dividng into it.
A plague of interruptions! I ordered myself to be denied; and yet here is one that has broken in upon me again. He is a man of letters, and we have had a good deal of literary conversation. You told me that you were curious of literary anecdotes, and therefoare I shall inform you of a few that have come to my knowledge.
I believe I have mentioned to you already Helvetius's book De l'Esprit. It is worth your reading not for it philosophy, whic I do not highly value, but for its agreeable compostion. I had a letter from him a few days ago, wherein he tells me that my name was much oftener in the manuscript, but that the censor of books at Paris obliged him to strike it out.
Voltaire has lately published a small work called Candide, ou l'Optimisme. It is full of sprightliness and impiety, and is indeed a satire upon Providence, under the pretext of criticizing the Leibnizian system. I shall give you a detail of it--
"But what is all this to my book?" say you--
My Dear Mr. Smith, have patience: compose yourself to tranquillity: show yourself a philosopher in practice as well as profession: think on the emptiness and rashness and futility of the common judgments of men: how little they are regulated by reason in any subject, much more in philosophical subjects, which so far exceed the comprehension of the vulgar.
Non si quid improba Roma, elevet, accedas examenque improbum in illa, perpendas trutina, nec te quaesiveris extra.[1] A wise man's kingdom is his own breast: or, if he ever looks farther, it will only be to the judgment of a select few, who are free from prejudices, and capable of examining his work. Nothing indeed can be a stronger presumption of falsehood than the approbation of the multitude; and Phocion, you know, always suspected himself of some blunder when he was attended with the applauses of the populace.
Supposing, therefore, that you have duly prepared yourself for the worst by all these reflections; I proceed to tell you the melancholy news, that your book has been very unfortunate: for the public seem disposed to applaud it extremely.
It was looked for by the foolish people with some impatience; and the mob of literati are beginning already to be very loud in its praises. Three bishops called yesterday at Millar's shop in order to buy copies, and to ask questions about the author. The Bishop of Peterborough said he had passed the evening in a company where he heard it extolled above all books in the world.
You may conclude what opinion true philosopher will entertain of it, when these retainers to superstition praise it so highly.
The Duke of Argyle is more decisive than he uses to be in its favour: I suppose he either considers it as an exotic, or thinks the author will be serviceable to him in the Glasgow elections. Lord Lyttleton says that Robertson and Smith and Bower are the glories of English literature. Oswald protets he does not know whether he has reaped more instruction or entertainment from it: but you may easily judge what reliance can be put on his judgment, who has been engaged all his life in public business and who never sees any faults in his freinds.
Millar exults and brags that two-thirds of the edition is already sold, and that he is now sure of success. You see what a son of the earth that is, to value books only by the profit they bring him. In that view, I believe it may prove a very good book.
Charles Townsend, who passes for the cleverest fellow in England, is so taken with the performance that he said to Oswald he would put the Duke of Buccleugh under the author's care, and would endeavor to make it worth his while to accept of that charge. As soon as I heard this, I called on him twice with a view of talking with him about the matter, and of convincing him of the propriety of sending that young nobleman to Glasgow: for I could not hope tht he could offer you any terms which would tempt you to renounce your professorship: but I missed him. Mr. Townsend passes for being a little uncertain in his resolutions: so perhaps you need not build much on this sally.
In recompense for so many mortifying things, which nothing but truth could have extorted from me, and which I could easily have multiplied to a greater number; I doubt not but you are so good a Christian as to return good for evil and to flatter my vanity; by telling me that all the godly in Scotland abuse me for my account of John Knox and the Reformation, etc.
I suppose you are glad to see my paper end, and that I am obliged to conclude with
David Hume
[1]"If foolish Rome underweighs anything, neither go up and correct the false tongue in the balance, nor seek anyone besides yourself." Persius, Satirarum Liber
Adam Smith on the Death of David Hume | 2020-12-02 22:03:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.371883749961853, "perplexity": 3128.94409106779}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141716970.77/warc/CC-MAIN-20201202205758-20201202235758-00028.warc.gz"} |
https://socratic.org/questions/59a45de8b72cff2bd7e1c31f | # What results when a species is OXIDIZED?
Aug 28, 2017
Well, cations are oxidation products.......
#### Explanation:
And thus for an element, we would write......
$M \left(s\right) + \Delta \rightarrow {M}^{+} + {e}^{-}$
And anions, which typically result from non-metals, are reduction products....
$\frac{1}{2} {X}_{2} + {e}^{-} \rightarrow {X}^{-}$
Salts result from the stoichiometric combination of anions and cations......i.e.
$M \left(s\right) + \frac{1}{2} {X}_{2} \rightarrow M X$
Aug 28, 2017
Cations (positive charge), anions (negative charge)
#### Explanation:
The difference between a cation and an anion is the net electrical charge of the ion. Ions are atoms or molecules which have gained or lost, one or more valence electrons (electrons on their outer shell) giving the ion a net negative or positive charge.
Cations lose one or more valence electrons. Therefore, they have a net positive charge .
Anions gain electrons, means they gain a net negative charge.
Example of cations:
• $N {a}^{+} 1$
• $C {a}^{+} 2$
• $A {l}^{+} 3$
Example of anions:
• $C {l}^{-} 1$
• ${O}^{-} 2$
• ${N}^{-} 3$ | 2019-08-25 04:32:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 9, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5627819895744324, "perplexity": 5564.86131745455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027323067.50/warc/CC-MAIN-20190825042326-20190825064326-00278.warc.gz"} |
http://math.stackexchange.com/tags/exterior-algebra/hot?filter=month | # Tag Info
3
They both may be right. If wedge product differs (Similar problem) and we set the definition of exterior derivative as $$d\omega=\sum_{I}d\omega_I\wedge dx^I,$$ then $d$ may differs as well (cause $\wedge$ appears). If we take axiomatic approach to exterior derivative, then one of axioms says ...
2
I'll propose to you another (slightly different, but isomorphic) definition of the adjugate (classical adjoint). Im borrowing from section 8 of http://people.reed.edu/~jerry/332/27exterior.pdf . Let $f:V\rightarrow V$ (with $n$ the dimension of $V$). We have a canonical isomorphism $\phi:V=\wedge^1 V\rightarrow\mathrm{Hom}(\wedge^{n-1} V,\wedge^n V)$ ...
2
In general, consider a one form $\beta = \sum_i \beta_i dx^i$ on $U\subset\mathbb R^m$. If $$\| \beta\|^2 = \beta_1^2 + \cdots + \beta_m^2 \neq 0$$ on $U$, then the $(n-1)$-form $$\alpha = \frac{1}{\|\beta\|^2} \sum_i (-1)^{i-1} \beta_i dx^1 \wedge \cdots \wedge \hat{dx^i} \wedge \cdots \wedge dx^m$$ on $U$ safisfies $$\beta \wedge \alpha = dx^1 ... 1 We do, but for historical reasons they are called (linear) isometries. 1 Your derivation is right and equation [1] cannot be correct without further assumptions. In fact the exterior derivative of a 1-form can be shown to be $$\mathrm{d}\omega(X,Y)= X(\omega(Y)) - Y(\omega(X))-\omega([X,Y])$$ which is equivalent to what you wrote. 1 A very nice gentle (albeit abstract) introduction to forms and connections can be found in R.W.R Darling's Differential Forms and Connections (1), a more physics based text book would have to be Nakahara's Geometry, Topology & Physics (2) - these helped me greatly when I had a similar need to you. Good Luck! (1) ... 1 you can read S.S.Chern's"lectures on differential geometry". He did very well on that book 1 First we parametrice the surface as follows,$$\left\{\begin{matrix}x&=&x\\y&=&\frac{1}{x}\\z&=&z\end{matrix}\right.$$so we get that,$$\left\{\begin{matrix}dx&=&dx\\dy&=&-\frac{1}{x^2}dx\\dz&=&dz\end{matrix}\right.$$so in xy=1 it is,$$x dy\wedge dz+ydz\wedge dx=-\frac{1}{x}dx\wedge dz-\frac{1}{x}dz\wedge ...
Only top voted, non community-wiki answers of a minimum length are eligible | 2015-10-04 10:07:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9889392256736755, "perplexity": 798.9536818042242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736673081.9/warc/CC-MAIN-20151001215753-00231-ip-10-137-6-227.ec2.internal.warc.gz"} |
https://covalent.readthedocs.io/en/latest/how_to/execution/creating_custom_executors.html | # Creating a Custom Executor#
Executors define how abackend resource handles computations. They specify everything about the resource: the hardware and configuration, the computation strategy, logic, and even goals.
Executors are plugins. Any executor plugins found by the dispatcher are imported as classes in the covalent.executor name space.
Covalent already contains a number of versatile executors. (See Choosing an Executor For a Task for information about choosing an existing executor.)
If an existing executor does not fit your needs, you can write your own, using your choice of environments, hardware, and cloud resources to execute Covalent electrons however you like. A template to write an executor can be found here.
## Prerequisites#
Decide the purpose of the executor. You should have a good handle on the following questions: - What is the purpose of the executor? - What types of tasks is it designed to run? - What capabilities does the executor require that aren’t already in an existing executor? - What hardware or cloud resource will it run on? - Will it scale? How?
## Procedure#
The following example creates a TimingExecutor that computes the CPU time used by the function to help determine its efficiency. It then writes this result to a file along with its dispatch_id and node_id.
1. Decide whether to make your executor asynchronous.
Covalent is written to be capable of running asynchronous (async) executors. In general, Covalent suggests that you write your custom executors to be async-capable as well, especially if it depends on network communication or has I/O-bound logic inside the run() function.
Some examples of async executors are: - The default DaskExecutor - SSHPlugin - SlurmPlugin.
To make your executor async-capable, do the following: 1. Subclass AsyncBaseExecutor instead of BaseExecutor 2. Define your run() function with:
async def run(...)
def run(...)
1. Import the Covalent BaseExecutor (or AsyncBaseExecutor) and Python typing libraries.
[1]:
# timing_plugin.py
from covalent.executor import BaseExecutor
from typing import Callable, Dict, List
import time
from pathlib import Path
1. Write the plugin class. The class must contain:
• The class name of the executor, shared in executor_plugin_name to make it importable by covalent.executors.
• A run() function that handles the task to be executed. The run() function must take these parameters:
• A Callable object to contain the task;
• A list of arguments (args) and a dictionary of keyword arguments (kwargs) to pass to the Callable.
• A dictionary, task_metadata, to store the dispatch_id and node_id (and possibly other metadata in the future).
• _EXECUTOR_PLUGIN_DEFAULTS, if there are any defaults for the executor.
With all the above in mind, the example TimingExecutor class looks like this:
[2]:
executor_plugin_name = "TimingExecutor" # Required by covalent.executors
class TimingExecutor(BaseExecutor):
def __init__(self, timing_filepath, **kwargs):
self.timing_filepath = str(Path(timing_filepath).resolve())
super().__init__(**kwargs)
start = time.process_time()
result = function(*args, **kwargs)
time_taken = time.process_time() - start
with open(f"{self.timing_filepath}", "a") as f:
close(f)
return result
At this point the executor is ready for use (or at least testing).
1. Construct electrons and assign them to the new executor, then execute them in a lattice:
[3]:
import covalent as ct
timing_log = "./cpu_timing.log"
timing_executor = TimingExecutor(timing_log)
# Calculate e based on a series
@ct.electron
def e_ser(x):
e_est = 1
fact = 1
for i in range(1, x):
fact *= i
e_est += 1/fact
return e_est
@ct.lattice
def workflow(x):
return e_ser(x)
1. Run the lattice:
[4]:
dispatch_id = ct.dispatch(workflow)(10)
result = ct.get_result(dispatch_id, wait=True)
print(result)
for line in open(timing_log, 'r'):
print(line)
Lattice Result
==============
status: COMPLETED
result: 2.7182815255731922
input args: ['10']
input kwargs: {}
error: None
start_time: 2023-01-31 23:07:16.920729
end_time: 2023-01-31 23:07:17.030380
results_dir: /Users/mini-me/agnostiq/covalent/doc/source/how_to/execution/results
dispatch_id: a2119573-5465-4390-869b-5709991da0e1
Node Outputs
------------
e_ser(0): 2.7182815255731922
:parameter:10(1): 10
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Input In [4], in <cell line: 5>()
2 result = ct.get_result(dispatch_id, wait=True)
3 print(result)
----> 5 for line in open(timing_log, 'r'):
6 print(line)
FileNotFoundError: [Errno 2] No such file or directory: './cpu_timing.log' | 2023-03-20 23:20:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2044796198606491, "perplexity": 11931.40576168858}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943562.70/warc/CC-MAIN-20230320211022-20230321001022-00690.warc.gz"} |
https://aakashdigitalsrv1.meritnation.com/ask-answer/question/pls-say-how-to-do/light-reflection-and-refraction/16551009 | # Pls say how to do
Dear student, It is a convex mirror. If the magnification value is greater than one, mirror is concave. If the magnification value is between one and zero, mirror is convex. If the magnification value is equal to one, mirror is plane. Thus, the magnification produced by a mirror is , then the type of mirror is convex. The magnification is positive so image is virtual.object is between infinity and the pole of the mirror and position of the image is between p and f ,behind the mirror. Regards
• 0
What are you looking for? | 2021-04-23 18:02:44 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8215172290802002, "perplexity": 613.0745186194276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039596883.98/warc/CC-MAIN-20210423161713-20210423191713-00608.warc.gz"} |
https://codedump.io/share/opqmbUJmHZ5e/1/intersection-between-two-arrays | Amir Ebrahimi - 11 months ago 65
Java Question
Intersection between two arrays
I have a problem finding the difference between two arrays in java, my case is like imagine we have two arrays. array
A = {1 , 3 , 5 , 7 ,9 }
and array
B = {1 ,3 , 4 ,5 , 6 ,7 , 10}
. I want to have two results first result is an array which finds the missing objects from array "A" , and the second result is an array which finds the added objects in array "B". the first result should be Like
A'={9}
and the second result is like
B'={4,6,10}
.
I think following code helps you
/* package whatever; // don't place package name! */
import java.util.*;
import java.lang.*;
import java.io.*;
/* Name of the class has to be "Main" only if the class is public. */
class Ideone
{
public static void main (String[] args) throws java.lang.Exception
{
Map<Integer,Integer> map1=new HashMap<Integer,Integer>();
int A[]={1 , 3 , 5 , 7 ,9 };
int B[]={1 ,3 , 4 ,5 , 6 ,7 , 10};
int i;
for(i=0;i<B.length;i++)
map1.put(B[i],1);
for(i=0;i<A.length;i++)
{
Integer v1=map1.get(A[i]);
if(v1==null)
{
System.out.println("Missing number="+A[i]);
}
}
for(i=0;i<A.length;i++)
{
Integer v1=map1.get(A[i]);
if(v1!=null)
{int val=v1;
map1.put(A[i],val+1);
// System.out.println("Missing number="+A[i]);
}
}
for(i=0;i<B.length;i++)
{
Integer v1=map1.get(B[i]);
if(v1!=null && v1<2)
{ | 2017-08-21 01:08:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5499297380447388, "perplexity": 4362.5565273332295}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107065.72/warc/CC-MAIN-20170821003037-20170821023037-00680.warc.gz"} |
https://codegolf.stackexchange.com/questions/55422/hello-world/66722 | # “Hello, World!”
So... uh... this is a bit embarrassing. But we don't have a plain "Hello, World!" challenge yet (despite having 35 variants tagged with , and counting). While this is not the most interesting code golf in the common languages, finding the shortest solution in certain esolangs can be a serious challenge. For instance, to my knowledge it is not known whether the shortest possible Brainfuck solution has been found yet.
Furthermore, while all of Wikipedia (the Wikipedia entry has been deleted but there is a copy at archive.org ), esolangs and Rosetta Code have lists of "Hello, World!" programs, none of these are interested in having the shortest for each language (there is also this GitHub repository). If we want to be a significant site in the code golf community, I think we should try and create the ultimate catalogue of shortest "Hello, World!" programs (similar to how our basic quine challenge contains some of the shortest known quines in various languages). So let's do this!
## The Rules
• Each submission must be a full program.
• The program must take no input, and print Hello, World! to STDOUT (this exact byte stream, including capitalization and punctuation) plus an optional trailing newline, and nothing else.
• The program must not write anything to STDERR.
• If anyone wants to abuse this by creating a language where the empty program prints Hello, World!, then congrats, they just paved the way for a very boring answer.
Note that there must be an interpreter so the submission can be tested. It is allowed (and even encouraged) to write this interpreter yourself for a previously unimplemented language.
• Submissions are scored in bytes, in an appropriate (pre-existing) encoding, usually (but not necessarily) UTF-8. Some languages, like Folders, are a bit tricky to score - if in doubt, please ask on Meta.
• This is not about finding the language with the shortest "Hello, World!" program. This is about finding the shortest "Hello, World!" program in every language. Therefore, I will not mark any answer as "accepted".
• If your language of choice is a trivial variant of another (potentially more popular) language which already has an answer (think BASIC or SQL dialects, Unix shells or trivial Brainfuck-derivatives like Alphuck), consider adding a note to the existing answer that the same or a very similar solution is also the shortest in the other language.
As a side note, please don't downvote boring (but valid) answers in languages where there is not much to golf - these are still useful to this question as it tries to compile a catalogue as complete as possible. However, do primarily upvote answers in languages where the authors actually had to put effort into golfing the code.
For inspiration, check the Hello World Collection.
## The Catalogue
The Stack Snippet at the bottom of this post generates the catalogue from the answers a) as a list of shortest solution per language and b) as an overall leaderboard.
## Language Name, N bytes
where N is the size of your submission. If you improve your score, you can keep old scores in the headline, by striking them through. For instance:
## Ruby, <s>104</s> <s>101</s> 96 bytes
If there you want to include multiple numbers in your header (e.g. because your score is the sum of two files or you want to list interpreter flag penalties separately), make sure that the actual score is the last number in the header:
## Perl, 43 + 2 (-p flag) = 45 bytes
You can also make the language name a link which will then show up in the snippet:
## [><>](https://esolangs.org/wiki/Fish), 121 bytes
/* Configuration */
var QUESTION_ID = 55422; // Obtain this from the url
// It will be like https://XYZ.stackexchange.com/questions/QUESTION_ID/... on any question page
var COMMENT_FILTER = "!)Q2B_A2kjfAiU78X(md6BoYk";
var OVERRIDE_USER = 8478; // This should be the user ID of the challenge author.
/* App */
return "https://api.stackexchange.com/2.2/questions/" + QUESTION_ID + "/answers?page=" + index + "&pagesize=100&order=desc&sort=creation&site=codegolf&filter=" + ANSWER_FILTER;
}
}
jQuery.ajax({
method: "get",
dataType: "jsonp",
crossDomain: true,
success: function (data) {
data.items.forEach(function(a) {
});
comment_page = 1;
}
});
}
jQuery.ajax({
method: "get",
dataType: "jsonp",
crossDomain: true,
success: function (data) {
data.items.forEach(function(c) {
if (c.owner.user_id === OVERRIDE_USER)
});
else process();
}
});
}
var SCORE_REG = /<h\d>\s*([^\n,<]*(?:<(?:[^\n>]*>[^\n<]*<\/[^\n>]*>)[^\n,<]*)*),.*?(\d+)(?=[^\n\d<>]*(?:<(?:s>[^\n<>]*<\/s>|[^\n<>]+>)[^\n\d<>]*)*<\/h\d>)/;
function getAuthorName(a) {
return a.owner.display_name;
}
function process() {
var valid = [];
var body = a.body;
if(OVERRIDE_REG.test(c.body))
body = '<h1>' + c.body.replace(OVERRIDE_REG, '') + '</h1>';
});
var match = body.match(SCORE_REG);
if (match)
valid.push({
user: getAuthorName(a),
size: +match[2],
language: match[1],
});
else console.log(body);
});
valid.sort(function (a, b) {
var aB = a.size,
bB = b.size;
return aB - bB
});
var languages = {};
var place = 1;
var lastSize = null;
var lastPlace = 1;
valid.forEach(function (a) {
if (a.size != lastSize)
lastPlace = place;
lastSize = a.size;
++place;
.replace("{{NAME}}", a.user)
.replace("{{LANGUAGE}}", a.language)
.replace("{{SIZE}}", a.size)
var lang = a.language;
lang = jQuery('<a>'+lang+'</a>').text();
languages[lang] = languages[lang] || {lang: a.language, lang_raw: lang, user: a.user, size: a.size, link: a.link};
});
var langs = [];
for (var lang in languages)
if (languages.hasOwnProperty(lang))
langs.push(languages[lang]);
langs.sort(function (a, b) {
if (a.lang_raw.toLowerCase() > b.lang_raw.toLowerCase()) return 1;
if (a.lang_raw.toLowerCase() < b.lang_raw.toLowerCase()) return -1;
return 0;
});
for (var i = 0; i < langs.length; ++i)
{
var language = jQuery("#language-template").html();
var lang = langs[i];
language = language.replace("{{LANGUAGE}}", lang.lang)
.replace("{{NAME}}", lang.user)
.replace("{{SIZE}}", lang.size)
language = jQuery(language);
jQuery("#languages").append(language);
}
}
body {
text-align: left !important;
display: block !important;
}
width: 290px;
float: left;
}
#language-list {
width: 500px;
float: left;
}
font-weight: bold;
}
table td {
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="language-list">
<h2>Shortest Solution by Language</h2>
<table class="language-list">
<tr><td>Language</td><td>User</td><td>Score</td></tr>
<tbody id="languages">
</tbody>
</table>
</div>
<tr><td></td><td>Author</td><td>Language</td><td>Size</td></tr>
</tbody>
</table>
</div>
<table style="display: none">
</tbody>
</table>
<table style="display: none">
<tbody id="language-template">
</tbody>
</table>
• @isaacg No it doesn't. I think there would be some interesting languages where it's not obvious whether primality testing is possible. – Martin Ender Aug 28 '15 at 13:56
• If the same program, such as "Hello, World!", is the shortest in many different and unrelated languages, should it be posted separately? – aditsu quit because SE is EVIL Aug 28 '15 at 15:33
• @mbomb007 Well it's hidden by default because the three code blocks take up a lot of space. I could minify them so that they are a single line each, but I'd rather keep the code maintainable in case bugs come up. – Martin Ender Aug 28 '15 at 19:34
• @ETHproductions "Unlike our usual rules, feel free to use a language (or language version) even if it's newer than this challenge." Publishing the language and an implementation before posting it would definitely be helpful though. – Martin Ender Aug 29 '15 at 23:01
• @MartinEnder ... Almost. If two BF solutions have the same size, the one with smaller lexicographical order will take smaller number of bytes in Unary. Of course the smallest Unary solution translated to BF is guaranteed to be smallest. – user202729 May 20 '18 at 10:20
# Mathematica, 21 bytes
Print@"Hello, World!"
# Mathemaica 10.3, 20 bytes
Echo@"Hello, World!"
• Feel free to count the 10.3 version as the main solution and include the older one for reference. – Martin Ender Nov 10 '15 at 13:38
# Kotlin, 49 bytes
fun main(a:Array<String>){print("Hello, World!")}
This is a programming language created by JetBrains to overcome the limitations of Java (like Scala), be fast (like Java itself) and yet retain full interoperability with Java. This means that Kotlin can easily call Java code ... and vice versa.
# Arcyóu, 18 15 bytes
"Hello, World!"
Arcyóu is a LISP-like golfing language. Since this is the only thing in the program, we don't need a p function or even parentheses. Just quotes.
• No need for the disclaimer here. Newer languages are allowed and even encouraged this time. – Martin Ender Nov 22 '15 at 15:11
## AutoHotkey, 61 bytes
DllCall("AllocConsole")
FileAppend % "Hello, World!", CONOUT$ AHK was written to automate Windows tasks and it seems as if the authors considered StdOut/In as an after thought. This is the shortest method I could come up with. When executed the console will flash with Hello World! and exit immediately, it would require additional code (either adding a Hotkey or #persistent or sleep command) to keep the console active, however I feel this does the job and meets the requirements. I could also make the program with DLLCall("AttachConsole, Int, -1") so that it can be executed from the command line and write to the same console it was executed from, however this code golf. • Greetings from the future! I can't tell if this worked at the time but it works now and its 27 bytes: FileAppend,Hello, World!,* – Engineer Toast Apr 5 '17 at 15:53 ## Hodor, 66 bytes hodor.hod("Hhodor? Hodor!? Hodor!? o, Hooodorrhodor orHodor!? d!") This only works in the previous version of Hodor (the one before the update from 1 July 2015). The latest version prints HODOR instead, which could be fixed at the cost of 3 bytes: hodor.hod("Hhodor? Hodor!? Hodor!? o, Hooodorrhodor orHodor!? "+"d!") ## TRANSCRIPT, 36 bytes He is here. >HE, Hello, World! >X HE The second line sets HE, and the third line outputs it. • I was initially going to post this, but for some reason I kept getting errors whenever I tried to use single-char NPC names... – Sp3000 Sep 1 '15 at 13:02 • @Sp3000 You're right, I just looked at the interpreter and found that it only matches two-letter words or longer. – LegionMammal978 Sep 1 '15 at 21:15 • @LegionMammal978 You should use He. – mbomb007 Sep 1 '15 at 21:55 ## Seriously 0.1, 1 byte H Try it online Yes, I made my language have a one-byte Hello World program. A less boring answer for 16 bytes: "Hello, World!". "Hello, World!" pushes that string onto the stack, and . pops the top value on the stack and prints it. ## Par, 14 bytes Hello, World! I don't know Par, but it looks golfy. ## Templates Considered Harmful, 50 bytes St<72,'e','l','l','o',44,32,87,'o',114,'l','d',33> Templates Considered Harmful is a language defined by C++ templates. The St template creates a string of characters, which is then implicitly printed to STDOUT. # Factor, 17 21 bytes Push a string, then print it without quotes. "Hello, World!" print # Jolf, 14 bytes "Hello, World! Records a string, implicit output. Try it here. ## Jolf, 9 bytes, cheating (unprintable chars replaced with ?): e.$nsp#0?
e evaluate as Jolf code
. from the object
$# nsp, get 0 property 0 ? (08, backspace character; restrain implicit output) nsp is an object on the interpreter page that contains example programs. The zeroth one is the Hello, World! program. Try it here. • This is a catalogue, is it not? Therefore, this answer is completely valid. – SuperJedi224 Dec 15 '15 at 23:13 • @SuperJedi224 Indeed, yes. Fixing. – Conor O'Brien Dec 15 '15 at 23:14 ## Eodermdrome, 18 bytes al(Hello, World!)a Replaces the a - l edge on the initial graph with the a node, and outputs Hello, World! in the process. • Hi, your program works, but not for the reason you think - it actually matches the o-g edge, because the l in your program must represent a node with only one outgoing edge. – Ørjan Johansen Apr 18 '17 at 1:41 ## BASIC-80, 16 bytes BASIC-80 aka MBASIC does not need a trailing " to end string constants at the end of the line, so... 1?"Hello, World! ...is all you need. CP/M nostalgia... A>mbasic BASIC-80 Rev. 5.21 [CP/M Version] Copyright 1977-1981 (C) by Microsoft Created: 28-Jul-81 32824 Bytes free Ok 1?"Hello, World! run Hello, World! Ok system A>_ ## ROOP, 17 bytes "Hello, World!" h At the beginning an object is created with the string that is in quotation marks, then the h operator prints all existing objects and ends the program. ## X.so, 48 42 bytes $A($Main("X"Include"Hello, World!"X.Show)) Requires XCore to run, so it can use the X.Show command. # Visual Basic.NET, 63 bytes Module A Sub Main System.Console.Write("Hello, World!") End Sub End Module ## JavaScript function golf, 19 bytes p("Hello, World!"); I made this[1] for you! JavaScript function golf is included into the language page HTML, so use it right from the console! If you want it as an alert, here you are (21 byte): p2a("Hello, World!"); That said, I finally got time for improvement of the framework. [1]: I mean, the language golfing framework. • Welcome to PPCG. This is a good start for a Golfing language, however there a lot of features of JavaScript, like Prototypes, that you might want to take advantage of (e.g. 42.s() could turn a number into a string instead of i2s(42).) If you want help or tips, feel free to visit chat.stackexchange.com/rooms/27364/… for help, tips and showcasing your language. – wizzwizz4 Jan 2 '16 at 12:39 • @wizzwizz4 thanks, but my real introduction to PPCG was a question. :P Also, I have still to learn about prototypes, and I'm not active enough to chat in the PPCG rooms. – user48538 Jan 2 '16 at 12:43 • @wizzwizz4 42.s() is a syntax error in some js engines, you'd have to do (42).s() which doesn't actually save anything – SuperJedi224 Jan 2 '16 at 14:49 • @SuperJedi224 That was an example. :-P – wizzwizz4 Jan 2 '16 at 15:19 • @zyabin101 You don't need much reputation to chat. – wizzwizz4 Jan 2 '16 at 15:19 # Boo, 22 bytes The guys who came up with “public static void main” were probably kidding, the problem is that most people didn't get it was a joke. The infamous HelloWorld in all its boo glory: print("Hello, World!") “public static void main”, that was a good one! # Brachylog, 3 bytes @Hw @H is the string "Hello, World!", and w is the write predicate. # Jolf, 7 bytes Try it here! ξrμ\t\x0FΉ\x1B ξ read three characters and interprets them as a base 256 number index in a gigantic word list. 'Nuff said. # Detour, 19 bytes u @"Hello, World!" Try it online! This language was not designed with strings in mind. "How do you fit a string literal into a 2D language represented on a grid of characters?" You don't! Just put a , and then define what the 's stand for on the bottom with @ signs (sigh)! This will push all its code points to the cell, and the u cell will print it as a string I'll try to come up with a shorter way to fit in strings later. At least it's not Java. # Gogh, 14 bytes "Hello, World! This one is pretty self-explanatory. Gogh has self-closing strings, so if there isn't a closing double-quote, it tacks one on the end and you have yourself a string. You can run it from the command line like this: $ ./gogh o '"Hello, World!'
# NTFJ, 118 bytes
NTFJ is an esoteric programming language intended to be a Turing tarpit. It is stack-based, and pushes bits to the stack, which can be later coalesced to an 8-bit number. I believe that this is the optimal, using a loop. (Maybe something can be done by hard-coding @ into the string, which would allow for us to double the l. I haven't checked, but I believe this would come out as more bytes.)
Anyhow, this is the full code:
~~#~~~~#~##~~#~~~##~##~~~###~~#~~##~####~#~#~###~~#~~~~~~~#~##~~~##~####~##~##~~~##~##~~~##~~#~#~#~~#~~~@(*~##~#~~~@^)
~~#~~~~#~##~~#~~~##~##~~~###~~#~~##~####~#~#~###~~#~~~~~~~#~##~~~##~####~##~##~~~##~##~~~##~~
#~#~#~~#~~~@(*~##~#~~~@^)
All the ~s push 0 and the #s push 1. The interesting part is the output loop:
@(*~##~#~~~@^)
@ Coalesce to bit (top 8 items); is 0 on an empty stack
( ) Skip the inside if the top of the stack is not truthy.
* Output as character.
~##~#~~~@ Push 104 to the stack
^._____________________________________________________/
The interpreter is here, but with no permalinks as of yet.
Boring Loop-less version, 130 bytes:
~#~~#~~~@*~##~~#~#@*~##~##~~@*~##~##~~@*~##~####@*~~#~##~~@*~~#~~~~~@*~#~#~###@*~##~####@*~###~~#~@*~##~##~~@*~##~~#~~@*~~#~~~~#@*
Doubling (:) the l character, 122 bytes:
~#~~#~~~@*~##~~#~#@*~##~##~~@:**~##~####@*~~#~##~~@*~~#~~~~~@*~#~#~###@*~##~####@*~###~~#~@*~##~##~~@*~##~~#~~@*~~#~~~~#@*
## Scratch, 2 blocks
Self explanatory really.
• You're missing a comma in "Hello, World!" Also, I believe the byte count for Scratch submissions is usually done with this: scratchblocks.github.io – a spaghetto Mar 24 '16 at 21:13
• @quartata That means this script takes up 43 bytes. – haykam Jul 20 '16 at 4:06
# Verilog, 60 bytes
module m;initial
begin
$write("Hello, World!");end endmodule ## JavaScript (Node.js), 28 bytes console.log("Hello, World!") # Javascript (Nashorn), 22 bytes Nashorn is the JS engine that comes built in to Java. print('Hello, World!') # .kill();, 39 bytes SFTp^B2lA=ZkWj\9@+*+@9\jWkZ=Al2B^pTGT Alright, so I made another monster! This is how this works. First, the code is iterated through, and a resulting string is made. First, let's look at the first character and some related information: char: S opposite char: T average char floored: (@S + @T) / 2 = (83 + 84) / 2 = 83.5 => 83 = S index: 0 result: S Each character in the new string is calculated by averaging the values of the current char and the char that lies the same distance from the end of the string; this value is incremented by the index (starting at zero) then floored. The resulting character is appended to the result. Once this result is made, we look for a valid base64 string in it. This is what that result looks like: SGVsbG8sIFdvcmxkIQ==?UOs#yq'vZ_,Rc!4xky This will result in the string SGVsbG8sIFdvcmxkIQ== being found as the base 64 string for "Hello, World!", and is thus outputted. (When no such string is found, then a more complicated algorithm ensues that transpiles this to JavaScript, so this is most definitely turing-complete and thus a valid language.) # Pickle, 34 bytes cbuiltins\nprint\n\x8c\rHello, World!\x85R. Replace the escape sequences by their appropriate character code. Surprise. Python's default serialization implementation actually uses an interpreter over a stack-based language. Just call pickle.load on it to run it. # UGL, 80 bytes cuu$u$$*d*O*u*uO@++uOO^^+O@@$$uu**dO%$$***O@*u*dddO%OuuuO%OdOuO Try it online! With comments: #H e l l o , W o r l d ! #72 101 108 108 111 44 32 87 111 114 108 100 33 cuuu$$*$d*O #print H 72 (stack:2 3 3 3 3 3 3)$*u$*u$O$@ #print e 101 (stack:101 2 3 3 3 3 101) ++u$O$O #print ll 108 108 (stack:101 2 3 3 108) ^^+$O@@ #print o 111 (stack:108 111 101 2 3 3)
$$uu**dO #print , 44 (stack:108 111 101 2 3) %$$*$**$O@ #print 32 (stack:32 108 111 101 3)
$*$u*dddO #print W 87 (stack:32 108 111 101)
%\$O #print o 111 (stack:32 108 101 111)
uuuO #print r 114 (stack:32 108 101)
%O #print l 108 (stack:32 101)
dO #print d 100 (stack:32)
uO #print ! 33 (stack:)
• Which part is the actual 80-byte source code? The non-space prefixes of the lines? Might be best to include that separately for clarity. – Martin Ender Apr 26 '16 at 11:39 | 2020-06-03 05:31:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23296265304088593, "perplexity": 5583.553754312695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347432237.67/warc/CC-MAIN-20200603050448-20200603080448-00250.warc.gz"} |
https://mathoverflow.net/questions/325354/a-curious-inequality-concerning-binomial-coefficients | # A curious inequality concerning binomial coefficients
Has anyone seen an inequality of this form before? It seems to be true (based on extensive testing), but I am not able to prove it.
Let $$a_1,a_2,\ldots,a_k$$ be non-negative integers such that $$\sum_i a_i = A$$. Then, for any non-negative integer $$B \le A$$: $$\sum_{(b_1,\ldots,b_k): \sum_i b_i = B} \prod_i \frac{\binom{a_i}{b_i}}{\binom{A-a_i}{B-b_i}} \ge {\binom{A}{B}}^{2-k}.$$ The sum on the left is over all tuples $$(b_1,b_2,\ldots,b_k)$$ of non-negative integers, with $$b_i \le a_i$$ for all $$i$$, whose sum is equal to $$B$$.
By Cauchy–Bunyakovsky–Schwarz inequality we have $$\left(\sum \prod_i \frac{\binom{a_i}{b_i}}{\binom{A-a_i}{B-b_i}}\right)\left(\sum\prod_i \binom{a_i}{b_i}\binom{A-a_i}{B-b_i}\right)\geqslant \left(\sum \prod_i\binom{a_i}{b_i}\right)^2=\binom{A}{B}^2 .$$
Thus it suffices to prove that $$\sum\prod_i \binom{a_i}{b_i}\binom{A-a_i}{B-b_i}\leqslant \binom{A}B^k.$$ But RHS is just the sum of the same guys without the restriction $$\sum b_i=B$$. | 2022-01-21 06:13:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9591128826141357, "perplexity": 82.93940626651792}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302723.60/warc/CC-MAIN-20220121040956-20220121070956-00031.warc.gz"} |
https://www.biostars.org/p/298192/#382317 | MITE hunter stops after step 2
1
0
Entering edit mode
4.5 years ago
danielke ▴ 10
Hey everyone,
I started working with MITE hunter and for some reason it stops running after step 2, error showing is:
sh: 1: /home/my-user/MITE_hunter/mdust/: Permission denied
Use of uninitialized value $Seq in split at /home/my-user/MITE_hunter/MITE_hunter_2011/low_complexity_filter.pl line 70. Use of uninitialized value$Len in division (/) at /home/my-user/MITE_hunter/MITE_hunter_2011/low_complexity_filter.pl line 86.
Illegal division by zero at /home/my-user/MITE_hunter/MITE_hunter_2011/low_complexity_filter.pl line 86.
No such file or directory
This is the code I used:
/home/bariah/MITE_hunter/MITE_hunter_2011# perl MITE_Hunter_manager.pl -i /home/bariah/Genomes/Chromosome_files/tau_chr/tau_d1.fasta -g tau_d1 -c 95 -f 100 -L 50 -2 12345678
The last files created are: tau_d1_Step2.fa and tau_d1_Step2.fa.dusted (which is empty)
What can be the problem ?
Thank you!
MITE-hunter MITEs transposons • 2.4k views
1
Entering edit mode
I added code markup to your post for increased readability. You can do this by selecting the text and clicking the 101010 button. When you compose or edit a post that button is in your toolbar, see image below:
1
Entering edit mode
Have you indexed the genome you are giving after -i option. Try this after indexing.
Else- You can try to change the privilege to mdust.
You can also try to install folllowing way:
# Unzip and change folder and subfolder names (replace space with "_")
cd ./MITE_Hunter-11-2011/MITE_Hunter perl MITE_Hunter_Installer.pl -d
# install
/home/user/repeats/MITE_Hunter-11-2011/MITE_Hunter/ -f /usr/bin/formatdb -b /usr/bin/blastall -m /usr/bin/mdust -M /usr/bin/muscle
0
Entering edit mode
What do you mean index the genome? it is done by the RepeatMasker, if I'm not wrong..
What do you mean change the privilege to mdust? what should i do?
1
Entering edit mode
If you have indexed genome in the same folder then it may not be the cause for error. change permissions to the folder using "sudo chmod 777 -R /home/my-user/MITE_hunter/mdust/
If these things do not work then the problem could be with the installation.
Best
0
Entering edit mode
OK so I've tried it and still the same problem. What could be the problem with installation?
This is what I do to run MITE_hunter after installation. Might it be the first error ?
perl MITE_Hunter_manager.pl -i /home/my_user/MITE_hunter/tau/tau_d1.fasta -g tau_d1 -c 95 -f 200 -L 50 -n 20 -S 12345678 &
[2] 4086
my_user@my_computer:~/MITE_hunter/MITE_Hunter_2011\$ formating database ...
Can't exec "/home/my_user/MITE_hunter/ncbi-blast-2.2.28+/bin/": Permission denied at /home/my_user/MITE_hunter/MITE_Hunter_2011/blast_formatdb_index.pl line 87.
16403/328040
0
Entering edit mode
Can you check that blastall program can be called globally or you can export the the blasall to your path. Basically all the required program paths (complete) should have been given correctly while installing. I suggest once again try to install as I have mentioned in earlier reply. Best.
0
Entering edit mode
I’ve met the same problem in step 2,it is said that sh: /share_bio/Buchh/MITEs/mdust/mdust/mdust.c: Permission denied . Did you solve this problem ?
0
Entering edit mode
3.2 years ago
Colaptes ▴ 70
I had the same problem with the same permission denied error message. In my case it turned out I did not have write permissions on the genome sequence I was analyzing. No idea if it is the same problem you had, but I fixed mine by copy-pasting the fasta file that was owned by another user to create a copy where I had read-write privileges. | 2022-08-10 17:08:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2977769374847412, "perplexity": 9934.641756462264}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571198.57/warc/CC-MAIN-20220810161541-20220810191541-00017.warc.gz"} |
https://socratic.org/questions/how-do-you-solve-the-inequality-2-5-4-5x | # How do you solve the inequality -2/5 < -4/5x?
$x < \frac{1}{2}$
$- \frac{2}{5} \cdot - \frac{5}{4} > x$
$- \frac{\cancel{2} 1}{\cancel{5} 1} \cdot - \frac{\cancel{5} 1}{\cancel{4} 2} > x$
$x < \frac{1}{2}$ | 2021-11-28 08:29:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7600483894348145, "perplexity": 2324.340876746189}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358480.10/warc/CC-MAIN-20211128073830-20211128103830-00051.warc.gz"} |
https://puzzling.stackexchange.com/tags/real/hot?filter=year | Tag Info
32
This appears to be Elian Script. I'm not sure I can read the writer's handwriting entirely (and they seem to have added some nonstandard things like a zigzag for T), but the first few lines read: PAR?NRE RENR ?HE C PROGRAMMING LANGUAGE! I REALLY WANT MY PEN! Edit by OP: I spend some time to fully decipher it but I think Deusovi deserves the ...
22
20
I'm not entirely sure this is exactly a puzzle (but also not sure enough to suggest closing the question or anything). Anyway, I guess the reason is that
16
Was it Reasoning: and and
12
Here's my guess Take the first 100 digits of pi: 1415926535 8979323846 2643383279 5028841971 6939937510 5820974944 5923078164 0628620899 8628034825 3421170679 STEP 1: Based on whether a digit is odd and even, convert it to AB format. Result: ABAAA BBAAA BAAAA BABBB BBBAA BABAA ABBBB BAAAA BAAAA AAAAB ABBBA ABABB AABAB ABABB BBBBB BBBAA BBBBB ...
11
Perhaps
10
I think the history is something like this. Start with 8 colours because 8 is a small power of 2. These are simple: bit 0 for red, bit 1 for green, bit 2 for blue, but 000 means grey rather than black because you're only controlling the foreground and not the background. Then add another 8 colours. Actually, in the image here it looks as if 8-15 may be the ...
10
Did you eat Because
10
A couple of possibilities come to mind: 1: A trusted third party. Have someone else toss the coin. 2: "Geohashing" type approach: toss the coin by unpredictable means that everyone can independently observe, even over the internet. 3: Crowd-seeded pseudorandom number generator: have each participant send you a number. Add them up, and seed a PRNG with the ...
10
I was able to find one of the solutions using "paper and pencil". I stopped searching for more solutions after that. It is certainly very very time consuming. In my explanation I name rows as A,B,C,D,E - columns as 1, 2, 3, 4, 5. My search strategy is based on an observation that As you can see this gave me only 4 possible values for $B1$ Page 2 ...
9
Not sure if I'm going at this the right way, but Because
8
Looks like this is It says
8
Looking at this: CoinBrothers it is rarely true. For example, Australia 2019: https://coin-brothers.com/catalog/coin3771
8
To a real-life problem I had to give a real-life answer: But you asked for an actual tiling, without gaps, so here it is. PS: there is a simpler pattern where pairs disassemble with a single translation:
6
Agree on a future public event that all parties can observe, and a means to generate a bitstream from it. Perhaps the parity of the last digit of the Dow Jones Industrial Average at a predetermined set of times? (I was going to suggest daily high temperatures from a given source for a list of N cities, but you'd probably end up with much higher correlation ...
6
This is just for the teaser question (do not know the answer for the real one). I think you can pull the puzzle apart by
5
Counting the first male as Generation 0 (and his mother as Generation 1) the male will have
5
I think the messages read: The code is ... How did I decode it?
5
Partial answer: Assumption: Assumption #2: Also, here's the transctipted text from both (don't think there's any need to hide it as a spoiler): Left (since it's handwritten I may have misinterpreted certain characters): 659CK4MG3659XTG39C - MG/AG/GMAH B5659XTV8CKXT6599CV8CKW2G3CK W2A34M4MA3 MSA3659G3CK CK K9CK9CK9 9CX16599C854A3 9C G3A3P2A34M F78544M, ...
5
Note that It turns out that For example, Indeed, The answer is obviously
5
Although I was too lazy to get the angles exact, the idea should hold in principle if the picture isn't quite right. They interlock.
4
The answer is: Have a look at the following diagram: If you: Here is one possible roster:
3
You should The intuition behind this decision: Calculation Generalization calculation
3
EDIT: Just to let you know before you read this, this actually isn't the solution at all. I've made lots of assumptions and serious mistakes! :) The way to think about this problem is My algorithm is really simple: Notice that This algorithm is generalizable From this it's easy to see that And that's it.
3
Very partial solution Suppose there are an even number of people: say 2n. (Call them A1..An and B1..Bn.) Then as per Brandon_J's conjecture However, We can get a lower bound on the number of meeting periods needed from Here's a construction that's not too bad, though in general it's far from optimal. [EDITED to add:] No, wait, I'm not sure $m-1$ is ...
3
Here is a good solution for 8 people: I don't have a general solution for larger numbers yet.
3
Disclaimer: This is a proof of p_sutherland's answer. If you consider this correct, please accept their answer. Spoiler-splitter Spoiler-splitter Spoiler-splitter
3
[EDIT: After this was answered, the puzzle was restricted to $8$ players. In this case, $8+3+3-3=11$ games are required in the worst case.] The following method requires up to games. Suppose $n=2^k$. First, Then, Finally, In total, at most games are played. Suppose we add Alice so that $n=2^k+1$. Modifying the above method:
3
How about the Any two adjacent pieces can be slid apart, but I don't think the whole thing can be split by sliding from any direction.
3
Florian F's 2nd pattern is far and away my favorite, but if anyone was curious, I'll post my answers. First I wanted to show an example of something that doesn't work but really seems like it should: It's just like the third example from the question, but it uses joinery such that the pieces slide together at an angle. It comes apart in the same way, ...
Only top voted, non community-wiki answers of a minimum length are eligible | 2019-08-21 14:42:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7553238868713379, "perplexity": 1080.3402150005402}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316021.66/warc/CC-MAIN-20190821131745-20190821153745-00273.warc.gz"} |
https://serverfault.com/questions/16323/store-file-in-zip-archive-with-different-name-linux-command-shell | # Store file in zip archive with different name (linux command shell)
In a linux command line, you zip a file by:
zip -mqj archive.zip file.txt
Now, I need to store 'file.txt' as 'file2.txt' in 'archive.zip', without renaming the file before zipping. When unzipped, the file should be called 'file2.txt'.
How can I store the file with a different name? Read through the MAN page and didn't find an answer.
Does creating a hard link to file.txt count?
ln file.txt file2.txt
Create file2.txt which points to the exact same inode as file.txt, without actually doubling the space
The solution below is the exact copy of the answer of @mkrnr on stackoverflow
You can use zipnote which should come with the zip package.
First build the zip archive with the myfile.txt file:
zip archive.zip myfile.txt
Then rename myfile.txt inside the zip archive with:
printf "@ myfile.txt\n@=myfile2.txt\n" | zipnote -w archive.zip
(Thanks to Jens for suggesting printf instead of echo -e.)
A short explanation of "@ myfile.txt\n@=myfile2.txt\n":
From zipnote -h: "@ name" can be followed by an "@=newname" line to change the name
And \n separates the two commands.
Hy there, this is my first answer so I hope I've done everything correct :-)
Here's my solution to your problem, a nice one-liner:
cp file.txt file2.txt | zip -mqj archive.zip file2.txt
Hope I could help!
• This is a good attempt, but the pipe is confusing. I think the person who asked didn't want to create a copy of the file first, but if he did, another way of doing this might be cp file.txt file2.txt && zip -mqj archive.zip file2.txt && rm -f file2.txt - this would clean up the temporary file2.txt that got created. – Matt Simmons May 31 '09 at 18:05
• thanks for pointing this out - I would post this approach, too (but you already did, thanks!) cp file.txt file2.txt && zip -mqj archive.zip file2.txt (because of the -m switch the file already gets moved and there's no need to remove afterwards!) – hajowieland May 31 '09 at 18:09
• Ah! Good call. I'm less familiar with zip than I am tar. thanks! – Matt Simmons May 31 '09 at 18:14
• @Matt. Cant we just use mv instead of cp which wont create a copy, will rename the file, zip it and remove the copy to clear space. Just a thought. – Viky Jun 1 '09 at 4:37
• @Viky - That would work fine, except one of the requests in the question was that we not rename the file. I think if we got more about the situation, a better answer would have presented itself, but as long as the person who asked the question is happy... – Matt Simmons Jun 1 '09 at 12:19 | 2019-10-16 20:23:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.447601854801178, "perplexity": 4770.820604117691}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986669546.24/warc/CC-MAIN-20191016190431-20191016213931-00549.warc.gz"} |
https://www.physicsforums.com/threads/proving-interaction-picture-field-satisfies-kg-eqn.393383/ | # Proving Interaction picture field satisfies KG eqn
## Main Question or Discussion Point
Hi,
I'm just trying to convince myself that the field in the interaction picture (IP) $$\phi_I(x,t)=e^{iH_0t}\phi(x,0)e^{-iH_0t}$$ satisfies the Klein Gordan equation: $$(\tfrac{\partial^2}{\partial t^2}-\nabla^2+m^2)\phi_I(x,t)=0$$.
I have so far worked out that the time derivative is:
$$\frac{\partial\phi_I}{\partial t}=-i[\phi_I,H_0]$$
and the second time deriv is:
$$\frac{\partial^2\phi_I}{\partial^2 t}=[H_0, [\phi_I,H_0]]$$.
I'm now trying to work out the commutator in these expressions. My $$H_0=\int d^3x \left(\tfrac{1}{2}\Pi(x,0)^2+\tfrac{1}{2}(\nabla\phi(x,0))^2+\tfrac{1}{2}m^2\phi^2(x,0)\right)$$. This can also be expressed as $$H_0=\int \tfrac{d^3\vec{k}\omega}{(2\pi)^3} a^{\dag}(\vec{k})a(\vec{k})$$
So considering $$[\phi_I,H_0]=e^{iH_0t}\phi(x,0)e^{-iH_0t}H_0-H_0e^{iH_0t}\phi(x,0)e^{-iH_0t}=e^{iH_0t}[\phi(x,0), H_0]e^{-iH_0t}$$ I see I need to evalate the commutator in the Schroedinger picture:
$$[\phi(x,0), H_0]=\phi(x,0)H_0-H_0\phi(x,0)$$
I'm having issues doing this however, would it be better to work with H and the field expressed in terms of creation and annihilation or is it easier just to stay in the phi Pi representation?
Related Quantum Physics News on Phys.org
There must be an easier way to show the simple fact that the interaction field obeys the Klein Gordon eqn. I just did a billion pages of algebra, working out commutators in terms of the fields, only to find I'd prob made a mistake.
LAHLH,
why don't you use the explicit form of the quantum field? For example, in the scalar case it is
$$\phi(\mathbf{x}, t) = \int d\mathbf{p} \left( e^{-i \mathbf{px}+i\omega_p t} a_p + e^{i \mathbf{px}-i\omega_p t} a^{\dag}_p \right)$$
and it is very easy to show that KG equation is satisfied.
Eugene.
I think the field you quoted is in the Heisenburg picture, I'm trying to show that the field $$\phi_I(x,t)=e^{iH_0t}\phi(x,0)e^{-iH_0t}$$ satisifes the KG eqn. My general Hamiltonian is $$H=H_0+H_1$$. This is problem 9.5 in Srendicki to make things clearer: http://www.physics.ucsb.edu/~mark/ms-qft-DRAFT.pdf
Cheers
I think the field you quoted is in the Heisenburg picture, I'm trying to show that the field $$\phi_I(x,t)=e^{iH_0t}\phi(x,0)e^{-iH_0t}$$ satisifes the KG eqn. My general Hamiltonian is $$H=H_0+H_1$$. This is problem 9.5 in Srendicki to make things clearer: http://www.physics.ucsb.edu/~mark/ms-qft-DRAFT.pdf
Cheers
The field that I wrote evolves in time with the help of the non-interacting Hamiltonian. This is exactly what you need. See eq. (4.5) in Srednicki.
Eugene.
samalkhaiat
$$\phi_I(x,t)=e^{iH_0t}\phi_{S}(x)e^{-iH_0t}$$
Expand this using the Baker-Hausdroff identity, you find
$$\phi_{I}(x,t) = \phi_{S}(x) +i[tH_{0},\phi_{S}] + \frac{i^{2}}{2!}[tH_{0},[tH_{0},\phi_{S}]] + ... \ \ \ (1)$$
In the Schrodinger picture, we expand the field operator $\phi_{S}(x)$ in terms of some base functions $u_{k}(x)$, as
$$\phi_{S}(x) = \sum_{k} \left( a_{k} u_{k}(x) + a^{\dagger}_{k} u^{*}_{k}(x) \right) \ \ \ (2)$$
Putting eq(2) in eq(1), the problem get reduced to calculating the commutators of $a_{k}$ and $a^{\dagger}_{k}$ with the free Hamiltonian, which I will take to be
$$H_{0} = \sum_{k} \omega_{k} a^{\dagger}_{k} a_{k}$$
Using the canonical algebra
$$[a_{n},a_{k}] = [a^{\dagger}_{n},a^{\dagger}_{k}] = 0$$
$$[a_{n},a^{\dagger}_{k}] = \delta_{nk}\ ,$$
we find
$$[H_{0},a_{k}] = \omega_{k}a_{k} \ ,$$
$$[H_{0},[H_{0},a_{k}]] = \omega^{2}_{k} a_{k} \ , ....$$
and similar results for $a^{\dagger}_{k}$. If you now insert the whole lot back into eq(1), you find
$$\phi_{I}(x,t) = \sum_{k} \left( a_{k}e^{-i\omega_{k}t} u_{k}(x) + a^{\dagger}_{k}e^{+i\omega_{k}t} u^{*}(x) \right)$$
It is now easy to show that the field $\phi_{I}$ satisfies the K-G equation when
$$\omega_{k} = + \sqrt{ k^{2} + m^{2}} , \ u_{k}(x) = \exp(ip.x)$$
regards
sam
Last edited:
Thanks to you both. I think I've proved this in my own (rather complicated and perhaps uneccessary) way in the end before I came back to this thread.
$$\phi_I(x,t)=e^{iH_0t}\phi(x,0)e^{-iH_0t}$$
From which it follows that:
$$\frac{\partial^2\phi_I}{\partial^2 t}=-[ [\phi_I,H_0], H_0]$$
So 1) Calculate $$[\phi_I,H_0]$$. Expanding: $$[\phi_I,H_0]=e^{iH_0t} \phi(x,0)e^{-iH_0t}H_0-H_0e^{iH_0t}\phi(x,0)e^{-iH_0t}=e^{iH_0t}[\phi(x,0), H_0]e^{-iH_0t}$$ (eqn 1)
So what is$$[\phi(x,0), H_0]$$? : Remembering that $$H_0=\int d^3x \left(\tfrac{1}{2}\Pi(x,0)^2+\tfrac{1}{2}(\nabla\phi(x,0))^2+\tfrac{1}{2}m^2\phi^2(x,0)\right)$$
Then
$$[\phi(x,0), H_0]=\tfrac{1}{2} \int d^3y [\phi(x,0), \Pi^2(y,0)]$$
$$[\phi(x,0), H_0]=\tfrac{1}{2} \int d^3y [\phi(x,0), \Pi(y,0)] \Pi(y,0)+ \Pi(y,0) [\phi(x,0), \Pi(y,0)]$$
Using equal time commuation relations now
$$[\phi(x,0), H_0]=\tfrac{1}{2} \int d^3y i\delta^3(x-y)\Pi(y,0)+ \Pi(y,0) i\delta^3(x-y)$$
$$[\phi(x,0), H_0]=i\Pi(x,0)$$
Plugging this back into eqn 1:
$$[\phi_I,H_0]=e^{iH_0t}[\phi(x,0), H_0]e^{-iH_0t} =i e^{iH_0t}\Pi(x,0)e^{-iH_0t}=i\Pi_I(x,t)$$
Step 2) Plug this result back into $$\frac{\partial^2\phi_I}{\partial^2 t}=-[ [\phi_I,H_0], H_0] =-i[ \Pi_I(x,t), H_0]$$
So now what is $$[ \Pi_I(x,t), H_0] =e^{iH_0t} \Pi(x,0)e^{-iH_0t}H_0-H_0e^{iH_0t}\Pi(x,0)e^{-iH_0t}=e^{iH_0t}[\Pi(x,0), H_0]e^{-iH_0t}$$?
Now considering $$[\Pi(x,0), H_0]$$:
$$[ \Pi(x,0), H_0]= \tfrac{1}{2} \int d^3y [\Pi(x,0), \nabla^{i}\phi(y,0)\nabla^{i}\phi(y,0)+m^2\phi(y,0)\phi(y,0)]$$
$$[ \Pi(x,0), H_0]= \tfrac{1}{2} \int d^3y \left( [\Pi(x,0), \nabla^{i}\phi(y,0)]\nabla^{i}\phi(y,0)+\nabla^{i}\phi(y,0) [\Pi(x,0), \nabla^{i}\phi(y,0)]+m^2([\Pi(x,0)\phi(y,0)]\phi(y,0)+\phi(y,0)[\Pi(x,0)\phi(y,0)])\right)$$
Since the dels are acting only on y we can bring out of the commutator completely as follows
$$[ \Pi(x,0), H_0]= \tfrac{1}{2} \int d^3y \left( \nabla^{i}_{y}[\Pi(x,0), \phi(y,0)]\nabla^{i}\phi(y,0)+\nabla^{i}\phi(y,0) \nabla^{i}_{y}[\Pi(x,0), \phi(y,0)]+m^2([\Pi(x,0)\phi(y,0)]\phi(y,0)+\phi(y,0)[\Pi(x,0)\phi(y,0)])\right)$$
Finally using the canonical equal time commutation relations:
$$[ \Pi(x,0), H_0]= -i \int d^3y \left( \nabla^{i}_{y}\delta^3(x-y)\nabla^{i}\phi(y,0)+m^2\delta^3(x-y)\phi(y,0)\right)$$
$$[ \Pi(x,0), H_0]= -i \left( -\nabla^{2}\phi(x,0)+m^2\phi(x,0)\right)$$
Thus:
$$[ \Pi_I(x,t), H_0] =-ie^{iH_0t}\left(\nabla^{2}\phi(x,0)+m^2\phi(x,0)\right)e^{-iH_0t} =-i\left(\nabla^{2}+m^2\right)\phi_I(x,t)$$
This finally.....gives:
$$\frac{\partial^2\phi_I}{\partial^2 t}=-[ [\phi_I,H_0], H_0] =-i[ \Pi_I(x,t), H_0]=-\left(-\nabla^{2}+m^2\right)\phi_I(x,t)=\left(\nabla^{2}-m^2\right)\phi_I(x,t)$$
So we have recovered the KG equation:
$$\frac{\partial^2\phi_I}{\partial^2 t}=\left(\nabla^{2}-m^2\right)\phi_I(x,t)$$
1) $$[\phi(x,0), H_0]=\tfrac{1}{2} \int d^3y [\phi(x,0), \Pi^2(y,0)]$$
This is true because \phi(x,0) obviously commutes with the mass term $$m^2\phi^2(x,0)$$, but it is also true that $$\int d^3y [\phi(x,0), \nabla^{i}\phi(y,0)\nabla^{i}\phi(y,0)]=0$$, as I will now show:
$$\int d^3y [\phi(x,0), \nabla^{i}\phi(y,0)\nabla^{i}\phi(y,0)]=\int d^3y [\phi(x,0), \nabla^{i}\phi(y,0)]\nabla^{i}\phi(y,0)+\nabla^{i}\phi(y,0)[\phi(x,0), \nabla^{i}\phi(y,0)]$$
Since the nabla's are acting on y's we can pull them outside the commutators:
$$\int d^3y [\phi(x,0), \nabla^{i}\phi(y,0)\nabla^{i}\phi(y,0)]=\int d^3y \nabla^{i}_{y}[\phi(x,0), \phi(y,0)]\nabla^{i}\phi(y,0)+\nabla^{i}\phi(y,0)\nabla^{i}_{y}[\phi(x,0), \phi(y,0)]$$
This is obviously zero since $$[\phi(x,0), \phi(y,0)]=0$$. So we have the result that:
$$[\phi(x,0), (\nabla^{i}\phi(y,0))^2]=0$$
2) $$[ \Pi(x,0), H_0]= -i \int d^3y \left( \nabla^{i}_{y}\delta^3(x-y)\nabla^{i}\phi(y,0)+m^2\delta^3(x-y)\phi(y,0)\right)$$
Concentrating on the first term:
$$\int d^3y \left( \nabla^{i}_{y}\delta^3(x-y)\nabla^{i}\phi(y,0) \right)$$
I use integration by parts here to shift the nabla off the delta:
$$\left[\delta^3(x-y)\nabla^{i}_{y}\phi(y,0)\right]- \int d^3y\delta^3(x-y)\nabla^{2}_{y}\phi(y,0)$$
The first term dies by compact support and we're left with$$-\int d^3y\delta^3(x-y)\nabla^{2}_{y}\phi(y,0)=-\nabla^{2}\phi(x,0)$$
This is the origin of the -ve term in my KG equation on the del squared.
I feel like maybe this was a waste of time now, if one can just use the expansion sam posted, alas............................
I'd still be grateful if anyone could comment if my proof is actually correct. The only niggle I have with it is this part:
$$e^{iH_0t}\left(\nabla^{2}\phi(x,0)+m^2\phi(x,0)\right)e^{-iH_0t} =-\left(\nabla^{2}+m^2\right)\phi_I(x,t)$$
Am I allowed to pass $$e^{iH_0t}$$ through the $$\nabla^{2}$$ to turn my $$\phi(x,0)$$ into $$\phi_I(x,t)$$ as I require? Since $$e^{iH_0t}$$ is obviously an operator and I can't see an obvious reason that is should commute with the operator $$\nabla^{2}$$?
I feel like maybe this was a waste of time now, if one can just use the expansion sam posted, alas............................
The calculation is much much easier if both $$H_0$$ and $$\phi(x,t)$$ are expanded in the momentum space as samalkhaiat showed you.
I'd still be grateful if anyone could comment if my proof is actually correct. The only niggle I have with it is this part:
$$e^{iH_0t}\left(\nabla^{2}\phi(x,0)+m^2\phi(x,0)\right)e^{-iH_0t} =-\left(\nabla^{2}+m^2\right)\phi_I(x,t)$$
Am I allowed to pass $$e^{iH_0t}$$ through the $$\nabla^{2}$$ to turn my $$\phi(x,0)$$ into $$\phi_I(x,t)$$ as I require? Since $$e^{iH_0t}$$ is obviously an operator and I can't see an obvious reason that is should commute with the operator $$\nabla^{2}$$?
$$H_0$$ can be regarded as differentiation on t, so it commutes with $$\nabla^{2}$$, which differentiates on x.
Eugene.
$$H_0$$ can be regarded as differentiation on t, so it commutes with $$\nabla^{2}$$, which differentiates on x.
Eugene.
Thanks again Eugene. Could you elaborate a little on this part for me. Is this just because $$\dot{A}=[A,H]$$ ? or something similar?
Thanks again Eugene. Could you elaborate a little on this part for me. Is this just because $$\dot{A}=[A,H]$$ ? or something similar?
Yes, the field derivative on t is given by the field commutator with $$H_0$$
$$\frac{\partial\phi}{\partial t}=-i[\phi,H_0]$$
$$H_0=\frac{1}{(2\pi)^3 } \int d^3\vec{k}\omega_k a^{\dag}(\vec{k})a(\vec{k})$$
The field derivative on $$\vec{x}$$ is given by the field commutator with the total momentum operator $$\vec{P}$$
$$\frac{\partial\phi}{\partial \vec{x}}=-i[\phi, \vec{P}]$$
$$\vec{P} = \frac{1}{(2\pi)^3 } \int d^3\vec{k} \vec{k} a^{\dag}(\vec{k})a(\vec{k})$$
One can show easily that the two operators commute $$[H_0, \vec{P}] = 0$$.
Eugene.
Thanks for the help | 2020-02-21 11:35:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9060765504837036, "perplexity": 460.27089748823397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145529.37/warc/CC-MAIN-20200221111140-20200221141140-00254.warc.gz"} |
https://inet.omnetpp.org/docs/users-guide/ch-clock.html | # Clock Model¶
## Overview¶
In most communication network simulations, time is simply modeled as a global quantity. All components of the network share the same time throughout the simulation independently of where they are physically located or how they are logically connected to the network.
In contast, in time sensitive networking, the bookkeeping of time is an essential part, which should be explicitly simulated independently of the underlying global time. The reason is that the differences among the local time of the communication network components significantly affects the simulation results.
In such simulations, hardware clocks are simulated on their own, and communication protocols don’t rely on the global value of simulation time, which is in fact unknown in reality, but on the value of their own clocks. With having hardware clocks modeled, it’s also often required to use various time synhronization protocols, because clocks tend to drift over time and communication protocols rely on the precision of the clocks they are using.
In INET, the clock model is a completely optional feature, which has no effect on the simulation performance when disabled. Even if the feature is enabled, the usage of clock modules by communication protocols and applications is still optional, and enabling the feature has negligible performance hit when not in use. For testing that the mere usage of a clock has no effect on the simulation results, INET also includes an ideal clock mechanism.
## Clocks¶
Clocks are implemented as modules, and are used by other modules via direct C++ method calls. Clock modules implement the IClock module interface and the corresponding IClock C++ interface.
The C++ interface provides an API similar to the standard OMNeT++ simulation time based scheduling mechanism, but it relies on the underlying clock implementation for (re)scheduling events according to the clock. These events are transparently scheduled for the client module, and they will be delivered to it when the clock timers expire.
The clock API uses the clock time instead of the simulation time as arguments and return values. The interface contains functions such as getClockTime(), scheduleClockEventAt(), scheduleClockEventAfter(), cancelClockEvent().
INET contains optional clock modules (not used by default) at the network node and the network interface levels. The following clock models are available:
• IdealClock: clock time is identical to the simulation time.
• OscillatorBasedClock: clock time is the number of oscillator ticks multiplied by the nominal tick length.
• SettableClock: a clock which can be set to a different clock time.
## Clock Time¶
In order to avoid confusing the simulation time (which is basically unknown to communication protocols and hardware elements) with the clock time maintained by hardware clocks, INET introduces a new C++ type called the ClockTime.
This type is pretty much the same as the default SimTime, but the two types cannot be implicitly converted into each other. This approach prevents accidentally using clock time where simulation time is needed, and vice versa. Simlarly to how simtime_t is an alias for SimTime, INET also introduces the clocktime_t alias for ClockTime type.
For the explicit conversion between clock time and simulation time, one can use the CLOCKTIME_AS_SIMTIME and the SIMTIME_AS_CLOCKTIME C++ macros. Note that these macros don’t change the numerical value, they simply convert between the C++ types.
When the actual clock time is used by a clock, the value may be rounded according to the clock granularity and rounding mode (e.g. OscillatorBasedClock). For example, when a clock with a us granularity is instructed to wait for 100 ns, while its oscillator is right in the middle of its ticking period, it may actually wait for the next tick to happen to start the timer, and wait another tick to happen to account for the requested wait time interval.
## Oscillators¶
The clock interface is quite general in the sense that it allows many different ways to implement it. Nevertheless, the most common way is to use an oscillator based clock model.
An oscillator efficiently models the periodic generation of ticks that are usually counted by a clock module. The tick period is not necessarily constant, it can change over time. Oscillators implement the IOscillator module interface and the corresponding IOscillator C++ interface.
The following oscillator models are available:
• IdealOscillator: generates ticks periodically with a constant length (mostly useful for testing).
• ConstantDriftOscillator: tick length changes proportional to the elapsed simulation time (clock drift).
• RandomDriftOscillator: updates clock drift with a random walk process.
## Clock Users¶
The easiest way to use a clock in applications and communication protocols is to add a clockModule parameter that specifies where the clock module can be found. Then the C++ user module should be simply derived from either ClockUserModuleBase or the parameterizable ClockUserModuleMixin base classes. The clock can be used via the inherited clock related methods or through the methods of the IClock C++ interface on the inherited clock field.
## Clock Events¶
The clock model requires the use of a specific C++ class called ClockEvent to schedule clock timers. It’s also allowed to derive new C++ classes from ClockEvent if necessary. In any case, clock events must be scheduled and canceled via the IClock C++ interface to operate properly.
## Controlling Clocks According to a Scenario¶
In order to support the simulation of specific scenarios, where the clock time or the oscillator drift must be changed according to a predefined script, INET provides clocks and oscillators that implement the interface required by the ScenarioManager module. This allows the user to update the clock and oscillator state from the ScenarioManager XML script and to also mix these operations with many other supported operations.
For example, the SettableClock model supports setting the clock time and also to optionally reset the oscillator at a specific moment of simulation time as follows:
<set-clock at="10 s" module="server.clock" time="1.2 s" reset-oscillator="true"/>
The above example means that the clock time of the server node’s clock will be set to 1.2 seconds when the simulation time reaches 10 seconds, and the clock’s oscillator will restart its duty cycle.
For another example, the ConstantDriftOscillator supports changing the state of the oscillator with the following command:
<set-oscillator at="10 us" module="server.clock.oscillator" drift-rate="42 ppm" tick-offset="1 us"/>
This example simultaneously changes the drift rate and the tick offset of the oscillator in the server node’s clock. | 2021-04-21 03:56:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2966773509979248, "perplexity": 1794.912850052111}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039508673.81/warc/CC-MAIN-20210421035139-20210421065139-00491.warc.gz"} |
http://mathoverflow.net/questions/96799/reference-request-name-of-a-transform?answertab=oldest | # Reference request: name of a transform
Define a transform on polynomials which is linear and acts on each monomial as $$\widehat{z^k} = \frac{(1+z)(2+z)\ldots(k+z)}{k!}.$$ Does anyone know whether this has a name (and therefore has been studied)?
(Also, not sure what to tag this with... please suggest/edit.)
-
The right hand side is just $\binom {k+z} k$ or its suitable generalization to noninteger $z$, for what it's worth. – Jiahao Chen May 13 '12 at 1:46
I've seen $(z+1)\cdots(z+k)$ called "$z$ to the $k$ rising", although this also sometimes means $z\cdots (z+k-1)$, and denoted "$z^{\overline k}$". From this perspective, binomial coefficients like $\binom{z+k}{k}$ are divided rising powers. (More common are "falling" powers, so that $\binom{x}{k}$ is a divided falling power.) The reason to invent rising and falling powers is to have good basis for differences. Differentiation of polynomials is locally nilpotent, and so you can write it as a Jordan block with zeros on the diagonal. That picks out the usual (divided) monomial basis. Rising ... – Theo Johnson-Freyd May 13 '12 at 2:31
... (divided) powers have the same structure for $f(z) \mapsto f(z) - f(z-1)$. So the only thing that seems strange to me about your transformation is the division by $k!$. Or rather, to me divided powers are most natural, so I would have expected $z^k/k! \mapsto \binom{z+k}{k}$, and not $z^k \mapsto \binom{z+k}{k}$. The operator that is like differentiation whose basis consists of non-divided powers is the map $f(z) \mapsto \frac1z\bigl(f(z)-f(0)\bigr)$. This map is not translation invariant, but does turn up occasionally. – Theo Johnson-Freyd May 13 '12 at 2:38
I don't know if this terminology is standard, but I would call it an inverse Beta transform because, although off by a factor of 1, your expression above can be rewritten as $$\frac{\Gamma(z+k+1)}{\Gamma(k+1)\Gamma(z+1)}$$ – Suvrit May 13 '12 at 23:47 | 2016-05-02 04:01:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9027106165885925, "perplexity": 433.46013080834234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860122420.60/warc/CC-MAIN-20160428161522-00210-ip-10-239-7-51.ec2.internal.warc.gz"} |
http://machinelearning.wustl.edu/mlpapers/papers/icml2014c2_tosh14 | Search Machine Learning Repository: Lower Bounds for the Gibbs Sampler over Mixtures of Gaussians
Authors: Christopher Tosh and Sanjoy Dasgupta
Conference: Proceedings of the 31st International Conference on Machine Learning (ICML-14)
Year: 2014
Pages: 1467-1475
Abstract: The mixing time of a Markov chain is the minimum time $t$ necessary for the total variation distance between the distribution of the Markov chain's current state $X_t$ and its stationary distribution to fall below some $\epsilon > 0$. In this paper, we present lower bounds for the mixing time of the Gibbs sampler over Gaussian mixture models with Dirichlet priors.
[pdf] [BibTeX]
authors venues years
Suggest Changes to this paper.
Brought to you by the WUSTL Machine Learning Group. We have open faculty positions (tenured and tenure-track). | 2017-11-24 03:33:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6458632349967957, "perplexity": 765.0859081469536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807084.8/warc/CC-MAIN-20171124031941-20171124051941-00278.warc.gz"} |
https://journal.psych.ac.cn/acps/EN/10.3724/SP.J.1041.2016.00174 | ISSN 0439-755X
CN 11-1911/B
Acta Psychologica Sinica ›› 2016, Vol. 48 ›› Issue (2): 174-184.
### Cognitive mechanisms of the emotional attentional blink: Evidence from behavior and ERPs
JIA Lei1; ZHANG Chang-Jie1; ZHANG Qing-lin2
1. (1 Department of Psychology, Zhejiang University of Technology, Hangzhou 310014, China)
(2 Faculty of Psychology, Southwest University, Chongqing 400715, China)
• Received:2015-01-20 Published:2016-02-25 Online:2016-02-25
• Contact: ZHANG Qing-lin, E-mail: zhangql@swu.edu.cn
Abstract:
The emotional attentional blink (EAB) refers to a specific limitationreduced ability to report the second of two targets (T2) in a stream of distractors if it appears within 200-500 msec following the first target (T1). This effect is known as attentional blink (AB). However, when emotional/ affective stimulus is used as T1 and T2 is neutral, the AB effect can be strengthened. This specific effect of attentional blink is emotional attentional blink (EAB). Compared with the standard AB effect, the EAB has its unique characteristics. For example, the stimulus onset asynchrony (SOA) between T1 and T2 in EAB could be less than that in standard attentional blink (e.g., ≤134 msec; Stein et al., 2009). Nevertheless, the task of T1 recognition should be aimed at the dimension of emotional process. of humans' attention system that consciously perception ability of target stimuli distributed across time is reduced by emotional/affective processes. Under conditions of rapid serial visual presentation (RSVP), participants usually display a significant
Although recent behavior studies have provided much evidence about the process of EAB, details about the cognitive neural mechanisms of EAB are still unknown. Therefore, this research aimed to examine the cognitive neural processing mechanisms of the EAB and verify the divergences between views of the Bottleneck Theory (Martens & Wyble, 2010; Zhang & Wang, 2009) and the Overinvestment theory (as well as the Boost and Bounce Theory; Olivers & Meeter, 2008; Olivers & Nieuwenhuis, 2006).
To achieve this purpose, the present study employed a modified dual-task RSVP paradigm referred from the Study 1 of Stein et al. (2009). Moreover, the technology of event related potentials (ERPs) was used to examine the fast neural process of the EAB. In this RSVP stream, emotional faces (three conditions: fear faces, neutral faces, and face absent) were used as T1, and pictures of house scene (neutral: outdoor vs. indoor) were used as T2 stimuli. Participants were instructed to recognize T1 and T2 when the visual stream was presented. Once the visual stream disappeared, participants had to make judgment of T1 and T2 based on their categories or features. Meanwhile, EEG/ERPs from the facial recognition of T1 to the scene recognition of T2 were recorded and off-line analyzed.
The results of the final behavioral data analysis revealed that the condition of emotion T1 (fear faces) led to a significant reduction in the efficiency of T2 recognition, which was much lower than that in the conditions of neutral T1 (neutral faces) and T1 missing. These behavioral results indicated a typical EAB effect. In addition, the ERPs results provided the first evidence for the process of the EAB. In this research, we focused on the P3 components of the two processing stages of T1 and T2, respectively. Because these P3 components indexed the resource of attention in central processing. The final results showed that compared with other stimuli conditions (neutral T1 and T1 absent), the P3 amplitudes evoked by emotion T1 and T2 presentation were both enhanced. This effect should deny the resource bottleneck between the T1 and T2 competition but support the emotional/affective overinvestment in EAB. Based on these results, the neural mechanisms of EAB were discussed. | 2022-10-05 18:10:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21466712653636932, "perplexity": 5430.788096178265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337663.75/warc/CC-MAIN-20221005172112-20221005202112-00680.warc.gz"} |
https://www.aimsciences.org/article/doi/10.3934/cpaa.2005.4.267 | American Institute of Mathematical Sciences
June 2005, 4(2): 267-281. doi: 10.3934/cpaa.2005.4.267
Approximations of degree zero in the Poisson problem
1 Dipartimento di Georisorse e Territorio, University of Udine, 33100 Udine, Italy 2 LMGC, Université de Montpellier II, Montpellier, France
Received April 2004 Revised November 2004 Published March 2005
We discuss a technique for the approximation of the Poisson problem under mixed boundary conditions in spaces of piece-wise constant functions. The method adopts ideas from the theory of $\Gamma$-convergence as a guideline. Some applications are considered and numerical evaluation of the convergence rate is discussed.
Citation: C. Davini, F. Jourdan. Approximations of degree zero in the Poisson problem. Communications on Pure & Applied Analysis, 2005, 4 (2) : 267-281. doi: 10.3934/cpaa.2005.4.267
[1] Patrick Henning. Convergence of MsFEM approximations for elliptic, non-periodic homogenization problems. Networks & Heterogeneous Media, 2012, 7 (3) : 503-524. doi: 10.3934/nhm.2012.7.503 [2] Chuchu Chen, Jialin Hong. Mean-square convergence of numerical approximations for a class of backward stochastic differential equations. Discrete & Continuous Dynamical Systems - B, 2013, 18 (8) : 2051-2067. doi: 10.3934/dcdsb.2013.18.2051 [3] Z. Foroozandeh, Maria do rosário de Pinho, M. Shamsi. On numerical methods for singular optimal control problems: An application to an AUV problem. Discrete & Continuous Dynamical Systems - B, 2019, 24 (5) : 2219-2235. doi: 10.3934/dcdsb.2019092 [4] Qiang Long, Xue Wu, Changzhi Wu. Non-dominated sorting methods for multi-objective optimization: Review and numerical comparison. Journal of Industrial & Management Optimization, 2020 doi: 10.3934/jimo.2020009 [5] L’ubomír Baňas, Amy Novick-Cohen, Robert Nürnberg. The degenerate and non-degenerate deep quench obstacle problem: A numerical comparison. Networks & Heterogeneous Media, 2013, 8 (1) : 37-64. doi: 10.3934/nhm.2013.8.37 [6] George W. Patrick. The geometry of convergence in numerical analysis. Journal of Computational Dynamics, 2020 doi: 10.3934/jcd.2021003 [7] Alberto Bressan, Carlotta Donadello. On the convergence of viscous approximations after shock interactions. Discrete & Continuous Dynamical Systems - A, 2009, 23 (1&2) : 29-48. doi: 10.3934/dcds.2009.23.29 [8] Gabriella Bretti, Roberto Natalini, Benedetto Piccoli. Numerical approximations of a traffic flow model on networks. Networks & Heterogeneous Media, 2006, 1 (1) : 57-84. doi: 10.3934/nhm.2006.1.57 [9] Yanzhao Cao, Song Chen, A. J. Meir. Analysis and numerical approximations of equations of nonlinear poroelasticity. Discrete & Continuous Dynamical Systems - B, 2013, 18 (5) : 1253-1273. doi: 10.3934/dcdsb.2013.18.1253 [10] Haibo Cui, Zhensheng Gao, Haiyan Yin, Peixing Zhang. Stationary waves to the two-fluid non-isentropic Navier-Stokes-Poisson system in a half line: Existence, stability and convergence rate. Discrete & Continuous Dynamical Systems - A, 2016, 36 (9) : 4839-4870. doi: 10.3934/dcds.2016009 [11] Emmanuel Frénod. Homogenization-based numerical methods. Discrete & Continuous Dynamical Systems - S, 2016, 9 (5) : i-ix. doi: 10.3934/dcdss.201605i [12] Sandrine Anthoine, Jean-François Aujol, Yannick Boursier, Clothilde Mélot. Some proximal methods for Poisson intensity CBCT and PET. Inverse Problems & Imaging, 2012, 6 (4) : 565-598. doi: 10.3934/ipi.2012.6.565 [13] David Bourne, Howard Elman, John E. Osborn. A Non-Self-Adjoint Quadratic Eigenvalue Problem Describing a Fluid-Solid Interaction Part II: Analysis of Convergence. Communications on Pure & Applied Analysis, 2009, 8 (1) : 143-160. doi: 10.3934/cpaa.2009.8.143 [14] Giuseppe Maria Coclite, Lorenzo di Ruvo, Jan Ernest, Siddhartha Mishra. Convergence of vanishing capillarity approximations for scalar conservation laws with discontinuous fluxes. Networks & Heterogeneous Media, 2013, 8 (4) : 969-984. doi: 10.3934/nhm.2013.8.969 [15] Emmanuel Gobet, Mohamed Mrad. Convergence rate of strong approximations of compound random maps, application to SPDEs. Discrete & Continuous Dynamical Systems - B, 2018, 23 (10) : 4455-4476. doi: 10.3934/dcdsb.2018171 [16] Jie Shen, Xiaofeng Yang. Numerical approximations of Allen-Cahn and Cahn-Hilliard equations. Discrete & Continuous Dynamical Systems - A, 2010, 28 (4) : 1669-1691. doi: 10.3934/dcds.2010.28.1669 [17] Alexandre Caboussat, Roland Glowinski. A Numerical Method for a Non-Smooth Advection-Diffusion Problem Arising in Sand Mechanics. Communications on Pure & Applied Analysis, 2009, 8 (1) : 161-178. doi: 10.3934/cpaa.2009.8.161 [18] Christos V. Nikolopoulos, Georgios E. Zouraris. Numerical solution of a non-local elliptic problem modeling a thermistor with a finite element and a finite volume method. Conference Publications, 2007, 2007 (Special) : 768-778. doi: 10.3934/proc.2007.2007.768 [19] Krzysztof Bartosz. Numerical analysis of a nonmonotone dynamic contact problem of a non-clamped piezoelectric viscoelastic body. Evolution Equations & Control Theory, 2020, 9 (4) : 961-980. doi: 10.3934/eect.2020059 [20] Alexander Mielke. Weak-convergence methods for Hamiltonian multiscale problems. Discrete & Continuous Dynamical Systems - A, 2008, 20 (1) : 53-79. doi: 10.3934/dcds.2008.20.53
2019 Impact Factor: 1.105 | 2020-09-20 13:06:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4236266613006592, "perplexity": 9330.232217750057}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198213.25/warc/CC-MAIN-20200920125718-20200920155718-00704.warc.gz"} |
https://testbook.com/blog/miscellaneous-quiz-2-sbi-po/ | • Share
# Miscellaneous Quiz 2 SBI PO 2018
1 year ago .
Save
Are you preparing for Banking, Insurance and other Competitive Recruitment or Entrance exams? You will likely need to solve a section on Quant for sure. Miscellaneous Quiz 2 SBI PO 2018 will help you learn concepts on an important topic in Quant – Speed, Time and Distance, Algebra, Interest. This Miscellaneous Quiz 2 SBI PO is important for exams such as IBPS PO, IBPS Clerk, IBPS RRB Officer, IBPS RRB Office Assistant, IBPS SO, SBI PO, SBI Clerk, SBI SO, Indian Post Payment Bank (IPPB) Scale I Officer, LIC AAO, GIC AO, UIIC AO, NIACL AO, NICL AO.
## Miscellaneous Quiz 2 SBI PO 2018 –
Que. 1
Three cooks have to make 80 idlis. They are known to make 20 pieces every minute working together. The first cook began working alone and made 20 pieces having worked for some time more than three minutes. The remaining part of the work was done by the second and the third cooks working together. It took a total of 8 minutes to complete the 80 idlis. How many minutes would it take the first cook alone to cook 160 idlis for a marriage party the next day?
1.
16 minutes
2.
24 minutes
3.
32 minutes
4.
40 minutes
5.
None of these
Que. 2
A rope makes 70 rounds of a circumference of a cylinder whose radius of the base is 14 cm. How many times can it go around a cylinder with radius 20 cm?
1.
72
2.
7
3.
49
4.
51
5.
None of these
Que. 3
A rectangular plate is of 6 inch breadth and 12 inch length. Two apertures of 2 inch diameter each and one aperture of 1 inch diameter have been made with the help of a gas cutter, what is the area of the remaining portion of the plate?
1.
62.5 sq. inch
2.
68.5 sq. inch
3.
64.5 sq. inch
4.
66.5 sq. inch
5.
None of these
Que. 4
Two workers A and B working together completed a job in 5 days. If A worked twice as efficiently as he actually did and B worked 1/3 as efficiently as he actually did, the work would have been completed in 3 days. To complete the job alone, A would require :
1.
$$5\frac{1}{5}days$$
2.
$$6\frac{1}{4}days$$
3.
$$7\frac{1}{2}days$$
4.
$$8\frac{3}{4}days$$
5.
None of these
Que. 5
Srivari Riveria is a big housing complex in Coimbatore. Giant tanks are placed in every complex building to cater to the needs of the residents. In a block named Annapurna, three taps P, Q and R can fill a tank in 12, 15 and 30 hours respectively. The caretaker of the complex has instructions to keep tap P open all the time and Q and R are to be opened for one hour alternately. In which hour will the tank in Annapurna become full?
1.
6th hour
2.
7th hour
3.
5th hour
4.
8th hour
5.
data insufficient
Que. 6
After loading a dock, each worker on the night crew loaded 3/4 as many boxes as each worker on the day of the crew. If the night crew has 4/5 as many workers as the day crew, what fraction of all the boxes loaded by two crews did the day crew load?
1.
1/4
2.
1/8
3.
1/2
4.
5/8
5.
3/8
Que. 7
A liquid is filled in a hemisphere of inner diameter 9 cm. This is to be poured into cylindrical bottles of diameter 3 cm and height 4 cm. The number of bottles required are
1.
5
2.
8
3.
12
4.
3
5.
7
Que. 8
The difference between the length and breadth of a rectangle is 23m. Its perimeter is 206m, then its area is
1.
1520 m2
2.
2420 m2
3.
2480 m2
4.
2520 m2
5.
2400 m2
Que. 9
A contractor agreeing to finish a work in 150 days, employed 75 men each working 8 hours daily. After 90 days, only 2/7 of the work was completed. Increasing the number of men by _____ and each one is working now for 10 hours daily, the work can be completed in time.
1.
75
2.
225
3.
150
4.
100
5.
175
Que. 10
There is a square of side 6 cm. A circle is inscribed inside the square. Find the ratio of the area of circle to square.
1.
11/2
2.
14/11
3.
11/14
4.
11/7
5.
None of these
Did you like this Miscellaneous Quiz 2 SBI PO 2018? Let us know! You may also like –
Get more quizzes here:
### SBI PO Quiz Compilation
Kshitija Gotpagar is a Senior Content Writer and a YouTube Video Presenter with experience in Academic Content Writing & a degree in Journalism. Having appeared for various competitive exams like UPSC, CDS, IB ACIO, SBI PO and currently specializing in Sociology as a subject, Kshitija is adept in providing crisp, to the point & SEO-rich content.
17 hours ago
2 days ago
3 days ago
4 days ago | 2019-10-20 03:17:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4439598321914673, "perplexity": 3228.6575387682888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986702077.71/warc/CC-MAIN-20191020024805-20191020052305-00133.warc.gz"} |
https://guzuzidadihic.orioltomas.com/sensitivity-of-conditions-for-lumping-finite-markov-chains-book-22693ou.php | # Sensitivity of conditions for lumping finite Markov chains
• 39 Pages
• 0.51 MB
• English
by
Naval Postgraduate School, Available from the National Technical Information Service , Monterey, Calif, Springfield, Va
The Physical Object
Pagination39 p. ;
ID Numbers
Open LibraryOL25502889M
Markov chains with large transition probability matrices occur in many applications such as manpower models. Under certain conditions the state space of a stationary discrete parameter finite Markov chain may be partitioned into subsets, each of which may be treated as a single state of a smaller chain that retains the Markov property.
Finite Markov Chains Here we introduce the concept of a discrete-time stochastic process, investigat-ing its behaviour for such processes which possess the Markov property (to make predictions of the behaviour of a system it suffices to consider only the present state of the system and not its history).
We then add a further restriction of. When the initial and transition probabilities of a finite Markov chain in discrete time are not well known, we should perform a sensitivity analysis.
This is done by considering as basic uncertainty models the so-called credal sets that these probabilities are known or believed to belong to, and by allowing the probabilities to vary over such sets. This leads to the definition of an imprecise Author: Gert de Cooman, Filip Hermans, Erik Quaeghebeur.
For finite, homogeneous, continuous-time Markov chains having a unique stationary distribution, we derive perturbation bounds which demonstrate the connection between the sensitivity to.
Markov chain might not be a reasonable mathematical model to describe the health state of a child. We shall now give an example of a Markov chain on an countably infinite state space.
The outcome of the stochastic process is gener-ated in a way such that the Markov property clearly holds. The stateFile Size: KB. Abstract: A lumping of a Markov chain is a coordinate-wise projection of the chain.
### Description Sensitivity of conditions for lumping finite Markov chains EPUB
We characterise the entropy rate preservation of a lumping of an aperiodic and irreducible Markov chain on a finite state space by the random growth rate of the cardinality of the realisable preimage of a finite-length trajectory of the lumped chain and by the information needed to reconstruct original.
• know under what conditions a Markov chain will converge to equilibrium in long time; • be able to calculate the long-run proportion of time spent in a given state.
1 Definitions, basic properties, the transition matrix Markov chains were introduced in by Andrei Andreyevich Markov (–). Chapter 3 FINITE-STATE MARKOV CHAINS Introduction The counting processes {N(t); t > 0} described in Section have the property that N(t) changes at discrete instants of time, but is defined for all real t > 0.
The Markov chains to be discussed in this chapter are stochastic processes defined only at integer values of time, n = 0, 1.At each integer time n ≥ 0, there is an. For an n-state finite, homogeneous, ergodic Markov chain, with transition matrix ${\bf P}$ and stationary distribution ${\boldsymbol \pi}$ we assume that the entries of ${\bf P}$ are differentiabl.
Distribution of First Passage Times for Lumped States in Markov Chains To illustrate these definitions, reconsider the inventory example where Xt is the number of cameras on hand at the end of week t, where we start with X0. Suppose that it turns out that.
Sensitivity of finite Markov chains under perturbation E. Seneta as a measure of relative sensitivity (‘condition number’) r under perturbation of P to P, while on the basis of (4) and rank-one updates for finite Markov chains, in: WI.
Stewart, ed. Discounted approximations in risk-sensitive average Markov cost chains with finite state space 5 December | Mathematical Methods of Operations Research, Vol.
48 Variance-Based Risk Estimations in Markov Processes via Transformation with State Lumping. tantly, the stationary distributions of both Markov chains are related through a simple linear transformation.
To illustrate our ideas, we use as an example the compu-tation of the stationary distribution of Google's Markov chain|the so-called PageRank (Brin et al., ). The lumping of states is particularly e ective in this context.
CiteSeerX - Document Details (Isaac Councill, Lee Giles, Pradeep Teregowda): When the initial and transition probabilities of a finite Markov chain in discrete time are not well known, we should perform a sensitivity analysis.
This is done by considering as basic uncertainty models the so-called credal sets that these probabilities are known or believed to belong to, and by allowing the.
Lumping a Markov chain appears as a useful tool in this kind of investigation, since by lumping Markov chain the spectral gap cannot decrease. Informally, when a Markov chain is lumpable, it possible to reduce the number of states by a sort of aggregation process, obtaining a “smaller†arkov chain.
There is a close connection between stochastic matrices and Markov chains. To begin, let $S$ be a finite set with $n$ elements $\{x_1, \ldots, x_n\}$. The set $S$ is called the state space and $x_1, \ldots, x_n$ are the state values.
### Details Sensitivity of conditions for lumping finite Markov chains EPUB
A Markov chain $\{X_t\}$ on $S$ is a sequence of random variables on $S$ that have the Markov. Sensitivity of conditions for lumping finite Markov chains. By Moon Taek Suh Get PDF (3 MB).
In general, if a Markov chain has rstates, then p(2) ij = Xr k=1 p ikp kj: The following general theorem is easy to prove by using the above observation and induction.
Theorem Let P be the transition matrix of a Markov chain. The ijth en-try p(n) ij of the matrix P n gives the probability that the Markov chain, starting in state s i, will.
We prove that the optimal lumping quotient of a finite Markov chain can be constructed in O(mlgn) time, where n is the number of states and m is the number of proof relies on the use of splay trees (designed by Sleator and Tarjan [J. In book: Sensitivity Analysis: Matrix Methods in Demography and Ecology, Publisher: Springer Nature, pp an application is given which concerns the analysis of a finite Markov chain.
A Markov chain is a stochastic process, but it differs from a general stochastic process in that a Markov chain must be "memory-less."That is, (the probability of) future actions are not dependent upon the steps that led up to the present state.
This is called the Markov the theory of Markov chains is important precisely because so many "everyday" processes satisfy the Markov. Lumping the States of a Finite Markov [email protected], [email protected] Abstract: In this work we show how the lumping of states of a finite Markov chain can be regarded as a special decomposition of its transition matrix called stochastic factorization.
(see for example the book by Horn and Johnson,Theorem ). We present an efficient finite difference method for the computation of parameter sensitivities that is applicable to a wide class of continuous time Markov chain models.
The estimator for the method is constructed by coupling the perturbed and nominal processes in a natural manner, and the analysis proceeds by utilizing a martingale. Finite Markov Chains With a New Appendix "Generalization of a Fundamental Matrix" Authors: Kemeny, John G., Snell, J.
Laurie. We prove that the optimal lumping quotient of a finite Markov chain can be con-structd in O(mlgn) time, where n is the number of states and m is the number of transitions. The proof relies on the use of splay trees [18] to sort transition weights.
Key words: bisimulation, computational complexity, lumpability, Markov chains, splay trees 1. The approach here makes it easy to compute the sensitivity of a variety of dependent variables calculated from the Markov chain. As an example of this flexibility, consider a recently developed demographic index, the number of years of life lost due to mortality (Vaupel and Canudas Romo ).
The transient states of the chains are age classes, absorption corresponds to death, and absorbing. Abstract. Perturbation theory for finite discrete-time Markov chains was systematically studied by several authors. In particular, Schweitzer [8] recognized the importance of the fundamental matrix for perturbation theory and obtained perturbation formulas for so-called regular perturbations of a discrete-time Markov chain.
[7] G. Chen and L. Saloff-Coste, Comparison of cutoffs between lazy walks and Markovian semigroups, Journal of Applied Probability (): – [8] P. Diaconis and L. Saloff-Coste, Logarithmic Sobolev inequalities for finite Markov chains, The Annals of Applied Probability, 6(3) (): – [9] J. Ding, E. Lubetzky and Y.
Peres, Total variation cutoff in birth-and-death. And @Sasha already explained that every finite Markov chain, even the periodic ones, has at least one stationary distribution. (3) and (4) make no sense to me.
You could try reading this or the quite accessible book Markov chains by James Norris. I bought this book to re-learn finite markov chain, because previously I used another book that is not very good. The good points of this book: does not assume too much mathematical background; classifies the states of finite markov chains and also the types of finite markov chains early on, so that I have a clear picture of what to expect in later chapters; most theorems are proved, though Reviews: 1.
Discounted approximations in risk-sensitive average Markov cost chains with finite state space 5 December | Mathematical Methods of Operations Research, Vol.
91, No. 2 Risk-sensitive continuous-time Markov decision processes with unbounded rates and Borel spaces.When the initial and transition probabilities of a finite Markov chain in discrete time are not well known, we should perform a sensitivity analysis.
This is done by considering as basic uncertainty models the so-called credal sets that these probabilities are known or believed to belong to, and by allowing the probabilities to vary over such sets.A Sufficient Condition for Ergodicity.
Classification of the State Space. Lumping of Markov Chains. | 2021-08-05 23:43:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8048856854438782, "perplexity": 709.880684977237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152085.13/warc/CC-MAIN-20210805224801-20210806014801-00591.warc.gz"} |
https://judge.u-aizu.ac.jp/onlinejudge/description.jsp?id=1372 | Time Limit : sec, Memory Limit : KB
English
## Problem F Three Kingdoms of Bourdelot
You are an archaeologist at the Institute for Cryptic Pedigree Charts, specialized in the study of the ancient Three Kingdoms of Bourdelot. One day, you found a number of documents on the dynasties of the Three Kingdoms of Bourdelot during excavation of an ancient ruin. Since the early days of the Kingdoms of Bourdelot are veiled in mystery and even the names of many kings and queens are lost in history, the documents are expected to reveal the blood relationship of the ancient dynasties.
The documents have lines, each of which consists of a pair of names, presumably of ancient royal family members. An example of a document with two lines follows.
Alice Bob
Bob Clare
Lines should have been complete sentences, but the documents are heavily damaged and therefore you can read only two names per line. At first, you thought that all of those lines described the true ancestor relations, but you found that would lead to contradiction. Later, you found that some documents should be interpreted negatively in order to understand the ancestor relations without contradiction. Formally, a document should be interpreted either as a positive document or as a negative document.
• In a positive document, the person on the left in each line is an ancestor of the person on the right in the line. If the document above is a positive document, it should be interpreted as "Alice is an ancestor of Bob, and Bob is an ancestor of Clare".
• In a negative document, the person on the left in each line is NOT an ancestor of the person on the right in the line. If the document above is a negative document, it should be interpreted as "Alice is NOT an ancestor of Bob, and Bob is NOT an ancestor of Clare".
A single document can never be a mixture of positive and negative parts. The document above can never read "Alice is an ancestor of Bob, and Bob is NOT an ancestor of Clare".
You also found that there can be ancestor-descendant pairs that didn't directly appear in the documents but can be inferred by the following rule: For any persons $x$, $y$ and $z$, if $x$ is an ancestor of $y$ and $y$ is an ancestor of $z$, then $x$ is an ancestor of $z$. For example, if the document above is a positive document, then the ancestor-descendant pair of "Alice Clare" is inferred.
You are interested in the ancestry relationship of two royal family members. Unfortunately, the interpretation of the documents, that is, which are positive and which are negative, is unknown. Given a set of documents and two distinct names $p$ and $q$, your task is to find an interpretation of the documents that does not contradict with the hypothesis that $p$ is an ancestor of $q$. An interpretation of the documents contradicts with the hypothesis if and only if there exist persons $x$ and $y$ such that we can infer from the interpretation of the documents and the hypothesis that
• $x$ is an ancestor of $y$ and $y$ is an ancestor of $x$, or
• $x$ is an ancestor of $y$ and $x$ is not an ancestor of $y$.
We are sure that every person mentioned in the documents had a unique single name, i.e., no two persons have the same name and one person is never mentioned with different names.
When a person A is an ancestor of another person B, the person A can be a parent, a grandparent, a great-grandparent, or so on, of the person B. Also, there can be persons or ancestor-descendant pairs that do not appear in the documents. For example, for a family tree shown in Figure F.1, there can be a positive document:
A H
B H
D H
F H
E I
where persons C and G do not appear, and the ancestor-descendant pairs such as "A E", "D F", and "C I" do not appear.
Figure F.1. A Family Tree
### Input
The input consists of a single test case of the following format.
$p$ $q$
$n$
$c_1$
.
.
.
$c_n$
The first line of a test case consists of two distinct names, $p$ and $q$, separated by a space. The second line of a test case consists of a single integer $n$ that indicates the number of documents. Then the descriptions of $n$ documents follow.
The description of the $i$-th document $c_i$ is formatted as follows:
$m_i$
$x_{i,1}$ $y_{i,1}$
.
.
.
$x_{i,m_i}$ $y_{i,m_i}$
The first line consists of a single integer $m_i$ that indicates the number of pairs of names in the document. Each of the following $m_i$ lines consists of a pair of distinct names $x_{i,j}$ and $y_{i,j}$ ($1 \leq j \leq m_i$), separated by a space.
Each name consists of lowercase or uppercase letters and its length is between 1 and 5, inclusive.
A test case satisfies the following constraints.
• $1 \leq n \leq 1000$.
• $1 \leq m_i$.
• $\sum^{n}_{i=1} m_i \leq 100000$, that is, the total number of pairs of names in the documents does not exceed 100000.
• The number of distinct names that appear in a test case does not exceed 300.
### Output
Output "Yes" (without quotes) in a single line if there exists an interpretation of the documents that does not contradict with the hypothesis that $p$ is an ancestor of $q$. Output "No", otherwise.
### Sample Input 1
Alice Bob
3
2
Alice Bob
Bob Clare
2
Bob Clare
Clare David
2
Clare David
David Alice
### Sample Output 1
No
### Sample Input 2
Alice David
3
2
Alice Bob
Bob Clare
2
Bob Clare
Clare David
2
Clare David
David Alice
### Sample Output 2
Yes
### Sample Input 3
Alice Bob
1
2
Clare David
David Clare
### Sample Output 3
Yes | 2021-11-30 00:17:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.771433413028717, "perplexity": 611.1142187258123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358847.80/warc/CC-MAIN-20211129225145-20211130015145-00417.warc.gz"} |
https://www.gamedev.net/blogs/entry/2150190-digitanks---the-early-bloomer/ | • entries
37
52
• views
42754
# Digitanks - The Early Bloomer
554 views
One of the first visual improvements I made to Digitanks was to add bloom. Ah, bloom, the often used and eternally abused layman's visual enhancement. It makes a simple-looking scene look more sophisticated by taking out that embedded polygonal look and imbuing the frame with gradients and natural, circular shapes. It's a relatively easy way to upgrade your game's visuals from 1998 to 2008 with not too much development effort. But, if you overdo it, it can look nasty and terrible.
One of the most notorious abuses of Bloom was in the original Fable. This game put bloom everywhere. Every bright portion of the scene got bloom on it. The problem is, that's not the way light works in real life. In reality objects will gain that halo if they have a very bright light falling on them, or if they're emitting light. In Fable, they made things glow just because. The guy's clothes, the sidewalk, the rocks and leaves, everything was glowy and it gave me a headache after a while. It was just uncomfortable to be looking at all of this glowy crap. I suppose it looks nice in that screenshot but not so much when you're playing the game.
In general, the rule of thumb you want to use is that if the object is either emitting light or a very, very bright light is falling onto the object it can get bloom. If you look at a stoplight at night, you'll see a red glaze around the light. (Hopefully you won't see a green glaze around the light, because that means you need to hit the accelerator.) That glaze looks really neat in video games. But you'll never see that glaze appear on the sidewalk. On a very sunny day in the desert, you might see this glaze on a white surface that's next to a dark area (say, the entrance to a building?) but you won't see it on the ground or on people's shirts.
I suppose you can't much blame Fable for getting it wrong, it was one of the first games to implement bloom. They were going for a semi-realistic, if not stylized environment where bloom simply doesn't fit all that well. It was a new technology, and they simply didn't know what the most effective use of it was. The game that I have to give credit to for the correct use of bloom is Tron 2.0. The rules I just explained to you about regular objects glowing? Well Tron 2.0 threw that out the window. They could afford to do that though, because they weren't going after that semi-realistic style. The primary difference is that Tron 2.0 has mostly dark backgrounds, so the glow of the bloom doesn't wash out the image and cause damage to my vision. Tron 2.0 even has more bloom than Fable did, but it ended up working better because of the environments of the game.
More recently in video games the technology of "high dynamic range rendering" has percolated into most AAA titles. This change has provided a much better use of bloom for photorealistic scenes. In HDR rendering, the scene is rendered with an infinite (well, almost infinite) range of light values, from totally dark to bright as the sun, and then a small slice of those values is removed and rendered to the screen. Parts of the scene that are too dark in that slice are rendered as completely black, and parts that are too bright are rendered as completely white. Bloom is used to highlight the parts of the scene that are overexposed and completely white. This is much closer to what happens in actual photography and with the human eye. One of the first games to use HDR rendering was Valve's Lost Coast demo, and the benefits of using bloom in this situation were clear. Lost Coast looked fantastic, and HDR is now a standard feature in all Valve games. The use of bloom this way in HDR matches what we usually see with our eyes and follows the rule I put forward before - only light sources and very, very bright surfaces receive the glowy effect.
So getting back to Digitanks. Bearing our examples in mind I set forth to add the perfect amount of bloom to Digitanks. At a technical level, bloom is just a blurring of the bright portions of the scene. The first step is to render the game scene to an off-screen frame buffer. Then, a filter is run over the scene so that the bright portions of the image are isolated. Lastly, the blurred images are superimposed over the final image. Let's take a look at the initial scene that we'll be dealing with. We have some bright elements like the shields, combined with some darker elements. Overall the scene is fairly bright. The first thing we need is a shader that passes the bright elements of the scene through while omitting the dark elements. For this I wrote what I call a "bright-pass" shader. Here's the glsl sources for it, non-technical people can just skip it:
uniform sampler2D iSource; uniform float flBrightness; uniform float flScale; void main(void) { vec4 vecFragmentColor = texture2D(iSource, gl_TexCoord[0].xy); float flValue = vecFragmentColor.x; if (vecFragmentColor.y > flValue) flValue = vecFragmentColor.y; if (vecFragmentColor.z > flValue) flValue = vecFragmentColor.z; if (flValue < flBrightness && flValue > flBrightness - 0.2) { float flStrength = RemapVal(flValue, flBrightness - 0.2, flBrightness, 0.0, 1.0); vecFragmentColor = vecFragmentColor*flStrength; } else if (flValue < flBrightness - 0.2) vecFragmentColor = vec4(0.0, 0.0, 0.0, 0.0); gl_FragColor = vecFragmentColor*flScale; }
First, this shader finds the brightest value of each pixel. If the pixel is brighter than flBrightness, then it saves the value. There's a slight ramp near the cutoff point where the value is ramped in softly so that abrupt changes in pixel brightness don't cause lines. Three copies of the scene are made at three different resolutions, and the shader is applied to each copy at three different intensities. This is done to take advantage of the hardware acceleration's very fast image scaling operations. The highest resolution takes only the very brightest portions of the image, and the lowest resolution receives more less-bright portions of the image. This is done by passing progressively lower values into the flBrightness uniform in the shader.
One important thing to note about this bright-pass shader is that it doesn't take the average pixel brightness, but rather the brightest color value. That is, it doesn't take (r+g+b)/3 but rather looks at the brightest of the three to determine if it passes the filter. That's important, because it allows bright solid colors to pass. For example, the tanks in the game all have bright solid colors, in this case solid blue. Blue ends up having a bright value, but red and green have values of 0. If we had taken the average, we would have gotten (0+0+1)/3 = 0.3, which wouldn't have passed the filter. However, since we use only the brightest pixel to test, this bright blue value passes.
Once we have our bright scene portions isolated, then we can blur them. Each frame is blurred using a simple gaussian filter. I won't post the shader for that since it's mostly covered rather well in other places, but you can see the results. Just like before, the blur is applied to each of the three different resolutions. Since the lower resolution frame gets the same blur as the higher resolution frame, its blur is actually much more pronounced once it gets resampled up to the final image size.
The next step is to combine these three blurs together into a single frame. This is done using additive blending, which is fantastic for creating effects that seem to be glowing. The values for brightness were chosen carefully so that the parts of the scene that I wanted to stand out can be seen clearly. For example, the tank that has his shields up the highest (it has more energy for its shields since hasn't moved as much as the others) has a very bright bloom on its shields, while the tank with less shields has barely any bloom on his shields. So, the bloom actually helps to highlight the shield strength of the tank. The movement and range indicators also get bloom highlighting on them, to help them stand out from the terrain shader.
Lastly, the bloom gets combined with the original scene, again with additive blending.
That's it! The scene looks much more lively now. Notice the slight haze around the targeting trajectories and the brighter targeting rings on the ground, and the bright shields bleeding their energy into the surrounding terrain. The terrain also has gained a warmer feel to it. It complements the scene, and it doesn't get in the way, I'm pretty satisfied with this bloom.
If you want more technical details on how to implement bloom with glsl, there's a great tutorial on Phillip Rideout's website. Thanks for reading!
That is a nice little write up you have posted - good job. I've been quietly reading your journal entries, and am impressed at the rate at which you have been progressing. Are you working on this full time, or as a side project?
One thing that I think would be interesting to see is a with and without bloom screenshot similar to your final image. That would really emphasize the difference between the two, and I'm interested to see the difference in contrast between the two images.
Keep up the good work!
a much better method for bloom than what you have now (though it is more expensive)
Is to have a bloom buffer, i.e. instead of taking the existing screen + doing a bloom with that.
start with a black texture, render the depths, and then render the meshes that you want bloom to occur with. This way you gain far better control over the actual "glowingness"
Thanks guys.
Don't be too impressed at my rate of progress, I've been working on this since April and posting to my blog all the while, but only posting to this journal once a day since I purchased an account a week or two ago :)
I did include a shot without bloom, it's up above! You want to see them side by side though? Here you go:
zedz, I was actually thinking of a way of doing bloom that would afford a large degree of control, which is to have each material for every model have a "bloom map" which defines the amount of bloom that material emits. Essentially it'd be like the "bloom buffer" idea that you mentioned, except that when the models are rendered to the bloom map, they are rendered using the bloom map instead of the normal diffuse map. That way you can control exactly what gets bloomed and what doesn't by authoring these bloom maps. That's a lot of work though, I decided against it :) | 2018-03-21 05:42:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3857949376106262, "perplexity": 1458.646268563626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647576.75/warc/CC-MAIN-20180321043531-20180321063531-00174.warc.gz"} |
https://math.stackexchange.com/questions/2056745/group-which-doesnt-have-mathrmautg?noredirect=1 | # Group, which doesn't have $\mathrm{Aut}(G)$. [duplicate]
Does there some groups which doesn't contain $(\mathrm{Aut}(G),*)$ (only $\mathrm{id}$), where $*$ is composition. I thought that $\mathbb{R}$ could be such group, but I can build bijection between $\mathbb{R}$ and $\mathbb{R}$ ($[i,i+1) \mapsto (-i-1,-i]$).
Have no idea how to build such group. Any hints? It would be great if you're have some link about it.
## marked as duplicate by Dietrich Burde, Derek Holt group-theory StackExchange.ready(function() { if (StackExchange.options.isMobile) return; $('.dupe-hammer-message-hover:not(.hover-bound)').each(function() { var$hover = $(this).addClass('hover-bound'),$msg = $hover.siblings('.dupe-hammer-message');$hover.hover( function() { $hover.showInfoMessage('', { messageElement:$msg.clone().show(), transient: false, position: { my: 'bottom left', at: 'top center', offsetTop: -7 }, dismissable: false, relativeToBody: true }); }, function() { StackExchange.helpers.removeMessages(); } ); }); }); Dec 13 '16 at 9:24
The only two groups for which $\mathrm{Aut}(G)$ is trivial are the trivial group $G = \{1\}$ and the cyclic group on two elements $G = \{\pm 1\}$.
To see this, first note that if $\mathrm{Aut}(G)$ is trivial, then $G$ is abelian, because if $h \in G$ is not central then $g \mapsto h^{-1}gh$ is a non-trivial automorphism. Now if $(G,+)$ (switching to additive notation) is an abelian group, then $g \mapsto -g$ is an automorphism (being its own inverse). If $\mathrm{Aut}(G)$ is trivial, this means that this automorphism is the identity, so $g = -g$ for each element $g \in G$.
In other words, all elements have order 2. This makes $G$ a vector space over the field $\mathbb F_2$ with two elements, simply by defining the multiplication with 0 to always give the identity element of $G$, and letting multiplication with 1 be the identity. Now, fix a basis $\{g_i\}_{i \in I}$ of $G$ as a vector space over $\mathbb F_2$.
If $|I| \geq 2$, you can define an automorphism of $G$ by exchanging $g_i$ and $g_j$, for some $i \neq j$, and leaving all other basis vectors fixed. Hence, $|I| \leq 1$, and this leaves us only with the two groups mentioned in the first sentence. | 2019-08-24 02:50:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8649355173110962, "perplexity": 261.526068526045}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319470.94/warc/CC-MAIN-20190824020840-20190824042840-00130.warc.gz"} |
http://math.stackexchange.com/questions/231824/how-can-i-calculate-the-radical-of-an-ideal-in-ring-bbb-z-n | # How can I calculate the radical of an ideal in ring ${\Bbb Z}_n$?
I learned the concept radical of an ideal from this wikipedia article. I tried some examples and I found that it's not easy to find $Rad(I)$. (That article gives some examples when $R={\Bbb Z}$.) For examples, let $R$ be the ring ${\Bbb Z}_n$ and let $I=\langle m\rangle$ where $m\in {\Bbb Z}_n$. How can I find $Rad(I)$? One properties I read in the article may be useful is
$Rad(I)$ is the intersection of all the prime ideals of $R$ that contain $I$.
But how can I find all the prime ideals of ${\Bbb Z}_n$ that contains $\langle m\rangle$?
I didn't find related materials in some introduction-level abstract algebra textbooks. Some happen to give this concept in exercises, say, "show that $Rad(I)$" is an ideal. Can any one come up with some useful references for this topic?
-
I assume your notation means $\mathbb{Z}_n=\mathbb{Z}/n$. If $m=p_1^{n_1}\cdots p_k^{n_k}$ is the prime decomposition then $Rad(m)=(p_1\cdots p_k) \trianglelefteq \mathbb{Z}_n$. In particular, if $m$ is squarefree, then $(m)$ is a radical ideal in $\mathbb{Z}_n$. – Ralph Nov 7 '12 at 2:11
@Ralph, doesn't $n$ play a role as well? For instance, the radical of $m \bmod n$ will contain all the nilpotent elements and for general $n$ there are nontrivial nilpotent elements in $\mathbb Z/n$. – lhf Nov 7 '12 at 2:17
If $m$ chosen is the unique one satisfying $m | n$, then what Ralph said is true. – user27126 Nov 7 '12 at 2:52
Let $g = gcd(n,m)$. Write $m=gq$ with $q,n$ coprime. Then $q$ is a unit in $\mathbb{Z}_n$ and $(m)=(g)$ in $\mathbb{Z}_n$. Now if $g=p_1^{n_1}\cdots p_k^{n_k}$ then I think we have $Rad(m)=(p_1\cdots p_k)$. – Ralph Nov 7 '12 at 3:08
I assume your notation means $\mathbb{Z}_n=\mathbb{Z}/n$. Set $\bar{m} := m + n\mathbb{Z} \in \mathbb{Z}_n$. Let $g=gcd(n,m)$ have the prime decomposition $g=p_1^{n_1}\cdots p_k^{n_k}$ and set $g_0 := p_1\cdots p_k$. Then we have $$Rad(\bar{m})=(\overline{g_0}) \trianglelefteq \mathbb{Z}_n$$
Proof: By writing $m=gq$, $\bar{q}$ is a unit in $\mathbb{Z}_n$ and hence $(\bar{m})=(\bar{g})$.
$(\supseteq)$ Let $\bar{x} \in (\overline{g_0})$. There are $y,z \in \mathbb{Z}$ s.t. $x=g_0y+nz=:g_0w$. Choose $l > 0$ s.t. $g \mid g_0^l$ (say $g_0^l=gh$). Then $\bar{x}^l=\overline{g_0}^l\bar{w}^l=\bar{g}\bar{h}\bar{w}^l \in (\bar{g})=(\bar{m})$. Thus $\bar{x} \in Rad(\bar{m})$.
$(\subseteq)$ Let $\bar{x} \in Rad(\bar{m})$. There is $l > 0$ s.t. $\bar{x}^l \in (\bar{m})=(\bar{g})$, i.e. there is an integer $y$ with $\bar{x}^l=\bar{g}\bar{y}=\overline{gy}$. Hence there is an integer $z$ with $x^l=gy+nz=:g_0w$. Thus $p_i \mid x^l$, whence $p_i \mid x$ for all $1 \le i \le k$. Consequently $g_0=p_1 \cdots p_k \mid x$ (say $x=g_0h$) and hence $\bar{x}=\overline{g_0}\bar{h} \in (\overline{g_0})$. | 2014-08-31 03:04:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9759781360626221, "perplexity": 103.20192296981324}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500835872.63/warc/CC-MAIN-20140820021355-00152-ip-10-180-136-8.ec2.internal.warc.gz"} |
http://www.r-bloggers.com/why-are-r-users-so-damn-stingy/ | # Why are R users so damn Stingy?!
April 26, 2014
By
(This article was first published on Econometrics by Simulation, and kindly contributed to R-bloggers)
YES R!
Looking at rapporter's recent blog post on "R activity Around the World", I am shocked by how few users actually monetarily support the R Foundation. Looking at the US for instance there are only 27 donars representing a little more than 0.1% of those registered on R users groups (of which there is nearly 27,000 members) which is a small estimate of the user base which is hard to estimate.
To get a better idea of the user base according to the same report it states that over 8 million packages have been downloads from the US. Imaging that each active user may download on average 50 packages this gives a US user base of no less than 160,000 active users as an extreme lower bound. This notation of large usage is complemented by Robert A. Muenchen's long running article tracking usage statistics of statistical software.
Yet we know that R users represent some of the highest earning professional skills we know that if users were not using R they would likely end up paying for proprietary software which costs thousands of dollars in order to do the same tasks. And still the number of actual donors is abysmal.
So Why Do R Users not Contribute to the Foundation?
1. Nobody knows what the foundation does!
The R foundation lists three things under its purpose:
• Provide support for the R project and other innovations in statistical computing. We believe that R has become a mature and valuable tool and we would like to ensure its continued development and the development of future innovations in software for statistical and computational research.
• Provide a reference point for individuals, instititutions or commercial enterprises that want to support or interact with the R development community.
The first two points (1. provide general support for R and 2. provide public face for R) are critical points for which R has continued to be handicapped in gaining popularity I believe largely because the foundation has been underfunded and inactive. I believe that being a board member on the R-Foundation should be seen with similar prestige as other massive open source projects such as Firefox, Apache, and GNOME.
Yet when I called the foundation to talk with someone, I cannot even be sure I got the write number (despite it being listed on the foundation's webpage). It is clear to me that the foundation has taken an extreme back position in promoting the public image of R, leaving it almost entirely up to the user base, other foundations, and corporations to promote the language.
This has worked fine, yet the roles outlined above are important roles which should not be left by the wayside.
2. It is pain in the %$# to contribute to the foundation! In the process of writing this post I attempted to call the phone number on the R-Project Foundation website. The person I got was not happy to talk to me and well did not seem interested in talking to me at all. When I looked at contributing to the foundation I found a PDF form that was supposed to be printed off and mailed with check or credit card information to the foundation! What decade are we in? I am surprised even 27 people in the US gave to the foundation. Besides the R-Project website interface clearly have not undergone any major renovations in the last ten years. Who uses frames anymore? Why would anybody fill out a pdf document to mail in when the standard for professional websites is to provide a secure interface for making payments online? 3. Projects funds usually cover software budgets And since R is free nobody factors in the software cost to their budget. Professionals never want to pay for something out of their own pocket which could be paid for out of their project budget. However, R is so clearly free that it is impossible for a project to allocate a donation to the continued support of R even though the program administrators might be very willing to provide such funds. I therefore propose an optional annual "Maintenance Fee" that will provide businesses and institutions with a (expense account) justification for funding R. Such a service could come with priority support on R mailing lists or forums with maybe three tears (Gold-$1000, Silver- $500, and Bronze$100, maybe). Users could post their status when asking questions and other users who respect the donors willingness to pay to support the R-Project will be more generous with their time when answering such questions. Such a system would allow for projects grants and funds to channel some small portion of their resources to help support the continued existence of R.
4. Users do not like paying to a single service
This is something that find I particularly difficult. My logic goes, why give to R when there are so many people here in need in Mozambique? But how do I decide which organization to give to say Free The Girls an organization which provide an alternative source of income for prostitutes or Massana an organization which provides food and education to street kids in Mozambique (I personally know board members of both these organizations and they are excellent people who serve faithfully). Then I must wonder how much to give and in what increments etc. Long and short of it, I give much less than I intend to and when I do give it is usually for a friend raising money for this thing or that thing.
I am therefore suggesting that if you are like me then please consider giving money through flattr. It is an organization which acts as a clearinghouse of donations. You give a fixed amount to flattr each month and flattr redistributes 90% of those funds to the organizations that you have chosen. The other 10% it keeps for itself. It seems to me that this is an excellent way for users of R to fund R as well as other initiatives which seem worthy.
Since I was unable to contact the R-Foundation I have set up a flattr account in there name which people can donate to using the following button:
RFoundation
As soon as I am contacted by a verified Foundation Member, I will transfer over complete control of the flattr account. (Yes it is strange that there is no verification step to ensure that creators are actually the ones who set up the flattr accounts)
But Why Give to the R-Foundation?
I have frequently wondered why it is that despite a super abundance of resources R maintains its reputation as a language which has a steep learning curve. I personally attribute this reputation primarily to the horrible user interface that R new users routinely encounter when going to R-Project.org. It is frankly embarrassing to be an R user when the platform is so bad at representing itself.
Likewise the foundation clearly needs to have some resources to fund staff members. This staff could focus on developing resources in order to provide basic support to media, business, universities, students, etc. The R-Foundation compares its existence to the Apache Foundation and the GNOME Foundation yet despite the tremendous success of R, it has no official public image to speak of if I can gauge from the webpage or the failed 5 minute phone conversation I had with the official number. I believe, all users of R will benefit by the language representing itself more professionally.
With additional funding the foundation could also provide additional support to making R user conferences appear more professional as well support the development of the R-Journal and other R other publications.
However, the primary goal of the foundation which could be furthered through the support of a wider donor base is the continued development of resources to facilitate the use of R by existing users as well as continuing to develop new tools for new R users.
Thank for reading! A good rule might be to think about how much you would be willing to pay to use R if it were proprietary then give say 5% of that.
If you are a frequent reader of my blog please consider flattring me! In the last year I have made 3 dollars and 62 cents from people flattring my blog :)
Econometrics by Simulation | 2014-09-02 11:38:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1738090217113495, "perplexity": 1477.200595424151}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535921957.9/warc/CC-MAIN-20140901014521-00347-ip-10-180-136-8.ec2.internal.warc.gz"} |