
The internet is big. Very big. It contains all human knowledge and extends across the wide world to connect us, lets us share pictures of food, and allows us to view a plethora of cat videos. But one question that has always bothered me is, "how much does the internet eat?"

To answer this question we need to set some ground rules. When I say the internet, I mean the **information** transferred between all of the users in the world. So how much is that? Well, Cisco says that it's about 167 terabits per second. So some quick Google math says that this becomes about 1.83618×10^14 (1.8 followed by 14 more digits) bits per second. WOW, that's a lot... What about in a single year? A whopping 5.7905772×10^21 bits! Now that is an inconceivably large number. By comparison, the number of stars in the universe is around 6×10^22.

So now we have to determine how much energy the internet uses. There are a lot of ways to define this, but let's use the most interesting. Did you know that information itself contains energy? This isn't as crazy as it seems, and recently experimental physicists have actually extracted usable energy from information alone. The way to think about energy is in two parts. The first part consists of what we all think of as energy, like heat, lifting something, or throwing a ball. The other part is called entropy, or disorder. The way to think about entropy is to consider how many possible ways there are to order something. For example, the reason that people bet on a 7 when rolling two dice is that 7 is always the most likely to appear, because there are more combinations that produce it on two dice than any other number. So how much entropy does a bit of information have? Well, it's just either 1 or 0, so there are only two configurations. The theoretical energy from this is known as the Landauer limit: erasing one bit of information costs a minimum of kT·ln(2) of energy.

Now don't forget that we're looking purely at the information energy, and not including the heat or mechanical energy needed to run the connections. So next time you gorge yourself on some awesome deliciousness, do it with a free conscience, knowing that there are no starving internets in Africa that could eat for just a dollar a day.

The post Pointless Calculations: How much food does the internet eat? appeared first on Self Scroll.


*This is part 1 of 3 about using graph theory to work with data. Part 2 will be published on Sunday, June 24th.*

Graph theory is a branch of mathematics, first introduced in the 18th century as a way to model a puzzle. Graphs are outstanding at producing streamlined, abstract models of problems. The framework of graph theory enables mathematicians and computer scientists to apply many known principles, algorithms, and theorems to their model.

Fundamentally, a graph is extremely simple. It is made up of two kinds of elements: vertices and edges (often called nodes and links in computer science).

Let's look at using graph theory to quickly solve a problem. Suppose I run a sports league with seven teams. I want each of those teams to play exactly three games. Is this possible? It's not, and I can use graph theory to prove it.

Let G be a graph with 7 vertices. Let each vertex represent a team, and let each edge represent a game between teams. There is a principle, known as "the handshake lemma", which states that a graph must have an *even* number of vertices with *an odd number of edges* (usually phrased as *odd degree*). This is a consequence of the practical fact that every edge must connect to two vertices, one on each end. As we have an odd number of vertices, and each of those vertices has an odd number of edges, the total degree of the graph is 21 (i.e., 7 × 3). In any graph, the number of edges must be half the total degree, but 21 is odd, which means this graph cannot exist. As the graph cannot exist, we know that such a tournament schedule cannot either.

We can use graphs to efficiently search for relationships or paths between arbitrary elements. A typical use case is finding an optimal path between two points, given some kind of cost parameters. A simple but fun example of using this in practice was a "snake AI" that I built with some friends and colleagues.

Graphs allow us to model complex, intuitive relationships between data points. Unlike many conventional methods of structuring data, which focus on the *format* of relationships, graphs focus on the *topology* of relationships in the dataset.

This might look rather simple, but the terminology, categories, and analysis of graphs can quickly get hairy. Let's take a deeper look into graph theory and graph modeling.

Graph theory, like any subject, has many specific terms for parts of a graph. First, we should take a quick drive past set theory as it applies to graph elements, which is necessary when talking about groups of vertices or edges.

Common notations:

- G: a graph. This letter varies; for example, when talking about two graphs, we may say G and H, or G¹ and G².
- V(G): the set of all vertices in the graph.
- E(G): the set of all edges in the graph.
- |X|: the number of elements in X. For example, |E(G)| = "the number of edges in G".

Key terms:

- Order: The number of vertices in a graph. The order is equal to |V(G)|.
- Adjacent: A vertex *v* is adjacent to a vertex *u* IFF (if and only if) there is an edge from *v* to *u*.
- Connected: Two vertices *u* and *v* are connected IFF there exists a sequence of consecutively adjacent vertices beginning with *u* and ending with *v*. That is, you can follow edges from *u* and eventually reach *v*.
- Degree: A vertex *v*'s degree is the number of edges incident to *v*. Notated as d(*v*).

Before we get too deep into graph theory or problems, let's look at the basics of programming with the graph data structure. There are a few ways to represent graphs in our programs; we'll look at the three most common, and the basic tradeoffs.

Each of these techniques has its own strengths and weaknesses. As with any data structure, you should choose your implementation based on the way(s) you intend to query the data, and the anticipated "shape" of the data. In particular, the edge:vertex ratio and the degree of the highest-degree vertices tend to be the prevailing metrics.

Edge lists are an extremely simple way to represent a graph. They are just made up of... a list of edges, frequently just as [source, destination] vertex pairs. This is well suited to cheap insertion of an edge, or listing all edges, but is slow for many other query types. For example, to find all vertices adjacent to a given vertex, every edge must be examined.

An adjacency matrix is a two-dimensional matrix, with the graph's vertices as rows and columns. A given intersection is true if those vertices are adjacent, or false if they are not (note: if the graph is directed, be sure to define that relationship in rows vs columns).

Adjacency matrices perform strongly on edge lookups, with a constant-time lookup given a pair of vertex IDs. They tend to be slow for other operations; for example, listing everything adjacent to a vertex requires examining every vertex in the graph.

They also usually require more space than other models, especially with sparse graphs (graphs with "few" edges). An adjacency matrix has to reference every vertex against every vertex, requiring O(|V(G)|²) space.

An adjacency list keeps, for every vertex, a list of every vertex adjacent to it. It bears resemblance to both edge lists and adjacency matrices.

It allows constant-time lookup of a vertex's neighbors, which is useful in many query and pathfinding scenarios. It is slower for edge lookups, as the whole list of vertices adjacent to *u* must be examined for *v* in order to find edge *uv*.

This usually uses far less space than an adjacency matrix (as it does not have to track edges that *do not* exist), but it can get just as big in a graph with many edges.

Adjacency lists are the usual choice for "general purpose" use, though edge lists and adjacency matrices have their own strengths, which may suit a particular use case.

Abstracting graph access is crucial if your graph is going to span more than a single function call. As in any programming context, over-exposing internals leads to over-reliance on knowledge of those internals across scopes in the codebase... which leads to slow development, and a great deal of bugs.

You can also design your own bugs... ahem, techniques, to optimize intended use cases. For example, if you have to list all edges, consider keeping a separate internal list, rather than iterating over all vertices.

There are many libraries you can use, such as gonum in Go, or networkx in Python, to get pre-built abstractions. However, writing a does-everything abstraction is hard, and I routinely curse random design decisions of such libraries. I'd suggest writing your own implementation if you have the interest, or room to experiment. We'll do this in part 2, in Go.

The post Data Modeling With Graph Theory - Part 1 - Intro appeared first on Self Scroll.


The history of physics is filled with great ideas that you have heard of, like the Standard Model, the Big Bang, General Relativity, and so on. But it's also filled with great ideas that you probably haven't heard of, like the Sakata Model, Technicolor theory, the Steady State Model, and Plasma Cosmology. Today, we have theories that are very fashionable, but without any evidence for them: supersymmetry, grand unification, string theory, and the multiverse.

Because of the way the field is structured, mired in a groupthink of ideas, careers in theoretical high-energy physics that focus on these topics are often successful. Meanwhile, choosing other topics means going it alone. The idea of "beauty" or "naturalness" has been a guiding principle in physics for a long time, and has led us to this point. In her new book, *Lost in Math*, Sabine Hossenfelder convincingly argues that continuing to follow this principle is exactly what's leading us astray.

Imagine you were given the theoretical problem of picking two billionaires off of a list, and estimating the difference in their net worths. Imagine they're anonymous, and that you won't know which one is worth more, where they rank on the Forbes billionaires list, or how much either one is actually worth at the moment.

We can call the first one *A*, the second one *B*, and the difference between them *C*, where *A − B = C*. Even with no other knowledge about them, there's one important thing you can state about *C*: it's *very* unlikely to be much, much smaller than *A* or *B*. In other words, if *A* and *B* are both in the billions of dollars, then it's likely that *C* will be in the billions too, or at least in the hundreds of millions.

For example, *A* might be Pat Stryker (#703 on the list), worth, let's say, $3,592,327,960. And *B* might be David Geffen (#190), worth $8,467,103,235. The difference between them, or *A − B*, is then -$4,874,775,275. *C* has a 50/50 shot of being positive or negative, but in most cases, it's going to be of the same order of magnitude (within a factor of 10 or so) as both *A* and *B*.

But it won't always be. For example, the majority of the over 2,200 billionaires in the world are worth less than $2 billion, and there are hundreds worth between $1 billion and $1.2 billion. If you happened to pick two of them at random, it wouldn't surprise you terribly if the difference in their net worths was only a few tens of millions of dollars.

It might, however, surprise you if the difference between them was only a few thousand dollars, or was zero. "How unlikely," you'd think. But it might not be all that unlikely after all.

After all, you don't know which billionaires were on your list. Would you be shocked to learn the Winklevoss twins, Cameron and Tyler, the first Bitcoin billionaires, had identical net worths? Or that the Collison brothers, Patrick and John (co-founders of Stripe), were worth the same amount to within a few hundred dollars?

No. This wouldn't be surprising, and it reveals a truth about large numbers: in general, if *A* is big and *B* is big, then *A − B* will also be big... but it won't be if there's some reason that *A* and *B* are very close together. The distribution of billionaires isn't entirely random, you see, and so there may be some underlying reason for these two seemingly unrelated quantities to actually be related. (In the case of the Collisons or Winklevosses, literally!)

This same property holds true in physics. The electron, the lightest particle making up the atoms we find on Earth, is more than 300,000 times less massive than the top quark, the heaviest Standard Model particle. The neutrinos are at least 4 million times lighter than the electron, while the Planck mass, the so-called "natural" energy scale for the universe, is some 10¹⁷ (or 100,000,000,000,000,000) times heavier than the top quark.

If you weren't aware of any underlying reason why these masses should be so different, you'd assume there was some reason for it. And perhaps there is one. This kind of thinking is called a fine-tuning or "naturalness" argument. In its simplest form, it states that there should be some sort of physical explanation for why parts of the universe with very different properties have those differences between them.

In the 20th century, physicists used naturalness arguments to great effect. One way to explain great differences in scale is to impose a symmetry at high energies, and then to study the consequences of breaking it at a lower energy. A number of great ideas came out of this reasoning, particularly in the field of particle physics. The gauge bosons of the electroweak force arose from this line of thought, as did the Higgs mechanism and, as was confirmed just a few years ago, the Higgs boson. The entire Standard Model was built on these kinds of symmetries and naturalness arguments, and nature happened to agree with our best theories.

Another great success was cosmic inflation. The universe needed to have been fine-tuned to a great degree in its early stages to produce the universe we see today. The balance between the expansion rate, the spatial curvature, and the amount of matter-and-energy within it must have been extraordinary; it appears unnatural. Cosmic inflation was a mechanism proposed to explain it, and it has since had several of its predictions confirmed, such as:

- an almost scale-invariant spectrum of fluctuations,
- the existence of super-horizon overdensities and underdensities,
- with density imperfections that are adiabatic in nature,
- and a maximum temperature reached in the early, post-Big Bang Universe.

But despite the successes of these naturalness arguments, they do not always pan out.

There's an unnaturally small amount of CP-violation in the strong interactions. The proposed solution (a new symmetry, called the Peccei-Quinn symmetry) has had zero of its new predictions confirmed. The difference in mass scale between the heaviest particle and the Planck scale (the hierarchy problem) was the motivation for supersymmetry; again, zero of its predictions have been confirmed. The unnaturalness of the Standard Model has led to new symmetries through grand unification and, more recently, string theory, which (again) have had none of their predictions confirmed. And the unnaturally low-but-non-zero value of the cosmological constant has led to predictions of a certain type of multiverse that cannot even be tested. This too, of course, is unconfirmed.

Yet unlike in the past, these dead ends continue to represent the fields where the leading theorists and experimentalists cluster to investigate. These blind alleys, which have borne no fruit for literally two generations of physicists, continue to attract funding and attention, despite possibly being disconnected from reality entirely. In her new book, *Lost in Math*, Sabine Hossenfelder adroitly confronts this crisis head on, interviewing mainstream scientists, Nobel Laureates, and (non-crackpot) contrarians alike. You can feel her frustration, as well as the desperation of many of the people she speaks to. The book answers the question of "have we let wishful thinking about what secrets nature holds cloud our judgment?" with a resounding "yes!"

The book is a wild, deep, thought-provoking read that would make any reasonable person in the field who's still capable of introspection doubt themselves. Nobody likes confronting the possibility of having wasted their life chasing a phantasm of an idea, but that's what being a theorist is all about. You see a few pieces of an incomplete puzzle and guess what the full picture really is; most times, you're wrong. Perhaps, in these cases, all our guesses have been wrong. In my favorite exchange, she interviews Steven Weinberg, who draws on his vast experience in physics to explain why naturalness arguments are good guides for theoretical physicists. But he only manages to convince us that they were good principles for the classes of problems they previously succeeded at solving. There's no guarantee they'll be good guideposts for the current problems; in fact, they demonstrably have not been.

If you are a theoretical particle physicist, a string theorist, or a phenomenologist (particularly if you suffer from cognitive dissonance) you will not like this book. If you are a true believer in naturalness as the guiding light of theoretical physics, this book will irritate you tremendously. But if you're someone who isn't afraid to ask the big question of "are we doing it all wrong," the answer may be a big, uncomfortable "yes." Those of us who are intellectually honest physicists have been wrestling with this discomfort for many years now. In Sabine's book, *Lost in Math*, this discomfort is now made accessible to the rest of us.

The post Is Theoretical Physics Wasting Our Best Living Minds On Nonsense? appeared first on Self Scroll.


Now define the heuristic, the Euclidean distance (the distance formula). This calculation guides the algorithm from the current point to the goal.

```cpp
#include <cmath>

float euclideanDistance(point a, point b) {
    return std::pow(std::pow(a.x - b.x, 2.0) + std::pow(a.y - b.y, 2.0), 0.5);
}
```

Now that `point` and `euclideanDistance` are defined, let's generate a random maze.

```cpp
#include <algorithm>

// Recursively carve a maze. Walls are 0, paths are 1.
void randomMaze(int maze[HEIGHT][WIDTH], point p) {
    // Second-degree neighbors (two cells away), tagged with the direction travelled.
    point rn[4] = {
        point(p.x - 2, p.y, direction::L),
        point(p.x + 2, p.y, direction::R),
        point(p.x, p.y + 2, direction::U),
        point(p.x, p.y - 2, direction::D)
    };
    std::random_shuffle(&rn[0], &rn[4]); // deprecated since C++14; prefer std::shuffle

    for (point cn : rn) {
        if (cn.inBounds() && !maze[cn.y][cn.x]) {
            // Knock out the first-degree wall between p and cn...
            if (cn.d == direction::L)
                maze[cn.y][cn.x + 1] = 1;
            else if (cn.d == direction::R)
                maze[cn.y][cn.x - 1] = 1;
            else if (cn.d == direction::U)
                maze[cn.y - 1][cn.x] = 1;
            else if (cn.d == direction::D)
                maze[cn.y + 1][cn.x] = 1;
            // ...then convert cn itself to a path and recurse from it.
            maze[cn.y][cn.x] = 1;
            randomMaze(maze, cn);
        }
    }
}
```

This code is interesting because it generates a random maze using the recursive call stack.

The code starts by creating an array of second-degree neighbor points. Next, it shuffles those points into a random order. Then it iterates over each second-degree neighbor. If the second-degree neighbor is `inBounds` and the point is a wall: convert the point to a path, convert the first-degree neighbor (the wall in between) to a path, and make a recursive call on the second-degree neighbor.

*Note, walls are the value `0` and paths are the value `1`.*

Now `point`, `euclideanDistance`, and `randomMaze` are defined. Let's code A*.

```cpp
#include <climits>
#include <vector>

std::vector<point> astar(int maze[HEIGHT][WIDTH], point s, point g) {
    // Initialize sets: per-cell path, best-known distance, and visited flags.
    std::vector<point> paths[HEIGHT][WIDTH];
    float dist[HEIGHT][WIDTH] = { 0 };
    bool visited[HEIGHT][WIDTH] = { 0 };
    for (int i = 0; i < HEIGHT; i++)
        for (int j = 0; j < WIDTH; j++)
            dist[i][j] = INT_MAX;

    // Initialize the starting point.
    point cur = s;
    dist[cur.y][cur.x] = euclideanDistance(s, g);

    // Best-first search loop.
    while (!(cur == g)) {
        // Mark the current point as visited.
        visited[cur.y][cur.x] = 1;

        // Neighbors of the current point.
        point nb[4] = {
            point(cur.x - 1, cur.y, direction::L),
            point(cur.x + 1, cur.y, direction::R),
            point(cur.x, cur.y - 1, direction::U),
            point(cur.x, cur.y + 1, direction::D)
        };

        // Relax the distance to each walkable neighbor.
        for (point cn : nb)
            if (cn.inBounds() && maze[cn.y][cn.x] &&
                (euclideanDistance(cn, g) + dist[cur.y][cur.x] + maze[cn.y][cn.x] < dist[cn.y][cn.x])) {
                dist[cn.y][cn.x] = euclideanDistance(cn, g) + dist[cur.y][cur.x] + maze[cn.y][cn.x];
                paths[cn.y][cn.x] = paths[cur.y][cur.x];
                paths[cn.y][cn.x].push_back(cur);
            }

        // Select the point for the next iteration: the smallest distance that
        // is reachable and unvisited.
        cur = point(-1, -1);
        float md = INT_MAX;
        for (int i = 0; i < HEIGHT; i++)
            for (int j = 0; j < WIDTH; j++)
                if (!visited[i][j] && dist[i][j] != INT_MAX && dist[i][j] < md) {
                    cur = point(j, i);
                    md = dist[i][j];
                }
    }

    // Return the path from start to goal.
    paths[g.y][g.x].push_back(g);
    return paths[g.y][g.x];
}
```

A* starts by initializing sets for *distance*, *visited*, and *paths*. Initially all *distances* are `INT_MAX` (unreachable) and all *visited* points are `0` (unvisited). Set the `cur` point to the starting point, `s`, and set the *distance* at `cur` to the `euclideanDistance`. Now walk the maze.

For each iteration, set *visited* at the `cur` point to `1` and then calculate the distances to the neighboring points.

To calculate the distance to a neighbor point: take the distance at the `cur` point, add the cost of the neighbor point, and add the `euclideanDistance` (from the neighbor point to the goal). If this calculated distance is less than the currently assigned distance, update the *distance*.

The last step is selecting the next move. Select the smallest *distance* value that is reachable and unvisited. When the current point, `cur`, is at the goal point, `g`, the algorithm is complete.

*Note, A\* is Dijkstra's algorithm + a heuristic. For more explicit steps implementing Dijkstra's algorithm, read **this** now.*

The post Graphs & paths: A*, getting out of a maze. appeared first on Self Scroll.


The post Spooky Action at a Distance appeared first on Self Scroll.

or **Alice & Bob**, a love story

**Part One: Uncertainty Abounds**

*In any formal system that is written to express arithmetic, there will exist a proposition that is undecidable. Neither it, nor its negation, can be proved. One of the things that cannot be proved in a formal system is the consistency of that system.*

*— Gödel's Incompleteness Theorems [1]*

In the wreckage of war-torn Europe in the 1920s, **David Hilbert**, the most influential mathematician of his time, sought to prove the consistency of arithmetic. From the ruins of the present, he saw this project as a vehicle to a golden era.

He put out a call to colleagues and students from the famed Vienna Circle, most of whom were sure that **arithmetic and logic** were going to be **provably consistent.**

So it may have come as a considerable blow to this ideal when a shy doctoral student, **Kurt Gödel**, declared that this ambition was unattainable. In a mumbled presentation, the last at the Königsberg conference in 1930, held to celebrate the success of Hilbert's quest, he presented his notorious **incompleteness theorems**.

It *may* have come as a shock, but in fact the significance of his findings was largely ignored, owing to Gödel's halting, mumbled delivery. It was **John von Neumann**, known for his contribution to computer science (the von Neumann architecture is still the basis of the majority of modern computers, but he also worked on the **Manhattan Project**, and owing to his charm, wit and fondness for the heady social life of Californian academia of the 1950s came to be known as Uncle Johnny), who realised the importance of Gödel's two theorems, which in a generalised formulation are quoted above.

It may be helpful at this point to describe the two theorems by means of analogy. Let's look at the first theorem: **In any formal system that is written to express arithmetic, there will exist a proposition that is undecidable.**

A pleasant way to visualise this is to imagine building a **'mathematical box'**, precisely conceived and formalised using mathematical logic. In this box, we wish to permanently confine a cat (also formally defined, using the same absolute logical rules we used to build the box).

We are pleased with the logic of our box, and of our cat, and we hope our cat will be forever trapped by the logically consistent rules we have used to create the two entities. But according to Gödel, there will always be the possibility of the cat **escaping even our most clever and fiendish designs**.

(It turns out our cat is far more crafty than Schrödinger's, the cat who continues to live and die a million times in the telling of the famous 'experiment' that takes its name.)

But that's nothing compared to the second theorem: **One of the things that cannot be proved in a formal system is the consistency of that system.**

What this tells us is that once we have built our box and cat, and then open the box, what we find might not be a cat at all. It might just as well be a rooster, a dog or a marine cephalopod... or anything at all. It might quite as easily be a neutron bomb connected to a timer reading 00:00:00:01.

So, despite the best efforts of these great mathematicians and philosophers, uncertainty remains untouched at the heart of logic, just out of reach.

*There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.*

*— Lord Kelvin, c. 1900 [2]*

We might hope that the 'real' world can be known with certainty, even if the abstract game of mathematics cannot. As the quote above shows, towards the end of the nineteenth century some were so confident of this that they declared the pursuit of **physics was effectively at an end.**

Lord Kelvin uttered these words just before the foundations of physics were profoundly shaken by the twin revelations of **general relativity** and **quantum mechanics.** What quantum mechanics revealed was that uncertainty in physics was not due to an epistemological lack of detail, nor any deficiency in our measuring equipment. It lay at the very heart of these theories, an unavoidable and essential component, central to the workings of nature.

This essential ambiguity, embedded in reality rather than in our understanding of it, is enshrined in Heisenberg's famous **Uncertainty Principle.** This states that there is a fundamental limit to the precision with which we can know certain pairs of physical properties of a particle, such as its position and momentum. The more accurate our knowledge of one property, the fuzzier our knowledge of the other.

So, it seems humanity's ongoing quest for certainty in life continues to be foiled. Since the first awakenings of human cognition, culture and society, we have sought to give sense and meaning to the seemingly random vagaries of an uncaring and sometimes vicious environment: who lives and who dies, which one prospers and which one fails.

We have built temples and churches and sacred monuments in our attempts to propitiate the gods, placate nature, and bring some certainty into a random universe. We raised giant **stone circles** to celebrate and try to understand those things in the world that at least appeared to embody certainty: the daily appearances and journeys of the sun and moon, and the motions of the heavens.

Since the Enlightenment of the eighteenth century ushered in the so-called **Age of Reason**, and never more so than in the twentieth, we have built monuments of a different kind. Enormous telescopes have been raised to gaze at the heavens; extremely advanced and delicate spacecraft have been dispatched to look far back towards the creation, the beginnings of time and space itself.

We build gigantic particle accelerators with which we peer deep into the **building blocks of nature** that make up everything we know (but not everything there is, as we have discovered in the past few decades), and we assemble equipment that can detect movements smaller than the size of a proton to hear the sound of **black holes colliding**, many millions of light years away in space and time. Lawrence Krauss, a cosmologist and Professor of Physics at Arizona State University, considers these projects the **'gothic cathedrals of our age'**: bringing together the efforts of thousands of people, labouring for years with a singular objective.

By using these monuments to science, we may have conceded in our quest for absolute certainty, but in **quantum mechanics** we have our most reliable and tested theory of reality yet. In fact, so successful has the theory been that the period following its discovery and refinement has often been called the **'shut up and calculate'** era. The central weirdness and uncertainty of quantum mechanics was ignored as we used it to build the modern world, of which there is little that is not essentially built on the theory.

One of these weirdnesses is the quantum mechanical notion of **entanglement**. There are many complicated explanations for the ideas around entanglement, but in essence it can be explained quite simply.

We take two particles, and bring them close together. We make them interact — make them 'feel' each other a little bit. We have now entangled these particles — they are now linked in a fundamental and special way.

One of the properties of our pair of entangled particles is that when we change the state of one, the state of the other changes too. This is called **complementarity**. This will happen however far apart you move the particles. That is strange — how can they signal each other over a distance? But what is stranger — much stranger — is that this change happens instantly.

Einstein's theories of relativity tell us that there is a cosmic speed limit — that nothing can travel faster than the speed of light. But our pair of particles undergo their changes simultaneously and instantaneously over an arbitrarily large distance — even if the particles were at opposite ends of the universe.
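A toy illustration, not from the original article and only a classical simulation of the measurement statistics: measuring both halves of a Bell pair (|00⟩ + |11⟩)/√2 in the same basis always yields perfectly correlated outcomes, however far apart the measurements happen.

```python
import random

def measure_bell_pair():
    """Simulate a Z-basis measurement of the Bell state (|00> + |11>)/sqrt(2).

    The state has only two outcome branches, 00 and 11, each with
    probability |1/sqrt(2)|^2 = 0.5, so the two results always agree.
    """
    outcome = random.choice([0, 1])  # pick a branch: 00 or 11
    return outcome, outcome          # Alice's result, Bob's result

# However many times we sample, the two results are perfectly correlated.
results = [measure_bell_pair() for _ in range(1000)]
assert all(a == b for a, b in results)
```

The correlation alone carries no usable signal, which is why entanglement does not actually violate relativity, spooky as it looks.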

Einstein didn't like quantum mechanics. He particularly didn't like complementarity and entanglement. He called it **'Spooky Action at a Distance'**. Even so, his analysis of this spookiness has shone light onto whole new areas of physics, including quantum teleportation, cryptography and computing.

If we're getting tired of all this uncertainty, it's in those subjects we might find some relief. We might be feeling a little jaded by all this talk of maths and physics too. Perhaps we need a story. Something with a little drama, maybe some romance. Space travel (no FTL!) and galactic themes. And diamonds. Definitely diamonds.

But who could tell such a story? And how could it be told?

One of the grand challenges of modern physics is the quest to **unify the two theories** that describe the world we live in — the theories of relativity and quantum mechanics — the very large with the very small.

Very recent research into **quantum gravity** (one approach to this challenge) sees the whole of space as being entangled in a very particular way. **Leonard Susskind**, professor of theoretical physics at Stanford University and director of the Stanford Institute for Theoretical Physics, is at the forefront of this research. And from his theories, he can tell our story.

The story is set sometime in the future, just around tea-time. The heroes of our story, **Alice and Bob**, are deeply in love and have recently wed. Because they are both particle physicists and foresightful women, familiar with Susskind's theories of entangled space, they ensured that the Graff diamond rings they exchanged on their engagement were made entirely from **entangled particle pairs**, just in case.

Shortly after our story begins, our couple are separated, each ordered on separate missions, missions which take them to far-flung reaches of the galaxy, **many light-years apart.**

Then disaster strikes. Due to the wrong switch being pulled at the right time, in the control room of the new, planet-circling particle accelerator, a black hole is created that destroys the galaxy and everything in it. Mysteriously, only **Alice** and **Bob** survive… **Can they ever get back together?**

Well, Leonard Susskind says **yes, they can.** If Alice and Bob follow his theories, they can be reunited. All they need to do is compress their still-entangled diamond rings so much as to create a pair of **entangled black holes.** Because of Susskind's notion of complexly entangled space, the two entangled black holes are connected by an '**Einstein-Rosen Bridge**' — more commonly known as a **wormhole** — through which they can travel and meet.

Alice and Bob are once again together, entangled, and still head over heels in love.

Is the story of Alice and Bob a flight of fantasy? Well, of course it is. There are so many flaws in the story, but the conclusions from the physics at its heart are sound. The difficulty is in the doing. We can easily entangle particles, with the right equipment. Quantum computation is based on creating and shepherding pairs of entangled electrons.

**Leo Kouwenhoven**, Professor of **Quantum Transport**, might be able to help. Together with the DiCarlo Group at TU Delft, he has shown that even large objects can be entangled, and effectively occupy two places in space at once, demonstrating entanglement at the millimetre scale at room temperature. So the idea of entangling diamonds doesn't seem quite so impossible after all.

Someday in the not too distant future, a jeweller such as **Graff** might be able to **entangle the diamonds** in their engagement range. Entanglement is quite the apt metaphor for objects meant to **further entangle** their wearers in everlasting romantic bliss — however far apart they may be, their diamonds are always in the same place. Surely worth a pitch to the marketing team.

This rather frivolous and frothy chain of thought is a bit of fun, and not particularly consequential, but it might lead one to think a little deeper about how **people are entangled**, and how one might experiment with this idea of spooky action at a distance. How might it be possible to maintain contact, not through dry text, or even voice — just the knowledge of the **presence and proximity** of another, however far apart in space. An analogue, qualitative impression, with all information felt, not spoken. A **tele-haptic** intervention, perhaps.

Touch between one another is common to us all, and conveys magnitudes. To investigate the idea of **touch at a distance**, it is sensible to start with perhaps one of the basic atoms of touch — pressure. Could we mediate the pressure of **touch over the network**, at a distance? Would it 'translate'?

The image below shows an experimental prototype of what such a device might look like. It consists of two pairs of components, equally matched, but having opposite behaviour. These matched machines can in theory be separated by any distance, being connected via the network.

The paired devices would need four basic components — something to afford and invite touching; a sensor to send real-time data; a micro-controller to make sense of this data and send it to its twin; and a motor to react to the user's touch and pressure.

As one person pushes on the touch-pad, the pressure of her finger will cause the **motor** on her device to **move away from her**, at a speed that depends on the pressure of her finger. It will also cause the motor on the other device to move in the **opposite direction**.

It's not quite so simple though. Someone else may also be pressing on the touch-pad — across the desk, in another room, or perhaps in **Hong Kong** or **Sydenham**, **Karachi** or **Des Moines**. The pressure from their touch will of course impede the motion of the other user. In this way, it would be possible to sense the presence of a real person at the other end of the line — real touch, in [almost] real time.
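The article does not publish the prototype's firmware, so the following is only a sketch of the behaviour just described; the class, the gain constant and the update step are all hypothetical:

```python
from dataclasses import dataclass

GAIN = 0.5  # hypothetical scaling from pressure difference to motor speed

@dataclass
class HapticNode:
    """One half of the paired tele-haptic device."""
    local_pressure: float = 0.0   # from this node's pressure sensor
    remote_pressure: float = 0.0  # last value received over the network
    position: float = 0.0         # current motor/pad position

    def step(self, dt: float) -> None:
        # The pad moves away from whichever side presses harder; equal
        # pressure from both ends stalls it, so each user literally
        # feels the other's presence as resistance.
        velocity = GAIN * (self.local_pressure - self.remote_pressure)
        self.position += velocity * dt

# Two nodes, "any distance" apart; here we simply swap readings directly
# instead of sending them over a real network link.
alice, bob = HapticNode(), HapticNode()
alice.local_pressure, bob.local_pressure = 2.0, 0.5
alice.remote_pressure, bob.remote_pressure = bob.local_pressure, alice.local_pressure

alice.step(dt=0.1)
bob.step(dt=0.1)
# Alice presses harder, so her pad recedes while Bob's pushes back toward him.
assert alice.position > 0 and bob.position < 0
```

In a real device the swap would happen over the network, which is exactly where the latency concerns discussed below come in.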

The object is absolutely not intended to be a product, in the normal sense of the word. It is meant to be a poetic and inquisitive intervention, an experiment in the most basic sense of touch, at its most simple and immediate level. It is this simplicity that invites participation. It asks questions about the truth of our senses — is someone really there at the other end, or is this just a simulation of touch and movement, being played out in blind algorithmic response to our touch, but dark, unconscious, un-present? And could we tell the difference?

It is that 'almost' that is of concern. The network is fast, but like everything else, it can only transfer data below the speed of light. And when we're dealing with much more complex tele-haptics and tele-presence over distances, the amount of data that must be sensed, collected, processed, transferred, processed a second time, and finally transformed into movement by actuators at the other end… the speed of light is way too slow.

**Mischa Dohler,** Professor in Wireless Communications at King's College London, has thought a lot about this problem. With his experience in telecommunications he is well positioned to find ways to beat the speed of light.

The problem is in the time it takes to send the large data-streams needed to enable fully real-time tele-haptics. The solution is to send only small amounts of data, and do a lot of pre-processing at either (or any) node. Dohler and his colleagues at the Centre for Telecommunications Research, King's College London, intend to leverage the speeds and features of the 5G network so that only the optimal amount of data is sent over the network, while predictive machine learning models handle all the sense data and actuator movement at either end of the tele-haptic connection.

**multi-touch**

The most basic module described above can be thought of as a geometrically two-dimensional object — a line in space between the furthest extents of the two touch-pads. It could be considerably expanded in its capacity to transmit touch by adding a third dimension. By using a touch-pad that could sense pressure over many points on the plane, and increasing the number of motors we use, we might be able to mimic the touch and pressure of a hand — pressure from all the points on the palm, fingers and thumb.

Below are some images showing how this might work. The multi-touch-pad senses the pressure of the hand in a 16×9 grid — low resolution, but it might be enough to give the impression of more complex human touch. This is then sent to a micro-controller, which translates the data into linear movement data and transmits it to the motor controllers. These raise and lower individual rods attached to the motors according to the pressure readings from the multi-touch-pad. A sheet of flexible material is stretched between all points on the 16×9 grid of actuator rods.

**heat**

Another essential component of human touch is temperature. We can transform our pressure data into changes in temperature using Peltier elements — simple ceramic squares which become either warmer or cooler depending on the amount of current they receive. This could be felt by the hand of another person, either in real-time or offline. It could also be visualised using a heat-reactive material. The tiny handprint left by a child as he leaves to go to school could be seen and felt by his father when he arrives at his office.

The post Spooky Motion at a Distance appeared first on Self Scroll.


Extreme earthquakes are **low-probability-high-consequence events**, meaning that they are both rare and potentially very damaging. Being rare, we have little historical evidence of their impact. With exponential demographic growth, past damage experience does not even reflect what would happen today (*Bilham*, 2009). Psychological biases, such as availability heuristic*, are not here to help make sense of those extreme events (*Kahneman*, 2011 — * our tendency to rely on the recent examples that come to mind to evaluate risk). That’s the reason why, when such an earthquake strikes, it appears as a surprise. After the fact, however, it is rationalised by being added to the bucket of possible events. Nassim Taleb used the “black-swan” metaphor to describe these extreme events (*Taleb*, 2007). Didier Sornette proposed the more impressive “dragon-king” to describe even more extreme events (*Sornette*, 2009).

The recurrence rate of Mmax earthquakes remains debated though. Should it be extrapolated from the Gutenberg-Richter law? If the maximum-size event is “characteristic”, its rate would be higher than the one predicted by a power-law. A thorough investigation of Gutenberg-Richter versus characteristic Mmax has been done long ago by *Wesnousky* (1994). An intuitive approach (but only my personal view) that combines **conservation of energy and geometry **could explain how both processes could coexist. Note then that the characteristic earthquake represents a **fattening of the power-law distribution**, meaning more extreme earthquakes of magnitude Mmax.

Earthquake ground motion depends mainly on two parameters, the earthquake magnitude M and the distance R from the fault rupture. Empirical models are used to describe historical records but those only capture average features. The general formulation of a so-called GMPE (ground motion prediction equation) can be written PGA = f(M, R, ε), with PGA the peak ground acceleration and epsilon a random variable representing attenuation uncertainty. For ground acceleration, epsilon is lognormally distributed; for felt intensity, it is normally distributed.
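As a purely illustrative sketch of such an equation (the coefficients below are invented, not taken from any published GMPE), one can sample a toy model in which epsilon is normal in log-space, so PGA itself is lognormal:

```python
import math
import random

# Hypothetical coefficients for a toy GMPE of the form
#   ln(PGA) = a + b*M - c*ln(R) + epsilon
A, B, C = -4.0, 1.0, 1.5
SIGMA = 0.6  # standard deviation of epsilon in ln-units

def sample_pga(magnitude: float, distance_km: float) -> float:
    """Draw one PGA value (in g); epsilon is normal in ln-space,
    so PGA itself is lognormally distributed."""
    ln_pga = A + B * magnitude - C * math.log(distance_km)
    ln_pga += random.gauss(0.0, SIGMA)
    return math.exp(ln_pga)

# Median shaking decays with distance from the rupture.
random.seed(0)
near = sorted(sample_pga(7.0, 10.0) for _ in range(2001))[1000]   # median at R = 10 km
far = sorted(sample_pga(7.0, 100.0) for _ in range(2001))[1000]   # median at R = 100 km
assert near > far
```

The scatter term epsilon is exactly where the site-specific deviations discussed next (basin amplification, directivity) show up as systematic departures from the average model.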

Significant deviations are often observed, which can be due to site-specific and/or earthquake source-related effects, such as **basin amplification**, or **rupture directivity** (e.g., *Bard et al.*, 1988; *Somerville et al.*, 1997). The figure above combines some USGS ShakeMap data (*Allen et al.*, 2008) with the plotting advantages of R packages (e.g., *Kahle and Wickham*, 2013) to illustrate a textbook example of extreme earthquake severity: basin amplification during the 1985 M8 Mexico City earthquake. This phenomenon was not known before, as attested by a New York Times article from the same year: “*The powerful earthquake that killed at least 7,000 people here in September was, in effect, a deadly test in nature’s real and very brutal laboratory *[…] *The Mexico City disaster was the first, scientists and engineers say, to test the modern building technology *[…] *Among the key conclusions drawn from the disaster* […] *are that architects, engineers and city planners are going to have to restudy geological formations beneath some cities that might greatly increase the destructive force of an earthquake.*” This historic(al) event led to the development of seismic microzonation to calibrate GMPEs to local geological and geophysical conditions.

Other amplifying factors, such as rupture directivity, can also be implemented in GMPEs (e.g., *Somerville et al.*, 1997) although the use of such modified GMPE remains limited. Physics-based waveform simulations, such as the CyberShake initiative (*Graves et al.*, 2011), can now replace traditional GMPEs to address those potential shaking amplifications and create more realistic seismic hazard maps. This has yet to be applied in most official PSHA models and requires high computational capabilities. It is likely that simulation-based PSHA will become the norm in the not-so-far future.

So far, we have seen how one earthquake can be extreme: (1) naturally, in a fractal network, tail events must occur to efficiently release the energy; (2) this tail can be extended to the longest fault rupture possible in a given region; (3) if this is not enough to release all stored energy, more extremes, bounded at Mmax, must occur, meaning a fattening of the power-law, which is already a fat tail compared to the good-old normal distribution; (4) although ground shaking attenuates with the distance from the rupture plane, local conditions can amplify the shaking, hence leading to more extreme severity. All of this is well known and, if not yet systematically implemented in PSHA, will be soon.

Let’s now move on to the case where we do not have one large earthquake, but two or three… Indeed, not only will a mainshock of magnitude M be followed with high likelihood by an aftershock of magnitude M-1 (the so-called Bath law; *Bath*, 1965), it will also increase the stress on some nearby faults, potentially leading to **doublets or even triplets of large earthquakes in a relatively short period of time**: the 2004–2005 M9.0–8.7 Sunda megathrust doublet (Nalbant et al., *Nature*, 2005), the 1999 M7.4–7.1 Izmit and Duzce North Anatolian doublet (Parsons et al., *Science*, 2000), and the 1811–1812 M7.3–7.0–7.5 New Madrid Central US triplet (Mueller et al., *Nature*, 2004) are good examples (and, in all appearance, high impact-factor journal material). The process is called “clock advance” and is estimated with **static stress modelling** (let’s continue with Stein, *Nature*, 1999 for a review). Here again, the process is relatively well understood and standard models exist for time-dependent seismic hazard applications (e.g., USGS Coulomb 3 software; *Lin and Stein*, 2004; *Toda et al.*, 2011). Yet, such modelling remains seldom used in regional PSHA. The main reason is that the basic formulation of PSHA assumes earthquake independence (read about the early history of PSHA in *McGuire*, 2008).

Now, I will present the recent results of *Mignan et al.* (2018) who investigated the role of large earthquake clustering on the **fattening of the risk curve**. The modelling approach simply combined the USGS Coulomb 3 software for computing stress transfer with a basic Monte Carlo method for simulating time series, a flexible approach for dynamic (multi-)risk modelling (e.g., *Mignan et al.*, 2014; 2017; *Matos et al.*, 2015). The main innovation of this work is that it illustrates in a transparent manner how **earthquake risk self-amplification** can occur, considering both large earthquake clustering and its impact on building vulnerability.

To summarise 24 pages in only a few paragraphs, let us just consider three characteristic earthquakes on three nearby faults A, B and C. To make things easy, they occur with the same occurrence rate and same magnitude, let us say once every three hundred years (r = 1/300+1/300+1/300 = 1/100) and M = Mmax. Also, each one of these earthquakes yields the same loss L(Mmax). What could be the impact of A+B+C clustering on the aggregate exceedance probability (AEP) curve, or risk curve? Let’s first consider the case in which A, B and C are independent. The probability of occurrence of one event, two or three can be estimated from the Poisson distribution (see Table and blue AEP). It would be too cumbersome to discuss how static stress transfer can be computed and how the rate of clusters would be estimated from millions of simulations. Instead, we can mimic the clustering behaviour with the **Negative Binomial distribution** (see Table and red curve). Fortunately for us, *Mignan et al. *(2018) fitted this distribution to their stress transfer results and obtained a dispersion index of about 1.3, which we use here. As one can see, the occurrence of large earthquake doublets or triplets becomes realistic, while it was almost impossible before. This leads once more to some tail fattening.
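The Poisson-versus-Negative-Binomial comparison can be reproduced in a few lines. The rate (1/100 per year, from r above) and dispersion index (1.3, from Mignan et al., 2018) come from the text; the rest is straightforward probability:

```python
import math

MU = 1 / 100   # aggregate annual rate of A, B and C (from the text)
D = 1.3        # dispersion index fitted by Mignan et al. (2018)

def poisson_pmf(n: int, mu: float) -> float:
    return math.exp(-mu) * mu**n / math.factorial(n)

def negbinom_pmf(n: int, mu: float, d: float) -> float:
    """Negative Binomial with mean mu and variance d*mu,
    i.e. size parameter k = mu / (d - 1)."""
    k = mu / (d - 1)
    p = k / (k + mu)
    coeff = math.gamma(n + k) / (math.factorial(n) * math.gamma(k))
    return coeff * p**k * (1 - p)**n

# Annual probability of a multiplet (two or more large events).
p_poisson = 1 - poisson_pmf(0, MU) - poisson_pmf(1, MU)
p_negbin = 1 - negbinom_pmf(0, MU, D) - negbinom_pmf(1, MU, D)

# Clustering fattens the tail: multiplets go from negligible to plausible.
assert p_negbin > 10 * p_poisson
```

With these numbers the Negative Binomial puts roughly an order of magnitude more probability (and more) on doublets and triplets than the independent Poisson model does, which is exactly the tail fattening described above.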

So far, I have only talked about hazard amplification (higher earthquake magnitude Mmax, higher frequency of Mmax, higher severity, higher likelihood of large earthquake clustering). There are certainly many different ways risk can also be amplified via building vulnerability and exposure aspects. I will here focus on **damage-dependent building vulnerability**, which is directly linked to the clustering of earthquakes. This process describes how a building becomes more fragile as it experiences more shaking episodes. As you will see, the impact can become quite dramatic.

We will follow the generic approach proposed in *Mignan et al.* (2018). More sophisticated methods exist but all are based on the same principle, which is the following: conceptually, the capacity of a structure degrades with increased damage. We can simply consider, as the source of degradation, the **decrease in the plasticity range** due to the deduction of a residual drift ratio. Deformation below the yield is elastic and therefore has no long-term effect; above it, however, the deformation due to the earthquake is plastic and therefore permanent. Any time a new earthquake occurs, it reduces the building capacity by the drift it imposes (*Baker and Cornell*, 2006), the ground acceleration being estimated via a GMPE (see above). Finally, we compute the damage state DS, where DS1 (DS=1) corresponds to insignificant damage (permanent deformation tending to 0) and DS5 (DS=5) to building collapse (when the earthquake drift ratio equals the maximum possible strain the building can take). Although it may appear complicated at first, what is done is just a subtraction, removing a piece of potential deformation at each earthquake, meaning an increased likelihood of failure (read more in *Mignan et al.*, 2018).

Let’s do an exercise with a building of standard yield displacement capacity 0.01 and a relatively low plastic displacement capacity 0.03, representative of some historic building (other parameters are a1 = -3.2 and a2 = 1). Now let’s shake the building with 0.4g several times. What happens? The first earthquake leads to slight damage (DS2). The second, although a clone of the first event, leads to moderate damage (DS3), and the third… to heavy damage (DS4), close to collapse (DS5). Of course, more sophisticated models are needed to estimate the expected behaviour of a specific building, but it is remarkable that a simple equation is all we need to understand the main process leading to amplified building vulnerability.
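A toy reconstruction of this exercise follows. It is not Mignan et al.'s exact formulation: the drift relation (ln drift = a1 + a2 ln PGA) and the linear mapping from cumulative residual drift to damage state are assumptions about the shape of the model, but they reproduce the DS2, DS3, DS4 progression described above.

```python
import math

YIELD_DRIFT = 0.01    # standard yield displacement capacity (from the text)
PLASTIC_DRIFT = 0.03  # relatively low plastic capacity (historic building)
A1, A2 = -3.2, 1.0    # parameters given in the text

def earthquake_drift(pga_g: float) -> float:
    """Drift ratio imposed by one shaking episode.
    Assumed form: ln(drift) = a1 + a2 * ln(PGA)."""
    return math.exp(A1 + A2 * math.log(pga_g))

def damage_states(pgas: list) -> list:
    residual = 0.0  # cumulative permanent (plastic) deformation
    states = []
    for pga in pgas:
        drift = earthquake_drift(pga)
        # Deformation below yield is elastic: no permanent effect.
        residual += max(0.0, drift - YIELD_DRIFT)
        if residual >= PLASTIC_DRIFT:
            states.append(5)  # collapse: plastic capacity exhausted
        else:
            # Assumed: DS interpolates linearly from 1 (intact) to 5.
            states.append(round(1 + 4 * residual / PLASTIC_DRIFT))
    return states

# Three identical 0.4 g shocks: slight, then moderate, then heavy damage.
assert damage_states([0.4, 0.4, 0.4]) == [2, 3, 4]
```

Each shock removes the same slice of plastic capacity, so identical shaking episodes do progressively more damage, which is the whole point of damage-dependent vulnerability.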

To finish, here is a map of damage due to a triplet of earthquakes, as simulated in *Mignan et al.* (2018) to identify the impact of damage-dependent vulnerability. Once again, this yields to higher losses for each cluster and therefore to **further fattening of the risk curve**.

I hope that this article proved that **many different physical processes can lead to extreme seismic risk** and that this cannot be described by one universal mathematical relationship (power-law or else). We talked about fractal geometry and other geometric constraints, conservation of energy, dynamic stress and static stress, wave amplification, and material plasticity. It is quite certain that many more aspects could be included. It is only by proper physical modelling of all these aspects that the number of surprise “super-earthquakes” can be minimised.

New studies now undermine the apparent universality of the power-law: *Broido and Clauset* (2018) showed that networks described by a power law are in fact rare, with the log-normal a possible alternative. *Mignan* (2015; 2016a; b) showed in the earthquake case that the famous Omori power-law of aftershocks is ill-defined and should be replaced by a stretched exponential (the topic of a future LinkedIn article). What these studies demonstrate is that **universality is an oversimplification**, that reality is often more complicated than we think. That’s alright, we just need to work a bit more to better understand what is really going on…

*Main reference:*

Mignan, A., L. Danciu and D. Giardini (2018), Considering large earthquake clustering in seismic risk analysis, Nat. Hazards, 91, S149-S172, doi: 10.1007/s11069–016–2549–9

*Other references:*

Allen, T.I., et al. (2008), An Atlas of ShakeMaps for Selected Global Earthquakes, USGS Open-File Report 2008–1236, 34 pp.

Baker, J.W. and C.A. Cornell (2006), Which Spectral Acceleration Are You Using?, Earthquake Spectra, 22, 293–312

Bard, P.-Y., M. Campillo, F.J. Chavez-Garcia and F. Sanchez-Sesma (1988), The Mexico Earthquake of September 19, 1985 — A theoretical Investigation of Large- and Small-scale Amplification Effects in the Mexico City Valley, Earthquake Spectra, 4, 609–633

Bath, M. (1965), Lateral Inhomogeneities of the Upper Mantle, Tectonophysics, 2, 483–514

Bilham, R. (2009), The seismic future of cities, Bull. Earthquake Eng., 7, 839–887, doi: 10.1007/s10518–009–9147–0

Broido, A.D. and A. Clauset (2018), Scale-free networks are rare, arXiv: 1801.03400v1

Clauset, A., C.R. Shalizi and M.E.J. Newman (2009), Power-Law Distributions in Empirical Data, SIAM Review, 51, 661–703, doi: 10.1137/070710111

Field, E.H., et al. (2014), Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3) — The Time-Independent Model, Bull. Seismol. Soc. Am., 104, 1122–1180, doi: 10.1785/0120130164

Giardini, D. et al. (2013), Seismic Hazard harmonization in Europe (SHARE): online data, Resource, doi: 10.12686/SED-00000001-SHARE

Graves, R., et al. (2011), CyberShake: A Physics-Based Seismic Hazard Model for Southern California, Pure Appl. Geophys., 168, 367–381, doi: 10.1007/s00024–010–0161–6

Gutenberg, B. and C.F. Richter (1944), Frequency of earthquakes in California, Bull. Seismol. Soc. Am., 34, 184–188

Kahle, D. and H. Wickham (2013), ggmap: Spatial Visualization with ggplot2, The R Journal, 5(1), 144–161, ISSN: 2073–4859

Kahneman, D. (2011), Thinking, Fast and Slow, Farrar, Straus and Giroux, 499 pp.

King, G. (1983), The Accommodation of Large Strains in the Upper Lithosphere of the Earth and Other Solids by Self-similar Fault Systems: the Geometrical Origin of b-Value, PAGEOPH, 121, 761–815

Lin, J. and R.S. Stein (2004), Stress triggering in thrust and subduction earthquakes, and stress interaction between the southern San Andreas and nearby thrust and strike-slip faults, J. Geophys. Res., 109, B02303, doi: 10.1029/2003JB002607

Lloyd’s, ed. (2017), Reimagining history, Counterfactual risk analysis, Emerging Risk Report 2017, Understanding risk, 48 pp.

Mandelbrot, B. (1982), The Fractal Geometry of Nature, W.H. Freeman and co., 468 pp.

Matos, J.P., A. Mignan and A.J. Schleiss (2015), Vulnerability of large dams considering hazard interactions, Conceptual application of the Generic Multi-Risk framework, 13th ICOLD Benchmark Workshop on the Numerical Analysis of Dams, Switzerland, 285–292

McGuire, R.K. (2008), Probabilistic seismic hazard analysis: Early history, Earthquake Engng Struct. Dyn., 37, 329–338, doi: 10.1002/eqe.765

Mignan, A., S. Wiemer and D. Giardini (2014), The quantification of low-probability-high-consequences events: part I. A generic multi-risk approach, Nat. Hazards, 73, 1999–2022, doi: 10.1007/s11069–014–1178–4

Mignan, A., L. Danciu and D. Giardini (2015), Reassessment of the Maximum Fault Rupture Length of Strike-Slip Earthquakes and Inference on Mmax in the Anatolian Peninsula, Turkey, Seismol. Res. Lett., 86(3), 890–900, doi: 10.1785/0220140252

Mignan, A. (2015), Modeling aftershocks as a stretched exponential relaxation, Geophys. Res. Lett., 42, 9726–9732, doi: 10.1002/2015GL066232

Mignan, A., A. Scolobig and A. Sauron (2016), Using reasoned imagination to learn about cascading hazards: a pilot study, Disaster Prevention and Management, 25, 329–344, doi: 10.1108/DPM-06–2015–0137

Mignan, A. (2016a), Revisiting the 1894 Omori Aftershock Dataset with the Stretched Exponential Function, Seismol. Res. Lett., 87, 685–689, doi: 10.1785/0220150230

Mignan, A. (2016b), Reply to “Comment on ‘Revisiting the 1894 Omori Aftershock Dataset with the Stretched Exponential Function’ by A. Mignan” by S. Hainzl and A. Christophersen, Seismol. Res. Lett., 87, 1134–1137, doi: 10.1785/0220160110

Mignan, A., N. Komendantova, A. Scolobig and K. Fleming (2017), Chapter 14: Multi-Risk Assessment and Governance, Handbook of Disaster Risk Reduction & Management, 357–381, doi: 10.1142/9789813207950_0014

New York Times (1985), Lessons emerge from Mexican Quake, November 5 1985 issue

Somerville, P.G., N.F. Smith, R.W. Graves and N.A. Abrahamson (1997), Modification of Empirical Strong Ground Motion Attenuation Relations to Include the Amplitude and Duration Effects of Rupture Directivity, Seismol. Res. Lett., 68, 199–222

Sornette, D. (2009), Dragon-kings, black swans, and the prediction of crises, Int. J. Terraspace Sci. and Engineering, 2, 1–18

Taleb, N.N. (2007), The black swan, Random House, New York, 400 pp.

Toda, S., R.S. Stein, V. Sevilgen and J. Lin (2011), Coulomb 3.3 Graphic-rich deformation and stress-change software for earthquake, tectonic, and volcano research and teaching — user guide, USGS Open-File Report 2011–1060, 63 pp.

Utsu, T. (1999), Representation and Analysis of the Earthquake Size Distribution: A Historical Review and Some New Approaches, Pure Appl. Geophys., 155, 509–535

Wesnousky, S.G. (1994), The Gutenberg-Richter or Characteristic Earthquake Distribution, Which Is It? Bull. Seismol. Soc. Am., 84, 1940–1959

Woo, G. (2016), Counterfactual Disaster Risk Analysis, Variance, in press

Youngs, R.R., S.-J. Chiou, W.J. Silva and J.R. Humphrey (1997), Strong Ground Motion Attenuation Relationships for Subduction Zone Earthquakes, Seismol. Res. Lett., 68, 58–73

*This article was originally published on LinkedIn on Jun. 16, 2018, under the title “**Beyond the power-law tail, a tale of extreme earthquake risk**”.*

The post Earthquakes on steroids, more powerful than the fat tail appeared first on Self Scroll.


**Evaluation / Debugging Network / Discussion**

In summary, in this section the authors performed additional experiments to evaluate Integrated Gradients, such as pixel ablations and comparing against the bounding boxes of the most strongly attributed pixels. When compared to the pure gradient, integrated gradients gave superior results. (More examples are shown in the paper.)

In settings where the bar for precision is high, such as medical diagnosis, it is very important to know what is going on within the network and which features contribute to which classes; integrated gradients can be used as a tool to gain such insights.
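The attribution method itself can be sketched in a few lines. The toy model below (a quadratic scorer with an analytic gradient) stands in for a real network; everything here is an illustration rather than the authors' implementation:

```python
def integrated_gradients(f_grad, x, baseline, steps=100):
    """Riemann-sum approximation of integrated gradients:
    IG_i = (x_i - x'_i) * integral of dF/dx_i along the straight
    path from the baseline x' to the input x."""
    n = len(x)
    totals = [0.0] * n
    for k in range(1, steps + 1):
        alpha = k / steps
        point = [baseline[i] + alpha * (x[i] - baseline[i]) for i in range(n)]
        g = f_grad(point)
        for i in range(n):
            totals[i] += g[i]
    return [(x[i] - baseline[i]) * totals[i] / steps for i in range(n)]

# Toy "network": F(x) = sum of squares, with analytic gradient 2*x.
f = lambda x: sum(v * v for v in x)
f_grad = lambda x: [2 * v for v in x]

x, baseline = [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]
attributions = integrated_gradients(f_grad, x, baseline)

# Completeness: attributions sum (approximately) to F(x) - F(baseline).
assert abs(sum(attributions) - (f(x) - f(baseline))) < 0.2
```

With more steps (or a trapezoid rule) the approximation converges; in the paper the per-coordinate gradient comes from back-propagation through the network rather than a closed form.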

Finally, the authors discuss some limitations of this approach.

**a. Inability to capture feature interactions** → The model can perform operations that combine certain features together. Importance scores have no way to represent these combinations.

**b. Feature correlations →** If similar features occur multiple times, the model can assign weight to either one of them (or both), but those weights might not be human-intelligible.

The post [ Google / ICLR 2017 / Paper Summary ] Gradients of Counterfactuals appeared first on Self Scroll.


The post Beethoven’s Ninth appeared first on Self Scroll.

This maths question went viral on Reddit recently:

This post in r/funny was quickly cross-posted to r/facepalm with the title ‘Orchestra logic’, and to r/consulting and r/ProgrammingHumor with derogatory titles about project managers and comments about The Mythical Man-Month.

This question going viral is interesting to me for (at least) a couple of reasons:

- I make a living writing maths questions and doing other maths-education-related things, so whenever a bit of maths goes viral I’m excited.
- I’m in the middle of rehearsing Beethoven’s Ninth for a concert in September.

Unfortunately, if you hadn’t already guessed, not everyone was as excited about this question as I was.

When you first read it I imagine that, like me, you saw it as a question about proportion. Using the terminology of ‘Thinking, Fast and Slow’, this is a ‘System 1’ response. If you’re lucky, your ‘System 2’ then kicked in and pointed out that, in reality, the time an orchestra takes to play something isn’t dependent on its size, never mind proportional (directly or inversely) to it. It’s not a proportion question, and the answer is ’40 minutes’.

To solve a maths question, there are roughly two steps:

- Decide what to do.
- Do that.

When the maths question is a word problem, ‘deciding what to do’ involves converting the prose to abstract mathematics.

This question tests how good you are at this. For a student who realises that the time taken shouldn’t change, the second step is easy: just write down ’40 minutes’. But a student who wrongly decides to solve a direct or inverse proportion question not only has more calculation to do, they’ll also get an incorrect answer — either ’20 minutes’ or ’80 minutes’ — at the end, assuming they implement their chosen procedure correctly.
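The two trap answers and the intended one can be spelled out in a few lines (a throwaway sketch, not part of the original question or worksheet):

```python
players_then, minutes_then = 120, 40   # yesterday's orchestra
players_now = 60                       # today's orchestra

# Trap 1 -- assume time is directly proportional to players (T proportional to P):
direct = minutes_then * players_now // players_then    # 20 minutes
# Trap 2 -- assume time is inversely proportional (T proportional to 1/P):
inverse = minutes_then * players_then // players_now   # 80 minutes
# Reality -- playing time doesn't depend on the orchestra's size:
actual = minutes_then                                  # 40 minutes

print(direct, inverse, actual)  # 20 80 40
```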

Even if you didn’t fall into the trap, I would wager that you at least noticed it. Nowhere in the picture does the word ‘proportion’ appear and yet, at least for a moment, you ‘knew’ this was a picture of a proportions question just as surely as you ‘knew’ it was a picture of a maths question; just as surely as you ‘knew’ it was a picture of some text.

It’s hard for us to describe how we come to ‘know’ what a word problem like this is about (and I’m writing ‘know’ in inverted commas because some of the things we ‘know’ turn out to be false!) As a result, it’s hard to teach the process of making mathematical sense of a word problem. We have the same problem trying to get a computer to solve a word problem; attempts to analyse the process we use, codify it, and program it into computers have turned out to be less successful in applications than using machine learning techniques.

You can contrast this with the routine (if not easy) arithmetic and algebraic manipulation that appears in the second step; once you know what to do, there is generally a well-defined procedure for doing it. For example, once you know that two quantities are proportional and have set up the equation, it’s straightforward to solve it. We can describe these procedures with complete precision to students and computers alike. More than that, we can explain to students why those procedures work.

I think this particular question looks like a proportions question, in particular an inverse proportion question, because it fits the mould of ‘P workers taking a time T to complete a task’. I’ve seen a lot of questions like that, as I imagine you have, and through experience I’ve learnt to recognise this structure.

I would, in fact, be worried about a student who didn’t initially identify this question as a proportions question. It would suggest to me that they hadn’t developed the kind of intuition required to identify what a word problem is about, and hence would be unable to access a genuine run-of-the-mill proportions word problem as they wouldn’t know where to start.

Developing this ‘system 1’ intuition is important. Among other benefits, interleaving content ensures that intuition isn’t based on, for example, the fact that the worksheet’s title is ‘Proportion’, or the fact that you’ve been learning about proportion all week. Craig Barton’s SSDD problems site takes this idea further. The way to develop intuition is through carefully designed experience, but I’m not convinced that you can hone your intuition to such an extent that a question like this couldn’t catch you out; however big the data set, I don’t think a neural network could catch a novel trick question using current techniques.

To avoid the trap in this question, we need to reflect on and question our intuition.

That’s a ‘system 2’ thing.

As I’ve explained above, I really hope that a student’s intuition tells them that this is a proportions question. Of course I also hope that, on reflection, they realise that their intuition was wrong. Let’s think about why they might not have this realisation.

There are valid concerns about cultural capital and whether knowing who Beethoven was, what a symphony is, and how an orchestra works could give one student an unfair advantage over another. However, if all the students who got the question wrong were asked point blank ‘Does an orchestra with twice as many players take half the time to play something?’ I think the vast majority of them would confidently answer ‘no’.

I don’t think cultural capital is the main issue at play here, and although I would probably prefer a context like…

Yesterday, a train with 120 passengers took 40 minutes to get from London to Reading. Today, the same train has 60 passengers. How long will it take to get to Reading?

Let P be the number of passengers and T the time the train takes.

…I don’t think that making this change would ensure that all students get an answer of 40 minutes.

Suppose that students rush to find the answer and so neglect to spend the time required to understand the problem as a whole, or to reflect on the wider consequences of their answer. Is the solution just to tell them to be more careful?

In his book ‘How Children Fail’, John Holt talks about ‘producers’ and ‘thinkers’. I’m not going to repeat all the arguments that he makes, but he makes a good case that the schooling process pushes students to be ‘producers’ rather than ‘thinkers’.

One of the reasons we get students to show their working is to try to rectify this, to emphasise the importance of the thought process. But generally the working that students are required to show is for the ‘do that’ stage; with our question it might start with the statement ‘P and T are inversely proportional’ and continue with some algebra.

We don’t get students to write essays describing how they decided what to do. Nor do I think we should; trying to put these things into words adds more cognitive load and I’m not convinced it’s germane. Getting students to show their working isn’t a good solution to encouraging reflection in the ‘deciding what to do’ phase; we have to find other methods of signalling its importance. Perhaps we can only provide opportunities for students to experience the consequences of not reflecting.

Maybe we’re being unfair to students by saying they’re just not engaging their brains. They could be engaged in checking any number of things, for example that:

- they’ve used all the information in the question.
- they’ve done a reasonable amount of work.
- the value they get is sane.

A student who successfully applies the procedure to solve an inverse proportion question will find all the items on the checklist are satisfied.

- They use all the values in the question (120 players, 40 minutes, 60 players) and make nontrivial use of the variables P and T defined in the question.
- They do a reasonable amount of work in setting up the proportions question and solving for the unknown.
- The answer they get, 80 minutes, seems like a reasonable amount of time for an orchestra to play something. Had they got an answer of 80 milliseconds or 80 hours they might have smelled a rat.

These checks are often explicitly taught. Why? Because they are useful; any of these checks failing would arouse a student’s suspicion that either they’d decided to do the wrong thing or had slipped up somewhere in the process of doing it. The fact that all three checks pass when the incorrect procedure is applied shows how carefully the question was written.

Lots has been written about the power of checklists, particularly in aviation, but these checklists are generally incredibly specific to the precise task at hand. It would be nigh on impossible to write a checklist that would, for any maths question, tell you whether you’d solved it correctly.

Of course, checklists for individual questions exist; they’re called mark schemes. You could argue that a checklist for solving a proportion question involving a number of workers and a task should contain a check that the task takes more workers less time to do. Anyone who has been in a meeting knows that sometimes the opposite is true. Perhaps with this checklist a student would have avoided the trap in the question — I’ll tell you why I’m not convinced of that shortly — but getting students to memorise numerous context-specific checklists (because they won’t be allowed to bring their checklists into their exams) is clearly a silly idea.

The three checks above aren’t exhaustive but they are simple, relatively unambiguous, and as applicable to a question (seemingly) about inverse proportion as they are to any other. Of course, the first two checks rely on the norms of maths questions being followed, and in the orchestra question they aren’t.

Or should I say it overrules itself?

This is the most interesting case as far as I’m concerned. A student realises that it doesn’t make sense that the second orchestra takes twice the time, but nevertheless gives the answer ‘80 minutes’ because that’s what the question is asking for.

One way of understanding this phenomenon is by thinking of the student as a novice. They’re uncertain that applying this procedure is the right thing to do, but they’ve had similar experiences previously where they were similarly uncertain and it turned out just fine. They’re not expecting complete certainty; if they waited for complete certainty they’d never get anything done.

After getting an answer of 80 minutes they might still have qualms, but again, that’s not unprecedented. This answer, and the procedure leading to it, passes all three checks from the previous section. The answer ’40 minutes’, which they might be tempted by, doesn’t; finding it doesn’t require using the number of players from the question, nor the variables P and T. It doesn’t really require any work at all. 80 minutes must be the correct answer, and the fact that it doesn’t make sense is just one of the many mysteries of maths.

Another way of understanding this phenomenon is by thinking of the student as a cynic disillusioned by repeated promises of real-world applications. They know full well that the only sensible answer is 40 minutes, but this is ‘maths world’ where people buy improbable quantities of fruit, answer reasonable enquiries about their age with riddles, and ignore friction. Leave your critical thinking at the door.

Perhaps this student has objected to contexts before and was branded a pedant or ‘smart aleck’ for their troubles. Maybe their teacher thought their criticisms were just an attempt at getting out of doing the sums. Maybe their teacher was right. In any case, while the student has stopped raising objections publicly, privately their views have only been reinforced and their disillusionment has only grown.

It was this kind of student who posted the question on social media.

The title of the post gives away the thoughts of the person posting:

The question’s author got it wrong. The time an orchestra takes to play something isn’t inversely proportional to its size.

The person who posted the question doesn’t consider the possibility that it might be a ‘trick question’. Given the number of people who upvoted the post, it appears that many people agreed with this assessment.

Why is this?

With this title, people who see this post are primed to look for a mistake in the question and, having found one, don’t feel the need to look for an alternative interpretation of the question. This has interesting parallels with the question itself, whose structure primes you to think about proportion.

Looking at the comments you’ll see that this can’t be the whole answer; there are several people who reject the idea that this question was intentionally written to make you think:

This isn’t the first time this question has gone viral. I first saw it last year on twitter:

The author of the question, a teacher from Nottingham, turned up in the thread a couple of days later and shared the worksheet the question was lifted from:

Although the title of the worksheet is ‘Direct and Inverse Proportion’, we can infer from the warning ‘Beware there is one trick question!’ that our orchestra word problem isn’t just a badly written inverse proportion question. Furthermore, the instruction to ‘Sort these questions into Direct and Inverse proportion’ suggests that the worksheet was written with the importance of ‘deciding what to do’ in mind.

Case closed? Not quite.

Time for some insight from the same century as Beethoven’s Ninth.

In February 1880’s Monthly Packet, published just over 50 years after the premiere of Beethoven’s Ninth, the author Lewis Carroll (who was, by day, the mathematician Charles Dodgson) wrote about a proportional reasoning problem:

The Cats and Rats Again.

‘If 6 cats kill 6 rats in 6 minutes, how many will be needed to kill 100 rats in 50 minutes?’

This is a good example of a phenomenon that often occurs in working problems in double proportion; the answer looks all right at first, but, when we come to test it, we find that, owing to peculiar circumstances in the case, the solution is either impossible or else indefinite, and needing further data. The ‘peculiar circumstance’ here is that fractional cats or rats are excluded from consideration, and in consequence of this the solution is, as we shall see, indefinite.

The solution, by the ordinary rules of Double Proportion, is as follows: —
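The working itself did not survive here; by the ordinary rule of double proportion it would have run along these lines (my reconstruction, consistent with the answer of 12 cats and the '96 rats dead in 48 minutes' that follows):

```latex
\[
x\ \text{cats} \;=\; 6\ \text{cats}\times\frac{100\ \text{rats}}{6\ \text{rats}}\times\frac{6\ \text{minutes}}{50\ \text{minutes}} \;=\; 12\ \text{cats}
\]
```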

But when we come to trace the history of this sanguinary scene through all its horrid details, we find that at the end of 48 minutes 96 rats are dead, and that there remain 4 live rats and 2 minutes to kill them in: the question is, can this be done?

Now there are at least *four* different ways in which the original feat, of 6 cats killing 6 rats in 6 minutes, may be achieved. For the sake of clearness let us tabulate them: —

A. All 6 cats are needed to kill a rat; and this they do in one minute, the other rats standing meekly by, waiting for their turn.

B. 3 cats are needed to kill a rat; and this they do in 2 minutes.

C. 2 cats are needed, and do it in 3 minutes.

D. Each cat kills a rat all by itself, and takes 6 minutes to do it.

In cases A and B it is clear that the 12 cats (who are assumed to come quite fresh from their 48 minutes of slaughter) can finish the affair in the required time; but, in case C, it can only be done by supposing that 2 cats could kill two-thirds of a rat in 2 minutes; and in case D, by supposing that a cat could kill one-third of a rat in 2 minutes. Neither supposition is warranted by the data; nor could the fractional rats (even if endowed with equal vitality) be fairly assigned to the different cats. For my part, if I were a cat in case D, and did not find my claws in good working order, I should certainly prefer to have my one-third-rat cut off from the tail end.

In cases C and D, then, it is clear that we must provide extra cat-power. In case C *less* than 2 extra cats would be of no use. If 2 were supplied, and if they began killing their 4 rats at the beginning of the time, they would finish them in 12 minutes, and have 36 minutes to spare, during which they might weep, like Alexander, because there were not 12 more rats to kill. In case D, one extra cat would suffice; it would kill its 4 rats in 24 minutes, and have 24 minutes to spare, during which it could have killed another 4. But in neither case could any use be made of the last 2 minutes, except to half-kill rats — a barbarity we need not take into consideration.

To sum up our results. If the 6 cats kill the 6 rats by method A or B, the answer is ‘12;’ if by method C, ‘14;’ if by method D, ‘13.’

This, then, is an instance of a solution made ‘indefinite’ by the circumstances of the case. If any instance of the ‘impossible’ be desired, take the following: — ‘If a cat can kill a rat in a minute, how many would be needed to kill it in the thousandth part of a second?’ The *mathematical* answer, of course, is ’60,000,’ and no doubt less than this would *not* suffice; but would 60,000 suffice? I doubt it very much. I fancy that at least 50,000 of the cats would never even see the rat, or have any idea of what was going on.

Or take this: — ‘If a cat can kill a rat in a minute, how long would it be killing 60,000 rats?’ Ah, how long, indeed! My private opinion is, that the rats would kill the cat.

Lewis Carroll.

*Hat tip: I first read this on **James Dow Allen’s website** many years ago.*

Although he signed his article as Lewis Carroll, his philosophy as a mathematician shines through just as brightly as his sense of humour as an author.

You may think that his criticisms are just pedantry dressed up in humour and put him in the ‘smart alecks’ camp, but I’d like to know why it should be obvious to a student that one assumption (that cat-hours and rats killed are directly proportional) is valid but another (that the number of members of an orchestra and the time it takes for them to play Beethoven’s Ninth are inversely proportional) is not. Can you come up with a checklist which the cats and rats question passes but the orchestra question doesn’t?

Take a look at the following question and have a think about how you could pull it apart in the style of Lewis Carroll.

Strawberry Pickers R Us employs 15 people to pick one field of strawberries in 10 hours. How many strawberry pickers do they need to pick one field of strawberries in 3 hours?

Let T be the time to pick the strawberries and P the number of pickers.

Certainly you would need more than 15 strawberry pickers. Would they get in each other’s way, reducing everyone’s individual picking rate, or would they be faster as they pick for less time? Do they all pick at the same rate? Maybe the original 15 pickers, with at least 10 hours’ experience, are more efficient than the temps they’ve got in. Did they have a lunch break during their 10-hour shift?

The more you think about it, the less proportional this scenario seems.

This question is question 3 on the worksheet from which the original question was taken. Given that we’ve decided that question 5 is the single trick question, I guess the quantities in question 3 must be inversely proportional.

The word problems we have looked at all contain hidden assumptions. We have two options:

- Make the assumptions explicit.
- Accept and embrace the ambiguity.

Option 1 would involve adding something like ‘each (cat | strawberry picker) works independently at the same constant rate…’ to the question. It makes the question pedant-proof and students who might have had reservations about the question without the assumptions explicitly stated will be reassured that there is one unambiguously correct answer.

There’s a danger that students will eventually learn that such boilerplate text just means that ‘these two quantities are proportional’. For high-stakes summative assessments it might be better to just state the proportionality explicitly. You won’t have a word problem any more and so you’re removing a large part of the ‘deciding what to do’, but you’re allowing more students to access the marks for implementing the procedure.

Option 2 would put the onus on the student to state their assumptions, and later perhaps to justify them or explore different sets of assumptions. This takes us from mere word problems to the rich world of mathematical modelling. Pedants can be told that a question is intentionally open-ended, without an objective unique correct answer. ‘Smart alecks’ can be rebranded as ‘mathematical modellers’ and be told to create new models of the scenario rather than just criticising old ones.

While perhaps not suitable for high-stakes examinations, mathematical modelling in the classroom also gives students ample experience of ‘deciding what to do’. The handbooks for the m3 challenge are great resources if you’d like to learn more about mathematical modelling and its benefits. To me it’s clear that, in most cases, mathematical modelling offers a better solution to our woes than trick questions do.

To make progress with mathematical modelling students will of course need fluency in the basics, so it’s too early to throw away the abstract proportionality questions and the simple word problems which bridge the gap. Once they’ve mastered these skills, they can revisit word problems, challenge the assumption of proportionality, and explore more sophisticated models. Hopefully by doing this they’ll learn, if they hadn’t before, that mathematics is more than just the unthinking application of routine procedures.

Maybe this article is an out of proportion response (pun intended) to a maths question going viral, but I think it’s helpful to analyse why people not only get the answer wrong, but think that the question is wrong.

There isn’t an algorithm for deciding how to answer a word problem. This requires intuition which can only come with experience. Developing and refining that intuition is an important part of learning maths, but this point can be missed by students. The curse of knowledge means that we can often miss this point ourselves, as we automatically fill in all the hidden assumptions that we know must be made to answer a question.

By interleaving content you can make sure that students don’t decide what to do based solely on what they’ve learnt most recently. Similarly, by doing mathematical modelling we make sure they don’t rely on whether they have used all the information in a question, whether they have done a reasonable amount of work, or whether some other criterion more about the norms of maths questions than the nature of reality is satisfied. See also the famous ‘How old is the shepherd’ question. Using these criteria can be good exam technique, but they should be used as a check after answering a question and not as a signpost at the start.

When I come to sing in Beethoven’s 9th at the end of September I’ll be sure to time the performance and count the number of players in the orchestra. Will it take the 40 minutes the question asserts, or the oddly precise 55 minutes and 14 seconds that the top comment on the Reddit post suggests? Perhaps it will be nearer the 74 minutes that a CD was designed to store, allegedly in order to hold all of Beethoven’s Ninth. I’ll let you know.



The post How Schnorr signatures may improve Bitcoin appeared first on Self Scroll.

When I was reading the MuSig paper from Blockstream I was trying to imagine what it would mean for me as a bitcoin user. Some features of the Schnorr signatures I found really great and convenient, but others are pretty annoying. Here I want to share my thoughts with you, but first, a quick recap:

Currently in Bitcoin we use ECDSA. To sign a message **m** we hash it and treat this hash as a number:

Using a private key **pk** we can generate a signature for the message.

This algorithm is very common and pretty nice, but it can be improved. First, signature verification includes an inversion (**1/s**) and two point multiplications, and these operations are computationally heavy. In Bitcoin every node has to verify all the transactions. This means that when you broadcast a transaction, thousands of computers will have to verify your signature. Making the verification process simpler would be very beneficial, even if it made signing harder.

Second, every node has to verify every signature separately. In the case of an m-of-n multisig transaction, a node may even have to verify the same signature several times. For example, a transaction with a 7-of-11 multisig input will contain 7 signatures and require from 7 to 11 signature verifications *on every node* in the network. Such a transaction will also take a huge amount of space in the block, and you will have to pay large fees for that.

Schnorr signatures are generated slightly differently. Instead of two scalars **(r,s)** we use a point **R** and a scalar **s**; verification checks that *s×G = R + hash(P,R,m)×P*.

This equation is linear, so equations can be added and subtracted with each other and still stay valid. This brings us to several nice features of Schnorr signatures that we can use.

To verify a block in Bitcoin blockchain we need to make sure that *all* signatures in the block are valid. If one of them is not valid we don’t care which one — we just reject the whole block and that’s it.

With ECDSA every signature has to be verified separately. Meaning that if we have 1000 signatures in the block we will need to compute 1000 inversions and 2000 point multiplications. In total ~3000 heavy operations.

With Schnorr signatures we can add up all the signature verification equations and save some computational power. In total for a block with 1000 transactions we need to verify that:

*(s1+s2+…+s1000)×G=(R1+…+R1000)+(hash(P,R,m1)×P1+ hash(P,R,m2)×P2+…+hash(P,R,m1000)×P1000)*

Here we have a bunch of point additions (almost free in terms of computational power) and 1001 point multiplications. This is already a factor-of-3 improvement — we need to compute roughly one heavy operation per signature.
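This batch check can be sketched in a few lines. The sketch below is my own illustration, not production code: it uses a tiny Schnorr group where modular exponentiation stands in for the article's point multiplications (Bitcoin would use the secp256k1 curve), and it omits the per-signature random weighting a real validator would add.

```python
import hashlib
import random

# Toy Schnorr group: p = 2q + 1, with g generating the subgroup of prime
# order q. "Point multiplication" in the article maps to pow(g, x, p) here.
p, q, g = 2039, 1019, 4

def H(*parts):
    data = b"|".join(str(part).encode() for part in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = random.randrange(1, q)          # private key
    return x, pow(g, x, p)              # public key P = g^x

def sign(x, P, m):
    k = random.randrange(1, q)
    R = pow(g, k, p)
    s = (k + H(P, R, m) * x) % q        # s = k + hash(P,R,m) * pk
    return R, s

def verify(P, m, R, s):
    # Single-signature check: g^s == R * P^hash(P,R,m)
    return pow(g, s, p) == (R * pow(P, H(P, R, m), p)) % p

def batch_verify(sigs):
    # One combined check: g^(s1+...+sn) == (R1*...*Rn) * prod Pi^hash(Pi,Ri,mi)
    lhs = pow(g, sum(s for _, _, _, s in sigs) % q, p)
    rhs = 1
    for P, m, R, _ in sigs:
        rhs = rhs * R * pow(P, H(P, R, m), p) % p
    return lhs == rhs

random.seed(7)
sigs = []
for m in ("tx1", "tx2", "tx3"):
    x, P = keygen()
    R, s = sign(x, P, m)
    sigs.append((P, m, R, s))

print(all(verify(*sig) for sig in sigs), batch_verify(sigs))  # True True
```

Real batch validation would additionally multiply each equation by a random factor before summing, so that errors in two signatures cannot cancel each other out.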

We want to keep our bitcoins safe, so we might want to use at least two different private keys to control bitcoins. One we will use on a laptop or a phone and another one — on a hardware wallet / cold wallet. So when one of them is compromised we still have control over our bitcoins.

Currently it is implemented via 2-of-2 multisig script. This requires two separate signatures to be included in the transaction.

With Schnorr signatures we can use a pair of private keys **(pk1,pk2)** and generate a shared signature corresponding to the shared public key **P = P1 + P2**.

There are three problems with this construction. The first one is from the UI point of view. To make a transaction we need two communication rounds: first to calculate the common **R**, and second to sign. With two private keys it can be done with a single access to the cold wallet: we prepare an unsigned transaction on our online wallet, choose our **k1**, and bring **R1** together with the unsigned transaction to the cold wallet, which can then compute the common **R** and its part of the signature in one step.

The second problem is the known rogue key attack. It is nicely described in the paper or here, so I won’t go into details. The idea is that if one of your devices is hacked (say, your online wallet) and pretends that its public key is **(P1−P2)**, then the shared public key becomes **P1**, and it can control the shared funds with its private key **pk1** alone.

And there is a third important problem: *you can’t use a deterministic **k** for signing*. There is a simple attack that allows a hacker to get our private key if we are using deterministic **k**. The attack looks like this: someone hacks our laptop and gains complete control over one of the two private keys (say, **pk1**).

In this attack the hacker obtains a pair of valid signatures for the same transaction: **(R1, s1, R2, s2)** and **(R1′, s1′, R2, s2′)**. The cold wallet derives its **k2** deterministically, so **R2** is the same in both, while the common **R** changes; subtracting the two equations for **s2** then reveals the private key **pk2**.

MuSig solves one of these problems — it makes the rogue key attack impossible. The goal is to aggregate signatures and public keys from several parties/devices into a single one, without requiring proof that you have a private key corresponding to your public key.

The aggregated signature corresponds to the aggregated public key. But instead of just adding up the public keys of all co-signers, we multiply each of them by some factor. The aggregated public key will be **P = hash(L,P1)×P1+…+hash(L,Pn)×Pn**. Here **L** is a hash of all the co-signers’ public keys, so each factor is bound to the complete set of keys.

The rest is pretty similar to the previous case. To generate a signature, each co-signer chooses a random number **ki** and shares **Ri = ki×G** with the others; the common **R** is the sum of all **Ri**, and the partial signatures are added up into a single aggregated signature.
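Putting the scheme together in the same toy group (again my own sketch, not Blockstream's code; I assume **L** is a hash of all public keys, and modular exponentiation stands in for point multiplication):

```python
import hashlib
import random

# Toy Schnorr group (p = 2q + 1; g generates the order-q subgroup).
p, q, g = 2039, 1019, 4

def H(*parts):
    data = b"|".join(str(part).encode() for part in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

random.seed(1)
n = 3
privs = [random.randrange(1, q) for _ in range(n)]   # pk_i
pubs = [pow(g, x, p) for x in privs]                 # P_i = g^pk_i

# Key aggregation: P = prod P_i^hash(L, P_i), where L commits to all keys.
L = H(*pubs)
coefs = [H(L, P) for P in pubs]
P_agg = 1
for P, c in zip(pubs, coefs):
    P_agg = P_agg * pow(P, c, p) % p

# Round 1: each co-signer picks k_i and shares R_i = g^k_i; R = prod R_i.
m = "send 1 BTC"
ks = [random.randrange(1, q) for _ in range(n)]
R = 1
for k in ks:
    R = R * pow(g, k, p) % p

# Round 2: partial signatures s_i = k_i + hash(P,R,m) * hash(L,P_i) * pk_i,
# summed into a single s.
e = H(P_agg, R, m)
s = sum(k + e * c * x for k, c, x in zip(ks, coefs, privs)) % q

# The pair (R, s) verifies against the single aggregated key P_agg.
print(pow(g, s, p) == R * pow(P_agg, e, p) % p)  # True
```

A verifier only ever sees one key and one signature, however many devices took part.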

As you may have noticed, MuSig and key aggregation require *all signers to sign a transaction*. But what if you want to make a 2-of-3 multisig? Is it possible at all to use signature aggregation in this case, or will we have to use our usual OP_CHECKMULTISIG and separate signatures?

Well, it is possible, but with a small change in the protocol. We can develop a new op-code similar to OP_CHECKMULTISIG that checks whether an aggregated signature corresponds to a particular item in a Merkle tree of public keys.

For example, if we use a 2-of-3 multisig with public keys **P1**, **P2** and **P3**, the Merkle tree would contain the aggregated public keys of every allowed pair: **(P1,P2)**, **(P1,P3)** and **(P2,P3)**. To spend, we provide an aggregated signature together with a proof that its key is in the tree.

But with a Merkle tree of public keys we are not limited to m-of-n multisigs. We can make a tree with any public keys we want. For example, if we have a laptop, a phone, a hardware wallet and a recovery seed, we can construct a structure that would allow us to spend bitcoins with the laptop and the hardware wallet, with the phone and the hardware wallet, or just with the recovery seed. This is currently not possible with OP_CHECKMULTISIG alone — only by constructing a much more complicated script with branches and stuff.
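A minimal sketch of the Merkle-tree part (my own illustration: placeholder byte strings stand in for the aggregated public keys, and the op-code that would consume the proof is hypothetical):

```python
import hashlib
from itertools import combinations

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[index ^ 1], index % 2))   # (sibling, is_right_child)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_path(leaf, path, root):
    node = h(leaf)
    for sibling, is_right in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

# 2-of-3: one leaf per aggregated key of each allowed pair (placeholders here).
pairs = list(combinations([b"P1", b"P2", b"P3"], 2))
leaves = [b"aggregate(" + a + b"," + b + b")" for a, b in pairs]
root = merkle_root(leaves)                 # this is what the output commits to

# Spending with the (P1, P3) pair: reveal that leaf plus its Merkle path.
proof = merkle_path(leaves, 1)
print(verify_path(leaves[1], proof, root))   # True
```

The script only ever stores the root, so the number of allowed key combinations can grow without bloating the output.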

Schnorr signatures are great. They can save some computational power during block validation and also give us the ability to use key aggregation. The latter has some inconveniences, but we aren’t forced to use it — after all, if we want, we can continue using normal multisig schemes with separate, non-aggregated signatures and still gain something. I can’t wait to start using them and I hope they will be included in the Bitcoin protocol soon.

I really liked the paper — the MuSig scheme is smart and the paper itself is very easy to read. I would strongly recommend looking through it if you have time.



The post Why I avoid formulas in my talks appeared first on Self Scroll.

I taught mathematics and statistics for over a decade, and today I’m frequently asked to speak about technical subjects at conferences. Although I come from a field that loves formulas and technical nitty-gritty details, you’ll find practically none in my talks. I avoided them as a statistics lecturer too. Here’s why.

Before going to grad school in mathematical statistics, I was a PhD student in neuroscience and psychology. I was lucky enough to get hands-on research experience on the topic of human attention and memory, which brought me to a funny realization.

Anybody who claims to be following an equations-based mathematical lecture is probably faking it.

To boil the cognitive science down to a point, anybody who claims to be following an equations-based or technical-details-stuffed lecture is probably faking it. There’s one exception: those who have already learned most of the material. Mathematicians are just as human as anybody, and their working memory capacity works the same way too. It turns out that typical lectures overload students’ working memory by defining too many new symbols and formulas for even the brightest students to keep track of what’s what.

Believe you’re unique? Read this when, close your eyes, and state it back to me:

AHGJBSKEIFDDRHWSL

Psychologists would suggest that AHGJBSK is about as much as you should expect human beings to handle. Whenever a speaker adds a new symbol to their talk, the audience has to devote working memory capacity to tracking what it represents and how it fits with the other new things. That’s the same working memory needed for remembering the previous slide and tracking the logical argument. AHGJBSK highlights just how little capacity there is to go around.

Professors and technical speakers, don't take my word for it. Try it yourself:

- Pick an alphabet you wouldn't be able to name characters from. Chinese characters are my favorite option when I practice. 熟能生巧
- Consider your audience. Think carefully about who will be in the room and what they know.
- Ask yourself which jargon terms or symbols in your equations, or technical nitty-gritty details in your slides, might not be immediately familiar to your audience. If you're sure each human in the room sees x̄ as "sample average" without thinking (a fair assumption if your audience is statistics professors), then you may leave x̄ off this list. Use it freely. Otherwise, take x̄ and its friends to the next step.
- Replace each of those with a random letter from the alphabet you've chosen.
- For best results, have a friend reformat your slides so that things have also moved around a bit visually.
- Try delivering your talk/lesson.
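The symbol-replacement step above can be sketched in a few lines of code. This is a minimal illustration, not anything from the original post: the function name `scramble_symbols`, the example slide text, and the pool of Chinese characters are all my own assumptions.

```python
import random

# Characters from an alphabet most Western audiences can't name,
# simulating how unfamiliar symbols feel to a non-expert audience.
UNFAMILIAR = list("熟能生巧日月山川水火木金")

def scramble_symbols(slide_text, jargon_symbols, seed=None):
    """Replace each jargon symbol with a random unfamiliar character,
    using one consistent substitute per symbol across the slide text."""
    rng = random.Random(seed)
    substitutes = dict(zip(jargon_symbols,
                           rng.sample(UNFAMILIAR, len(jargon_symbols))))
    for symbol, sub in substitutes.items():
        slide_text = slide_text.replace(symbol, sub)
    return slide_text

slide = "The estimator x̄ converges to μ as n grows."
print(scramble_symbols(slide, ["x̄", "μ"], seed=0))
```

Each symbol keeps one consistent substitute, just as a real talk keeps one symbol per concept; the point of the exercise is that the mapping, not the notation itself, is what burdens working memory.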

If you stumble, you'd also have lost your audience at this point. If the cognitive load of remembering what all those new symbols mean is too much for you (the expert!), then it's definitely too much for your poor audience.

Once you've lost your audience, all they will absorb is the summaries and explanations that you give in plain language. That's why it's especially important to pepper any technical talk with standalone plain-language summaries.

If you've lost your audience and they're too polite to walk out of your talk (or if they're a captive audience of impressionable young students), you'll rarely find listeners brave enough to point out that the emperor has no clothes. Usually, nobody calls out, "We haven't understood a word you've said in the last 30 minutes." Some folks are restrained by good manners, some feel that fixing your incompetence isn't worth their time, some are cowed by the smart questions from the handful of people who were experts in most of your talk before you gave it, and some are *wondering whether they're the only ones too stupid to understand what you're talking about.*

This latter category may spend your entire talk (since it might as well be birdsong) agonizing over whether they even belong in the room. They may begin to believe that they're impostors. Your talk/lesson, targeted to impress a handful of experts in the room, entirely misses the rest of your audience, which contributes to a toxic environment rife with impostor syndrome. (Here's a link to my musings on impostor syndrome and what teachers and students can do about it.)

I know that in many academic disciplines, my own included, presenting in this awful way is part of the culture. We may have a culture that is less than ideal, but we're not stuck with it. We can choose to lead change by example.

At Google, I initially got a lot of criticism from traditionally minded colleagues when I announced that I would be teaching our entire workforce statistics and machine learning… without equations. Those courses quickly became the most popular internal technical training, with reviews like *"I learned more in one day than in an entire semester of my statistics master's degree."* It can be done.

The hardest thing you'll have to do is find the courage to stop trying to prove that you know how to use equations (we believe you) and start thinking about what's actually useful and interesting to your audience, keeping human working memory limits in mind. Let me help you get started.

**Give yourself a budget:** no more than 7 new things (symbols, theorems, concepts, equations, and so on) in working memory. That number drops when listeners are less motivated. When I say I aim for 3–5, that doesn't mean only 5 things learned over the whole lesson. It means only 5 things loaded into working memory *at a time.* If your audience is seeing something for the first time, finish with it now (and tell your audience when you're done with it so they can drop it from working memory) or pay for it out of the budget. Don't lean on it later unless you've kept your audience's working memory free of clutter.
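The budget rule can be made concrete with a small audit over a talk outline. This is only a sketch of the idea; the slide format (a pair of "symbols introduced" and "symbols explicitly retired" per slide) and the function name `audit_symbol_budget` are my own invention, not from the post.

```python
def audit_symbol_budget(slides, budget=5):
    """Track how many new symbols are 'live' in working memory at each
    slide. Each slide is (introduced, retired): symbols defined on that
    slide, and symbols the speaker tells the audience to drop.
    Returns (slide number, live count, over-budget flag) per slide."""
    live = set()
    report = []
    for i, (introduced, retired) in enumerate(slides, start=1):
        live |= set(introduced)
        live -= set(retired)
        report.append((i, len(live), len(live) > budget))
    return report

talk = [
    (["x̄", "μ"], []),             # slide 1: two new symbols
    (["σ²", "n", "θ", "ℓ"], []),  # slide 2: four more, six now live
    ([], ["x̄", "μ", "σ²"]),       # slide 3: tell the audience to drop three
]
for slide_no, live_count, over in audit_symbol_budget(talk):
    flag = " OVER BUDGET" if over else ""
    print(f"slide {slide_no}: {live_count} symbols live{flag}")
```

The key detail the code mirrors is that retiring a symbol only frees the budget if you actually tell the audience it's done, so they can drop it from working memory.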

If you’re truthful with yourself, you’ll see that few of the information that look so gorgeous to you really assist your audience. Do not lose their time with formulas or technical nitty gritty information they cannot soak up today. Rather, inform them * how* to utilize that formula when they’re stooped over it with pen and paper. Inform them why they must be thrilled about it and how the information suits the higher image. Inform them why it was tough to obtain/ find and exactly what the crucial insight that drove that discovery was. Point your audience to any formulas or information they will require later on by suggesting the location to look and exactly what they will wish to utilize them for. Inform them why they must care! Get them fired up or they’ll believe your subject is uninteresting or, even worse, that they’re bad at it.

